After reading up on Database Activity Monitoring (DAM) tools over the last few weeks, I have come up with the following observations to help anyone who wants a quick overview of these tools. All this information can be found on the vendor websites. They all do things slightly differently and I stand to be corrected if I have misunderstood anything.
The vendor list includes:
DAM tools have come about from the requirement to audit who is doing what in the database. Of course, it is possible to use native auditing (i.e. audit by using database functionality) but then you run the risk of the DBA tampering with the audit logs if, on the off chance, they are that way inclined. Separation of duties is required by most compliance standards so the auditing must be under the control of someone outside the DBA department. This is where DAM tools come in because they sit outside the database and are controlled by an individual in another department (possibly the audit & compliance team). Separation of duties in this case actually benefits the DBA because if there is a security breach then the DBA's involvement can easily be ruled out. If the DBA had control over the database and the audit information then there would always be a suspicion that the DBA could have done it and altered the audit logs to hide their tracks.
So from this, it is obvious that native auditing on its own will not satisfy the "separation of duties" requirement, and this is where DAM tools come in.
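To make the native auditing point concrete, here is a minimal sketch of what database-side auditing looks like, assuming an Oracle-style database and a generic DB-API connection (the hr.salaries table is made up for the example). The catch is that the same privileged access the DBA already has can also switch the auditing off or purge the trail, which is exactly the separation of duties problem described above.

```python
# Native auditing sketch (Oracle-style syntax; the table name is hypothetical).
NATIVE_AUDIT_SETUP = [
    "AUDIT SELECT ON hr.salaries BY ACCESS",   # record reads of a sensitive table
    "AUDIT SESSION WHENEVER NOT SUCCESSFUL",   # record failed logon attempts
]

# ...but a DBA with the same privileges can simply undo it and cover the tracks:
NATIVE_AUDIT_TEARDOWN = [
    "NOAUDIT SELECT ON hr.salaries",           # stop auditing
    "DELETE FROM sys.aud$",                    # purge the audit trail itself
]

def apply_statements(connection, statements):
    """Run each statement over a DB-API style connection."""
    with connection.cursor() as cursor:
        for statement in statements:
            cursor.execute(statement)
    connection.commit()
```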
In general, DAM tools can monitor database activity at two points - on the network or on the database host server. Depending on the vendor, they may be used separately or in combination with one another.
Monitoring the database SQL that passes over the network has the advantage that no load is put on the database server, but the disadvantage is that it has no knowledge of local connections made to the database. A network based DAM tool will require new hardware to be attached to the network to sniff the traffic.
To overcome this shortcoming of a pure network based solution, it is possible to install an agent on the database host to monitor the local traffic to the database. This agent will work in conjunction with the network sniffer. The obvious concern with an agent is the performance hit it places on the database server, so to keep the load on the database to a minimum these agents simply relay audit data back to a central location where it can be monitored and / or analyzed.
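As a rough illustration of the relay idea, the sketch below shows an agent that does no analysis of its own and simply forwards what it captures locally to a central collector. The collector address, port and record fields are my own assumptions, not any vendor's protocol.

```python
import json
import socket
import time

COLLECTOR_HOST = "dam-collector.example.com"   # assumed central DAM server
COLLECTOR_PORT = 5514                          # assumed listener port

def relay_event(sock, db_user, client_program, sql_text):
    """Package one locally captured statement and ship it to the collector.

    Keeping the agent this thin is what keeps the load on the database
    host to a minimum; all monitoring and analysis happens centrally.
    """
    event = {
        "ts": time.time(),
        "db_user": db_user,
        "client_program": client_program,
        "sql": sql_text,
    }
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

if __name__ == "__main__":
    # In a real agent these events would come from hooking local connections;
    # a single hard-coded event stands in for that feed here.
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as sock:
        relay_event(sock, "SCOTT", "sqlplus", "SELECT * FROM hr.salaries")
```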
If the DAM tool is purely a software solution then no network sniffer is required. However, you will still need somewhere to store all your audit data.
Once you have the ability to monitor all database traffic, the next step is to tell your DAM implementation what exactly you want to monitor and what to ignore. There are GUI interfaces for this. I haven't played with any evaluation copies yet so that is all I can say.
When the rules are in place, you will have a system to monitor who is accessing what on your databases and you can set up alerts to tell you when suspicious activity takes place. You may also be provided with pre-configured reports to assist you with all the various compliance checks.
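To give a feel for what such rules might look like, here is a small sketch of monitoring rules and alerting; the rule fields, table names and matching logic are my own illustration rather than any vendor's configuration format.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    sql_pattern: str          # regex matched against the captured SQL
    ignore_users: tuple = ()  # accounts this rule skips
    alert: bool = True        # raise an alert on a match

# Hypothetical rules: watch reads of a sensitive table (ignoring the normal
# application account) and flag any ad-hoc DROP statement.
RULES = [
    Rule("read-salaries", r"\bFROM\s+hr\.salaries\b", ignore_users=("HR_APP",)),
    Rule("drop-anything", r"^\s*DROP\s"),
]

def evaluate(db_user, sql_text):
    """Return the names of the rules that fire for one captured statement."""
    hits = []
    for rule in RULES:
        if db_user in rule.ignore_users:
            continue
        if re.search(rule.sql_pattern, sql_text, flags=re.IGNORECASE):
            if rule.alert:
                print(f"ALERT [{rule.name}]: {db_user} ran: {sql_text}")
            hits.append(rule.name)
    return hits

# A direct ad-hoc read of the salaries table trips the first rule:
evaluate("SCOTT", "SELECT * FROM hr.salaries")
```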
So far the implementation is a passive one for monitoring who is doing what in your database and reporting on suspicious activity. It is possible to extend this solution to actively monitor so that pre-determined suspicious activity can be blocked in much the same way as an application firewall blocks traffic. We have now moved from a system that purely monitors to one that can take action. This can be referred to as a database firewall.
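Continuing in the same spirit, moving from passive monitoring to a database firewall essentially means returning a block/allow decision inline instead of just raising an alert. This is a simplified sketch of that decision, not how any particular product hooks into the traffic path.

```python
import re

# Patterns whose matches should be blocked inline rather than just reported
# (both patterns are invented for the example).
BLOCK_PATTERNS = [
    r"^\s*DROP\s",                              # no ad-hoc DROP statements
    r"SELECT\s+\*\s+FROM\s+hr\.salaries\b",     # no SELECT * against the salary table
]

def decide(sql_text):
    """Inline database-firewall decision: 'BLOCK' or 'ALLOW'.

    In a real deployment this sits in the traffic path (inline appliance or
    host agent), so 'BLOCK' means the statement never reaches the database.
    """
    for pattern in BLOCK_PATTERNS:
        if re.search(pattern, sql_text, flags=re.IGNORECASE):
            return "BLOCK"
    return "ALLOW"

print(decide("DROP TABLE hr.salaries"))            # BLOCK
print(decide("SELECT ename FROM hr.employees"))    # ALLOW
```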
A neat use of active monitoring is "virtual patching", whereby a rule can be put in place to prevent a known database vulnerability being exploited by blocking the traffic if it looks suspicious. The ultimate solution to vulnerabilities is to apply the patch, but when dealing with critical databases there are usually change control processes in place that require proper testing of any software patch. This results in a time lag between the release of the patch and its application to the database, which leaves the database exposed to newly discovered vulnerabilities. A DAM rule can be put in place to recognise when an attempt is made to exploit a known vulnerability so the database connection can be blocked and the incident reported (i.e. a virtual patch is in place to protect the database). The database administrator has now been given some breathing space in order to get the patch onto production in a controlled manner.
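As an illustration of virtual patching, the sketch below blocks anything matching the signature of a supposedly vulnerable procedure call until the real patch has been through change control. The advisory ID, package name and signature are all invented for the example.

```python
import re

# Hypothetical virtual patch: an advisory says calling the (made-up)
# SYS.DBMS_EXAMPLE.RUN_JOB procedure lets a low-privileged user escalate.
# Until the vendor patch clears change control, any attempt is blocked
# and the incident reported.
VIRTUAL_PATCHES = {
    "CVE-XXXX-NNNN (hypothetical)": re.compile(
        r"\bSYS\.DBMS_EXAMPLE\.RUN_JOB\b", re.IGNORECASE
    ),
}

def screen(db_user, sql_text):
    """Return True if the statement may proceed, False if it is blocked."""
    for advisory, signature in VIRTUAL_PATCHES.items():
        if signature.search(sql_text):
            print(f"BLOCKED and REPORTED: {db_user} triggered {advisory}")
            return False
    return True

screen("SCOTT", "BEGIN SYS.DBMS_EXAMPLE.RUN_JOB('crafted payload'); END;")
```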
Some things to consider when evaluating a DAM tool:
- Impact on the database server (CPU and memory)
- Can it handle encrypted client-server communication?
- Does it need encryption / decryption keys to see encrypted traffic?
- Can it see what SQL a stored procedure executes, or does it just record that a particular stored procedure has been run?
- Can it track what an application user is doing when pooled connections are being used? (See the sketch after this list.)
- Where is the audit data stored and is the location tamper proof?
- Does it store the complete SQL statement in the DAM data repository? If so you run the risk of keeping sensitive data in your DAM repository.
- Pre-configured reports to help with compliance regulations
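On the pooled-connection point above, the difficulty is that every statement reaches the database under the shared application account, so the real end user has to be passed along separately. One common approach, sketched below for an Oracle-style database, is for the application to tag each borrowed session with the end user's identity via DBMS_SESSION.SET_IDENTIFIER so the DAM tool can attribute the activity; the account and user names are made up, and other databases have their own equivalent mechanisms.

```python
def run_as_end_user(connection, end_user, sql_text, params=()):
    """Tag the pooled session with the end user's name, then run the SQL.

    All statements arrive as the shared application account (say APP_POOL),
    so without the tag a DAM tool can only report "APP_POOL read hr.salaries";
    with it, the activity can be attributed to the actual person.
    Works over a DB-API style connection to an Oracle-style database.
    """
    with connection.cursor() as cursor:
        cursor.callproc("DBMS_SESSION.SET_IDENTIFIER", [end_user])
        cursor.execute(sql_text, params)
        return cursor.fetchall()

# Usage (assuming `conn` was borrowed from the application's connection pool):
# rows = run_as_end_user(conn, "jsmith", "SELECT ename FROM hr.employees")
```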