


The third capability is what I’ll focus on in this post, which is to look for anomalies. I believe this capability is the future of security, and I see it as where a lot of the current innovation in the security space is occurring. It takes into consideration hot topics including big data, threat intelligence, and “zero-day” detection. I believe everybody needs to be aware of how this approach to security works in order to better understand the future of the security market space.

Before you can determine something is an anomaly, you must first understand what is considered normal. This makes up the first part of any anomaly detection capability, which is understanding normal behavior. Establishing normal can occur in two ways. One way is to learn an environment over a period of time and map out common behavior, known as baselining the environment. The other approach is to pull in a ton of historic data and immediately compare new activity against it to find outliers. First, let’s look closer at the first approach, which is baselining.
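Before going further, here is a minimal sketch in Python of what baselining a metric can look like, with the historic-data approach included for contrast. This is not code from any of the tools discussed in this post; the metric, the sample values, and the three-standard-deviation threshold are my own illustrative assumptions.

```python
from statistics import mean, stdev

class Baseline:
    """Learn "normal" for one metric (e.g., logins per hour) from observations."""

    def __init__(self, threshold=3.0):
        self.history = []           # values collected while learning
        self.threshold = threshold  # standard deviations that count as abnormal

    def observe(self, value):
        # The longer this runs, the more normal data is collected
        # and the better the baseline becomes.
        self.history.append(value)

    def is_anomaly(self, value):
        if len(self.history) < 30:  # not enough history to judge yet
            return False
        mu, sigma = mean(self.history), stdev(self.history)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > self.threshold  # far from normal?

# Approach 1: baseline the environment over time.
logins = Baseline()
for v in [4, 6, 5, 7, 5, 4, 6, 5] * 5:  # simulated normal activity
    logins.observe(v)
print(logins.is_anomaly(60))   # True: 60 logins/hour is an outlier

# Approach 2 is the same comparison, but seeded with historic data
# up front instead of waiting out a learning period.
historic = Baseline()
historic.history = [4, 6, 5, 7, 5, 4, 6, 5] * 5
print(historic.is_anomaly(5))  # False: consistent with history
```

Real products use far richer models than a single z-score, but the core loop is the same: collect normal, then measure distance from it.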

Learning about an environment means collecting data. Many bleeding-edge host security tools are designed to monitor any process running on a host and map out its behavior. As this occurs, if any unusual activities are seen, an anomaly can be detected and investigated. Cisco Tetration is an example of this type of technology, which allows the creation of allow lists (also referred to in the industry as whitelisting). The concept works by not only mapping normal behavior but also learning relationships between applications. Learning everything about a host includes monitoring all processes running as well as how they communicate with other processes.

Over a period of time, one process talking to other processes can also become part of the baseline if this activity is considered normal behavior. This larger baseline concept leads to the ability to dynamically learn and adjust an allow list without human interaction: because the changes are considered normal, the technology can permit traffic based not just on a static permit list but on behavior that has been established within the baseline. The next image shows an example of Cisco Tetration baselining workload distribution behavior, network traffic behavior (how applications are communicating with the network), and child activity (the processes that are being run).
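Here is a minimal sketch of that dynamic allow-list idea, assuming a stream of (source process, destination process) events and a simple “seen repeatedly during learning” rule. Both are my own simplifications for illustration, not how Tetration actually builds its policy.

```python
from collections import Counter

class ProcessAllowList:
    """Learn which process-to-process communications are normal,
    then flag pairs that were never part of the baseline."""

    def __init__(self, learn_threshold=3):
        self.pair_counts = Counter()  # (src, dst) -> times observed
        self.allow = set()            # the dynamically learned allow list
        self.learn_threshold = learn_threshold

    def observe(self, src, dst):
        # During learning, repeated consistent behavior is added
        # to the allow list without human interaction.
        self.pair_counts[(src, dst)] += 1
        if self.pair_counts[(src, dst)] >= self.learn_threshold:
            self.allow.add((src, dst))

    def check(self, src, dst):
        # After learning, permit behavior established in the baseline
        # and flag everything else for investigation.
        return "permit" if (src, dst) in self.allow else "anomaly"

baseline = ProcessAllowList()
for _ in range(5):  # normal activity observed repeatedly
    baseline.observe("nginx", "gunicorn")
    baseline.observe("gunicorn", "postgres")

print(baseline.check("nginx", "gunicorn"))       # permit: in the baseline
print(baseline.check("gunicorn", "powershell"))  # anomaly: never seen before
```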
There are network tools that can also function in a similar manner. Cisco Stealthwatch and Plixer are two examples of tools that learn network behavior through baselining and identify when unusual behavior is seen. In both the endpoint and network examples, the longer the tools run, the more normal data is collected and the better the baseline that is established. This means that, unlike other threat detection approaches, administrators do not need to spend countless hours adjusting the tools, also called “tuning.” Anomaly tools actually get smarter over time on their own just by collecting data. The next image shows Cisco Stealthwatch using baselining to detect various types of network-based threats.
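On the network side, the same baselining pattern can be sketched against NetFlow-style records. The record fields, hosts, and the ten-times-normal volume rule below are illustrative assumptions, not how Stealthwatch or Plixer actually score traffic.

```python
from collections import defaultdict

class HostTrafficBaseline:
    """Learn each host's normal peers and typical transfer size from
    flow records, then flag deviations such as possible exfiltration."""

    def __init__(self):
        self.peers = defaultdict(set)      # host -> peers it normally talks to
        self.max_bytes = defaultdict(int)  # host -> largest normal transfer seen

    def learn(self, host, peer, byte_count):
        self.peers[host].add(peer)
        self.max_bytes[host] = max(self.max_bytes[host], byte_count)

    def alerts(self, host, peer, byte_count):
        # Compare a new flow against the learned baseline.
        found = []
        if peer not in self.peers[host]:
            found.append("peer never seen in baseline")
        if byte_count > 10 * self.max_bytes[host]:
            found.append("transfer volume far above normal")
        return found

nb = HostTrafficBaseline()
nb.learn("10.0.0.5", "10.0.0.9", 20_000)   # normal east-west traffic
nb.learn("10.0.0.5", "10.0.0.12", 35_000)

print(nb.alerts("10.0.0.5", "10.0.0.9", 25_000))      # []: within normal
print(nb.alerts("10.0.0.5", "203.0.113.7", 900_000))  # both checks fire
```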