University of Portsmouth announces new tool for security intelligence

Researchers at the University of Portsmouth are working on new ways to analyse security intelligence data. The project is one of eight due to be announced by the Centre for Research and Evidence on Security Threats (CREST). Its goal is to detect bias in the work of security analysts, a problem that is also one of the reasons behind the growing number of AI and machine learning solutions in this area.

Professor Ashraf Labib, who is leading the Portsmouth team, said: “Intelligence analysts need to process large volumes of data quickly, extracting crucial information to detect potential security threats. This could be identifying certain key words used to denote people, places or objects that need to be highlighted to security services for further investigation.”

Data overload a major flaw in security intelligence

The biggest problem in security intelligence is the sheer volume of data to be assessed. That data comes from a variety of different sources. Some are easy to correlate, such as log files that share a common timeline. Other data sets are harder to relate to one another, and, more importantly, much of the data lacks context. The result is that a great deal rests on the judgement of the individual security analyst.
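To make the “easy” case concrete, the short Python sketch below merges entries from two hypothetical log sources into a single timeline; the field names and events are invented for illustration. Where sources share a common timestamp, correlation is little more than a sort and merge. Data without that shared context offers no such shortcut, which is where the analyst's judgement comes in.

```python
# A minimal sketch (hypothetical data and field names) of the "easy" case:
# merging log entries from several sources into one timeline when every
# source shares a comparable timestamp field.
from datetime import datetime
from heapq import merge

firewall_log = [
    {"ts": datetime(2019, 3, 1, 9, 0, 5), "source": "firewall", "event": "blocked outbound connection"},
    {"ts": datetime(2019, 3, 1, 9, 2, 41), "source": "firewall", "event": "port scan detected"},
]
auth_log = [
    {"ts": datetime(2019, 3, 1, 9, 1, 12), "source": "auth", "event": "failed login for admin"},
]

# Because both sources carry a common timestamp, correlation is just a sort/merge.
timeline = list(merge(firewall_log, auth_log, key=lambda e: e["ts"]))

for entry in timeline:
    print(entry["ts"], entry["source"], entry["event"])
```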

Prof Ashraf Labib, Strategy, Enterprise & Innovation, Portsmouth Business School

Labib commented: “This kind of analysis relies on consistent judgements, but research and historical evidence shows us that analysts’ judgements are often inconsistent due to the sheer mass of data, the variation in types and nature of intelligence information and the time pressures in which they are operating. This means decisions can be made that deviate significantly from those of their colleagues, from their own prior decisions and from the guidelines and rules they are trained to follow.

“This disparity is mainly due to two types of errors: the ‘noise’ or sheer volume of data, and bias, which can occur over time with individuals and also groups. Both can complicate the intelligence analysis process and can result in key pieces of data being misclassified or overlooked with potential security threat implications. We are developing an innovative analytic approach to address these errors and enable analysts to achieve better judgements and to test it in a group decision-making context.”

Enter Dominance-based Rough Set Approach (DRSA)

Over the last year, the major security vendors have turned to AI and machine learning approaches. These systems have advantages. They can ingest and react to very large and disparate data sets. They do not carry an individual's ingrained biases and do not become “word blind”. Importantly, they can also create correlations and establish context when analysing data.

The research team at the University of Portsmouth are going down a different route: the Dominance-based Rough Set Approach (DRSA). It is an interesting fit for the problem of bias. DRSA builds on multi-criteria decision analysis (MCDA), which is a good way of describing the kind of data security analysts look at. MCDA is already used for complex problems such as bankruptcy risk, so extending it and applying it to cybersecurity is not a huge leap.

DRSA provides an analytical framework for MCDA. It deals with the challenge of inconsistent data through a decision table, which allows different sets of ranking values to be assigned to the data. This provides the basis for faster analysis of the data and a reduction in errors.
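As a rough illustration only, the sketch below builds a tiny DRSA-style decision table and checks the dominance principle: an item that scores at least as well as another on every criterion should not be assigned a lower class. The criteria, scores and priority classes are invented for this example and are not taken from the Portsmouth project.

```python
# A minimal, hypothetical sketch of a DRSA-style decision table and a check
# of the dominance principle: if one alert scores at least as high as another
# on every criterion, it should not receive a lower priority. Violations mark
# the inconsistent judgements that DRSA's rough approximations work around.
from itertools import combinations

# Each row: scores on ordered criteria (higher = stronger) plus the analyst's decision class.
decision_table = [
    {"alert": "A1", "source_credibility": 3, "keyword_strength": 2, "priority": 2},
    {"alert": "A2", "source_credibility": 2, "keyword_strength": 2, "priority": 1},
    {"alert": "A3", "source_credibility": 3, "keyword_strength": 3, "priority": 1},  # inconsistent with A1
]

CRITERIA = ("source_credibility", "keyword_strength")

def dominates(x, y):
    """True if x scores at least as high as y on every criterion."""
    return all(x[c] >= y[c] for c in CRITERIA)

# Flag pairs of judgements that violate the dominance principle.
for x, y in combinations(decision_table, 2):
    for better, worse in ((x, y), (y, x)):
        if dominates(better, worse) and better["priority"] < worse["priority"]:
            print(f"Inconsistent: {better['alert']} dominates {worse['alert']} "
                  f"but was given a lower priority")
```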

There are a number of DRSA frameworks already available to researchers. The most popular of these is the jMAF software tool from the Institute of Computer Science of the Poznan University of Technology in Poland.

What is the University of Portsmouth doing?

Interestingly, the research team at the University of Portsmouth are not focusing their DRSA work on the data but on the security analyst. They are looking at the patterns and behaviour of analysts to evaluate their judgement and, importantly, to find the key factors or biases that influence an analyst's decision making.
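A minimal sketch of that analyst-centred angle, using invented data, might simply compare how different analysts classify the same alerts and flag the items where their judgements diverge; those disagreements are where noise and bias are likely to be hiding.

```python
# Hypothetical sketch: compare how different analysts classify the same alerts
# and surface the alerts where judgements diverge. Analyst names, alert IDs and
# priorities are invented for illustration.
from collections import defaultdict

# (analyst, alert_id, assigned_priority)
judgements = [
    ("analyst_a", "A1", "high"), ("analyst_b", "A1", "high"),
    ("analyst_a", "A2", "low"),  ("analyst_b", "A2", "high"),
    ("analyst_a", "A3", "high"), ("analyst_b", "A3", "low"),
]

by_alert = defaultdict(dict)
for analyst, alert, priority in judgements:
    by_alert[alert][analyst] = priority

# Report every alert on which the analysts disagree.
for alert, calls in by_alert.items():
    if len(set(calls.values())) > 1:
        print(f"{alert}: analysts disagree -> {calls}")
```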

Bias is a particular problem in security intelligence. Individuals have biases based on their own preferences and can overlook key indicators as a result. By comparison, AI and machine learning systems are often assumed to be bias-free. This is not true: those systems are a product of the material used to train them, and if that material carries an inbuilt bias, the resulting system will be biased.

With humans, the bias is more complex to spot and the root cause can be even harder to address. To tackle this, the University of Portsmouth is using researchers with experience of root-cause decision analysis, alongside specialists in operational research techniques, information elicitation and criminal intelligence.

What is not clear is whether the goal is to create a framework similar to jMAF or one that will compete with it.

What does this mean?

Dealing with the soft skills around cybersecurity is hard. Bias, in particular, is not only hard to spot but even harder to resolve. Look around many cybersecurity teams and you will see a lack of diversity in terms of gender, religion, race and age. That means the security analysts dealing with security intelligence are predisposed to certain biases. The University of Portsmouth is looking to provide a solution to that problem.

For large enterprises with security teams all over the world, this should be interesting. It would enable them not only to discover bias in their staff but also to see where differences in bias between teams are leading to gaps in cybersecurity threat detection.
