Blackwire Labs has announced Blackwire.ai as a means of democratising cybersecurity expertise. It describes its first tech-enabled product as an “Enterprise-grade, GenAI tool that combines human expert-vetted knowledge with blockchain-powered integrity to deliver unparalleled cybersecurity insights.”
The company is led by cybersecurity experts from the commercial, public, private and military sectors. It has previously focused on its blockchain-based Trustwire technology, which it describes as “a proprietary trust layer ensuring AI integrity, data security, and accountability.” The technology aims to address organisations’ increasing concerns about GenAI fabricating data and being unable to identify where that data came from.
To ensure the trustworthiness of its data, the company developed its own methodology for evaluating source and training data. It believes this makes its platform more trustworthy than generalist AI platforms.
Josh Ray, CEO and Founder of Blackwire Labs, states, “We apply a rigorous source evaluation methodology to ensure our training data is vetted and industry-relevant. This sets us apart from generalist AI platforms and addresses the critical need for trust in AI-powered cybersecurity solutions.
“In an era where AI ‘hallucinations’ and unexplainable outputs are common concerns, we’ve built our entire system around the principle of transparency and accountability. We believe that trust in AI, particularly in the cybersecurity domain, is not just a nice-to-have – it’s an absolute necessity and how else do you unlock the true potential of AI.”
What is Blackwire.ai?
Unsurprisingly, the company has built its GenAI solution on top of Trustwire. Doing so lets it take advantage of the methodology and controls described above, and the company claims that Blackwire.ai is “Regulatory-compliance and audit-ready.”
One key to that audit-ready claim is the blockchain that Trustwire uses. It means that any data used by the AI can be identified, and the record of it is immutable. That should significantly reduce the problem of explaining how the AI arrived at any claim or statement. From a privacy perspective, it provides a way to show where data came from and to flag inaccurate data to be forgotten.
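Blackwire has not published how Trustwire's ledger is implemented, but the tamper-evident provenance described above is typically built on an append-only, hash-chained record, where each entry embeds the hash of its predecessor so that altering any past record invalidates every later one. A minimal sketch, with all names and fields hypothetical rather than Blackwire's actual design:

```python
import hashlib
import json

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

class ProvenanceLedger:
    """Append-only ledger: each entry stores the previous entry's hash,
    so the chain breaks if any earlier record is modified."""

    def __init__(self):
        self.entries = []

    def record(self, source_id: str, content_digest: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "source_id": source_id,          # where the data came from
            "content_digest": content_digest, # digest of the data itself
            "prev_hash": prev,                # link to the prior entry
        }
        entry = {"body": body, "hash": _hash(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and check each chain link."""
        prev = "0" * 64
        for e in self.entries:
            if e["body"]["prev_hash"] != prev or _hash(e["body"]) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

With such a chain, an auditor can trace any output back to a recorded source, and any after-the-fact edit to a source record is immediately detectable because verification fails from that point onward.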
Key features
In the announcement, the company says that Blackwire.ai offers:
- Collaborative Analysis: Just as an experienced professional collaborates with various team members, this feature mimics the way a senior practitioner would lead a cross-functional team through complex analyses, bringing together insights from different security domains.
- Smart Prompting: Like having a mentor who knows exactly what questions to ask and how to ask them, whether it’s a network security, application security, or compliance challenge.
- ContextCraft: Much like a seasoned consultant understands how to tailor their communication to different stakeholders. Our ContextCraft feature does just that, customizing outputs based on the user’s persona, industry, and tech stack. Whether you’re a CISO needing a high-level risk assessment or a SOC analyst requiring detailed mitigations, Blackwire AI adjusts its output accordingly.
- Dedicated Cybersecurity Models: Just as a veteran security professional would have expertise in offensive security, our dedicated models will support red team and penetration testing activities. This is akin to having a seasoned, ethical hacker on your team.
Enterprise Times: What does this mean?
There has been significant growth in AI governance solutions, but most of those relate to managing AI from other vendors. Blackwire.ai is different in several ways. The first is that the AI uses Trustwire, which is blockchain-based.
Second, the methodology that Blackwire has developed means that there is high trust in the data. The Precisely Data Integrity Trends and Insights report published last week shows that “only 12% report their data is of sufficient quality and accessibility for AI.” If Blackwire can improve that through the use of its Trustwire technology underpinning the AI, that is a significant win.
A third benefit is the strength of Blackwire’s cybersecurity skills. Few AI companies have the same knowledge and internal skills in this area.
The question is how Blackwire will insert itself into the growing AI governance market while also establishing itself as an AI player. Additionally, this is a closed rather than an open-source system, so how will it convince customers that it is the better option for building an AI?
It would also have been interesting for the company to discuss how it sees customers using the solution. It would seem to fit the growing small language model (SLM) approach, where organisations build smaller but more focused AI solutions. If that is the approach, Blackwire must show how those SLMs will interact to share data.
Perhaps the biggest outstanding question, however, is around the compliance-ready claim. There is no information yet on which compliance legislation the company has proven it can easily meet. Proof of how the audits work and how they deal with PII will be important to customer trust.