Quarterly insights: Cybersecurity
RSA 2023 and Q2 highlights: Key insights and takeaways

We present our key takeaways from the 2023 RSA cybersecurity conference and our follow-up meetings and conversations during the quarter.
- AI, which has been used in cybersecurity for years, was a hot topic, but we were surprised to hear many attendees focusing on the challenges of using AI in cybersecurity.
- Generative AI in cybersecurity remains at an early stage, but attendees identified some areas where it is likely to be most useful in the near term.
- Expanding enterprise attack surfaces, increasing complexity, and broader purviews for CISOs are all escalating cybersecurity challenges despite growing cybersecurity budgets.
- The worsening cybersecurity talent shortage is pushing enterprises to continue consolidating their cybersecurity solutions with fewer vendors and to increase their use of managed cybersecurity services.
TABLE OF CONTENTS
- AI use promising, but comes with challenges
- Generative AI still early in adoption for cybersecurity
- Expanding attack surface and greater complexity keep security top of mind
- Cybersecurity budgets continue to grow…
- …but so does budget pressure due to expanding scope
- Worsening talent shortage requires more vendor consolidation and managed cybersecurity services
- Still an exceptionally fertile market
- Cybersecurity index near one-year high
- Cybersecurity M&A: Notable transactions include Armorblox, Absolute Software
- Cybersecurity private placements: Notable transactions include Blackpoint, CyberQP, and NetRise
AI use promising, but comes with challenges
Artificial intelligence (AI) was a hot topic at RSA, as it has been in many technology areas this year. The promise of AI in cybersecurity is well known, and AI-based cybersecurity solutions have long existed. For example, many solutions use AI to establish baselines of normal data traffic and user behavior and then detect anomalies that indicate insider threats, account compromises, and unauthorized activity. By continuously monitoring and analyzing traffic and user behavior, AI can identify early warnings of breaches and generate alerts about potentially malicious activity.
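As a rough illustration of this baselining approach (not any specific vendor's product), the sketch below uses scikit-learn's IsolationForest to learn "normal" session behavior from historical data and flag outliers; the feature names and values are assumptions chosen for illustration.

```python
# Minimal sketch: learn a baseline of normal user/network behavior from
# historical session features, then flag anomalous new activity.
# Features and numbers below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative per-session features: bytes transferred, login hour, failed logins
normal = np.column_stack([
    rng.normal(5e5, 1e5, 5000),   # typical data volume
    rng.normal(13, 3, 5000),      # mostly business-hours activity
    rng.poisson(0.2, 5000),       # occasional failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new activity; -1 marks an anomaly worth an alert, 1 looks normal
new_sessions = np.array([
    [4.8e5, 14, 0],   # ordinary daytime session
    [9.0e6,  3, 12],  # large off-hours transfer with many failed logins
])
print(model.predict(new_sessions))  # e.g. [ 1 -1 ]
```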
What caught our attention this year was how many of our conversations focused on the challenges of using AI in cybersecurity. One example is that AI systems create a high level of noise, or false positives. While AI-based solutions can find novel threats that signature-based solutions cannot, they also tend to flag many benign anomalies, so only a fraction of their alerts are relevant and actionable for cybersecurity analysts. To address this, AI-based systems are being paired with non-AI systems that filter their output, reducing the workload they create and making them more practical to implement.
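A minimal sketch of that pairing idea follows, with invented rule names and thresholds: raw anomaly alerts from an AI detector are passed through simple deterministic rules (an allowlist and a maintenance window) before reaching an analyst queue.

```python
# Minimal sketch (hypothetical rules, not a real product's logic):
# suppress AI anomaly alerts that deterministic context already explains.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    hour: int             # hour of day the activity occurred
    anomaly_score: float  # higher = more anomalous, from the AI model

# Hypothetical non-AI context an operations team already maintains
KNOWN_BACKUP_IPS = {"10.0.0.5", "10.0.0.6"}   # nightly backup servers
MAINTENANCE_HOURS = range(1, 4)               # 01:00-03:59 change window
SCORE_THRESHOLD = 0.8

def should_escalate(alert: Alert) -> bool:
    """Drop alerts the rules explain; escalate the rest to analysts."""
    if alert.source_ip in KNOWN_BACKUP_IPS and alert.hour in MAINTENANCE_HOURS:
        return False  # expected bulk transfer, not a threat
    return alert.anomaly_score >= SCORE_THRESHOLD

alerts = [
    Alert("10.0.0.5", 2, 0.95),    # backup job: suppressed despite high score
    Alert("203.0.113.7", 2, 0.91), # unknown host off-hours: escalated
]
print([a for a in alerts if should_escalate(a)])
```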
