
London-based Callsign has closed a $35 million Series A, led by Accel and early-stage investor PTB Ventures, for an authentication platform which uses deep learning technology to power adaptive access control for enterprises — saying it can verify a person is who they say they are just from a swipe on a touchscreen.
Other investors in the Series A include Allegis Capital and cybersecurity industry veteran David DeWalt’s NightDragon Security.
The company, which was founded in 2012 — though only launched its platform to major customers 18 months ago — name-checks the likes of Lloyds Bank and Deutsche Bank as clients, and says the platform has been deployed “to hundreds of thousands of users” globally at this stage.
Its approach essentially combines multi-factor authentication with fraud analytics powered by deep-learning technology, offering a platform that can adapt to potentially suspicious signals to combat the threat of unauthorized logins.
The wider aim is to help enterprises mitigate the risk of unauthorized access after login credentials have been stolen or compromised via a data breach or phishing attack.
It’s worth noting that Callsign does not replace any existing authenticator technologies — rather the aim is to enable businesses to more effectively deploy these technologies, based upon the intelligence it gathers and the policies it allows enterprises to flexibly set.
The platform works by analyzing a variety of signals in real time pertaining to each login attempt, and then adapts dynamically to offer “the most appropriate security challenge(s)” — based on its analysis of “hundreds of data-points”, according to founder and CEO Zia Hayat.
This means a user might be asked for a password, PIN, fingerprint, face or voice biometric — or “even nothing” — at the point of login.
The approach aims to balance “security with user experience”, says Hayat.
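As a rough illustration of the kind of risk-adaptive challenge selection described above, the sketch below scores a login attempt from a handful of signals and maps that score to a set of challenges. The signal names, weights and thresholds are illustrative assumptions, not Callsign's actual model.

```python
# Hypothetical sketch of risk-adaptive authentication challenge selection.
# Signal names, weights and thresholds are invented for illustration.

def risk_score(signals: dict) -> float:
    """Combine per-signal anomaly scores (0.0 = familiar, 1.0 = anomalous)
    into one overall risk score. A real system would use a trained model."""
    weights = {"location": 0.4, "device": 0.35, "behaviour": 0.25}
    # Unknown signals default to fully anomalous (1.0).
    return sum(weights[k] * signals.get(k, 1.0) for k in weights)

def select_challenges(score: float) -> list:
    """Map the overall risk score to the challenge(s) the user is asked for."""
    if score < 0.2:
        return []                                        # "even nothing"
    if score < 0.5:
        return ["password"]
    if score < 0.8:
        return ["password", "fingerprint"]
    return ["password", "fingerprint", "voice_biometric"]

# A login from a recognised device, location and behaviour pattern
# produces a low score, so no extra challenge is required:
familiar = {"location": 0.1, "device": 0.1, "behaviour": 0.1}
print(select_challenges(risk_score(familiar)))  # []
```

The thresholded tiers are one simple way to trade security against user experience: low-risk logins stay frictionless, while anomalous ones accumulate factors.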
AI and crypto mechanisms
He describes its core tech as “AI and crypto mechanisms combined with a highly intuitive policy manager”. “We have several patents (both granted and filed) as well as trade secrets,” he adds. “We have developed our own unique AI models in-house based upon a combination of techniques that our team (mainly ex-BAE Systems and Lloyds Banking Group data scientists) have developed in the deep learning space.”
Examples of the kinds of signals it’s looking at to verify identity include GPS, cell tower ID, IP, WiFi, gyroscope, accelerometer, Force Touch, screen coordinates, timings of taps, mouse movement coordinates, TCP/IP settings, clock settings, browser type — “and many more”.
“The purpose of this data analysis (this is our unique AI) is to spot potentially suspicious usage and then adapt the authentication journey (security challenges),” Hayat tells TechCrunch via email. “For example if the user has the correct password but the circumstances around this are unrecognised then the system may request that the user provides a fingerprint as well.”
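Hayat's example — correct password, but unrecognised circumstances — amounts to a step-up decision. The minimal sketch below captures that logic; the function and parameter names are assumptions for illustration, not Callsign's API.

```python
# Illustrative step-up logic for the scenario described: the password is
# correct, but the surrounding context is unrecognised, so a second
# factor is requested. Names are hypothetical.

def remaining_challenges(password_ok: bool, context_recognised: bool) -> list:
    """Return the challenges still required after a password attempt."""
    if not password_ok:
        return ["retry_password"]
    if context_recognised:
        return []                  # password alone suffices
    return ["fingerprint"]         # step up: request a second factor

print(remaining_challenges(password_ok=True, context_recognised=False))
# ['fingerprint']
```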
“From an operational perspective, the system allows businesses… to easily define and evolve policies that adapt to changing circumstances (i.e. threat landscape), either automatically based upon the data analysis or manually (by security ops team) based upon other intelligence.”
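One way such operator-defined policies might be expressed is as an ordered list of condition/challenge pairs, evaluated top-down. This structure is entirely hypothetical — the article does not describe Callsign's policy format — but it shows how a security ops team could evolve rules without code changes.

```python
# Hypothetical declarative policy table; first matching condition wins.
# The condition names and format are invented for illustration.

POLICIES = [
    # (condition over the analysed signals, challenges to require)
    ("new_device and foreign_ip", ["password", "fingerprint", "voice"]),
    ("new_device",                ["password", "fingerprint"]),
    ("True",                      ["password"]),  # default rule
]

def challenges_for(signals: dict) -> list:
    """Evaluate policies top-down against boolean signal flags."""
    for condition, challenges in POLICIES:
        if eval(condition, {}, signals):   # toy evaluator, sketch only
            return challenges
    return ["password"]

print(challenges_for({"new_device": True, "foreign_ip": False}))
# ['password', 'fingerprint']
```

A production system would use a safe rule engine rather than `eval`; the point here is only the shape of a policy manager that ops teams can adjust as the threat landscape changes.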
According to Hayat it takes around six to 10 logins to “sufficiently train” the platform to identify each user. Until then, a “non-trained journey” is executed — meaning the user is always prompted for a fixed number of factors, such as PIN and fingerprint, with the specific combination set by the…