This idea is original, and it made me think of the saying that the best firewall would be a human being analyzing packets one by one, manually.
I don't think so, no. Regular users lack the understanding of machines' inner workings and the protocol subtleties needed to determine what to do in even the most trivial scenarios. They want to watch a movie they got from a friend, but they have no clue if that translates to clicking Yes or No in some firewall pop-up prompt.
Of course we could hire professional security experts to somehow perform near real-time review of someone else's traffic--but then, they have no way of knowing what the user really wants or expects his computer to do. Clicking on a link to some obscure .exe file may be a phishing attack or a legitimate download of a software update for a Mongolian Tetris-like game we've never heard of. Without feedback from the user, it's hard to tell what should be done.
Why do you think we cannot develop software that behaves autonomously like a human being in such restricted environments?
We did! Most antivirus programs and personal firewalls usually attempt to implement a certain level of "smart" adaptive protection, rather than forcing the user to define everything from scratch. I doubt this could be much improved without reading the user's mind and scanning it for true intentions and motivation for each mouse click.
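To make the idea of "smart" adaptive protection concrete, here is a minimal, hypothetical sketch in Python (the class, the program names, and the policy are invented for this example; real personal firewalls are far more involved): instead of prompting the user about every connection, the firewall remembers the first allow/deny decision for each program-and-port pair and reuses it afterward.

```python
# Toy model of an "adaptive" personal firewall: it asks the user only the
# first time it sees a (program, port) combination, then remembers and
# reuses that answer. Purely illustrative -- not a real product's design.

class AdaptiveFirewall:
    def __init__(self):
        self.learned = {}  # (program, port) -> True (allow) / False (deny)

    def check(self, program, port, ask_user):
        key = (program, port)
        if key not in self.learned:
            # First time we see this combination: ask once, then remember.
            self.learned[key] = ask_user(program, port)
        return self.learned[key]

fw = AdaptiveFirewall()
# Simulate a user who allows the browser and denies everything else.
user = lambda prog, port: prog == "browser"

print(fw.check("browser", 443, user))     # prompts the user -> True
print(fw.check("browser", 443, user))     # reuses the learned answer -> True
print(fw.check("p2pclient", 6881, user))  # prompts -> False
```

Of course, as the answer points out, this only pushes the hard part back onto the user: the tool still cannot know the true intention behind each click.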
Four years ago, you wrote an interesting paper entitled Against the System: Rise of the Robots for Phrack #57. The main idea was to use search engines' spiders as an attack vector: you just write specially formatted URLs on a web page and wait for the spider to follow those links. After all this time, do you see a better or worse situation?
Our ability to understand how search engines address potential security threats and other abuse scenarios is very limited; most of them, Google included, are very secretive and not willing to discuss their business. As such, I can only guess--but my impression is that a good number of major crawlers implemented basic checks to trap the most obvious exploitation attempts, simply by rejecting URLs that either appear clearly malicious or very strange.
That said, I don't think the problem can ever be fully addressed; what exploits one web script is a valid and expected parameter to another.
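A crude sketch of the kind of URL sanity check described above, in Python. The patterns here are invented for illustration; real crawlers' filtering rules are not public.

```python
import re

# Hypothetical examples of patterns a crawler might treat as "clearly
# malicious or very strange": path traversal, null bytes, shell
# metacharacters, embedded markup. Invented for illustration only.
SUSPICIOUS = [
    re.compile(r"\.\./"),          # directory traversal
    re.compile(r"%00"),            # encoded null byte
    re.compile(r"[;|`]"),          # shell metacharacters
    re.compile(r"<script", re.I),  # embedded script tag
]

def looks_malicious(url: str) -> bool:
    """Reject a URL if any suspicious pattern appears in it."""
    return any(p.search(url) for p in SUSPICIOUS)

print(looks_malicious("http://example.com/page?id=42"))              # False
print(looks_malicious("http://example.com/cgi?f=../../etc/passwd"))  # True
```

As the answer notes, such filters are inherently heuristic: a parameter that exploits one web script is perfectly valid, expected input to another, so no static blacklist can fully close the gap.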
You developed a chat bot that uses Google to create answers. Obviously it's far from perfect, but do you think that AI software could use Google as a repository of human knowledge to make decisions such as what is spam and what is a valid email?
Certainly Google and other search engines are in possession of databases that may come in handy for various applications--AI, automated learning, and content classification included. Unfortunately for us mortals, they do not share their databases with others; they only provide you with a very limited search front end or similarly limited API, which is obviously not enough to write much more than a toy chatbot (and even that is a possible violation of their Terms of Service, as I learned from a friendly cease-and-desist-style letter a while ago).
I don't know if you play modern videogames, but I saw some projects about AI on your website. I think we could learn something from the work game developers have done with AI. Both operate in a restricted context (the rules of the virtual world, the rules of protocols), but in my experience their "intelligence" is much better than that of most security tools.
Funny you say that--many gamers complain that mainstream game AIs are awful and too dumb, and ruin all the realism.
As to your question--AI is a meaningless term; it can mean just about any algorithm that appears to have certain poorly defined "human" characteristics, or an algorithm that mimics a certain aspect of wetware processing, or simply a program that adapts in some way. As such, yes, some of the techniques and tricks collectively labeled "AI" can be of some use--if only we had a good idea of how to put this to work. ;-)
Some of them seem to work as regular expression catchers. Think of Snort, for example; it seems to be a pattern-matching tool. Should we start playing with AI in security too?
Snort could be greatly improved by simply working on signature quality, making it more stateful, incorporating more heuristics, and so on. (Some may argue that heuristic algorithms are AI, but that's what most commercial AV and IDS products can already do, and so this is beyond the scope of the question.)
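To make the "more stateful" point concrete, here is a toy contrast in Python between matching a signature against individual packets and matching it against a reassembled stream. The signature and packets are invented, and this is not how Snort is implemented; it only illustrates why statefulness matters.

```python
import re

# A single invented "signature": flag traffic that requests /etc/passwd.
SIGNATURE = re.compile(rb"/etc/passwd")

def stateless_match(packet: bytes) -> bool:
    """Stateless check: look at each packet in isolation."""
    return bool(SIGNATURE.search(packet))

def stateful_match(packets) -> bool:
    """Slightly stateful check: reassemble the stream first, so a
    signature split across packet boundaries is still caught."""
    stream = b"".join(packets)
    return bool(SIGNATURE.search(stream))

# The malicious string is split across two packets.
pkts = [b"GET /cgi?file=/etc/pa", b"sswd HTTP/1.0"]

print(any(stateless_match(p) for p in pkts))  # False: split evades per-packet matching
print(stateful_match(pkts))                   # True: caught after reassembly
```

The same evasion-by-fragmentation idea is why real IDS engines perform stream reassembly before applying signatures.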
It's easy to claim that "adding AI" would help (or to claim the opposite), but as I mentioned, this means nothing--it's like saying "computers would be faster if we invented some cool new computing method." Sure, but so what? Unless you're talking about a very specific use of a specific, known algorithm, you're not going to get anywhere.
Federico Biancuzzi is a freelance interviewer. His interviews have appeared in publications such as ONLamp.com, LinuxDevCenter.com, SecurityFocus.com, NewsForge.com, Linux.com, TheRegister.co.uk, ArsTechnica.com, the Polish print magazine BSD Magazine, and the Italian print magazine Linux&C.