Tuesday, February 13, 2018

Messaging the use of AI against terrorist propaganda

In this ultra-communicative world we now occupy, part of the challenge faced by any authority is to get its message out there. It’s not enough to do the right thing, quietly and in a corner: you have to put it in a press release. No, more than that, you have to make a statement that shows you mean business.

Such as: “We’re not going to rule out taking legislative action if we need to do it.” That’s the money quote, the soundbite that has grabbed the headlines around the UK’s £600,000 of funding for terrorism-related image recognition.

It’s worth unpicking this action, and this statement. First, a negative: the figure sounds like a lot, until you actually think about it. To put it in perspective, InnovateUK, the governmental funding body, has allocated £6,251,375,051 in grants to technology projects over 14 years, or roughly half a billion pounds a year. That overall figure is so big precisely because, in technology, £600K doesn’t actually buy you much.
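
As a quick back-of-the-envelope check, here is that comparison in a few lines of Python, using only the figures quoted above (the rounding is mine):

    # Back-of-the-envelope sums using the figures quoted in this post.
    total_grants_gbp = 6_251_375_051   # InnovateUK grants to technology projects
    years = 14
    ai_funding_gbp = 600_000           # the terrorism-related image recognition money

    annual_grants_gbp = total_grants_gbp / years
    print(f"InnovateUK grants per year: ~£{annual_grants_gbp:,.0f}")                   # ~£446,526,789
    print(f"£600K as a share of one year: ~{ai_funding_gbp / annual_grants_gbp:.2%}")  # ~0.13%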

On the upside, it’s a big enough figure to show more than a passing interest. The government is investing in AI, and not just that, it is spending on the kind of AI that might make a difference. Which has to be seen as a good thing.

Flipping back, the trouble is that the headline news (that AI can recognise jihadist material) is subject to the law of unintended consequences. Simply put, if software becomes very good at recognising black and white scarves, the terrorists will stop wearing them.

The straightforward answer here is that the algorithms will continue to evolve to take changing imagery into account, but that doesn’t address the complexity and breadth of the problem. For example, recruiters can simply take a different tack (such as the humorous, meme-based strategy adopted by the UK far-right group Britain First).

I’m not saying this as some kind of yah-boo-sucks-it-ain’t-gonna-work rant. Thinkers and doers in both governments and corporations know that you can’t just stick an AI band-aid on a complex problem. They also know that £600K is a drop in the technological ocean. Which raises the question: why, then, is this headline news?

Behind the statements are messages of intent: that public and private institutions alike recognise they have created a monster they don’t know how to control, and that they need to work together to deal with the consequences. The intriguing thing is that, to talk about it, they make use of the same democratising yet dangerous tools that caused the damage in the first place.

We are all in the same boat, left trying to interpret what is being said, wherever it comes from. Perhaps (indeed, this could serve as a prediction) we will soon be able to apply AI to all communications from any organisation, filter out the nasty stuff and see what lies behind the messaging. In the meantime, we need to recognise that what we see on the surface is as much about controlling the narrative as it is about representing what is actually going on.

via Tech Republiq
