The crowdsourced surveillance that unfolded in the wake of the terrorist attacks in Boston is no surprise to futurists. Indeed, it sounds rather familiar. Poking around, I found this scenario vignette in a paper I wrote for the Proteus security conference five years ago:
They thought they had the assassin: he was at the rally when it happened, affiliated with a radical pro-male group in his homeland; he had ranted violently against Ms. Harstad’s UN activities on his blog, and he had a traditional blowgun like that used to deliver the tiny dart. Crime scene investigators ran the coordinates against public images and video: there were approximately 15,000 still images and 300 videos of the square that day. After a moment’s collation, the computer delivered the three-dimensional reconstruction of the hours around the speech. The team zoomed in on where Mwazi claimed he had been standing. There he was. They watched him from the front and each side, and it became clear: he never raised his hands near his mouth, and never used a visible comm device or even seemed to say anything. Unless they could hold a man for scowling, they were going to have to let him walk.
As I alluded to in the vignette, it will be possible to automate many of the processes now being done laboriously and partly by hand. Experiments have already shown that large collections of public photos can be used to reconstruct 3D models of places.
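The geometric core of such reconstruction is triangulation: the same scene point seen in two photos taken from different positions pins down its location in 3D. The sketch below is a toy illustration of that one step, assuming two known 3x4 camera projection matrices; real structure-from-motion pipelines also have to estimate the cameras themselves, from thousands of crowd images.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates in two views
    (linear DLT method: each view contributes two rows of constraints)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two hypothetical cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.25, 4.0])  # a point somewhere in the square
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))  # recovers X_true
```

Run over every matched feature in 15,000 snapshots, this is how a single navigable 3D scene of the square could be assembled, which is what makes the "watch him from the front and each side" moment in the vignette plausible.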
Meanwhile, the spontaneous crowdsourcing around the Boston case has shown uneven results. The most prominent effort, on Reddit, failed to spot the real suspects while singling out many uninvolved people, and crowd-based efforts also appear to have misidentified an innocent missing person as a potential terrorist. On the other hand, a bystander did discover a high-quality image of a suspect in one of his own snapshots.