STOCKHOLM, SWEDEN - When a picture of you is online, it creates the potential for surveillance from across the world.
Adam Harvey, director and founder of VFRAME, a computer vision project based in Berlin, Germany, spoke with Kiripost in Sweden about data, artificial intelligence and his use of machine learning to detect munitions in Ukraine.
Harvey, whose firm develops open-source software for human rights researchers and works mainly on object detection algorithms, was in Sweden for the Stockholm Internet Forum (SIF) last month.
Harvey said that when people share photographs online, it is hard for them to understand how much information they are giving away. Even faces that appear in the background of a photograph can contain enough information for facial recognition, he said, declining a request to have his picture taken for the interview.
“More specifically, all you need is between 40 by 40 and 100 by 100 pixels. When you take a photo on a 20-megapixel phone, there are a lot of faces that are at high resolution, so it creates a threat. It creates a danger for people that aren’t expecting to be in photographs or don’t want to be photographed.
“I think we have to understand the reality of the internet. The internet is a place where it’s difficult to defend against surveillance because once data is published, it’s exposed to surveillance from all over the world.”
Data is the new oil
Harvey said that many people have heard the expression that data is the new oil. Over time, people have realised that data holds a lot of value for many different actors around the world, and they have to understand that when we publish data, we give something away for free and often get nothing in return.
“In this new era of artificial intelligence, data is an important commodity for developing algorithms. When you post photos on social media, you can be 100 percent sure that those images are being used by someone, somewhere to develop their artificial intelligence and the question you should ask yourself is what are you getting in return? Because the answer, unfortunately, is usually nothing.”
Trust the person
Harvey also touched on the issue of journalists using AI to write stories, saying it is important that people read over the output because the technology risks eroding trust.

Using AI to write stories is both the present and the future, it is already happening, Harvey said, adding that there is no turning back at this point.
“We have to accept that there’s so much data published that we can use that data to regenerate more data. It creates a weird skew in the reality of the internet, because now so much of the data we know is untrue,” he said, adding that the “problem is it will decrease trust”.
“As a reader, I think it’s important to look at the author and make sure that they are taking responsibility because it’s okay to use these tools as long as someone reads over and makes sure that everything is true,” he said.
“We have to trust the person, we put the responsibility on the author and on the journalist.”
Harvey also talked about VFRAME’s project to develop machine learning software that can detect munitions, including cluster munitions, in videos posted online. He said it uses computer vision models trained on images created with 3D modelling.
“It's a new breakthrough to be able to use what’s called synthetic data,” he said. The software is already detecting cluster munitions in videos from Ukraine, including submunitions used in Russian attacks. This automates perception work at a scale that a small group of researchers could not otherwise accomplish.
The videos flagged by the software are then provided to a legal team of researchers to help build cases for accountability and justice for what is happening in Ukraine, he said.
The software is being developed as part of a pilot project with Berlin-based human rights group Mnemonic, and is now being run on archived videos from Syria, with good results in detecting the leftover remains of cluster bombs that were used there.
“So, it's at the stage where it does need more work to be more effective and needs to include more objects. I would say it has passed the prototype stage and is now being optimised and scaled up, accuracy is being improved, things like that.”
He added that these tools will eventually be of great benefit to human rights researchers analysing videos and images from conflict zones.