📐Switching back to my previous subject
"kill your darlings"
Explanation
While working on my topic, I came to the conclusion that bias in the field of COVID-19 cough (self-)diagnostics is too difficult for me to represent, which is why I chose to fall back on biases in predictive policing systems.
The topic does not change very much: it is still about biases in artificial intelligence systems, but now I want to link this to predictive policing systems that are applied to people.
Security systems are everywhere, and surveillance cameras are becoming more and more sophisticated. The police use these systems in combination with artificial intelligence to create risk profiles. As a result, people from a certain country or of a certain ethnicity are more likely to be classified as suspicious by the system.
I think this is a huge problem, because we carry an ethical responsibility for the artificial intelligence systems we create. Biases can creep into these systems, leading police officers to treat certain groups of people as suspicious without those persons having done anything wrong.
Because the systems are fed with a limited selection of datasets and historical data, the biases remain in the system and certain people are automatically labeled as potential suspects.
With this concept I want to show the biases that are projected onto us as human beings without our knowledge. The purpose of my application is to create awareness that these systems are already being applied to us and that we have to think about the extent to which we want to apply them in our society.
Concept
The idea is to build a risk profile based on input that the webcam can register (like a surveillance camera on the street). The algorithm will be fed with data to make a profile of you as a person: think of estimated age, ethnicity, gender, facial expressions, etc.
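To make this concrete, here is a minimal sketch in Python of what the webcam capture and the estimated profile could look like. Only the OpenCV webcam calls are a real library; the profile fields and the estimate_profile() function are placeholders I made up for this sketch, standing in for whatever face-analysis model is eventually used.

```python
# Sketch: grab one webcam frame and map it to an estimated profile.
# The EstimatedProfile fields and estimate_profile() are placeholders,
# not an existing model; only the OpenCV calls are real.
from dataclasses import dataclass

import cv2  # OpenCV, used here only to read a frame from the webcam


@dataclass
class EstimatedProfile:
    age: int          # estimated age in years
    gender: str       # estimated gender label
    ethnicity: str    # estimated ethnicity label
    expression: str   # dominant facial expression (e.g. a CK+ style label)


def estimate_profile(frame) -> EstimatedProfile:
    """Placeholder: a real version would run a face-analysis model on the frame."""
    return EstimatedProfile(age=30, gender="male", ethnicity="unknown", expression="neutral")


capture = cv2.VideoCapture(0)   # 0 = default webcam, like a surveillance camera
ok, frame = capture.read()      # read a single frame
capture.release()

if ok:
    print(estimate_profile(frame))
```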
Once a profile has been created, the system gives you insight into how likely it considers you to be a suspicious person and which crime you could possibly commit.
By collecting different types of crime data, I want to connect them to the "predicted profile data" to build a risk profile. This will show the different types of biased labels which the system has attached to you based on historical data.
With this concept I want to show that your appearance, facial expression, ethnicity and gender may be used by predictive policing systems, and that changing one of these variables can give your risk profile a different outcome. This can lead to discrimination against certain persons, ethnic profiling and a self-fulfilling prophecy maintained by predictive policing systems.
To show the biases, you can compare your data with other people's, so that you get a "who is more likely to commit a certain crime" comparison.
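Below is a minimal sketch of how the risk profile, the effect of changing a single variable, and the "who is more likely" comparison could work. All rates in this sketch are invented for illustration; they are not the real CBS figures and the grouping is simplified.

```python
# Sketch: a biased "risk profile" built from historical rates per group.
# All numbers are invented for illustration; they are NOT real CBS data.
from dataclasses import dataclass, replace


@dataclass
class Profile:
    age_group: str    # e.g. "18-25" or "25-45"
    gender: str
    nationality: str


# Illustrative "historical" suspect rates per crime type and (age group, gender).
RATES = {
    "theft": {
        ("18-25", "male"): 0.08, ("18-25", "female"): 0.03,
        ("25-45", "male"): 0.04, ("25-45", "female"): 0.02,
    },
    "vandalism": {
        ("18-25", "male"): 0.06, ("18-25", "female"): 0.02,
        ("25-45", "male"): 0.02, ("25-45", "female"): 0.01,
    },
}


def risk_profile(profile):
    """Look up the historical rate for this profile's group, per crime type."""
    key = (profile.age_group, profile.gender)
    return {crime: rates.get(key, 0.0) for crime, rates in RATES.items()}


person_a = Profile(age_group="18-25", gender="male", nationality="NL")
person_b = replace(person_a, gender="female")   # change only one variable

print("A:", risk_profile(person_a))   # scored purely by group membership
print("B:", risk_profile(person_b))   # same person otherwise, different outcome

# "Who is more likely to commit a certain crime" comparison
for crime in RATES:
    winner = "A" if risk_profile(person_a)[crime] > risk_profile(person_b)[crime] else "B"
    print(f"{crime}: the system labels person {winner} as more likely")
```

The point of the sketch is that the score changes only because the group label changes, which is exactly the bias the application is meant to expose.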
Data collection to use
Geregistreerde criminaliteit; soort misdrijf, regio (Registered crime; type of offence, region) (https://opendata.cbs.nl/statline/#/CBS/nl/dataset/83648NED/table?fromstatweb)
Verdachten van misdrijven; leeftijd, geslacht en recidive (Crime suspects; age, gender and recidivism) (https://opendata.cbs.nl/statline/#/CBS/nl/dataset/81997NED/table)
Registraties en aanhoudingen van verdachten; nationaliteit (Registrations and arrests of suspects; nationality) (https://opendata.cbs.nl/statline/#/CBS/nl/dataset/82315NED/table) (see the loading sketch after this list)
Facial expressions (https://www.kaggle.com/shawon10/ckplus)
Faces collected from international students (with their consent)
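As a rough sketch of how the three CBS StatLine tables above could be loaded, assuming the open-source cbsodata package (a Python client for the CBS OData API): the exact column names differ per table and still need to be checked (for example via cbsodata.get_meta) before connecting them to the profile data.

```python
# Sketch: load the CBS StatLine tables listed above into pandas DataFrames.
# Assumes the `cbsodata` package (pip install cbsodata); the table IDs come
# from the URLs above. Column names must still be checked per table.
import cbsodata
import pandas as pd

TABLES = {
    "registered_crime": "83648NED",      # Geregistreerde criminaliteit; soort misdrijf, regio
    "suspects_age_gender": "81997NED",   # Verdachten van misdrijven; leeftijd, geslacht en recidive
    "suspects_nationality": "82315NED",  # Registraties en aanhoudingen van verdachten; nationaliteit
}

datasets = {}
for name, table_id in TABLES.items():
    datasets[name] = pd.DataFrame(cbsodata.get_data(table_id))
    print(name, datasets[name].shape)
```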
For more information, you can check my new design brief:
📌Designbrief v2