Well, I work in this area, and I can tell you that many of the "automatically learned" models still depend on which features you pick.
For example, if you use "race" as a feature in crime prediction, the historical arrest data in the US skews against "black" people, and the statistical model will learn that skew. However, humans decide which features to use to begin with.
In the camera example, if you use both white people's and black people's photos to train the model, black faces will tend to form one tight cluster in the learned feature space, which leads to problems. But if you train the model only on black people's data, the result will be totally different.
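A minimal sketch of that first point, with made-up numbers: a model that is handed a group label as a feature will faithfully reproduce whatever skew exists in the historical records, because that is all "learning" means here.

```python
from collections import defaultdict

# Hypothetical historical records: (group_label, was_arrested).
# The skew is in the data, not in the algorithm.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

def fit_group_rates(rows):
    """Estimate P(arrested | group) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # group -> [arrests, total]
    for group, arrested in rows:
        counts[group][0] += arrested
        counts[group][1] += 1
    return {g: a / n for g, (a, n) in counts.items()}

rates = fit_group_rates(records)
print(rates)  # group "A" gets twice the predicted rate of group "B"
```

Whatever imbalance the records carry, the fitted rates carry too — garbage in, garbage out.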
Geniuses like Stephen Hawking and Elon Musk warn of the dangers of Terminator-like AIs running amok, but this NYT writer wants you to know that's just silly "hand wringing". The real problem with artificial intelligence is that it's mostly being created by white men. And since white men are racist and sexist, AI will be too.
An even more serious problem is that, when AIs crunch all the crime data, they give racist advice to the police. From the article: "Police departments across the United States are also deploying data-driven risk-assessment tools in 'predictive policing' crime prevention efforts. In many cities, including New York, Los Angeles, Chicago and Miami, software analyses of large sets of historical crime data are used to forecast where crime hot spots are most likely to emerge; the police are then directed to those areas... this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less."
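The runaway effect the article describes can be sketched as a toy feedback loop (hypothetical numbers, not real crime statistics): patrols are sent where crime was recorded, and crime is recorded where patrols are sent, so an initial skew in the records never corrects itself even when the underlying rates are identical.

```python
# Two neighborhoods with IDENTICAL true crime rates, but historically
# skewed records. Each round, 100 patrols are allocated in proportion
# to last round's recorded crime; recorded crime is proportional to
# patrol presence times the true rate.
true_rate = {"north": 0.1, "south": 0.1}   # same underlying rate
recorded = {"north": 10, "south": 30}      # skewed starting records

for _ in range(5):
    total = sum(recorded.values())
    patrols = {k: 100 * v / total for k, v in recorded.items()}
    recorded = {k: patrols[k] * true_rate[k] for k in recorded}

total = sum(recorded.values())
share = {k: recorded[k] / total for k in recorded}
print(share)  # "south" keeps its inflated share despite equal true rates
```

The 25/75 split from the initial records persists round after round: the system "confirms" its own deployment decisions.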
If racist trees weren't bad enough, now we have racist robots.
Quote:
Well, I work in this area, and I can tell you that many of the "automatically learned" models still depend on which features you pick. For example, if you use "race" as a feature in crime prediction, the historical arrest data in the US skews against "black" people, and the statistical model will learn that skew. However, humans decide which features to use to begin with.
If the machine's purpose is to predict criminal activity and advise police on where to deploy, I doubt the results would depend on it knowing about race. Dozens of other categories of data would lead it to predict more crime in black neighborhoods.
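That proxy effect is easy to demonstrate with a toy sketch (made-up rows): the model below never sees the group label at all — only a neighborhood that happens to correlate with it — yet it recovers the same skew from the records.

```python
from collections import defaultdict

# Hypothetical rows: (neighborhood, group, arrested). The group column
# is shown only for the reader; the model uses neighborhood alone.
rows = [
    ("east", "B", 1), ("east", "B", 1), ("east", "B", 0),
    ("west", "W", 0), ("west", "W", 1), ("west", "W", 0),
]

arrests = defaultdict(lambda: [0, 0])  # neighborhood -> [arrests, total]
for hood, _group, arrested in rows:
    arrests[hood][0] += arrested
    arrests[hood][1] += 1
rate = {h: a / n for h, (a, n) in arrests.items()}
print(rate)  # race never entered the model, but the skew comes back anyway
```

Dropping the sensitive column doesn't help when another column encodes the same information.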
Quote:
In the camera example, if you use both white people's and black people's photos to train the model, black faces will tend to form one tight cluster in the learned feature space, which leads to problems. But if you train the model only on black people's data, the result will be totally different.
I imagine clearly distinguishing people with dark skin, hair and eyes is just a fundamentally harder task for the software, but doable in the long run.
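One toy way to see why it can be harder: many detectors rely on brightness gradients, and an image whose tones are compressed into a narrow, darker range simply carries a weaker edge signal. A sketch with made-up pixel values:

```python
def edge_strength(pixels):
    """Mean absolute difference between neighboring pixels (toy edge signal)."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)

# The same alternating pattern rendered over a wide brightness range
# versus a darker, narrower one (hypothetical numbers).
well_lit = [200, 120, 210, 110, 205]
low_contrast = [60, 40, 62, 38, 61]

print(edge_strength(well_lit), edge_strength(low_contrast))
# the low-contrast version yields a much weaker signal
```

Better sensors, exposure handling, and — crucially — representative training data all attack this, which is why "doable in the long run" seems right.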
Quote:
Geniuses like Stephen Hawking and Elon Musk warn of the dangers of Terminator-like AIs running amok, but this NYT writer wants you to know that's just silly "hand wringing". The real problem with artificial intelligence is that it's mostly being created by white men. And since white men are racist and sexist, AI will be too. An even more serious problem is that, when AIs crunch all the crime data, they give racist advice to the police. If racist trees weren't bad enough, now we have racist robots.
You seem to frequent a lot of white supremacy sites...
On the op-ed itself: it mentioned both racism and sexism in technology. It also covered more than just black people's photo issues with AI; it noted, for example, that internet job postings automatically show female users lower-paying jobs than male users see when looking for jobs.
IMO, as a couple of people here have said, this is actually a more important issue than the "Terminator" scenario of killer robots, because Google especially is part of nearly everyone's life today.
Also, FWIW, the op-ed piece did not mention Stephen Hawking...
Quote:
Originally Posted by pknopp
Most of this is simply the fact that new technology takes time to work the bugs out. It is a good argument for keeping it out of the hands of law enforcement until it is more precise, though.
I agree with this. I also feel it is a good thing to review the kinks in the technology and to make sure developers are creating apps and websites that can handle a broad array of people.
Also, on the earlier comment that darker-skinned people are more difficult to discern in pictures: that is silly. The majority of the world's population is brown- to dark-skinned, and all sorts of non-white people use these apps and technologies.
This is a common data science problem: "data is biased" has always been an issue. It also isn't really "artificial intelligence"; it's just a predictive model that uses simple regressions and curve-fitting to decide what to do next. I don't think "crime prevention" is a useful application for machine learning. But if more black people like yours truly learn machine learning, we can counteract these systems with more sophisticated models of our own. I like the idea of "racist big data": it means there is an actual market for black people who want to go into the vast field of machine learning and data science.
These are popular articles, and I'm still learning about machine learning myself. But this sounds like an example of "supervised learning": take historical data, run some sort of regression, and come up with a prediction. It is among the crudest applications of machine learning. It would be interesting to see what the best model is; I may need to chase down the methodology. I've been looking to cut my teeth on some machine learning problems, and I may have just found one that interests me.
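For what it's worth, the crudest version of that supervised-learning pipeline fits in a few lines: fit a line to historical (x, y) pairs by ordinary least squares, then predict for a new input. Toy numbers, not real crime data:

```python
# Historical training pairs (made up, roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

# Ordinary least squares for y = w*x + b, closed form.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def predict(x):
    """Predict y for a new, unseen x using the fitted line."""
    return w * x + b

print(w, b, predict(5.0))  # slope near 2, prediction near 10
```

Everything the model "knows" comes from those historical pairs — which is exactly why skewed history produces skewed forecasts.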