In NYC, companies will have to prove their AI hiring software isn't sexist or racist
(www.nbcnews.com)
Hey, I'm a machine learning engineer who works with people data. Generally you measure bias in the training data, the validation sets, and the outcomes (on an ongoing basis; AIF360 is a common library and approach). There are lots of ways to measure bias and/or fairness, and just checking whether a protected feature was used isn't considered enough by any standard or practitioner. There are also ways to detect and mitigate some of the proxy relationships you're pointing to. That said, I'm 100% skeptical that any hiring algorithm won't be extremely biased. A lot of big companies have tried and quit because, despite following all the right steps, the models were still biased: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. Many of the metrics used to report fairness also have deep flaws (disparate impact, for example).
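To make the disparate impact metric concrete: here's a minimal sketch of computing it by hand for a binary hiring outcome. This is not the AIF360 API (which wraps metrics like this in dataset/metric classes); the function name and toy data are my own for illustration. The ratio compares selection rates between groups, and the common "four-fifths rule" flags values below 0.8.

```python
def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of selection rates:
    P(hired | unprivileged group) / P(hired | privileged group).
    outcomes: list of 0/1 hiring decisions; groups: group label per candidate.
    """
    def selection_rate(group):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(decisions) / len(decisions)
    return selection_rate(unprivileged) / selection_rate(privileged)

# Toy data: 1 = hired, 0 = rejected
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A hired at 3/4, group B at 1/4 -> ratio 0.33, well under 0.8
print(disparate_impact(outcomes, groups, privileged="A", unprivileged="B"))
```

One flaw you can already see in this toy: the ratio says nothing about *why* rates differ (base rates, qualified-applicant pools), which is part of why relying on it alone is criticized.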
All that said, the current state is that there are no reporting requirements, so 90% of the time vendors don't even do the minimum: doing it properly would cost a lot more and get in the way of the "AI will solve all your problems with no effort" narrative they want to put forward. So I'm happy to see any regulation coming into place, even if it won't be perfect.