How to write effective AI policies

For your first question, I think you're correct. Policymakers should define guardrails, but I don't think they should do it for everything. The EU calls these high-risk applications. And perhaps from there we can develop models that help us think about what the risks are, where to spend more time, and where policymakers should be involved or not.

When it comes to co-design and co-evolving feedback, I'm a big fan of regulatory sandboxes. There's an article published in an Oxford University Press book about incentive-based grading systems. But conversely, I think you also have to take reputational risk into account.

As we move into a more digital world, developers need to exercise due diligence. As a company, we can't afford to put out what we think is the best algorithm for an autonomous system and have it end up on the front page of a newspaper, because that reduces consumer confidence in the product.

What I'm trying to say is that on both sides, I think it's worth discussing certain guardrails, such as around facial recognition technology, because it lacks technical accuracy when applied to all populations. When it comes to disparate impacts on financial products and services, there are good models I've found in my work in the banking industry, because those models actually have triggers, and regulators who help us understand which proxies actually produce disparate impacts. We've seen this in the housing and appraisal markets, where AI is being used to replace subjective decision-making but is contributing to the very kinds of discrimination and predatory appraisals we see. Sometimes policymakers need to impose guardrails, but more than that, we need to be proactive. I always tell policymakers: you can't blame the data scientists if the data is terrible.

Anthony Green: Right.

Nicole Turner Lee: Put more money into R&D. Help us create better datasets that aren't overrepresented in certain areas or underrepresented when it comes to minority populations. The important thing is that these efforts must work in tandem. I don't think you'll get a good solution if policymakers alone lead this, or if data scientists alone lead it in certain areas. It really requires people working together and collaborating on what those principles are. It's not the computers. We know what we're doing when we create these models, whether it's algorithms, autonomous systems, or ad targeting. We know! None of us in this room can sit back and say we don't understand why we use these technologies. We know, because there is precedent for how they have been magnified in our society, but they require accountability. Who holds us accountable for these systems we're creating?

Anthony, it's very interesting, as many of us have watched the conflict in Ukraine over the last few weeks. My daughter, she's 15, comes up to me with her various TikToks and other things and says, 'Mom, did you know this is going on?' And I found myself going down that road with her, and in a way, without realizing it, I was drawn deep into the conversation, so I had to pull myself back out of it.

Anthony Green: Yes.