How could we possibly expect that there wouldn’t be bias? It’s based on the patterns that humans use. Humans have bias. The difference is that humans can recognize their bias and work to overcome it. As far as I know, ChatGPT can’t do that.
Because they don’t know what “AI” is, they think it’s this technical thing that just knows things, all the things, magically. I’ve seen confident statements like “we use AI in our recruiting process because it has no bias!!” 🤦♂️
Can they? I’m not convinced.
You do it with math. Measure how many women you have in C-level positions at the company and introduce deliberate bias into the hiring process (human or AI) to steer the company towards a target of 50%.
It’s not easy, but it can be done. And if you have smart people working on it you’ll get it done.
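Roughly speaking, the “do it with math” idea might look something like the sketch below. This is a minimal, hypothetical illustration in Python; the field names, the adjustment rule, and the numbers are made up, not anyone’s actual hiring system.

```python
# Hypothetical sketch: measure current representation, then apply a
# deliberate, explicit correction toward a 50% target. Field names and
# the adjustment rule are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    gender: str      # "F" or "M" (self-reported, illustrative)
    score: float     # whatever the screening step outputs

def c_level_share(exec_genders: list[str]) -> float:
    """Fraction of current C-level employees who are women."""
    if not exec_genders:
        return 0.0
    return sum(1 for g in exec_genders if g == "F") / len(exec_genders)

def adjusted_score(cand: Candidate, current_share: float,
                   target: float = 0.5, strength: float = 0.1) -> float:
    """Boost under-represented candidates in proportion to the gap
    between the measured share and the target."""
    gap = target - current_share   # positive if women are under-represented
    bonus = strength * gap if cand.gender == "F" else 0.0
    return cand.score + bonus

# Example: the C-suite is 20% women, so a female candidate's score gets
# a small, explicit, measurable bump.
share = c_level_share(["M", "M", "M", "M", "F"])                 # 0.2
print(adjusted_score(Candidate("A. Smith", "F", 0.71), share))   # 0.74
```

The point of the sketch is only that the bias is deliberate and measurable: you can see exactly how big the correction is and dial it up or down, which is the opposite of the opaque bias baked into a trained model.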
You start off by claiming that humans can’t recognise their biases and end up by saying that there’s no problem because humans can recognise their biases so well they can programme it out of AI.
Which is it?
Why is 50% the target?
Not the original commenter, but I would guess that the goal would be to reflect the population. Women are about 50% of the population, so all else being equal, they should make up about 50% of any subgroup, like people with a specific job title.
Just as we stopped beating our chests with our fists when angry, humans can understand patterns and forcefully prevent them.
Only if you can recognise the bias, and work out what’s causing it, in order to fix it.
It’s not implausible that the AI might arrive at the same trend through similar patterns, even if you excised the gender data: particular names, hobbies, whether someone joined a sorority, etc.
A slapdash fix that tries to patch the bias by just adding a positive spin might not do much, and most of the time you don’t know the specifics of what goes on inside a model, or what different parts specifically contribute to what. Let alone a model owned by another company, like OpenAI’s ChatGPT, which would very much not like people pulling apart its LLMs to figure out how they work and what they were trained on.
Consider the whole Google Gemini image-generation debacle, where it’s suspected that they secretly added additional keywords to prompts to try to minimise bias, causing a whole bunch of other problems because it had unpredicted effects.
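For illustration, a patch in that style might look like the sketch below. Nobody outside Google has confirmed exactly what, if anything, was injected, so the trigger words and the appended text here are pure assumptions; the point is only that a blanket rewrite applies regardless of context, which is where the unpredicted effects come from.

```python
# Assumed illustration of a "silently rewrite the prompt" patch.
# The keyword list and suffix are invented for the example.
PEOPLE_TERMS = {"person", "people", "portrait", "soldier", "scientist"}
DIVERSITY_SUFFIX = ", diverse group of people, varied genders and ethnicities"

def patch_prompt(user_prompt: str) -> str:
    """Append diversity keywords whenever the prompt seems to depict people."""
    if any(term in user_prompt.lower() for term in PEOPLE_TERMS):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(patch_prompt("a portrait of a 1940s scientist in a lab"))
# -> "a portrait of a 1940s scientist in a lab, diverse group of people, ..."
# The rule can't tell when the injected context contradicts the request,
# e.g. historically specific scenes, so the output ends up wrong in new ways.
```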
LLMs don’t have the right architecture to implement that kind of math. They are built specifically to find patterns, even obscure ones, that nobody knows of. They could start flagging random shit indirectly associated with gender, like relative timing between jobs or rate of promotions, and you wouldn’t even notice it’s doing it.
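A toy demonstration of that proxy effect, with entirely synthetic data: the model below never sees a gender column, but a correlated feature (length of gaps between jobs, chosen arbitrarily for the example) carries the signal anyway, so its predictions still split along gender lines.

```python
# Synthetic proxy-leakage demo: gender is never a feature, yet predictions
# still differ by gender because a correlated feature stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                 # 0 or 1, hidden from the model
gap_months = rng.normal(3 + 4 * gender, 2, n)  # group 1 has longer career gaps
years_exp = rng.normal(10, 3, n)
# Historical hiring labels that (unfairly) penalised long gaps.
hired = (years_exp - 0.8 * gap_months + rng.normal(0, 1, n) > 5).astype(int)

X = np.column_stack([years_exp, gap_months])   # note: no gender column
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

print("predicted hire rate, group 0:", round(pred[gender == 0].mean(), 2))
print("predicted hire rate, group 1:", round(pred[gender == 1].mean(), 2))
# The gap persists even though gender was excised from the inputs.
```

Nothing here is specific to LLMs or real hiring data; it just shows why removing the obvious column doesn’t remove the pattern.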