Comments:
lol fat
I was compelled by my employer to be programmed by this TED Talk.
I'm not letting some fat Karen with blue hair tell me what to do.
Watched the whole thing with the sound off.
DEI training credit earned, BS not learned.
Extensive checks are done for use cases in the insurance and finance industries. The examples of credit card offers and insurance premiums are not correct; checking for gender bias is extensively tested.
😂😂😂
The best TED Talk on data. Truly inspiring.
This is so amazing. There is so much need to take into consideration the results produced by AI algorithms.
Couldn't agree more. "Algorithms don't have systems of appeal." We need to change that.
Anyone who found the TED Talk good definitely needs to go through her book, "Weapons of Math Destruction". A worthy, concise read that covers a lot of sectors where algorithms are biased.
I can be truthful in this modern age: I am discriminated against because I am a male. My point is that we shouldn't look at gender. I am all for equal rights, but equality of opportunity, not of outcome.
No, I loved what you said. But saying that top companies use this is so, so wrong...
Algorithmic selection for projected success or failure is a form of eugenics. It will have devastating consequences if continued, and should be considered a national security issue.
I once worked for a company taking calls from customers. They used to judge the customer's satisfaction via a follow-up call to the customer and an automated survey. Sure, customers who were angry were more likely to take that survey, but at least you were hearing it from them. Then they decided to turn this over to an algorithm that listened to the calls and scored the worker based on that. It was utter garbage, demonstrably inaccurate. A customer could profusely thank you for your help at the end of the call, and this junk code would say they had a bad experience. We employees and local management had no access to the algorithm and very little data on what it was actually looking for; we were just supposed to trust the process. What it came up with factored into our performance scores and ultimately our raises. It wasn't long before I left.
We need to learn more about algorithms; they can do amazing things and also very dangerous things.
Algorithms are not necessarily objective. They can have errors that create random outputs (like the teacher ratings), and they don't automatically produce fair outcomes but rather yield more of what has already worked (as in the case of Fox News' hiring practices and predicting crime).
This needs more views.
Q: Can an algorithm be created based on a near-perfect past?
Abstract thought: If algorithms repeat past practice and past patterns, then to me the universe is one source of algorithms. It repeats events on a regular basis (supervolcano eruptions, asteroid impacts, mass extinctions), and the universal algorithm presumes what will be a success and sends us an asteroid. For this algorithm, success would be a mass extinction every once in a while. That is its success.
Kind of sci-fi thoughts, or perhaps the matrix 🤔 😎 👀 😉 😳 😏
Great talk, thanks!
Last update
Apparently my professor for this course DOES consider racial bias in AI a serious issue; however, the sources I provided were lacking.
*lesson learned
Be specific to the point of boilerplate explanations
*update from my last post
She gave me an extra 5 points but still an F because:
“ The paper does not provide a controversial area associated with AI and ethics based on literature”
and
“ This paper is more philosophical then scientific”
Even though the whole assignment was about discussing the controversial areas of research and providing examples to support MY THOUGHTS.
It’s ok because this has just made me realize how some people don’t consider racial bias a problem. 🤷🏾♂️ still getting a good grade in the class overall. Good to know I shouldn’t bring up race around this instructor.
Literally wrote a paper about ethics in AI and used this argument as the base for my research. Instructor gave me an F and said racial bias and discrimination in healthcare systems has nothing to do with AI 🤦🏾♂️.
Had to resubmit my paper, still waiting on the results. 🤷🏾♂️
My favorite TED video. So important to think about this in our present age and culture.
Anyone here from Kashipur? XD
👏🏻👏🏻👏🏻
It is not the problem of the algorithm... it is what you feed it that is causing the problem. In my early days in university (Computer Programming 101, I guess it was), the instructor once said: this machine is a GIGO machine... if you feed it garbage in, it will produce garbage out.
Of course, she does have a great point here; what I am stressing, however, is that our modern human societies are so ideological that we are not even able to recognise it anymore.
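[Editor's note] The GIGO point above can be shown in a minimal sketch (hypothetical data, plain Python): a "model" that simply learns the majority historical outcome for each group will faithfully reproduce whatever bias the history contains.

```python
# Minimal GIGO sketch with made-up data: the "model" just memorizes
# the most common historical outcome per group.
from collections import Counter

def train_majority_rule(records):
    """Learn, per group, the most common historical outcome."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, []).append(outcome)
    return {g: Counter(v).most_common(1)[0][0] for g, v in by_group.items()}

# Biased history: group "A" was almost always hired, group "B" almost never.
history = ([("A", "hire")] * 9 + [("A", "reject")] +
           [("B", "reject")] * 9 + [("B", "hire")])

model = train_majority_rule(history)
print(model)  # {'A': 'hire', 'B': 'reject'} -- garbage in, garbage out
```

No step here is malicious; the bias comes entirely from the training data.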
An algo is on the move to dislike the video 😂
I am a quite conservative European white male and I can't stand feminists that much, but this lady opened my eyes and I really appreciated her book. The only serious gender theories come from STEM-related scientists; this is a fact! Chapeau, Ms. O'Neil! No one on this Earth deserves to be discriminated against, and surely not by a stupid algorithm!
Why all the dislikes? For her hair or something?
I’ve now seen her in at least two documentaries! Persona on HBO Max being the latest.
To those who disliked because "she promotes all this leftist ideology" (LOL): as a data scientist, I can easily design an algorithm that helps landlords find tenants, takes no fairness into consideration, and gives 99% of offers to female applicants and 1% to male applicants :D because most landlords would prefer female tenants, and that helps with the acceptance rate. Does it sound less leftist?
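[Editor's note] The commenter's thought experiment can be sketched in a few lines (hypothetical numbers, plain Python): an allocator that optimizes only for historical acceptance rates, with no fairness constraint, skews offers toward whichever group was historically preferred.

```python
# Hypothetical sketch: allocate offers purely in proportion to
# historical landlord acceptance rates -- no fairness constraint.
def offer_share(historical_acceptance, n_offers=100):
    """Split n_offers across groups in proportion to past acceptance rates."""
    total = sum(historical_acceptance.values())
    return {g: round(n_offers * r / total)
            for g, r in historical_acceptance.items()}

# Assumed data: landlords historically accepted female applicants far more often.
rates = {"female": 0.99, "male": 0.01}
print(offer_share(rates))  # {'female': 99, 'male': 1}
```

The skew is not an opinion baked into the code; it falls out of optimizing acceptance rate against biased historical behavior.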
I read her book, Weapons of Math Destruction. A fairly good book with many convincing examples.
But I feel this presentation didn't have much substance to it. It felt like one grievance after another without much supporting evidence.
I am a data scientist, and part of my job is to make sure that the dynamite we build can't be used for building a bomb.
I do algorithm audits.
So what are the key implications of a data-driven strategy for management?
the algorithms made her fat and ugly 😂😂😂😂
Wish she were one of my professors!
She sees what she wants to see out of data. She thinks she is being fair with her point of view, but she is clearly injecting her own bias into it. I am all for having fair and balanced algorithms, but not ones designed by someone like her.
Damn, that was on point! Looks like we've found Shapiro's Achilles' heel.
A possible victim of unethical insurance practices, brought to light: my wife and I bought a convenience store on a corner of the city that had the second-highest crime rate. To be fair, the downtown area in general had statistically low crime. My wife and I applied for Obamacare because she was leaving the private sector to work with me in the store. Within 3 months of being on Obamacare, the cost of our coverage increased 68% for no reason. We couldn't afford to stay on this coverage, so she went back to work in the private sector. This left us with a gaping hole in our management. After running the business by myself for 18 months, I ran the business into the ground and my health started to suffer. The algorithm the insurance company used to assess our risk created the very problem it was designed to protect against.
I worked as a graduate math teaching assistant at a very ethnically diverse university. I am not proud of it, but I should admit that I came to the US with an unfounded idea that some ethnic groups are not as good at math as others. However, what I found out through firsthand experience was that I was very wrong. I found only one thing correlated with gains in math proficiency: how much you are willing to try to master the subject. It is the most valuable lesson for me as a potential math teacher. And I am also glad that I was able to be open and humble, and didn't perpetuate my unfounded idea through my own mental algorithm for differentiating my students.
Tech has destroyed many lives. Most people working as data scientists haven't got a clue what they are doing 👈
Yeah, no. There is no reason to suggest an algorithm would filter out women; that's an assumption. In fact, women have made up most of the teachers in history. Absurd to say otherwise.
Yes
I had to read her book for stats class. It was awful, full of unsubstantiated claims.
This has aged unexpectedly well.
The information provided by big data will not be 100% correct, and the same goes for the answers algorithms give. The speaker keeps talking about individual cases; they do exist, but they don't represent the whole. An answer may be wrong under one algorithm yet correct under another. Besides, artificial intelligence is, in the end, something created by humans; it does not think with emotion, and we already understand this very well. So, regarding the examples the speaker gives: the fault is not in the algorithms or the big data, but in the fact that in those situations algorithms and big data should not have been used to make the judgment. What matters is knowing under which circumstances to apply them. So, dislike.