Kimberly Houser
PROFESSOR
Assistant Professor, Oklahoma State University
Stillwater, United States
Activity description
Kimberly is an Assistant Professor of Legal Studies at Oklahoma State University and a tech attorney. Research available at https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2649609
Areas of interest
Activities
Conference calls
Speaker
Workshops
Brainstorming
Presentations
Strategy checks
Technology transfer
Whitepapers
Feasibility studies
Testing products
Innovation projects
Research projects
Internships
Student thesis
Board member
Coaching
Research consortiums
Case studies
Desired rate per hour
185.00
Portfolio links
https://www.linkedin.com/in/kimberly-a-houser-13a787/


Research
Research name: Can AI solve the diversity problem in the tech industry? Mitigating noise and bias in employment decision-making
Contact person: Kimberly Houser
Description: After the first diversity report was issued in 2014 revealing the dearth of women in the tech industry, companies rushed to hire consultants to provide unconscious bias training to their employees. Unfortunately, recent diversity reports show no significant improvement, and, in fact, women lost ground during some of those years. According to a 2016 Human Capital Institute survey, nearly 80% of leaders were still using gut feeling and personal opinion to make decisions that affected talent-management practices. By incorporating AI into employment decisions, we can mitigate unconscious bias and variability in human decision-making. While some scholars have warned that using artificial intelligence (AI) in decision-making creates discriminatory results, they downplay the reason for such occurrences: humans. The main concerns noted relate to the risk of reproducing bias in an algorithmic outcome ("garbage in, garbage out") and the inability to detect bias due to the lack of understanding of the reason for the algorithmic outcome (the "black box" problem). In this paper, I argue that responsible AI will abate the problems caused by unconscious biases and noise in human decision-making, and in doing so increase the hiring, promotion, and retention of women in the tech industry. New solutions to the garbage-in, garbage-out and black-box concerns will be explored. The question is not whether AI should be incorporated into decisions impacting employment, but rather why, in 2019, we are still relying on faulty human decision-making.
Organization: Oklahoma State University
Start / End year: 2018 - 2019
Reviews
Contact person Review Date Rating
No reviews yet.