Hi, I'm Jeff Altman, The Big Game Hunter. I coach people to play their professional and personal games big.

I was listening to an episode of NPR's "All Things Considered" called "Can Computers Be Racist?" It talks about the human-like biases of different systems; in fact, "The Human-Like Biases of Algorithms" is the subtitle of the show. It walks through different search results and how different algorithms reinforce certain interpretations. For example, there was a racist bias in Google Images that labeled a person of color as an animal based on its interpretation of facial features. It also looked at certain behaviors (it doesn't specifically say Google, it says "search engines," so it could be any of them) where the algorithm decided that a person of color was more likely to need information about bail than a white person would be.

That raises the follow-up question: in the way your applicant tracking system is programmed, are there inherent biases that get reinforced time and time again? The probability is yes. Take education, for example. Silicon Valley loves people from certain universities, but sometimes individuals can't afford those schools. Yes, you can say they could get a scholarship, but they might not be able to get one. So the system reinforces a stereotype about people of particular races and particular social classes, and you miss the opportunity to attract people with diverse backgrounds into your interviewing and selection process. If someone could not afford to take an internship because, frankly, they needed a job to pay for college, your system probably looks down on that behavior, and it becomes an issue of class.

Ultimately, look at your system's programming and how it interprets things. Is it purely based on terms, or is it doing behavioral analysis along the way that causes it to interpret and screen out people who don't fit a particular profile?

As the NPR story says, algorithms are created by humans. Garbage in, garbage out. You know the expression. The algorithm reinforces the biases that humans have, wrapped in a nice analytical model, so firms can say, "Gee, it's the program! It's the algorithm!" But who programmed the algorithm? As one of the guests on the show said, the biases that caused one search to label a person of color as a type of animal were put there by the humans who programmed it; they made those mistakes. Your system can experience much the same sort of thing.

So start going through your ATS and look at how it evaluates people, with a discerning eye toward bias around class and bias around certain kinds of education. I've seen systems with biases against people from the South, of all ridiculous things. I now live in the South (I'm an ex-New Yorker who moved here), and there are issues of class where people in urban environments in the North look down on people from rural environments in the South. That behavior gets acted out. Plus, with the international scope of many of your departments, the bias gets reinforced that people of particular nationalities are superior to individuals of other nationalities, so those people get screened out because people of that nationality have been screened out before. It's insane.
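To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of scoring rule an ATS might apply. None of this is taken from any real vendor's system; the school list, field names, and point values are all invented for illustration. The point is that rules that look neutral, like rewarding "target" schools or internships, quietly act as proxies for class.

```python
# Hypothetical sketch only -- not any vendor's actual ATS logic.
# It shows how innocent-looking scoring rules can encode class bias.

TARGET_SCHOOLS = {"Stanford", "MIT", "Carnegie Mellon"}  # invented "preferred schools" list

def score_candidate(resume: dict) -> int:
    """Toy resume score. Every rule below is a proxy that can hide bias."""
    score = 0

    # Rewarding "target" schools penalizes people who couldn't afford them.
    if resume.get("school") in TARGET_SCHOOLS:
        score += 30

    # Rewarding internships penalizes people who had to work a paying job instead.
    if resume.get("had_internship"):
        score += 20

    # Keyword matching looks neutral, but the keywords reflect whoever wrote past job specs.
    keywords = {"python", "machine learning", "agile"}
    score += 5 * len(keywords & set(resume.get("skills", [])))

    return score

# Two equally capable candidates, different circumstances:
a = {"school": "MIT", "had_internship": True, "skills": ["python", "agile"]}
b = {"school": "State College", "had_internship": False, "skills": ["python", "agile"]}

print(score_candidate(a))  # 60
print(score_candidate(b))  # 10 -- same skills, screened out by proxies for class
```

Multiply that by thousands of resumes and you get exactly the pattern the NPR guests described: human assumptions dressed up as an objective algorithm.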
But do you really want to have your firm's behavior picked apart by someone who sues?