Are robots eating our jobs? Not according to AI


Automation has been gradually transforming the workplace for years (think Excel spreadsheets or chatbots). As artificial intelligence (AI), machine learning, and deep learning systems become more prevalent and smarter (think Alexa or IBM Watson), they continue to replace more manual, repetitive job tasks. Consequently, automation and robots are changing jobs globally at breakneck speed.

A McKinsey Global Institute report suggests that between 400 million and 800 million jobs worldwide will be lost to automation by 2030. The report claims that the U.S. alone could lose between 16 million and 54 million jobs by 2030. The pace at which robots are entering our workforce is staggering: Oxford Economics expects robots and automation to replace 20 million global manufacturing jobs (8.5% of the total) by 2030.

Keep in mind that these projections were made before anyone foresaw the Covid-19 pandemic or its impact on our workforce. The pandemic has made digital transformation and automation more urgent as working from home, physical distancing, and contactless interaction become the new normal.

Continue reading… “Are robots eating our jobs? Not according to AI”


Study: Only 18% of data science students are learning about AI ethics


The neglect of AI ethics extends from universities to industry

A study by data science firm Anaconda found an absence of AI ethics initiatives in both academia and industry.

Amid a growing backlash over AI’s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.

The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to build AI responsibly. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Continue reading… “Study: Only 18% of data science students are learning about AI ethics”


Robot racism? Yes, says a study showing humans’ biases extend to robots


“Robots And Racism,” a study conducted by the Human Interface Technology Laboratory in New Zealand and published by the country’s University of Canterbury, suggests people perceive physically human-like robots to have a race and therefore apply racial stereotypes to white and black robots.

Have you ever noticed the popularity of white robots?

You see them in films, from the robots of Will Smith’s “I, Robot” to Eve in “Wall-E.” Real-life examples include Honda’s Asimo, UBTECH’s Walker, Boston Dynamics’ Atlas, and even NASA’s Valkyrie robot, all made of shiny white material. And some real-life humanoid robots are modeled after white celebrities, such as Audrey Hepburn and Scarlett Johansson.

The reason for these shades of technological white may be racism, according to new research.

Continue reading… “Robot racism? Yes, says a study showing humans’ biases extend to robots”


Data Isn’t ‘Truth’


It has become perhaps the most important guiding principle of today’s world of data science: “data is truth.” The statisticians, programmers and machine learning experts that acquire and analyze the vast oceans of data that power modern society are seen as uncovering undeniable underlying “truths” about human society through the power of unbiased data and unerring algorithms. Unfortunately, data scientists themselves too often conflate their work with the search for truth and fail to ask whether the data they are analyzing can actually answer the questions they ask of it. Why can’t data scientists be more like scientists in the physical sciences, who claim not “universal truths” but rather “current consensus understanding”?

Given the sheer density of statisticians in the data sciences, it is remarkable how poorly the field adheres to statistical best practices like normalization and characterizing data before analyzing it. Programmers in the data sciences, too, tend to lack the deep numerical methods and scientific computing backgrounds of their predecessors, making them dangerously unaware of the myriad traps that await numerically-intensive codes.
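To make “characterizing data before analyzing it” concrete, here is a minimal sketch in plain Python (the column name and values are invented for the example): profile a column for missing values and outliers, then apply standard z-score normalization. It is an illustration of the practices the article says are neglected, not a recipe.

```python
import statistics

def profile(name, values):
    """Characterize a column before analysis: counts, missing, spread."""
    clean = [v for v in values if v is not None]
    return {
        "column": name,
        "n": len(values),
        "missing": len(values) - len(clean),
        "mean": statistics.fmean(clean),
        "stdev": statistics.stdev(clean),
        "min": min(clean),
        "max": max(clean),
    }

def zscore(values):
    """Z-score normalization: subtract the mean, divide by the stdev."""
    clean = [v for v in values if v is not None]
    mu, sigma = statistics.fmean(clean), statistics.stdev(clean)
    return [None if v is None else (v - mu) / sigma for v in values]

# A hypothetical income column with one missing entry and one outlier.
incomes = [32000.0, 41000.0, None, 58000.0, 1_000_000.0, 47000.0]
print(profile("income", incomes))  # the 1,000,000 outlier dominates max and stdev
print(zscore(incomes))
```

Even this trivial profile would flag the missing entry and the outlier that, left uninspected, would quietly distort any downstream analysis.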

Most importantly, however, somewhere along the way data science became about pursuing “truth” rather than “evidence.”

Continue reading… “Data Isn’t ‘Truth’”


The Achilles’ Heel of AI


AI & Big Data

Garbage in, garbage out. No saying in computer science is more true, and that is especially the case with artificial intelligence. Machine learning algorithms depend on accurate, clean, and well-labeled training data to learn from so that they can produce accurate results. If you train your machine learning models on garbage, it’s no surprise you’ll get garbage results. It’s for this reason that the vast majority of the time spent on AI projects goes to the data collection, cleaning, preparation, and labeling phases.

According to a recent report from AI research and advisory firm Cognilytica, over 80% of the time spent on AI projects is spent dealing with and wrangling data. Even more important, and perhaps surprising, is how human-intensive much of this data preparation work is. For supervised forms of machine learning to work, especially the multi-layered deep learning neural network approaches, they must be fed large volumes of correct examples that are appropriately annotated, or “labeled”, with the desired output result. For example, if you’re trying to get your machine learning algorithm to correctly identify cats inside of images, you need to feed that algorithm thousands of images of cats, appropriately labeled as cats, with the images free of extraneous or incorrect data that would throw the algorithm off as you build the model. (Disclosure: I’m a principal analyst with Cognilytica)
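As a small illustration of the kind of labeling hygiene that consumes so much of that 80%, here is a minimal Python sketch (the file names and the label set are invented for the example): before any model sees the data, audit that every training example carries a label from the expected set, and count the class balance.

```python
from collections import Counter

ALLOWED_LABELS = {"cat", "not_cat"}

def audit_labels(examples):
    """Flag label problems that would silently poison supervised training."""
    counts = Counter()
    problems = []
    for i, (image_path, label) in enumerate(examples):
        if label not in ALLOWED_LABELS:
            problems.append((i, f"unknown label {label!r}"))
        else:
            counts[label] += 1
    return counts, problems

# A hypothetical training set with one stray annotation.
dataset = [
    ("img_001.jpg", "cat"),
    ("img_002.jpg", "cat"),
    ("img_003.jpg", "dog"),      # not in the label set: a labeling error
    ("img_004.jpg", "not_cat"),
]
counts, problems = audit_labels(dataset)
print(counts)    # class balance: 2 cat vs. 1 not_cat
print(problems)  # [(2, "unknown label 'dog'")]
```

A real pipeline would go further (duplicate detection, annotator agreement, visual spot checks), but even this trivial pass catches the stray label before it reaches training.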

Continue reading… “The Achilles’ Heel of AI”


Artificial intelligence hates the poor and disenfranchised


The biggest actual threat faced by humans, when it comes to AI, has nothing to do with robots. It’s biased algorithms. And, like almost everything bad, it disproportionately affects the poor and marginalized.

Machine learning algorithms, whether in the form of “AI” or simple shortcuts for sifting through data, are incapable of making rational decisions because they don’t rationalize — they find patterns. That government agencies across the US put them in charge of decisions that profoundly impact the lives of humans seems incomprehensibly unethical.

Continue reading… “Artificial intelligence hates the poor and disenfranchised”


Artificial intelligence is quickly becoming as biased as we are

When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

Continue reading… “Artificial intelligence is quickly becoming as biased as we are”


How AI is finding gender-inequality in the workplace


The Fair Pay Act is a strict gender-equality law recently enacted in California. The law puts the burden of proof on a company to show that it has not shortchanged an employee’s salary based on gender. It’s a powerful tool to address a wrong that has already happened. But can discrimination be prevented in the first place? Even managers who don’t think they are biased may be—and just their word choices can send a signal. A new wave of artificial intelligence companies aims to spot nuanced biases in workplace language and behavior in order to root them out.

Continue reading… “How AI is finding gender-inequality in the workplace”


Women held back from advancement in the workplace by colleagues’ wives


Only 28% of men, compared with 49% of women, see gender bias as still prevalent in the workplace.

There hasn’t been much progress for women seeking top leadership roles in the workplace in the past decade. The percentages of women running large companies, or serving as managing partners of their law firms, or sitting on corporate boards have barely budged even though female graduates continue to pour out of colleges and professional schools.

Continue reading… “Women held back from advancement in the workplace by colleagues’ wives”


Symptoms Of Heart Disease Attributed To Stress More Frequent In Women Than Men


Research presented at the 20th annual Transcatheter Cardiovascular Therapeutics (TCT) scientific symposium, sponsored by the Cardiovascular Research Foundation (CRF), found that coronary heart disease (CHD) symptoms presented in the context of a stressful life event were identified as psychogenic in origin when presented by women and organic in origin when presented by men. The study could help explain why there is often a delay in the assessment of women with heart disease.

Continue reading… “Symptoms Of Heart Disease Attributed To Stress More Frequent In Women Than Men”


UCLA Study: Media Bias Is Real


Media bias: is it fixable?

While the editorial page of The Wall Street Journal is conservative, the newspaper’s news pages are liberal, even more liberal than The New York Times. The Drudge Report may have a right-wing reputation, but it leans left. Coverage by public television and radio is conservative compared to the rest of the mainstream media. Meanwhile, almost all major media outlets tilt to the left.

These are just a few of the surprising findings from a UCLA-led study, which is believed to be the first successful attempt at objectively quantifying bias in a range of media outlets and ranking them accordingly.

Continue reading… “UCLA Study: Media Bias Is Real”