These AI Laws Aren’t All Bad

Mohan Reddy

As the chief technology officer of a company whose machine learning is used by companies and governments to manage their workforces, I might be expected to be worried sick about the new laws coming out of New York and elsewhere.

I’m not.

In fact, some of these rules may not be all bad.

Let me level with you. I’m skeptical about some of these AI tools, too. When someone judges me by how I play an online game, I’m not sure it really leads to the best hire. And I can see how some of those games could disadvantage people with disabilities.

I have similar feelings about some of the personality tests out there.  

All told, having some rules about the ethics of AI is probably a good thing.

The Ethics of AI

At SkyHive, we didn’t need a law to make ethics a priority; it has always been at the forefront, not an afterthought. We built our patented technology to be transparent, explainable, and resistant to discrimination. We have our own certification in AI ethics, called MIDAS, which sets a gold standard for data quality and machine-learning model accuracy. It certifies standardized use of data and machine-learning models, with emphasis on data quality, completeness, lineage, and accuracy; model accuracy and reliability; and ethical-AI principles.

We’re working with the Responsible Artificial Intelligence Institute and the GPAI and have recently won an award for our ethical AI. (I talk more about AI ethics in this webinar.)

We believe in a “Whitebox/Glassbox” AI approach. That means we disclose what we do and why we do it without compromising our intellectual property.  

We have the world’s largest skills dataset. SkyHive processes more than 24 TB of raw data every day, including anonymized worker profiles and job descriptions from over 180 countries in multiple languages.  

You may be thinking: how does more data make you more ethical? For one thing, a dataset this large helps us remove bias by ensuring that no single organization is over-represented. It also lets us uncover many more of people’s skills (often ones they are unaware they have), opening up more job possibilities, particularly for people with non-traditional backgrounds. Or, as one of our Passport users put it, “I got to know things about myself I hadn’t thought about.”
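To make the over-representation point concrete, here is a minimal, purely illustrative sketch of how one might flag data sources that contribute a disproportionate share of records. The source names and the 10% cap are hypothetical, not SkyHive’s actual rule:

```python
from collections import Counter

def over_represented(record_sources, cap=0.10):
    """Return each source whose share of records exceeds `cap` (default 10%).

    record_sources: a list with one entry (the source name) per record.
    """
    counts = Counter(record_sources)
    total = len(record_sources)
    return {src: n / total for src, n in counts.items() if n / total > cap}

# Hypothetical corpus: one dominant contributor among many small ones.
sources = ["org_a"] * 40 + [f"org_{i}" for i in range(60)]
print(over_represented(sources))  # org_a supplies 40% of records, above the cap
```

A real pipeline would apply a check like this per country and per language before training, but the core idea is the same: measure each contributor’s share and rebalance when one dominates.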

Still Murky

You can see why we’re comfortable being held accountable to a standard of AI ethics.  

What that standard will be remains unclear. New York’s law, in brief, requires employers to: have machine-learning or similar hiring tools audited for any bias; notify candidates that the company is using the technology; tell candidates and employees what job qualifications and characteristics are being assessed; and provide accommodation for people who need an alternative selection process.  

This raises questions. For example, who is required to do the audit?  

We don’t yet know whether the organization using the technology, such as a company or government, must commission the audit itself, likely with the help of an outside firm (such as Mercer or Accenture), or whether an audit by the company that provided the technology (e.g., SkyHive) suffices. We should find out soon.

We’ve been auditing our technology regardless. Any time we release a new feature, it is tested for bias. The purpose of our features, our enterprise technology, our platform, and our Skills Passport in the first place is to provide opportunities for people to be measured on their skills and the transferability of those skills, not on gender, ethnicity, pedigree, or any other potential sources of bias.  
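One common way to test a selection feature for bias is an adverse-impact check along the lines of the “four-fifths rule”: compare each group’s selection rate to the highest group’s rate and flag ratios below 0.8. The sketch below is illustrative only; the group names, counts, and threshold are hypothetical, not a description of SkyHive’s actual audit:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total). Returns rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: 40/100 selected in one group, 24/100 in another.
outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
ratios = impact_ratios(outcomes)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(flagged)  # group_b's rate is 60% of group_a's, below the 0.8 threshold
```

A flagged ratio doesn’t prove discrimination on its own, but it tells the team exactly which feature and group to investigate before a release ships.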

What to Look for

Are you trying to figure out whether a company you may want to work with is on the right side of the law, and whether it’s practicing ethical AI? Here are some questions to ask yourself:

  • Is there any historical record showing that fairness and ethics are part of the technology supplier’s DNA, or is it scrambling to comply with the law?
  • Are they transparent about where their data comes from? Are they transparent about the criteria used in their algorithm? Do you understand how the technology works … because it has been explained to you clearly?  
  • Is the company compliant with data security and privacy rules worldwide, like GDPR?
  • Does the company employ any experts on the ethics of AI, or lead in any way on the subject?  
  • How large a dataset is being used?  
  • Has the AI been audited by a third party? When/how frequently?

I Get the Fear

There are some nervous people right now and some nervous companies. I can understand why. Their technologies are a black box. The software may or may not be fair or compliant. Ethics has not been priority No. 1 for these companies.

Our ethics-by-design culture requires that we create products that can be justified, and whose output can be interpreted and explained. We are a Certified B Corporation. The very purpose of our company is to help democratize work, bring opportunities to people left out of the workforce, and help communities and companies navigate the world’s transition from jobs-based to skills-based work. That will never change; we’ll be a B Corp as long as we exist, and we won’t be going away any time soon, as we plan on helping billions of people get better work.

We’re looking forward to hearing more about how the New York and other laws will be enforced, and what the particulars are. In the meantime, we’re going to keep doing what we’re doing, because it’s the right thing to do.

