In a recent post I offered a few predictions on how the Biden Administration may change the AI playing field. The main takeaway is that AI regulation is coming sooner than you think, and that you had better start preparing by implementing internal AI governance. If you operate in North America, it will help you get ahead of regulators and the competition. If you're doing business in Europe, you'll need it to comply with European law.
Algorithmic Impact Assessments (AIAs) and tools like AI registers are a simple way to get started with documenting your AI. Given recent developments, though…
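To make the idea concrete, here is a minimal, purely illustrative sketch of what a single entry in an AI register might capture. The class name, field names, and the resume-screening example are assumptions for illustration, not a prescribed schema or any official AIA format.

```python
# Hypothetical sketch of one AI register entry -- field names are
# illustrative assumptions, not a standard or mandated schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRegisterEntry:
    system_name: str                  # what the model or service is called
    purpose: str                      # the business decision it supports
    owner: str                        # accountable team or individual
    data_sources: List[str] = field(default_factory=list)  # training/input data
    impact_level: str = "unassessed"  # e.g., outcome of an Algorithmic Impact Assessment
    last_reviewed: str = ""           # date of the most recent governance review

# Example: documenting a hypothetical resume-screening model
entry = AIRegisterEntry(
    system_name="resume-screener-v2",
    purpose="Shortlist job applicants for recruiter review",
    owner="HR Analytics",
    data_sources=["historical hiring records", "applicant CVs"],
    impact_level="high",
    last_reviewed="2021-02-01",
)
print(entry)
```

Even a lightweight record like this gives you an inventory of where AI is used, who owns it, and how consequential its decisions are, which is the starting point most governance frameworks assume.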
With President Biden having made some important appointments recently, there is a lot of speculation about what we can expect from his administration over the next four years with respect to AI/ML, and in particular about regulating Artificial Intelligence applications to make the technology safer, fairer, and more equitable.
As an analyst covering this space at Info-Tech Research Group, I’m naturally going to throw my hat into the ring. Here are my top four predictions.
Regulation of AI will be fast-tracked through the House and Senate
Transparency, explainability and trust are big and pressing topics in AI/ML today. Nobody wants to find themselves at the receiving end of a black-box system that makes consequential decisions (e.g., about jobs, healthcare, citizenship, etc.), especially if those decisions are unfair, biased, or simply not in our favor. And most organizations agree that consumer trust and confidence that AI is being used ethically and transparently are key to unlocking its true potential.
And while there are literally hundreds of documents describing and prescribing AI principles, frameworks and other good things, there haven’t been any practical tools that could…
Recently I had the opportunity to attend the inaugural Emotion AI Conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing (NLP), text analytics, sentiment analysis and their business applications. (Seth also organized the first Text Analytics Summit 15 years ago, which I had the privilege to attend, and his next conference, CX Emotion, takes place July 22nd online.) The conference was attended by about 70 people (including presenters and panelists) from industry and academia in the US, Canada, and Europe.
Given the conference topic, what is emotion AI, why is…
Yesterday, I attended a webinar by the Vector Institute, a Canadian not-for-profit collaboration between leading research universities, the government, and the private sector dedicated to advancing AI, machine learning, and deep learning research. The topic was “Using AI to guide re-opening of workplaces in the wake of COVID-19.”
The webinar consisted of a presentation by Avi Goldfarb, professor at the Rotman School of Management at the University of Toronto (U of T), Vector Faculty Affiliate, and chief data scientist at the tech incubator Creative Destruction Lab, and a follow-up discussion with:
Industry analyst demystifying AI/ML and helping clients deploy it ethically and responsibly. Helping develop AI policies, governance, standards, and certification.