Category Archives: IEEE Spectrum

Too Perilous For AI? EU Proposes Risk-Based Rules

As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.

The economic stakes are high: the Commission predicts European public and private investment in AI will reach €20 billion a year this decade, and that was before the up to €134 billion earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission expects will fund AI as well. Add to that investments in AI made outside the EU but targeting EU residents, since the rules would apply to any use of AI in the EU, not just use by EU-based companies or governments.

Things aren’t going to change overnight: the EU’s proposed AI rules are the result of three years of work by bureaucrats and industry experts, along with public consultations, and must still pass through the European Parliament (which requested the proposal) before they can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes.

The proposal defines four tiers of AI-related activity, each with a different level of oversight. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with narrow exceptions granted by national laws and subject to stricter logging and human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e. facial recognition. The proposal also bans subliminal behavior modification and social scoring applications, and it suggests fines of up to 6 percent of commercial violators’ global annual revenue.

The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and account for human oversight. The EU would create a central registry of such systems under the proposed rules and require approval before deployment.

Limited-risk activities, such as the use of chatbots or deepfakes on a website, would face less oversight but would require a warning label that lets users opt in or out. The final tier covers applications judged to present minimal risk, which would face no new obligations.
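To make the tiered scheme concrete, here is a minimal sketch, in Python, of how a compliance team might encode the four tiers and their headline obligations. The tier names, the example use cases, and the listed obligations come from the proposal as summarized above; the RiskTier enum, the EXAMPLE_TIERS mapping, the obligations() helper, and the spam-filter example are hypothetical illustrations, not anything the regulation itself defines.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, narrow exceptions
    HIGH = "high"                  # registry, approval, documentation
    LIMITED = "limited"            # transparency via a warning label
    MINIMAL = "minimal"            # no new obligations

# Example use cases drawn from the proposal as summarized above.
# The assignments are illustrative, not a legal determination.
EXAMPLE_TIERS = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "subliminal behavior modification": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "job recruiting": RiskTier.HIGH,
    "credit checks": RiskTier.HIGH,
    "justice system": RiskTier.HIGH,
    "website chatbot": RiskTier.LIMITED,
    "deepfake": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,  # hypothetical minimal-risk example
}

def obligations(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a tier, per the summary above."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["banned in public spaces",
                "exceptions only via national law, with logging and human oversight"]
    if tier is RiskTier.HIGH:
        return ["high-quality datasets", "traceability documentation",
                "information sharing with users", "human oversight",
                "EU registry entry and approval before deployment"]
    if tier is RiskTier.LIMITED:
        return ["warning label so users can opt in or out"]
    return []  # minimal risk: no new obligations

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")

Running the script prints each example use case alongside its tier and obligations; an actual classification would of course turn on the legal text and the system’s purpose and probability of harm, not a lookup table.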

As often happens when governments propose dense new rulebooks (this one is 108 pages), the initial reactions from industry and civil society groups seem to be more about the existence and reach of industry oversight than the specific content of the rules. One tech-funded think tank told the Wall Street Journal that it could become “infeasible to build AI in Europe.” In turn, privacy-focused civil society groups such as European Digital Rights (EDRi) said in a statement that the “regulation allows too wide a scope for self-regulation by companies.”

“I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,” says Lisa-Maria Neudert, AI governance researcher at the University of Oxford, England, and the Weizenbaum Institut in Berlin, Germany. A 2019 Lloyds Register Foundation poll found that the global public is about evenly split between fear and excitement about AI.

“I can imagine it might help if you have an experienced large legal team,” to help with compliance, Neudert says, and it may be “a difficult balance to strike” between rules that remain startup-friendly and succeed in reining in mega-corporations.

AI researchers Mona Sloane and Andrea Renda write in VentureBeat that the rules are weaker on monitoring of how AI plays out after approval and launch, neglecting “a crucial feature of AI-related risk: that it is pervasive, and it is emergent, often evolving in unpredictable ways after it has been developed and deployed.”

Europe has already been learning from the impact its sweeping 2018 General Data Protection Regulation (GDPR) had on global tech and privacy. Yes, some outside websites still serve Europeans a page telling them the website owners can’t be bothered to comply with GDPR, so Europeans can’t see any content. But most have found a way to adapt in order to reach this unified market of 448 million people.

“I don’t think we should generalize [from GDPR to the proposed AI rules], but it’s fair to assume that such a big piece of legislation will have effects beyond the EU,” Neudert says. It will be easier for legislators in other places to follow a template than to replicate the EU’s heavy investment in research, community engagement, and rule-writing.

While tech companies and their industry groups may grumble about the need to comply with the incipient AI rules, Register columnist Rupert Goodwin suggests they’d be better off focusing on forming the industry groups that will shape the implementation and enforcement of the rules in the future: “You may already be in one of the industry organizations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.”

First published by IEEE Spectrum: [html] [pdf].

Countries Debate Openness of Future National IDs

Kenya’s High Court ruled Thursday that a recent amendment requiring citizens to register for a national biometric digital identification system overreached on some counts, such as allowing for links to DNA or GPS records, and failed to guarantee sufficient inclusion of Kenyan residents. 

The ID system, called the National Integrated Identity Management System (NIIMS), was a homegrown answer to India’s pioneering Aadhaar system, which two years ago faced its own Indian Supreme Court ruling that upheld some components while modifying others. 

More than half of African countries are developing some form of biometric or digital national ID in response to major international calls to establish legal identification for the almost 1 billion people who now lack it. But this ID boom, also taking place outside Africa, often gets ahead of data protection laws, as occurred in Kenya. 


LoRa’s Bid to Rule the IoT

Cattle may be at home on the range, but modern ranchers need to be able to find their wayward animals, and inefficient tracking costs the cattle industry around US $4.8 billion a year. At a recent conference about connected devices in Amsterdam, Jan Willem Smeenk of the Dutch company Sodaq and Thomas Telkamp of the startup Lacuna Space talked about connecting cattle to a future Internet of bovines.

This news story first appeared in the March 2018 issue of IEEE Spectrum: [html] [pdf].

Automated Eyes Watch Plants Grow

A decade ago, a group of crop scientists set out to grow the same plants in the same way. They started with the same breeds and adhered to strict growing protocols, but nonetheless harvested a motley crop of plants that varied in leaf size, skin-cell density, and metabolic ability. Small differences in light levels and plant handling had produced outsize changes to the plants’ physical traits, or phenome.

The plunging price of genomic sequencing has made it easier to examine a plant’s biological instructions, but researchers’ understanding of how a plant follows those instructions in a given environment lags. “There is a major bottleneck for a lot of breeders to be able to get their phenotypic evaluation in line with their genetic capabilities,” says Bas van Eerdt, business development director at PhenoKey, in ’s-Gravenzande, Netherlands.

Read the rest of this news story in the January issue of IEEE Spectrum: [html] [pdf].