Posts Tagged ‘Artificial intelligence’

Last year a U.S. intelligence-sector contest to see who could develop the best AI algorithm for identifying surveillance images among millions of photos was won not by an American firm but by a small Chinese company.  The result served as a warning shot, perhaps, that the race to dominate the realm of artificial intelligence, or AI, is on, and that American victory is by no means assured.  (This post is based on reporting by three reporters in the Jan. 23 Wall Street Journal.)

The Chinese company, Yitu Tech, beat out 15 rivals thanks to one big advantage: its access to Chinese security databases of millions of people on which it could train and test its AI algorithms.  Privacy rights in the U.S. make that particular sort of application harder to develop, as companies lack access to the enormous troves of data that are common in China, where collection is often accomplished with the aid of government agencies.

AI algorithms depend on vast troves of data on which developers can train and test their not-always-obvious hypotheses and approaches.  Microsoft chief legal officer Brad Smith notes that “The question is whether privacy laws will constrict AI development or use in some parts of the world.”

The U.S. leaders in AI include Apple, Alphabet (Google), Amazon, Facebook and Microsoft, and they’re up against Chinese giants like Alibaba, Baidu and Tencent Holdings.  On the academic side, the U.S., particularly in the regions surrounding Silicon Valley, holds a strong lead, both in budgets and in patents filed in areas like neural networks and something called unsupervised learning.  The U.S. also outnumbers China in AI companies by about two to one.  As well, current spending on AI in the U.S. is massive, with R&D investments of over $13 billion each from Microsoft and Alphabet.  By comparison, Alibaba only recently pledged to triple its R&D spending to $5 billion over the next three years.

But China plans to close the gap with a new government-led AI effort to lead the field by 2030 in areas including medicine, agriculture and the military.  Already, AI startups in China are seeing a tenfold rise in funding over last year.  PwC expects China to catch up and reap about 46% of the $15.7 trillion that AI is projected to contribute to global output by 2030, with North America a distant second at about half that percentage.

In the West there is a general reluctance to give companies wide use of customers’ data.  Even so, U.S. laws and restrictions are still weaker than those in Europe, where privacy rights have become even more contentious and where tough new laws are scheduled to take effect in May.  Some experts think this reluctance could hamper U.S. AI development efforts and allow Chinese companies to pull ahead in the race for global AI dominance.

Regulation “could be a help or a detriment,” according to former Google and Baidu exec Andrew Ng, who recently founded an AI startup.  He adds that “Despite the U.S.’s current lead in basic AI research, it can be easily squandered in just a few years if the U.S. makes bad decisions.”

The race is indeed on.

Read Full Post »

We’ll conclude with our third post in a series derived from a recent group of articles published in the June 25, 2016 issue of The Economist discussing artificial intelligence, the rise of machines, and the potential impact on jobs in the future.

In our prior post, we ended by noting that in earlier revolutions (like the Industrial Revolution) it has always been true that as old jobs were replaced by automation, new jobs sprang up in their place to perform other tasks that could not be automated.  History is full of examples, from farming and weaving to one more recent entry: the ATM.

When ATMs were thought to be the death knell for bank employees a couple of decades ago, the average number of tellers per branch did indeed fall, from 20 in 1988 to 13 in 2004, according to The Economist’s editors.  But… that reduced the cost of running a branch, and in turn banks opened more branches.  The number of urban branches rose by 43% during that time, so the total number of employees actually increased.  Rather than destroying jobs, ATMs changed the work mix for bank employees, who moved away from routine tasks toward sales and customer service, tasks machines could not do.

The same pattern can be seen across industry with the rise of computers; rather than destroying jobs, technology redefines them, often in ways “that reduce costs and boost demand.”  Between 1982 and 2012, employment actually grew faster in occupations that made more use of computers, according to a study by James Bessen, an economist at Boston University School of Law.  More computer-intensive jobs ended up replacing less computer-intensive jobs.  Thus, jobs were reallocated more than replaced.  It’s true across a wide range of fields.

One low note: only in manufacturing did jobs expand more slowly than the workforce did over the period of the study.  That had more to do with business cycles and offshoring to China during that period than with technology, Bessen notes.

While in the end we can’t predict which jobs will be replaced by technology or what jobs will be created in the future, “we do know that it’s always been like that,” says Joel Mokyr, an economic historian at Northwestern University.  Think about it: Who knew 100 years ago that there would be jobs like video game designer or cybersecurity specialist?

So while the truck driver of the future may be no more, we can only speculate about what heretofore uninvented job may take its place.  Remember, 100 years ago there was great concern about the impact of the switch from horses to cars.  While the horse jobs went away, countless new jobs were created at motels, fast-food joints, and travel agencies (now another in a dying breed of jobs).  Tomorrow’s autonomous vehicles, the editors note, may also greatly expand the demand for food product delivery.

So who is right: the pessimists, who say this time it’s different and machines really will take all the jobs (the techie sentiment), or the optimists, “who insist that in the end technology always creates more jobs than it destroys,” as the editors put it?  The truth, The Economist concludes, probably lies somewhere in between.  AI, they note, will not cause mass unemployment, but it may speed the trend toward computerized automation beyond any pace heretofore known.  It may disrupt the labor market – it’s happened before, certainly – and will require, as always, that workers learn new skills.

These are difficult transitions, though not necessarily, as Bessen notes, “a sharp break with history.”  But regardless of your viewpoint, most agree on what’s required: that companies and governments make it easier for workers to acquire new skills and to switch jobs as needed.  In the U.S. in particular, we have far to go in this regard, and there is indeed a role for government, education and the private sector.  Hard change will be required.  But then, such changes, like job displacements and replacements themselves, create their own necessary forms of reinvention.  Always have, always will.

But the pace of change has never been faster, and therein lies the ultimate challenge for the next generation of jobs and job security, both here and abroad.



Read Full Post »

In our prior post we introduced the fear of the rise of machines and artificial intelligence (A.I.) as reviewed by the editors of The Economist, and we ended by asking the question: What will it mean?  We’ll parse through what economists and others are saying in today’s post, which attempts to answer the larger question of whether smarter machines are causing (or are poised to cause) mass unemployment.

Machines today are encroaching on even the most highly skilled jobs.  Consider Enlitic, a startup applying deep learning in the medical field, which has produced a system that scans lung images for abnormalities.  In a test against three expert human radiologists, Enlitic’s system was 50% better at classifying malignant tumors.  Another of the company’s systems, which examines x-rays to detect fractures, also outperformed human experts, and the firm’s technology is already being deployed in 40 clinics.  That’s just one example of how white-collar jobs can now be automated.

It turns out that what determines whether a person can be replaced by a machine – thus becoming highly vulnerable – is “not so much whether the work concerned is manual or white-collar, but whether or not it is routine,” the editors note.  Thus, a highly trained and specialized radiologist may in fact be in greater danger of being replaced by a machine than his or her own executive assistant.

Some 47% of U.S. jobs are said to be at “high risk” of automation.  A 2013 tally published by Carl Frey and Michael Osborne on job susceptibility to computerization found the following among the most vulnerable, each with at least a 50% probability of being replaced:

  • Telemarketers (99%)
  • Accountants and auditors (94%)
  • Retail salespeople (92%)
  • Technical writers (86%)
  • Real estate sales agents (86%)
  • Word processors & typists (81%)
  • Machinists (65%)
  • Commercial pilots (55%)

Among the least vulnerable:

  • Recreational therapists, dentists, athletic trainers, clergy, chemical engineers, editors, firefighters, actors, health technologists and (of course) economists.

Clearly, a substantial risk exists across a broad swath of the employment spectrum.  Some, like Sebastian Thrun of Stanford, say this is only the tip of the iceberg.  Martin Ford, a software entrepreneur and author of “Rise of the Robots,” warns of the threat of a “jobless future,” noting that most jobs can be broken down into a set of routine tasks and are thus increasingly vulnerable to A.I. and machines.

As we noted in our prior post, these sorts of job-obliterating threats have been around since at least the Industrial Revolution, when the Luddites protested against machines and steam engines that they felt would destroy their livelihoods.

Such declarations have reappeared regularly since: in the 1930s and ’40s, in the ’60s, and most recently with the advent of personal computers in the ’80s.  Invariably, the progress of technology has ended up creating more jobs than it destroys.  Once something can be done more quickly and cheaply, it is.  But that in turn “increases the demand for human workers to do the other tasks around it that have not been automated.”

We’re running long, so we’ll conclude our thinking in our third and final post on this topic. Stay tuned…

Read Full Post »

There is what appears to be a larger-than-usual fear these days about the ominous likelihood of our jobs being replaced by machines… or by some artificial-intelligence-fueled automation hybrid capable of rendering many folks permanently unemployed.

Visions of driverless cars and trucks, A.I.-infused paralegals and robots on the shop floor have justly scared many into thinking that our post-industrial ‘knowledge revolution’ is leading to a hollowing-out of the middle class that will leave massive swaths of our populace grimly unemployed.

This same sort of thing has repeated itself for centuries: one revolution (agricultural, industrial, financial, knowledge…) replaces the prior one, and the old jobs give way to newer, heretofore non-existent ones.  This time, say the doomsayers, it’s different.

Or is it?

That’s the basis for a series of articles published recently in The Economist (6/25/16).  We parsed through them to get to the core of the matter, and we’ll share with you in our next couple of posts what leading economists think about the current jobs (or joblessness) situation, the new economy, and what it all means for you and me.

Technologists and economists alike are debating the implications of A.I., a field which has held out the promise of machines performing previously human tasks “any day now”… for about 50 years.  But this time, many say, that day really is close at hand.  A study out of Oxford in 2013 found that nearly half of all American jobs were at high risk of being “substituted by computer capital” soon.  Merrill Lynch recently predicted that within ten years the “annual creative disruption impact” from artificial intelligence could amount to $14 trillion to $33 trillion, including a $9 trillion reduction in employment costs thanks to automation of knowledge work and another $8 trillion in manufacturing and health care.  Another $2 trillion in savings is expected from self-driving cars and drones alone.

Most ominously, the McKinsey Global Institute says that in terms of both time and scale, artificial intelligence (think robots, among other form factors) is contributing to a transformation of society with roughly three thousand times the impact of the Industrial Revolution.

Now, as we noted, we’ve heard these concerns before, dating back hundreds of years.  Machines have been grimly viewed as the destroyer of jobs since at least 1821 when economist David Ricardo spoke of the “machinery question… and the influence of machinery on the interest of the different classes of society.”  In 1839 Thomas Carlyle railed against the “demon of mechanism” which was guilty of “oversetting whole multitudes of workmen,” as the Economist article points out.

Today, “deep learning” systems are allowing machines to accelerate their learning capabilities as never before.  In fact, “Instead of people writing software, we have data writing software,” notes the CEO of NVIDIA, a chip company.  Systems today are learning for themselves, mining their data to get smarter faster, without the need for much human intervention.  The progress is real.  The results are real.  This stuff works, notes tech pioneer and venture capitalist Marc Andreessen.
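To make that “data writing software” idea concrete, here is a minimal sketch (our own illustration, not anything from the article or from NVIDIA) of a small neural network whose behavior is fit from labeled examples rather than written by hand.  The dataset, model size and train/test split are illustrative choices using scikit-learn.

```python
# A minimal sketch of "data writing software": no rules for recognizing digits
# are hand-coded; the model's behavior is learned entirely from labeled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labeled 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No explicit rules about strokes or shapes appear anywhere below; the network's
# weights (effectively its "program") are fit from the training data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```

Swap in a much larger network and a vastly larger dataset and you have, in miniature, the recipe behind today’s deep-learning systems.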

The question then becomes: What will it mean?  We’ll take a look at a few of The Economist’s editors’ conclusions in our next post, so stay tuned…




Read Full Post »

Okay, so maybe they didn’t… but they do have one other thing in common (according to an article in the May 9th Economist): all have warned against the potential threats of artificial intelligence (or “AI”) and all are hardly Luddites.  From Hawking’s warning that AI “could spell the end of the human race” to Musk’s fears that AI may be “the biggest existential threat humanity faces,” all have voiced opposition to vast investments in AI by firms like Google and Microsoft.

And with most people today carrying supercomputers in their pockets and robots on every battlefield, their comments may be worth heeding.  As the article points out, the question is how to worry wisely.

AI has been around a long time.  Lately it has gotten a boost as “deep learning” systems mimic the neuronal patterns of the human brain while crunching vast amounts of data, teaching themselves to perform tasks such as pattern recognition.  (In case you ever wondered why Facebook can now identify your friends’ faces automatically: thanks to something called the DeepFace algorithm, it does so accurately about 97% of the time.)
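For readers curious what that kind of pattern recognition looks like in practice, here is a rough, classical sketch of face identification: labeled face images are reduced to compact features, and a classifier is then trained to name new faces.  It uses scikit-learn’s Labeled Faces in the Wild dataset and an eigenfaces-plus-SVM pipeline as stand-ins; this is not how Facebook’s DeepFace actually works, and the parameters here are illustrative assumptions.

```python
# A classical face-identification sketch: compress labeled face photos into
# "eigenface" features, then train a classifier to name the person in new photos.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Labeled Faces in the Wild: photos of public figures (downloads ~200 MB on first run).
faces = fetch_lfw_people(min_faces_per_person=60)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, random_state=0)

# PCA compresses each face image into 150 features; the SVM then learns which
# combinations of those features distinguish one person's face from another's.
model = make_pipeline(PCA(n_components=150, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```

Deep-learning systems replace the hand-chosen feature step with many layers learned directly from the data, which is a large part of why their accuracy is so much higher.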

But today’s AI capacity is still narrow and specific.  It’s basically brute-force number crunching, largely ignoring the human mind’s interests, desires and nuance.  In other words, computers still cannot truly “infer, judge and decide” in the conventional human sense.

But AI is powerful enough to make a real difference in human life.  With AI, doctors can spot cancers in medical images better than ever before.  Speech recognition on smartphones will bring the Internet to the illiterate in developing countries.  Digital assistants can parse data, from the academic to the financial, finding nuggets of value.  Wearable computers will change how we view the world.

On the other side of things, AI brings new power to the apparatus of state security, for both autocracies and democracies.  Billions of conversations are monitored regularly utilizing AI (see: NSA), and faces can be scanned and identified quickly for risks to security.

But these are not the things that worry the luminaries in our title line.  No, they’re worried about a more apocalyptic threat: that of “autonomous machines with superhuman cognitive capacity and interests that conflict with those” of people today.  For example: the car that drives itself is a boon to society; a car with its own ideas about where to go, perhaps not so much.  Shades of 2001’s HAL, perhaps.

But AI can be developed safely, as the Economist article points out.  The editors note that “just as armies need oversight, markets are regulated and bureaucracies must be transparent and accountable,” so AI systems need to be open to scrutiny.  Because designers cannot foresee every contingency, there must also be an on-off switch.  This can be done without compromising progress.  From nuclear weapons to traffic laws, humans have “used technical ingenuity and legal strictures to constrain other such powerful innovations.”  Such diligence must be applied every bit as carefully and judiciously in the burgeoning world of AI.

Autonomous, non-human intelligence is so compelling and potentially so rewarding that its perils, while real, should not obscure the huge benefits coming from AI.  But those perils must be managed.


Read Full Post »