Posts Tagged ‘AI’

In a recent article entitled “The Human Promise of the AI Revolution,” the former President of Google China and current CEO of Sinovation Ventures, Kai-Fu Lee, says that artificial intelligence (AI) will radically disrupt the world of work, but the right policy choices can make it a force for a more compassionate social contract.

By now it’s become clear that AI is going to be a disruptive force.  Jobs of every collar color may be at risk (though some are safe, at least for now).  Autonomous driving is one obvious example: AI and software are already reshaping how we get around.  And this is only the beginning.

Some are optimistic about AI’s promise of newer, better jobs that challenge our ingenuity and lead to bold new industries.  Others, notably Tesla CEO Elon Musk, warn of dire consequences.  Lee notes that the application of existing technology to new problems will hit many white-collar professionals just as hard as it hits blue-collar factory workers.

Lee is careful to weigh AI’s strengths and weaknesses in order to identify the jobs most likely to be affected.  For instance, while AI is “great at optimizing for a highly narrow objective, it is unable to choose its own goals or think creatively.”  AI may be superhuman with numbers and data, but it lacks social skills and empathy.  Hence driving a car or diagnosing diseases across massive datasets incomprehensible to mere mortals plays to AI’s strengths (fast food cooks and insurance adjusters, take note), but home care nurses, most attorneys, hairstylists and CEOs are probably safe for the foreseeable future.

Despite the challenges, Lee remains hopeful, as he sees an opportunity for us “to redirect our energy as a society to more human pursuits: to taking care of each other and our communities.”  While you can read his full thoughts in his book “AI Superpowers: China, Silicon Valley and the New World Order,” or in The Wall Street Journal’s Sept 15th “Review” section, the gist of his thinking goes like this…

Some propose a universal basic income (UBI) as a possible cure for the massive job dislocations many see coming, but Lee thinks otherwise.  UBI would provide a subsistence income for the displaced (experiments with it are underway in several places around the world, apparently to less-than-robust reviews), yet the very concept, says Lee, lacks the pride and dignity that work focused on enhancing our communities would provide.

So why not, in lieu of a UBI, create jobs (and pay people) instead?  Perhaps a different form of guaranteed income?  Lee proposes a kind of stipend, which he calls the Social Investment Stipend, for those who devote themselves to three categories of labor: care work, community service and education.

Lee suggests these activities could form the core of a new social contract.  The jobs would run the gamut from parenting and home schooling to assisting aging parents to working with non-profit and volunteer groups.  Service efforts could include leading after-school programs, guiding tours at parks, or collecting oral histories from elders.  Education activities could range from professional training for the jobs of the AI age to taking classes that turn a hobby into a career.

Lee admits many difficult questions remain.  But at least he’s asking them… posing suggestions for the new work landscape and trying to fashion a viable solution to the thorny issue of job displacement that may be lurking beneath the surface of technological advancement, modern times and the new age of artificial intelligence.

In other words, at least he’s trying to get us to think, and talk, about it.

Read Full Post »

As machines get smarter, will jobs become increasingly scarce?  That’s the fear of many, including some economists.  And while, yes, some jobs will be lost, those with the right skills will enjoy the silver lining of smarter machines and the future of artificial intelligence.  According to an article in the April 30th issue of The Wall Street Journal, AI “opens up opportunities for many new jobs to be created — some that we can imagine and many we probably can’t right now.”

For those who tool up, the Journal expects the following jobs to be among those that benefit from our smart new future.

  1. AI Builder. Case in point: iRobot, a maker of robotic mops and vacuums, has quadrupled its staff of software engineers focused on consumer robots, making them smarter through advanced AI and computer-vision systems.  Many of these engineers are Ph.D.-level scientists, so these are hardly Everyman jobs, and the company has expanded its talent search into a global effort.
  2. Customer-Robot Liaisons. People are going to need help easing into working with robotic systems, and this role is currently among the most sought-after AI-related positions, according to job site ZipRecruiter.  Ensuring clients are happy with robots rented out as graveyard-shift security guards is the job of one 36-year-old at Palo Alto’s Cobalt Robotics.  The trick, they say, is to get a “good handle” on how comfortable clients are interacting with robots by monitoring usage reports, reaching customers through calls, texts and visits, and (what else?) building relationships.
  3. Robot Managers. While robots can be amazingly smart, their judgment, and judgment within the AI realm generally, still lags behind that of humans.  Ditto for empathy, customer relationships and a myriad of other soft skills.  The need for human oversight may be the most underestimated part of all, according to one McKinsey partner who focuses on automation.
  4. Data Labelers. For AI to understand the world, it needs humans to explain what things are. That means labels.  Identifying objects in images or parsing sentences may be things we humans take for granted, but robots need our help.  Self-driving car developers, for instance, can have hundreds of people labeling data.  Sometimes it’s simple, sometimes it’s subtle.  Posters and pictures around an office may look like trespassers to a robot, which has to be ‘taught’ that they are not potential threats (see the sketch after this list).
  5. Drone Performance Artists. Drones are becoming more and more a fixture in the film world, flying props, handling lighting and providing overhead shots at sporting events.  Artists who can customize them to suit the needs of different performances like concerts, musicals, circuses and sporting events are increasingly in demand.  Said one such artist, “It’s a crazy opportunity because I have a blank slate and can develop whatever I want this field to be.”
  6. AI Lab Scientists. Smart software is remaking drug development, sifting through vast troves of data faster than humans to come up with new directions for medical research.  Data scientists like computational biologists can help AI systems learn so that computers can surface novel ideas, with human technicians also testing the AI results to see which are valid and which are not.  The feedback they give AI machines only serves to make them smarter.
  7. Safety and Test Drivers. Self-driving vehicles are not there yet, in the opinion of industry insiders, but they are expected to spread slowly across the automotive landscape.  That provides opportunities for people to help the vehicles do their jobs safely, and take over when necessary.  Testers today provide feedback to manufacturers when a vehicle encounters a situation it’s unsure how to handle.  One company doing so is tripling its number of testers, and hiring test engineers to devise scenarios for shuttles, not to mention maintenance, testing and cleaning crews.
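
To make the data labelers’ work (item 4 above) a bit more concrete, here is a minimal sketch of the kind of labeled example a human annotator might produce for a vision system.  The schema, field names and file path below are purely illustrative assumptions on my part, not any particular company’s format.

```python
# A minimal, illustrative sketch of the labels human annotators produce
# for a vision system. The schema is hypothetical, not any vendor's format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str      # what the human says the object is, e.g. "poster" or "person"
    x: int          # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

@dataclass
class LabeledImage:
    image_path: str
    boxes: List[BoundingBox] = field(default_factory=list)

# An annotator marks the office poster so the model can learn it is not an intruder.
example = LabeledImage(
    image_path="frames/office_cam_0142.jpg",
    boxes=[
        BoundingBox(label="poster", x=40, y=60, width=120, height=180),
        BoundingBox(label="person", x=300, y=90, width=80, height=220),
    ],
)

# Thousands of such examples become the training set for the model.
print(f"{example.image_path}: {[b.label for b in example.boxes]}")
```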

Read Full Post »

Last year a U.S. intelligence-sector contest to see who could develop the best AI algorithm for identifying surveillance images among millions of photos was won not by an American firm but by a small Chinese company.  The result served as a warning shot: the race to dominate the realm of artificial intelligence, or AI, is on, and American victory is by no means assured.  (This post is based on reporting by three reporters in the Jan 23rd Wall Street Journal.)

The Chinese company, Yitu Tech, beat out 15 rivals thanks to one big advantage: access to Chinese security databases covering millions of people, on which it could train and test its AI algorithms.  Privacy rights in the U.S. make that particular sort of application harder to develop, as American companies lack access to the enormous troves of such data common in China, where access is often arranged with the aid of government agencies.

AI algorithms depend on vast troves of data on which developers can train and test their not-always-obvious hypotheses and approaches.  Microsoft chief legal officer Brad Smith notes that “The question is whether privacy laws will constrict AI development or use in some parts of the world.”

The U.S. leaders in AI include Apple, Alphabet (Google), Amazon, Facebook and Microsoft, and they’re up against Chinese giants like Alibaba, Baidu and Tencent Holdings.  On the academic side, the U.S., particularly the regions surrounding Silicon Valley, holds a strong lead, both in budgets and in patents filed in areas like neural networks and something called unsupervised learning.  The U.S. also outnumbers China in AI companies by about two to one.  And current U.S. spending on AI is massive, with Microsoft and Alphabet each investing over $13 billion in R&D.  By comparison, Alibaba only recently pledged to triple its R&D spending to $5 billion over the next three years.

But China plans to close the gap with a new government-led effort to dominate the field by 2030 in areas including medicine, agriculture and the military.  Already, AI startups in China are seeing a tenfold rise in funding over last year.  PwC expects China to catch up and reap about 46% of the $15.7 trillion it expects AI to contribute to global output by 2030, with North America a distant second at about half that percentage.
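
For a sense of scale, a quick back-of-the-envelope calculation using only the figures quoted above translates those percentages into dollar terms:

```python
# Back-of-the-envelope arithmetic on the PwC forecast quoted above.
global_ai_contribution_2030 = 15.7e12      # $15.7 trillion added to global output
china_share = 0.46                          # China: about 46%
north_america_share = china_share / 2       # North America: "about half that percentage"

print(f"China:         ~${china_share * global_ai_contribution_2030 / 1e12:.1f} trillion")
print(f"North America: ~${north_america_share * global_ai_contribution_2030 / 1e12:.1f} trillion")
```

That works out to roughly $7.2 trillion for China and $3.6 trillion for North America, using only the numbers in the paragraph above.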

In the West there is a general reluctance to give companies wide use of customers’ data.  Even the toughest U.S. laws and restrictions, though, are still weaker than those in Europe, where privacy rights have become even more contentious and where tougher new laws are scheduled to take effect in May.  Some experts think this reluctance could hamper U.S. AI development efforts and allow Chinese companies to pull ahead in the race for global AI dominance.

Regulation “could be a help or a detriment,” according to former Google and Baidu exec Andrew Ng, who recently founded an AI startup.  He adds that “Despite the U.S.’s current lead in basic AI research, it can be easily squandered in just a few years if the U.S. makes bad decisions.”

The race is indeed on.

Read Full Post »

Two recent articles, one from Christopher Mims of the Wall Street Journal, and another featuring IBM CEO Ginni Rometty (also writing for the Journal), provide glimpses into where artificial intelligence (AI) is likely to take us.  And from both, one conclusion is clear: it’s all in the data.

The end of the year is a great time to be thinking about the future.  And AI will increasingly be a part of everyone’s future.  The gist of the arguments from both Rometty and Mims is that data, big data, is what will make AI truly possible.

While today’s newer smart assistants, like Alexa and Siri, are entering our everyday lives, they represent only the beginning.  Already, Alphabet (Google), Amazon and Microsoft are making their AI smarts available to other businesses on a for-hire basis.  They can help you make a gadget or app respond to voice commands, for example.  They can even transcribe those conversations for you.  Add to that abilities like face recognition to identify objectionable content in images, and you begin to see how troves of data (in these cases, voice and image) are being transformed into usable function.
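
As one concrete illustration of what “AI for hire” can look like, here is a minimal sketch of transcribing an audio clip with Google’s Cloud Speech-to-Text Python client.  The bucket URI is a placeholder, and the exact encoding and sample-rate settings are assumptions on my part; treat it as a sketch to check against the current documentation rather than a definitive recipe.

```python
# Minimal sketch: renting transcription from a cloud AI service.
# Assumes the google-cloud-speech package and valid credentials;
# the bucket URI below is a placeholder, not a real file.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://example-bucket/meeting.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# The provider's models, trained on vast troves of voice data, do the heavy lifting.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```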

But all this data and technology, notes Mims, are not going to suddenly blossom into AI.  According to data scientist Angela Bassa, the real intelligence is still about ten years away.

Why?  Three obstacles:

  • Not enough data. Most companies simply don’t have enough data to do deep learning that can make much more than an incremental difference in company performance.  Customers are “more interested in analytics than in the incremental value that sophisticated AI-powered algorithms could provide.”
  • Small differences generally cannot yet justify the expense of creating an AI system.
  • There is a scarcity of people to build these systems.

All that being said, Ms. Bassa, noting that there are only about 5,000 people in the world who can put together a real AI system, says that “creating systems that can be used for a variety of problems, and not just the narrow applications to which AI has been put so far, could take decades.”

IBM CEO Ginni Rometty notes that the term artificial intelligence was coined way back in 1955 to convey the concept of general intelligence: the notion that “all human cognition stems from one or more underlying algorithms and that by programming computers to think the same way, we could create autonomous systems modeled on the human brain.”  Other researchers took a different approach, working from the bottom up to find patterns in growing volumes of data, an approach called IA, or intelligence augmentation.  Ironically, she notes, it was the methodology not modeled on the human brain that led to the systems we now describe as ‘cognitive.’

Rometty concludes, fittingly, that “it will be the companies and the products that make the best use of data” that will be the winners in AI.  She goes on to say… “Data is the great new natural resource of our time, and cognitive systems are the only way to get value from all its volume, variety and velocity.”

She signs off with a memorable observation: “Having ingested a fair amount of data myself, I offer this rule of thumb: If it’s digital today, it will be cognitive tomorrow.”

Read Full Post »

Okay, so maybe they didn’t… but they do have one other thing in common (according to an article in the May 9th Economist): all have warned against the potential threats of artificial intelligence (or “AI”) and all are hardly Luddites.  From Hawking’s warning that AI “could spell the end of the human race” to Musk’s fears that AI may be “the biggest existential threat humanity faces,” all have voiced opposition to vast investments in AI by firms like Google and Microsoft.

And with most people today carrying supercomputers in their pockets and robots on every battlefield, their comments may be worth heeding.  As the article points out, the question is how to worry wisely.

AI has been around a long time.  Lately it has gotten a boost as “deep learning” systems, loosely mimicking the neuronal patterns of the human brain, crunch vast amounts of data and teach themselves to perform tasks such as pattern recognition.  (If you’ve ever wondered why Facebook can lately identify your friends’ faces automatically, thank something called the DeepFace algorithm, which now does so accurately 97% of the time.)
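
To make “deep learning” a little less abstract, here is a minimal sketch of the kind of layered network such systems use for pattern recognition, written with PyTorch.  The layer sizes, data and labels are illustrative assumptions only; this toy model is in no way Facebook’s DeepFace.

```python
# Toy sketch of a "deep" network for pattern recognition (illustrative only).
# Requires the torch package; the sizes and data are made up for the example.
import torch
import torch.nn as nn

model = nn.Sequential(          # layers stacked "deep", loosely brain-inspired
    nn.Linear(64, 128),         # 64 input features (e.g. pixel statistics)
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),           # 2 classes, e.g. "face" vs. "not a face"
)

# Fake training data: 100 examples with 64 features each, plus 0/1 labels.
inputs = torch.randn(100, 64)
labels = torch.randint(0, 2, (100,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):          # the "teaching itself" loop, in miniature
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()             # adjust the weights to reduce the error
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```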

But today’s AI capacity is still narrow and specific.  It’s basically brute-force number crunching, largely ignoring the human mind’s interests, desires and nuance.  In other words, computers still cannot truly “infer, judge and decide” in the conventional human sense.

But AI is powerful enough to make a real difference in human life.  With AI, doctors can spot cancers in medical images better than ever before.  Speech recognition on smartphones will bring the Internet to the illiterate in developing countries.  Digital assistants can parse data, from the academic to the financial, finding nuggets of value.  Wearable computers will change how we view the world.

On the other side of things, AI brings new power to the apparatus of state security, for both autocracies and democracies.  Billions of conversations are monitored regularly utilizing AI (see: NSA), and faces can be scanned and identified quickly for risks to security.

But these are not the things about which the luminaries in our title worry.  No, they’re worried about a more apocalyptic threat: that of “autonomous machines with superhuman cognitive capacity and interests that conflict with those” of people today.  For example: a car that drives itself is a boon to society; a car with its own ideas about where to go, perhaps not so much.  Shades of 2001’s HAL, perhaps.

But AI can be developed safely, the Economist article argues.  “Just as armies need oversight, markets are regulated and bureaucracies must be transparent and accountable,” so AI systems need to be open to scrutiny.  Because designers cannot foresee every contingency, there must also be an on-off switch.  All of this can be done without compromising progress: from nuclear weapons to traffic laws, humans have “used technical ingenuity and legal strictures to constrain other such powerful innovations.”  Such diligence must be applied every bit as carefully and judiciously in the burgeoning world of AI.

Autonomous, non-human intelligence is so compelling and potentially so rewarding that its perils, real as they are, should not be allowed to obscure the huge benefits to come from AI.  But those perils must be managed.

 

Read Full Post »