
Archive for the ‘Software, Technology, and Wow I Didn't Know That’ Category

If we needed further proof of just how far and fast the world of computing is changing, we now have the first ‘official’ recognition of the power of cloud computing in Microsoft’s recent business reorganization.  Microsoft is essentially “downgrading Windows,” in the words of Jay Greene in an article in the March 30th edition of The Wall Street Journal.  The firm has announced that Terry Myerson, who ran the Windows business, is leaving the company.  His division, More Personal Computing, which includes Windows, saw its revenues increase by 2% (to more than $12 billion), while Microsoft’s Azure cloud platform grew at a whopping 41% clip.

For over 40 years, Microsoft and Windows have been nearly synonymous, notes Greene.  Now, Microsoft is reorganizing its business around its growing Azure cloud-computing operations and its Office productivity business.  It’s part of CEO Satya Nadella’s willingness to shift Microsoft’s focus away from its traditional roots (e.g., Windows) as personal computing gradually migrates to smaller, more mobile devices and the web.

Microsoft has grown its Azure operation into the Number 2 cloud hosting operation in the world, behind only Amazon Web Services.  Meanwhile, its Office and Dynamics business software operations are growing rapidly and are already multi-billion dollar businesses.  (Disclaimer: PSSI is an authorized Microsoft Dynamics partner.)

The Journal reports as well that Microsoft is breaking Windows into “pieces.”  The platform technology upon which partners and ISVs build devices, apps and services will fall under the Azure business line run by Scott Guthrie, in a group called the Cloud + AI Platform, which will also include the augmented reality business built around Microsoft’s HoloLens device and the AI (Artificial Intelligence) business.  Another division, known as Experiences & Devices, will include Microsoft’s effort to develop new features for Windows, notes the Journal.

It’s worth noting that today some version of Windows runs on more than 1.5 billion devices worldwide, and it still accounts for 42% of Microsoft’s revenues.  That makes the recent decision all the more sweeping.  Not that long ago, Windows was at the heart of the Justice Dept.’s monopoly suit against Microsoft, which sought to break up the company and which the company eventually settled by signing a consent agreement.

Speaking of this newest development, Brad Silverberg, who ran the Windows division back in its seminal Windows 95 heyday, noted of the change engineered by Mr. Nadella that “He recognizes the world for what it is, not what it used to be.”

 

Read Full Post »

Did you know that one of the largest “hack” attacks in internet history occurred in 2016 when signals generated by tens of thousands of baby monitors, webcams and like devices across America and Europe were hacked in a way that took down broad swaths of the web?

Yes, baby monitors.  These simple internet-connected devices lack the security of your PC or phone, which makes them vulnerable to attack.  And there are no fewer than eight million of these devices in existence, according to editors at Bloomberg BusinessWeek.

A fellow named Louis Parks, who runs a small Connecticut company called SecureRF Corp., says he has the answer.  His firm sells software aimed at safeguarding the IoT (Internet of Things) – in a really efficient fashion.  So efficient, in fact, that his software runs cleanly on some pretty weak hardware.  It’s all in the math, he says, which “allows us to work with smaller numbers and simpler processes.”

Apparently, it’s a lot of math.  Most security relies on exchanges of public and private “keys,” those very large numbers that are used to generate shared secret codes that authenticate that you are who you say you are, and which encrypt modern-day communications.
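To make that standard key-exchange idea concrete, here is a minimal toy sketch of a Diffie-Hellman-style exchange in Python.  It is purely illustrative and is not SecureRF’s method: the tiny textbook values p = 23 and g = 5 stand in for the very large numbers real systems use, and the variable names are our own.

```python
import secrets

# Toy Diffie-Hellman-style key exchange.  Real systems use primes hundreds of
# digits long (or elliptic curves); p = 23, g = 5 is the classic textbook pair.
p, g = 23, 5

# Each side picks a private key: a random number it never shares ...
alice_private = secrets.randbelow(p - 2) + 1
bob_private = secrets.randbelow(p - 2) + 1

# ... and publishes a public key it can safely send over the open internet.
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Each side combines its own private key with the other's public key.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

# Both arrive at the same shared secret without ever transmitting it.
assert alice_shared == bob_shared
print("shared secret:", alice_shared)
```

The modular arithmetic on big numbers in those last steps is exactly the kind of work that taxes a cheap, low-power chip, which is where SecureRF claims its smaller-number math pays off.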

It turns out that many smart devices (IoT things) are easy to hack because they “don’t have the battery life to handle powerful chips, and they struggle to use standard keys.”  Instead, they rely on passwords that don’t secure traffic between themselves and the internet.

Thanks to its sophisticated underlying math, SecureRF’s software requires calculation of only an 8-bit number to provide secure encryption, versus the 256 digits required with standard software.  The benefit, it says, is that its security software can then run 100 times faster – and on lower-power chips – than conventional software, all while using just half the memory.  The result is the ability to run securely on far less security-sophisticated devices.  Like baby monitors.

SecureRF has licensed its technology to others, like Intel and ARM.  They’re focused less on the chip itself, and more on the communication between chips.  They’ve quietly spent over ten years researching ways to defend various types of mobile communications and the devices that depend on them, including RFID and near-field communications.  They shifted their attention to IoT devices in recent years and are counting on the fees paid by chip makers – starting at just a few cents per chip.

That’s an added layer of “protection” for baby monitors for which their creators likely never envisioned the need.  And it’s all in the math.

Read Full Post »

If you haven’t already, be sure to read our prior post before this one.  It’s the brief historical story of Lena, the early quality standard for algorithms that enable the transfer of digital images, and the precursor of today’s ubiquitous JPEG picture-file format.  If you know Lena’s original story, then please read on.  (Our post is excerpted from the work of Emily Chang of Bloomberg BusinessWeek and her new book “Brotopia: Breaking Up the Boys Club of Silicon Valley.”)

When Deanna Needell, now a math professor at UCLA, first encountered “Lena” in a computer science class, she quickly learned that the original image was of a nude model (culled from the pages of Playboy in 1972), and it made her realize, “Oh, I am the only woman here.  I am different.”  Needell says, “It made gender an issue for me where it wasn’t before.”

Her male colleagues, predictably, didn’t see the big deal.  Said one, “when you use a picture like that for so long, it’s not a person anymore, it’s just pixels,” in a statement that naively laid out the problem of sexism that Needell and her colleagues tried to point out.  But with so few women among the ranks of the programming class, it’s no surprise.

It wasn’t always that way.

As we’ve pointed out previously in a post here, the early days of programming were predominantly fueled by women.  In that early, post-WWII era, programmers were mostly women, and the work was considered more clerical in nature, and thus ‘better suited’ to women.  Only later, when the economy turned down and computers looked to be a key tool of the future, did men begin to enter the programming ranks, eventually even pushing women out as the image of computers and programming pivoted to something more suited to “introverts and antisocial nerds.”

In one pivotal study in the 1960s, two psychologists, William Cannon and Dallis Perry, profiled 1,378 programmers, of whom, by the way, only 186 were women.  Their results formed the basis for a “vocational interest scale” they believed could predict “satisfaction” – and thus, success – in the field.  They concluded that people who liked solving various types of puzzles made for good programmers, and that made sense.

But then they drew a second conclusion, based, remember, on their mostly male sample.  Happy software engineers, according to Ms. Chang, “shared one striking characteristic”: They don’t like people.  The psychologists concluded in the end that programmers “dislike activities involving close personal interaction and are more interested in things than people.”  As Ms. Chang pointedly notes, “There’s little evidence to suggest that antisocial people are more adept at math or computers.  Unfortunately, there’s a wealth of evidence to suggest that if you set out to hire antisocial nerds, you’ll wind up hiring a lot more men than women.”

So while in 1967 Cosmopolitan was letting it be known that “a girl senior systems analyst gets $20,000 – and up!” (equivalent to $150,000 today) and heralded women as ‘naturals’ at computer programming, by 1968, Cannon’s and Perry’s work had tech recruiters noting the “often egocentric, slightly neurotic, bordering on schizophrenic” demeanor of what was becoming a largely male cadre of coders, sporting “beards, sandals and other forms of nonconformity.”

Tests such as these remained the industry standard for decades, ensuring that the ‘pop culture trope’ of the male nerd eventually wound up putting computers on the boys’ side of the toy aisle.

By 1984, the year of Apple Inc.’s iconic “1984” Super Bowl commercial, the percentage of females earning degrees in computer science had peaked at 37%.  As the number of overall computer science degrees increased during the dot-com boom, notes Chang, “far more men than women filled those coveted seats,” and the percentage of women in the field would dramatically decline for the next 25 years.

We’ll finish out this series of posts with a look at the state of women in tech today and what that might mean for tomorrow, so stay tuned.

 

 

Read Full Post »

Emily Chang is a journalist and weekly anchor of Bloomberg BusinessWeek, and the author of “Brotopia: Breaking Up the Boys Club of Silicon Valley.”  Recently, she penned an article there about an old (in tech terms) digital artifact by the name of Lena Soderberg.  Lena first became famous in November 1972 when, as Lenna Sjooblom, she was featured as a centerfold in Playboy magazine.  That spread might have been the end of it but for the fact that researchers at the Univ. of Southern California computer lab were busy trying to digitize physical photographs into what would eventually become the JPEG (or .jpg) format we all know from Internet images today.

According to the lab’s co-founder, William Pratt, now 80, the group chose Lena’s portrait from a copy of Playboy brought to the lab by a student.  The team needed suitable photos on which to test their algorithms for compressing large image files so the files could be transferred digitally between devices.  Apparently their search led them to Lena, the 21-year-old Swedish centerfold.  Go figure.

Lena ended up becoming famous in early engineering circles, and some refer to her as “the first lady of the Internet.”  Others see her as Silicon Valley’s original sin – the larger point of Ms. Chang’s article – but that’s a topic for another post.

Apparently, Lena’s photo was attractive from a technical perspective because it included, according to Pratt, “lots of high frequency detail that is difficult to code.”  That would include, apparently, her boots, boa and feathered hat.

According to Ms. Chang, for the next 45 years, Lena’s photo (seen at the top of this post), featuring her face and bare shoulder, served as “the benchmark for image processing quality for the teams working on Apple Inc.’s iPhone camera, Google Images, and pretty much every other tech product having anything to do with photos.”

To this day engineers joke that if you want your image compression algorithm to make the grade, it had better perform well on Lena.
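To show what “performing well on Lena” means in practice, here is a small Python sketch using the Pillow and NumPy libraries.  It is our own illustration, not any lab’s actual benchmark, and the file name test_image.png is a placeholder for whatever detail-rich reference photo you choose: it saves the image at several JPEG quality settings and measures how far each compressed copy drifts from the original using PSNR (peak signal-to-noise ratio).

```python
import io
import numpy as np
from PIL import Image

def psnr(original: np.ndarray, compressed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means less visible degradation."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Any detail-rich test photo will do; "test_image.png" is a placeholder path.
reference = Image.open("test_image.png").convert("RGB")
ref_pixels = np.asarray(reference)

for quality in (10, 50, 90):
    buf = io.BytesIO()
    reference.save(buf, format="JPEG", quality=quality)   # lossy compression
    buf.seek(0)
    compressed = np.asarray(Image.open(buf).convert("RGB"))
    print(f"quality={quality:3d}  size={buf.getbuffer().nbytes:7d} bytes  "
          f"PSNR={psnr(ref_pixels, compressed):.2f} dB")
```

A photo full of fine, high-frequency detail loses PSNR quickly as the quality setting drops, which is what made an image like Lena’s, with the detail Pratt describes, such a demanding yardstick.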

To a lot of male engineers, then, Lena became an amusing historical footnote.  But to their female peers, it was “just alienating.”  And that has a lot to do with some of the inborn gender biases that permeate the tech industry to this day, where the majority of employees are still male.

This is drawn from a much longer essay by Emily Chang, which we’ll try to sum up in a succeeding post.  Stay tuned…

Read Full Post »

Mike Lazaridis is justly famous and wealthy for being the guy who co-invented the BlackBerry, the first ‘must-have’ personal digital assistant.  And Lazaridis says he “won’t be iPhoned again.”

With colleague Doug Fregin, Lazaridis has poured nearly half a billion dollars into projects involving quantum computing over the past 20 years, and now runs a venture company that supports the effort.  Citing past failures and the scope and breadth of computing’s next frontier, quantum computing, he notes that “you have to build an industry.”  Being nimble, staying close to customers and constantly moving forward “can’t be done with just one company,” says Lazaridis.

Companies including Google, IBM and others are also chomping at the quantum bit, and so Lazaridis has chosen to make his well-placed, narrower venture bets on companies and technologies that could be commercialized in just a few years.

That’s important because quantum computing (as we’ve written about in this blog several times before) is tricky.  While classical computers handle their information as bits that are either 1s or 0s, a quantum bit, or qubit, can be both a one and a zero at the same time, enabling a level of multi-tasking previously unthinkable.  The potential, in fields as diverse as weather, aviation and warfare, is enormous.  But quantum as it exists today is still on shaky ground.  The existing number of quantum computers is small, and as Bloomberg BusinessWeek reports in a recent article, “they become error-prone after mere fractions of a second – and researchers say perfecting them could take decades.”
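For a feel for what “both a one and a zero at the same time” means, here is a tiny NumPy sketch.  It is our own illustration, not tied to any vendor’s hardware: it applies a Hadamard gate to a qubit that starts in the definite 0 state and reads off the resulting measurement probabilities.

```python
import numpy as np

# A qubit's state is a 2-element vector of complex amplitudes.
ket_zero = np.array([1.0, 0.0])            # the classical "0" state

# The Hadamard gate puts a definite 0 into an equal superposition of 0 and 1.
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)

state = hadamard @ ket_zero                # now (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(state) ** 2
print("P(measure 0) =", probabilities[0])  # 0.5
print("P(measure 1) =", probabilities[1])  # 0.5
```

Two classical bits hold exactly one of four values at any moment; two qubits carry amplitudes for all four at once, which is the source of the multi-tasking described above.  Keeping those fragile amplitudes from collapsing into errors is also why today’s machines misbehave after mere fractions of a second.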

Hence the importance of aiming carefully.  Several of Lazaridis’ investments have come to market, or are close.  Isara Corp. sells security software it says can block quantum hacks and projects sales of $3M in 2018. High Q Technologies claims that by year-end it will be selling quantum sensors 100,000 times more sensitive than the tools pharmaceutical companies use today to develop drugs. (Our featured photo today is of a device used to test the superconducting films used on silicon at the atomic level.)

Lazaridis has also teamed up with former BlackBerry teams to connect quantum computers with conventional ones, in order to make quantum more accessible to a wider audience.  Those efforts will still need to prove themselves viable as businesses, but the very idea reinforces the industry’s conviction that the current state of computing will not remain the status quo, and that the future of computing is quantum.

It’s a race, he knows.  One driven by venture capital and the ability to put one’s money where one’s mouth is.

 

Read Full Post »

Last year a U.S. intelligence sector contest — to see who could develop the best AI algorithm for identifying surveillance images among millions of photos — was won, not by an American firm, but by a small Chinese company.  The result served perhaps as a warning shot that the race to dominate the realm of artificial intelligence, or AI, is on – and American victory is by no means assured.  (This post is based on reporting by 3 reporters in the Jan 23rd Wall Street Journal.)

The Chinese company, Yitu Tech, beat out 15 rivals thanks to one big advantage: its access to Chinese security databases covering millions of people, on which it could train and test its AI algorithms.  Privacy rights in the U.S. make that particular sort of application harder to develop, as American companies lack access to the enormous troves of such data that are common in China and often assembled with the aid of government agencies.

AI algorithms depend on vast troves of data on which to develop and test their not-always-obvious hypotheses and angles.  Microsoft chief legal officer Brad Smith notes that “The question is whether privacy laws will constrict AI development or use in some parts of the world.”

The U.S. leaders in AI include Apple, Alphabet (Google), Amazon, Facebook and Microsoft, and they’re up against Chinese giants like Alibaba, Baidu and Tencent Holdings.  On the academic side, the U.S., particularly in the regions surrounding Silicon Valley, holds a strong lead, in terms of budgets and in patents filed in areas like neural networks and something called unsupervised learning.  The U.S. also outnumbers China in AI companies by about two to one.  As well, current spending on AI in the U.S. is massive, with R&D investments of over $13 billion each from Microsoft and Alphabet.  By comparison, Alibaba only recently pledged to triple its R&D spending to $5 billion over the next three years.
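Since “unsupervised learning” may be an unfamiliar term, here is a minimal Python sketch using scikit-learn and NumPy; the data is synthetic and the parameters are our own choices, purely for illustration.  The point is that the algorithm is handed unlabeled data and finds structure (here, two clusters) on its own.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic, unlabeled data: two blobs of points in the plane.
rng = np.random.default_rng(seed=0)
blob_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
blob_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
data = np.vstack([blob_a, blob_b])

# Unsupervised learning: no labels are supplied; k-means discovers the groups itself.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

print("cluster centers:\n", model.cluster_centers_)
print("first five assignments:", model.labels_[:5])
```

The more raw data an algorithm like this can chew on, the more structure it can find, which is one reason access to vast troves of data matters so much in the AI race.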

But China plans to close the gap with a new government-led AI effort to lead the field by 2030 in areas including medicine, agriculture and the military.  Thus, AI startups in China are seeing a tenfold rise in funding over last year.  PwC expects China to catch up and reap about 46% of the $15.7 trillion it expects AI to contribute to global output by 2030, with North America a distant second at about half that percentage.

In the West there is a general reluctance to give companies wide use of customers’ data.  Even though U.S. laws and restrictions are still weaker than those in Europe, where privacy rights have become even more contentious and tougher new laws are scheduled to take effect in May, some experts think this reluctance could hamper U.S. AI development efforts and allow Chinese companies to pull ahead in the race for global AI dominance.

Regulation “could be a help or a detriment,” according to former Google and Baidu exec Andrew Ng, who recently founded an AI startup.  He adds that “Despite the U.S.’s current lead in basic AI research, it can be easily squandered in just a few years if the U.S. makes bad decisions.”

The race is indeed on.

Read Full Post »

According to The Wall Street Journal (1/17/18), Google is expanding its already large network of undersea cables to access new regions around the world not currently well served by its competitors, as well as to give itself some rerouting capabilities if a region fails or gets overloaded.  It’s all part of the now unstoppable growth of cloud computing services, as well as a play to keep up with its two greatest competitors, Amazon and Microsoft.

Google VP Ben Treynor professes that he “would prefer not to have to be in the cable-building consortium business” but found there weren’t a lot of other options.

Google parent Alphabet already owns a massive network of fiber optic cables and data centers, handling about 25% of the world’s internet traffic.  These facilities allow Google to control its data-intensive software without having to rely on the big telecommunications providers.

After a decade of construction, Google will soon have 11 underwater cables around the world.  They’re used to “refresh search results, move video files and serve cloud computing customers” around the globe.

And at that, Google currently ranks third in cloud-computing revenue, behind Amazon and Microsoft, in the biggest tech race on the planet right now.  Billions of dollars of revenue are at stake annually, the Journal points out, as companies increasingly move various data operations to the cloud.

Currently, its longest cable stretches 6,200 miles from Los Angeles to Chile.  But Google has also teamed up with others, like Facebook, for its latest build-out.  They plan to share capacity on a 4,500 mile cable from the east coast of the U.S. to Denmark, with an additional terminal in Ireland, thus increasing its bandwidth across the Atlantic.

Another cable, of 2,400 miles, will run from Hong Kong to Guam, hooking up with cable systems from Australia, East Asia and North America.

The internet build-out continues, as does the march to cloud dominance.  Only today, you’ve got to have a few billion dollars in your pockets to play.

 

Read Full Post »

Older Posts »