Aug 20, 2012
 

Re-posted from Time Techland

Tim Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market intelligence firm in Silicon Valley. He contributes to the “Big Picture” opinion column that appears every Monday on Techland.

We are witnessing the greatest shakeup in the world of computing that has ever taken place.

From a historical perspective, we started out with mainframes, moved on to mini-computers and in the early ’80s entered the era of the personal computer. Over a period of about 50 years, these three kinds of devices defined what computing was all about.

But the conventional wisdom that drove these three eras of computing, which moved us from distributed computing to personal computing, is being shaken at its core as tablets and smartphones take center stage and are poised to redefine computing again.

But instead of evolving the concept of personal computing, these two form factors are actually ushering in a new era of “personalized computing.” And they’re taking the industry from selling hundreds of millions of computing devices a year to now selling over 1 billion annually.


As someone who gets paid to predict the future of personal computing and consumer electronics, I have been closely watching the disruptive nature of the tablet. Tablets have forced traditional PC vendors to try to regroup, since their market for desktops and laptops is starting to flatten out. Yes, the PC market can still grow, but it now grows in single digits each year.

On the other hand, the tablet market is growing as much as 200% a year, and smartphones are growing at about 50% annually. The PC vendors now struggle to enter the tablet and smartphone markets in order to jump on the personalized computing bandwagon. They want to extend their strong positions in personal computing by becoming part of the personalized computing revolution.

Making the transition from personal computing company to personalized computing company has been very difficult for every vendor except Apple, which has about a three-year lead on traditional PC vendors and has solidified itself as the market leader. An even more ominous problem for PC vendors is that tablets are now starting to eat into demand for PCs, as users realize that a tablet can handle up to 80% of what they do on a PC, leaving the PC only for what I call “heavy lifting”: tasks such as advanced spreadsheets, editing documents and creating images and graphics that can’t be done well on a tablet.


But an even more disruptive technology is in the works, one that will take personalized computing to the next level and set up a computing world where personalized computing devices go beyond the ones we know today: the cloud, which will deliver personalized, distributed computing in the very near future.

There is a rather interesting irony to this new development. When computing started out, it was based on mainframes being the center of the computing experience. Users accessed the mainframes from what we referred to as “dumb terminals.” All of the intelligence, data and data processing were done on the mainframes; these terminals just accessed the content.

Now, history is repeating itself with the cloud. The cloud is starting to become the “mainframe” in the sky. Even though we have “smart terminals” in the form of PCs, tablets and smartphones, the cloud is where much of our digital stuff will reside. The cloud allows us to access our digital stuff and keep it synchronized across our various personalized computing devices.
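
To make “keep it synchronized” concrete, here is a minimal sketch of one way such syncing can work: a toy last-writer-wins rule in Python. It is purely illustrative; real services like iCloud use far richer protocols, and every name in it is hypothetical.

    # Toy last-writer-wins sync between one device and a personal cloud.
    # Illustrative only: names and the file layout are hypothetical.
    def sync(device, cloud):
        """Each store maps filename -> (modified_time, content).
        Afterwards, both stores hold the newest copy of every file."""
        for name in set(device) | set(cloud):
            d, c = device.get(name), cloud.get(name)
            if d is None or (c is not None and c[0] > d[0]):
                device[name] = c   # cloud copy is newer (or new): pull it down
            elif c is None or d[0] > c[0]:
                cloud[name] = d    # device copy is newer (or new): push it up

    phone = {"photo.jpg": (100, "sunset, first try")}
    cloud = {"photo.jpg": (200, "sunset, edited"), "song.mp3": (50, "tune")}
    sync(phone, cloud)
    print(phone)  # both files now on the phone, with the edited photo winning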

Perhaps a more accurate way to think about this is that in the future, people will have a lot of screens, each tied to their own cloud, so they can access all of their personal data and content on whichever screen is handiest at the time. If I am in the living room watching TV, my television could become the smart terminal of the moment that I use to access my pictures, movies and anything else of mine that is in my cloud. If I’m in the kitchen, I might use the screen on my refrigerator. And if I’m on the road, I could pull over and use the car’s Internet-connected screen to fetch whatever I need from my personal cloud.

The future of personalized computing will not be limited to our PCs, tablets or smartphones. In our digital future, we will have all types of screens in our lives that are tied to the cloud and can be used to access our digital stuff. In fact, with companies like Corning working on flexible displays, I fully expect to someday have a screen on my arm that’s like a bracelet. The screen would connect to the Internet and could even be smart, but more likely it would serve as another gateway to my personal cloud. Google’s Project Glass represents another screen that I could use to access my stuff in the cloud as needed.

In this future of personalized computing, we will have all types of screens in our lives, and the most important one will be whichever is closest at hand when we need access to our cloud.

Here again Apple has a major lead on its competitors because of its iCloud solution, which allows me to store my pictures, music, movies and more in the cloud and then access them on any other connected Apple device I have available to me. Amazon and Google are taking similar paths that allow you to keep your content in the cloud and access it from whatever screen you have available at the time you need it. However, they are both far behind in making their cloud experiences as easy and seamless as Apple’s solution already is today.

While the cloud still has a lot of maturing to do, make no mistake: it sits at the heart of the future of personal computing. The cloud will be tied to all types of screens, and we’ll access our digital stuff on the screen that’s closest to us at the time.

Within the next three to five years, we will make a dramatic move from personal to personalized computing. While PCs, tablets, smartphones and connected TVs may be the main devices we tie to our personal clouds, they will represent just four of the screens we’ll be able to use. Over time, we’ll see a whole crop of new screens emerge that will tie us each to our personal cloud to make personal computing truly personalized.



Read more: http://techland.time.com/2012/08/13/the-future-of-personal-computing-cloud-connected-screens-everywhere/

Aug 17, 2012
 

Published on August 14, 2012 at 6:07

Is The Future Of Cloud Computing Open Source? A Few Things To Consider

Companies are embracing cloud computing solutions because of their flexibility, scalability and cost-effectiveness, and those who have successfully integrated the cloud into their infrastructure have found it quite economical. They can expand and contract, adding and removing services as needed, which gives them a lot of control over the resources they use and the funds they spend on those resources. This highly controllable environment not only cuts the cost of services, but also saves the money a company would otherwise spend on its own infrastructure.
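
To illustrate that elasticity, here is a toy Python sketch of the kind of scale-up/scale-down decision a cloud platform automates. The thresholds and names are made up for illustration and are not any provider’s actual API.

    # Toy autoscaling rule: the expand-and-contract decision a cloud
    # platform automates. All thresholds and numbers are made up.
    def desired_servers(current, utilization, low=0.3, high=0.7):
        """Return how many servers to run next, given average CPU use."""
        if utilization > high:                 # overloaded: add capacity
            return current + 1
        if utilization < low and current > 1:  # mostly idle: shed capacity
            return current - 1
        return current                         # within band: leave as-is

    n = 4
    for load in (0.9, 0.8, 0.5, 0.2, 0.1):
        n = desired_servers(n, load)
        print("load %.0f%% -> run %d servers" % (load * 100, n))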

Replacement of Personal Computers with Personal Clouds

Cloud computing is not only becoming popular in business, but also among individual consumers. With the passage of time, personal computers are being replaced by personal clouds, and more and more companies are offering personal cloud services. People prefer to store their images, videos and documents online, both as a backup and to make them secure. Storing data on personal clouds makes it available anytime, anywhere. You just need a computing device and an Internet connection, and you can access all your photos, videos and documents.
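
As a concrete (and purely illustrative) example, here is what that backup-and-retrieve pattern looks like against Amazon S3 using the boto library; the bucket name is hypothetical and AWS credentials are assumed to be configured already.

    # Illustrative personal-cloud sketch using Amazon S3 via boto.
    # "my-personal-cloud" is a hypothetical bucket; credentials are
    # assumed to be configured in the environment.
    import boto

    conn = boto.connect_s3()
    bucket = conn.get_bucket("my-personal-cloud")

    # From the laptop: push a photo into the cloud.
    key = bucket.new_key("photos/vacation.jpg")
    key.set_contents_from_filename("vacation.jpg")

    # Later, from a phone or any other connected device: pull it back.
    bucket.get_key("photos/vacation.jpg").get_contents_to_filename("vacation.jpg")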

Stability, Scalability and Reliability of Open-Source Software

Open-source software is becoming popular at the enterprise level because of its stability, scalability and reliability. Companies love to use open-source technologies because they are highly customizable, secure, reliable and accountable. With proprietary software, we are highly dependent on the software company for development and support. But with open source, we can find extensive support from developers across the world, and we can tweak the software to our needs. Just hire a team of developers, and there you go.

Lessons Learned from Linux and Android

The Linux operating system running on Web servers is a great example of the success of open-source software. Linux’s customizability has made it popular among the developer community, and the openness of Linux servers is a large part of why they are so stable and scalable. Enterprise-level applications love to run on Linux servers. We can learn a similar lesson from the Android mobile operating system: when iOS dominated the huge mobile market, no one thought that an open-source mobile operating system could snatch away such a large share of it. These two operating systems prove that enterprises and individual consumers love transparency and openness in software.

Why the Future of Cloud Computing Is Open-Source

Just as the Web and mobile spaces are embracing open-source technologies, the cloud space will soon embrace open-source software, too. Projects such as OpenStack are playing a great role in making the cloud space open-source. OpenStack is a project founded by NASA and Rackspace Cloud to develop an open-source cloud computing operating system that can run on standard hardware, allowing anyone to deliver cloud computing services to others. Many renowned companies, such as Dell, AMD and Intel, are also supporting the project, and it has formed a strong community of individual developers and organizations around the world. This eagerness of technology giants, small organizations and individual developers alike to build a massively scalable open-source cloud operating system shows where the cloud space is headed.
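
To give a flavor of what delivering such services looks like, here is a minimal sketch that boots a virtual machine on an OpenStack cloud with the python-novaclient library; the credentials, auth URL and image/flavor names below are placeholders, not a real deployment.

    # Minimal sketch: boot a VM on an OpenStack cloud with the
    # python-novaclient library (the v1.1 API of the era). All
    # credentials, URLs and resource names are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client("demo",            # username
                         "secret",          # password
                         "demo-project",    # tenant/project
                         "http://cloud.example.com:5000/v2.0/")

    image = nova.images.find(name="ubuntu-12.04")   # an OS image
    flavor = nova.flavors.find(name="m1.small")     # a hardware size
    server = nova.servers.create("my-first-vm", image, flavor)
    print(server.id, server.status)                 # e.g. BUILD, then ACTIVE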

By Seth Bernstein



Jul 23, 2012
 

By David Eagleman, Special to CNN, July 10, 2012

(CNN) — The Internet was designed to be robust, fault-tolerant and distributed, but its technology is still in its infancy.

The fact that the Web has not stopped functioning in its initial decades sometimes encourages us to assume that it never will. But like any system, biological or man-made, the Internet has the potential to fail.

1. Space weather

When you think about Web surfing, you probably don’t worry about what’s happening on the surface of the sun 92 million miles away. But you should. Solar flares are one of the most serious threats to our communication systems.

Consider satellite failures. One afternoon in 1998, the Galaxy IV, a $250 million satellite floating 35,000 kilometers above the planet, suddenly spun out of control. The main suspect was a solar flare: the sun was acting up at the time, and several other satellites (owned by Germany, Japan, NASA and Motorola) all failed at the same moment.

The effects were instant and worldwide. Eighty percent of pagers instantly went down. Physicians, managers and drug dealers all across the United States looked down and realized they were no longer receiving pages. NPR, CBS, Direct PC Internet and dozens of other services went down. It is estimated that in recent years at least 12 satellites have been lost due to the effects of space weather.

But it’s not just satellites that we have to worry about. When a massive solar flare erupts on the sun, it can cause geomagnetic storms on the Earth. The largest solar eruption recorded so far was in 1859. Known as the Carrington flare, it sent telegraph wires across Europe and America into a sparking frenzy.

Since that time, the technology blanketing the planet has changed quite a bit. If we were to get another solar flare of that size now, what would happen? The answer is clear to space physicists and electrical engineers: it would blow out transformers and melt down our computer systems. In a smaller disruption in 1989, an electromagnetic storm knocked out power throughout most of Quebec and halted the Toronto stock market for three hours.

A major solar event could theoretically melt down the whole Internet. What earthquakes, bombs and terrorism cannot do might be accomplished in moments by a coronal mass ejection.

Given our dependence on the communication systems of our planet, both satellite- and ground-based, this is not simply a theoretical worry. The next major geomagnetic storms are expected at the peak of the next solar sunspot cycle in mid-2013, so hang on tight.


2. Cyberwarfare


Wars of the future will be fought less by rugged soldiers in the field and more by smart kids perched in front of computers slamming energy drinks. As our dependence shifts onto the Net, so do our vulnerabilities.

This future can already be detected in the tight relationship between corporeal conflicts and cyber attacks. When one examines the physical conflicts between India and Pakistan, the Israelis and Palestinians or the parties in the collapse of Yugoslavia, the escalation of real-world violence is immediately mirrored by cyber-space warfare.

The main targets in cyberwar are largely military, but increasingly large multinational corporations serve just as well. Take one of them down, even temporarily, and you have done more damage to your enemy’s economy than scores of soldier deaths would.

Since the beginning of the computer era in the 1960s, there have been computer viruses: programs that latch onto a host to reproduce themselves and send out new copies. Just as in biology, as computers have evolved in sophistication, viruses have co-evolved with them. And the viruses’ cousins, worms, do not even need a host but can multiply themselves over networks.

Given the defenses in place, are these parasites only a minor theoretical concern? No. Consider the Stuxnet worm that raised its head in 2010. This worm zigzagged its way into Iranian industrial systems, reprogrammed them, hid its tracks and wrecked the factory operations. Seemingly coming from nowhere, Stuxnet introduced itself as a destructive, unstoppable herald of what’s to come.

It will surprise no one that cyberwarfare of the future will involve targeting not only military and industrial targets but Internet connectivity for the general population. If you want to take down your enemy, start by shredding his Net.


3. Political mandate

In the face of the 2009 post-election riots in Iran, the government there shut down the Internet for 45 minutes, presumably to set up filtering of YouTube, Twitter and other sites. Egypt did the same during its revolution of early 2011. China is actively pursuing the capability to shut down its own Internet this way.


But it’s not just countries like Iran and China that think about this kind of control over the Web. On June 24, 2010, a Homeland Security committee in the U.S. Senate approved a bill giving the president authority to wield an “Internet kill switch.” The bill, Protecting Cyberspace as a National Asset Act (PCNAA), proposed to give the president “emergency authority to shut down private sector or government networks in the event of a cyber attack capable of causing massive damage or loss of life.”

The “kill switch” provision was removed from the version of the cybersecurity bill that’s before the current Congress.

It’s probably just as well. Almost unanimously, Internet security analysts feel that shutting down the Web would inevitably do more harm than good, given our predicted level of dependency on it in time of war for news, communication with loved ones and crisis information aggregation.

Security guru Bruce Schneier identifies at least three problems with the shutdown idea. First, the hope of building an electronic line of fortifications is flawed because there will always be hundreds of ways for enemies to get around it. No nation or legal decree can plug all the holes.

The second major problem is that we will be entirely unable to predict the effects of such an attempted shutdown. As Schneier puts it: “The Internet is the most complex machine mankind has ever built and shutting down portions of it would have all sorts of unforeseen ancillary effects.”

The third major problem is the security hole it exposes. Once a domestic Internet kill switch has been built, why would a cyberattacker concentrate his efforts on anything else?

Given that the people who could use the Internet for good in a crisis will presumably outnumber the bad guys, it is probably best not to cut off our heavy dependence on the Web just as things are going bad. And given that a recent survey by Unisys found 61% of Americans approve of the Internet kill-switch concept, this issue will require constant vigilance.

Tell your congressmen: Back away from the switch, slowly.


4. Cable cutting

Although satellites are used for some Internet traffic, more than 99 percent of global Web traffic depends on deep-sea networks of fiber-optic cables that blanket the ocean floor like a nervous system. These are a major physical target in wars, especially at special choke points in the system. And this is not simply a theoretical prediction: the underwater battles are already well underway.

As much as three-fourths of the international communications between the Middle East and Europe have been carried by two undersea cables: SeaMeWe-4 and FLAG Telecom’s FLAG Europe-Asia cable. On January 30, 2008, both of these cables were cut, severely disrupting Internet and telephone traffic from India to Egypt.

It is still not clear how the cables were cut, or by whom. And for that matter, it is not clear how many cables were cut: some news reports suggest that there were at least eight. Initial speculations proposed that the cuts came from a ship anchor, but a video analysis soon revealed there were no ships in that region from 12 hours before until 12 hours after the slice.

Those cables were only the beginning. A few days later, on February 1, 2008, an undersea FLAG Falcon cable in the Persian Gulf was cut 55 miles off the coast of Dubai. On February 3, a cable between the United Arab Emirates and Qatar was cut. On February 4, the Khaleej Times reported that not only these cables but two more had been cut as well: a Persian Gulf cable near Iran and a SeaMeWe-4 cable off the coast of Malaysia.

These cuts led to widespread outages of the Internet, especially in Iran. Suspicions that this reflected underwater sabotage derived in no small part from the geographical pattern: almost all the cables were cut in Middle Eastern waters near Muslim nations. Who might have done it? No one knows. But it is known that the U.S. Navy has deployed undersea special operations for decades. In Operation Ivy Bells, for example, Navy divers appear to have swum from submarines to tap an undersea cable in the Kuril Islands.

Whatever the truth behind the incident, we see that if a government or organization wants badly enough to sabotage the telecommunications across a wide swath, it is possible. New deep-sea cables are urgently needed to protect the global economy because businesses worldwide are vulnerable to the targeting of “choke points” in underwater communications.

Whether by terrorists, governments or cyber-pirates, these weak points in the chain should be keeping us all up at night.

May 23, 2012
 

Geneva: India has called on countries to create a democratic Internet governance structure that ensures a balance between private, commercial and public policy interests.

“The ability of the existing internet infrastructure to be used globally for delivering programmes for development requires a free and secure internet,” Dilip Sinha, Permanent Representative of India to the United Nations Office at Geneva, said in a statement.

“Creating a democratic internet governance structure will ensure a balance between private, commercial and public policy interests and address developmental concerns,” he said at the UN Committee on Science and Technology for Development (UNCSTD) open meeting on Enhanced Cooperation pertaining to the Internet on Friday.

India reiterated at the meeting that it believes in the freedom of the Internet and in free deliberations on public policy for Internet governance.

“India is committed to tapping the tremendous potential of cyber space and the tremendous opportunity it provides and for creating a citizen-centric and business-centric environment to connect all human beings to the information highway,” the statement said.

“India wishes to emphasise the need for global coordination to ensure that internet continues to be a free and secure medium for the whole world,” it added.

Source: IBNLive.com.
