Monday, August 31, 2015

My iPhone Wish List

Unless Tim Cook wishes to be a full-time guest on CNBC’s Mad Money, it’s a safe bet that Apple will introduce new iPhones at its keynote event next week. As usual, there is no great shortage of rumors and posturing.

The invitation to the event simply states: “Hey Siri, give us a hint.”

Given this single clue, I suspect there will be a major emphasis on Siri, with speed and query enhancements comprising only a small bit of the change. I would not be surprised if Siri appears on new products, such as an entirely new Apple TV, and quite possibly on select third-party devices. This may be necessary as Apple continues its slow march toward embracing — and overseeing — smart homes and the Internet of Things.

We also know that this keynote will take place in a unique setting — San Francisco’s Civic Auditorium, which seats 7,000. That’s huge. Toss in the fact that it’s going to be streamed not only to iOS devices but Windows 10 users as well, and I have convinced myself that we are going to witness a rather bold, boisterous Apple, one eager to reveal its intentions to be everywhere we are and with us everywhere we go.

I am less confident there will be any glorious new revolutionary product. Rather, we will see the breadth of Apple. I predict there will be so much revealed, in fact, that there’s a small chance that the world’s most profitable product, iPhone, gets shockingly less time on stage than it deserves. Whether that’s true or not, here is a wish list of items that I hope Apple makes come true.

1. Size Matters

iPhone 6S and iPhone 6S Plus seem a given, despite the rather unwieldy names. A bit faster, a bit thinner than their predecessors. By now, this has become boring. It doesn’t have to be. I wish for Apple to take us back to the future and introduce a non-phablet device, one about the dimensions of an iPhone 4S, but with the latest specs, Touch ID, better screen, better camera. Phablets are awesome, but they’re not for everyone.

2. Force Touch

The rumor mill strongly suggests that Force Touch, currently available on Apple Watch and select MacBooks, will now be available on the new iPhones. I hope so. Force Touch senses how hard or light the user is touching the screen, enabling an entirely new palette of controls and customizable options. Touch the screen with force to call up a secondary menu, for example, or possibly to access a new weapons cache in your favorite game. Touch the screen with a gentle tap to get an overview of the weather, but hold down on the cloud icon, say, to reveal the hourly forecast.

Clever developers should make great use of Force Touch. Sensing pressure will no doubt lead to groaning UX errors, but it also has the potential to create new modes of interaction. At the very least, Force Touch should save time by reducing the number of taps or swipes to call up an action.  In fact, if executed properly, Force Touch just might negate the need for a home button. No home button means much more screen real estate. Dare I wish such an outcome?

3. Battery

This one’s simple: I wish for a longer-lasting battery. Unless you’re rocking an iPhone 6 Plus, with its massive 2,910 mAh battery, at some point during the day you’re scoping out a wall socket. It’s 2015 and this needs to end. As other components get thinner, lighter, and as Apple continues to improve manufacturing, a larger battery should be a given, especially now.

Fact is, while we think we use our iPhones all the time, they are often sleeping. That’s no longer the case. Streaming music and soon streaming television, plus the numerous social media and messaging platforms and the growing use of ambient apps that sense the sounds around us, will keep our iPhones in near-constant use. More battery power is critical. Otherwise, it’s like the world’s best off-road vehicle, but with a tiny gas tank. Lots of places you could take it, but you better not.

4. Design

Confession: the iPhone 6 and iPhone 6 Plus always struck me as rather lazy designs. They were Apple’s response to the rapid rise of the phablet. Bigger, yes, but forgettable. Look at the iPhone 4S, for example. It’s beautiful. The 6 series, not so much.

I wish Apple would get rid of the rounded casing, the obtrusive sleep button on the right side, the bulging camera lens. I wish for the new iPhones to be the most beautiful ever.

I also wish that Apple makes them stronger. It’s been rumored that the new iPhones will be constructed with a stronger aluminum body. Let’s hope. Not only would we get to hear Sir Jony Ive say “aluminum” many more times, we could once again place the device in our back pocket without any fears, unfounded or not, of the return of Bendgate.

5. Need For Speed

If the new iPhone isn’t appreciably faster, I’m not sure of the point. I wish — and no doubt so do you — that the new iPhones are more responsive, offer faster downloads, better gameplay, improved graphics, and no lag, even if I’m running Pandora in the background. A new A9 processor is practically a given, as is more RAM, so this wish is not at all far-fetched.

Somewhat less likely is that the new iPhones also include an improved wireless chip, enabling much faster LTE download speeds. A 9to5Mac post is convinced this is going to happen, certain that Apple will incorporate a new Qualcomm chip, thereby increasing download speeds from 150 Mbps to 300 Mbps — a noticeable improvement.

6. Camera

Sure, it’s called iPhone, but it’s much more an iComputer, an iMedia player and, most especially, an always-with-you camera. It’s time for the latest iPhones to have the very best camera on the smartphone market. This is what I wish for. Assume this wish comes true.

Multiple sites have all but confirmed that the new iPhones will include a 12-megapixel camera — a nice leap in resolution over the iPhone 6’s 8-megapixel offering. More megapixels mean a sharper image, one that’s easier to zoom in on and edit. Optical zoom would also be nice. I’m not entirely sure how this could be done inside a smartphone, but it’s my wish list, after all. No harm in asking.

There are also multiple sources stating that the front — FaceTime — camera will get a much-deserved boost. An improved camera, slo-mo video and possibly a flash. If so, expect an entirely new selfie revolution — and assume that Facebook and Instagram will need to buy many more servers.

As for video, I wish the new iPhone will record in 4K quality, a sharp boost over the current 1080p recording quality. This isn’t a must-have, however.

7. Screen

I have no complaints about the screen on my iPhone 6 Plus. At least, I had none till I saw a giant, beautiful Samsung Galaxy (OLED) screen. I wish for that — or better.

8. Fun

The iPhone is more than just a small, mobile supercomputer. It’s a fun device. I wish for it to be even more fun: offer split screen, and include new ways of editing and enhancing images, possibly using Apple Music clips or free stickers.

The few people I know who have an Apple Watch seem utterly enamored with the “motion wallpapers” that ebb and flow on screen. I wish for these for the new iPhones.

I want there to be more sharing options, and for AirDrop to work every time and be just as easy to use as taking a quick snapshot. Make it water resistant so I can take it to the beach, maybe even forget it when I wade into the water. Also, the device should come in many more colors.

9. Junk Drawer

I do not want Apple News. I don’t want most of Apple’s stock apps, in fact. I wish for them to vanish from my device forever. Why do I even have to waste a wish on this?

10. Fight The Power

I’m a long-time Nokia and Lumia user and love “wireless” Qi charging. It’s essentially a plate you set your device on and it begins charging. No reaching for the cord, no fighting to plug it in, no shredding an expensive accessory by pulling it out at the wrong spot. I wish for wireless charging.

I also wish to stick it to the wireless carriers. An Apple SIM card that can automatically determine the best service provider and the best price and use that for my calls, texts and data in real time, wherever I go, would be a godsend. I wish for this even though I don’t think it’s a terribly smart business move for Apple.

What do you wish for?

Hurry! You only have a week.

My iPhone Wish List originally published by Gigaom, © copyright 2015.


Twitter takes on Facebook, Snapchat with improved photo tools

New image and video editing tools revealed in recent tweets from various celebrities show that Twitter is, once again, bringing the fight to Facebook and Snapchat.

The new tools appear to allow Twitter users to share images with text overlays, stickers, and other modifications. Twitter’s existing tools merely allow people to crop images or run them through filters that greatly change their appearance, whether it’s by upping the contrast or making them look like old Polaroid shots.

Here’s one of the more popular examples of what the new tools can do, courtesy of Taylor Swift:

Much about the new tools, such as whether they’ll debut in a standalone product or be included in Twitter’s existing mobile applications, is currently unknown. Twitter declined to comment to Gigaom on the record. Historically, though, Twitter tends to add new features to its existing apps rather than introduce new ones.

But it seems clear that these tools are meant to bring Twitter to parity with Facebook and Snapchat, both of which have offered similar tools for a while. The service isn’t content with being the Internet’s live broadcast network; it wants to convince people to use its apps instead of other social media tools, too.

Twitter isn’t alone in these efforts, of course. Facebook has tried to copy various aspects of the micro-blogging service for years, without much success, and it’s reportedly working on a tool it hopes will supplant Twitter’s role as a news wire. It’s almost like both companies are holding funhouse mirrors in front of the other and creating new services based on whatever they see in the reflections.

Yet these features appear to be targeted more at Snapchat. The ephemeral messaging service has offered similar tools for longer than both Facebook and Twitter, and it’s clear that both companies fear their younger competitor. Facebook tried to fight it with stickers and other features for Messenger. Now it’s Twitter’s turn to try to fight off the threat posed by Snapchat’s popularity.

“What’s interesting is that Twitter is still fairly poor at private messaging, and yet other than for celebrities it feels like a lot of these features would be best suited to stuff you’d share with your friends rather than the world at large,” says Jan Dawson, the chief analyst at Jackdaw Research. “So I’m curious to see how Twitter positions these new features when it formally announces them.”

Dawson is right. Twitter is known mostly for the public nature of its service; that’s what makes it useful during live events, breaking news, and other times when it’s nice to have access to a few million opinions just a few clicks away. The company is working to change that, however, and become more private.

Earlier this month, Twitter removed the 140-character limit from direct messages on its service and said that was one of its users’ most-requested changes. I argued at the time that this change makes Twitter more like Google+ and the semi-private “circles” it decided to hang its all-too-ill-fated hat on.

Now it seems like this is part of a coordinated effort to combat Snapchat, Facebook Messenger, and other messaging services that are just starting to become popular in the West. Twitter’s emphasis on public sharing is waning — now it’s giving private communication a chance to thrive on its service. And, of course, it’s giving celebrities new toys to draw a little more attention to itself.

Let’s see if this transition makes a difference. People who want to use Snapchat will probably continue to use Snapchat. The same goes for Facebook, Twitter, and other social websites. All these mirrors, yet both Facebook and Twitter seem so uncomfortable with their own reflections that they try to emulate the other instead of trying to compete by being the best versions of themselves.

Someone get that bird a self-esteem boost.

Twitter takes on Facebook, Snapchat with improved photo tools originally published by Gigaom, © copyright 2015.


Friday, August 28, 2015

Dropbox might not die off in a market correction after all

On the leeward side of the stock market correction on August 24, it would be natural to speculate which big private tech companies might run out of gas if markets took a more persistent downturn. Resoundingly, one of the most common picks for bubble-poppers is Dropbox.

Ever since Steve Jobs famously disparaged it as “not a product,” there has been speculation that it would die along with the file sharing business. But new features indicate a direction that would let it thrive.

Increasingly, software interfaces are moving towards a dialog-based paradigm: smart assistants like Facebook M and messenger interfaces like Magic are both examples where you “talk” to the computer to accomplish a task.

There’s not much use for traditional file sharing in a world of messengers, because everything you want to share just gets dumped into the dialog, living on the servers of whatever messenger you uploaded it to.

However, cloud storage providers like Dropbox usually contain about 20 percent duplicate files–mostly songs, movies, and images that other users also uploaded. Naturally, the server can achieve some efficiency by keeping a single file instead of zillions of copies. In many ways, Dropbox is less a “store” of your personal files and more like a curated list.
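To make that idea concrete, here is a minimal sketch of content-based deduplication, assuming a hypothetical DedupStore class rather than anything Dropbox actually ships: files are keyed by a hash of their bytes, so identical uploads from different users share one stored copy.

```python
import hashlib
from pathlib import Path

# Hypothetical illustration of content-based deduplication, not Dropbox's
# actual storage engine: identical uploads map to a single stored blob.
class DedupStore:
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.index = {}  # (user, filename) -> content hash

    def put(self, user, filename, data):
        digest = hashlib.sha256(data).hexdigest()
        blob = self.root / digest
        if not blob.exists():                   # store the bytes only once
            blob.write_bytes(data)
        self.index[(user, filename)] = digest   # other users just get a pointer
        return digest

    def get(self, user, filename):
        return (self.root / self.index[(user, filename)]).read_bytes()

store = DedupStore("/tmp/dedup_blobs")
a = store.put("alice", "song.mp3", b"identical bytes")
b = store.put("bob", "track01.mp3", b"identical bytes")
assert a == b  # one physical copy serves both users
```

In a model like this, each user’s namespace is just a map of names to hashes, which is why the article can describe Dropbox as feeling more like a curated list than a store of unique bytes.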

So it makes sense that Dropbox recently announced link-storing inside both web and mobile products. Drag in a link, and it gets represented as a clickable icon in whatever folder you’ve added it to. Effectively this turns Dropbox into a product more like Pocket: a curation tool, but with a massive cloud storage tool attached.

For now, the messaging wars are in full effect: iMessage, Slack, Lync, Skype, WhatsApp, Facebook Messenger, Kik, and dozens of other enterprise and consumer chat platforms still balkanize the world of text-based communication.

Between networks, the lingua franca is links: files and videos and photos aren’t treated as objects to be placed in folders, but as shortlinks to be curated. Whatever energy we save by using messengers and abandoning the file system has been at least partly offloaded into link management.

Dropbox built a $10+ billion company on file sharing. What can it build on curation?

Photo credit Benny_bloomfield on Flickr

Dropbox might not die off in a market correction after all originally published by Gigaom, © copyright 2015.


How cord cutting is changing the nature of audience reach

Frank is CEO of Beachfront Media.

In the emerging multi-platform era, reaching an audience is no longer as simple as waiting for them to turn on their television sets. In our viewer-based society, the multitude of formats currently available makes it possible to quite literally “cut the cord” and take our content with us.

However, with this proverbial cord-cutting come consequences that are going to require creators and advertisers alike to rethink their content delivery strategies. In this new era, the rules of broadcast cable do not apply. Content creators require a whole new playbook, since different rules exist for different formats, and in some cases, the rules don’t exist at all.

The three areas where this is most evident are how to measure views, how to target viewers, and how to create custom content for each platform and device.

Views

With all the varying ways to watch video, including YouTube, Facebook, Snapchat, and others, we can agree almost universally that the best way to gauge engagement is by looking at views. But what counts as a view varies widely across platforms.

An industry-standard definition of a “view” is needed but, as of yet, does not exist. Its absence stems from the fact that the godfather of online video — YouTube — never clearly defined what constitutes a view, leaving those emerging in its wake the freedom to create their own definitions. Such a definition would be a strong first step in helping creators and advertisers begin to standardize their engagement with the multi-format audience.

Targeting

Where advertisers once had a captive audience in front of a TV, they now have to contend with multiple devices and mediums that have different methods of reaching viewers.

The question now becomes how to adequately target the right viewers with ads across this fragmented landscape. Websites use cookies, television uses the Nielsen ratings system, and mobile devices use IDFA or the Android Advertising ID.

It’s important for advertisers to view these different targeting tools not as a system of competition from one platform to another, but rather a larger environment for engagement. In the era of the digital native, the viewer is engaging with multiple formats on a daily basis, and it is part of their lives. To assume any one format or device will somehow prevail over others is to miss how this generation engages with their content.

Content

Creators can no longer repurpose the same piece of content across the various platforms; doing so ignores the audience’s differing expectations on each one. Instead, the creator needs to customize that content, tweaking and shaping it for each platform to fulfill those expectations and reach wider audiences in new ways. What’s more, creators in the current digital climate do not view this as a “rat race” that requires them to keep up with formats, but rather a stronger means to extend their brand and get content in front of fans in all the places they live. It creates more work, but also more opportunity.

Those who create and adhere to these rules will be the winners in the multiplatform age. Complicating matters, though, is the broader spectrum of creators now entering the digital video game. The grassroots creators on YouTube who arose from the first generation or so of online video now have to compete with advertisers and corporations who are seeing the multiformat potential and engaging themselves.

When YouTube first launched, content creators had leeway, because major corporations weren’t paying attention to what was going on in the space. But today, companies like Disney are bringing their content to an online audience and view platforms like YouTube as a legitimate medium to engage viewers. So naturally, we’re seeing these “YouTube personalities” migrate to other platforms like Facebook, Vine, Snapchat, and other emerging formats to find the space needed for a new grassroots movement.

Because of this, each of these new platforms is getting a surge in creation and viewership thanks to the cutting of the cord and the great creative migration it spawns. Savvy advertisers and creators alike need to look at this multiformat migration as a means to engage audiences in new, exciting ways. While a little work is required, the means of reaching an audience are larger than ever before. Customization is key in this new era of engagement, and the creators who can manage that approach will not only survive the cutting of the cord, they will achieve the uniting of platforms.

Frank Sinton is the CEO of Beachfront Media, a video app creation toolkit and ad mediation platform for creators and publishers to distribute and monetize video.

How cord cutting is changing the nature of audience reach originally published by Gigaom, © copyright 2015.


Thursday, August 27, 2015

Facebook sees over 1 billion people using the service in a single day

Facebook reached a new milestone earlier this week as the social network saw one-seventh of the world’s population (aka 1 billion people) log into the service in a single day.

The world’s biggest social network is still driving user growth, but as most people have a Facebook account by now, seeing a significant portion of the population all using the site over a 24-hour period seems like an achievement worth bragging about. This is especially true with the company attempting to increase the amount of time its users spend on Facebook, which has an impact on advertising revenue.

The 1 billion milestone was achieved for the first time ever this past Monday, according to Facebook CEO Mark Zuckerberg, who made the announcement today via a status update. However, he was quick to point out that the figure is simply the peak number of users logged in for a single day, not an average:

“We just passed an important milestone. For the first time ever, one billion people used Facebook in a single day.

On Monday, 1 in 7 people on Earth used Facebook to connect with their friends and family.

When we talk about our financials, we use average numbers, but this is different. This was the first time we reached this milestone, and it’s just the beginning of connecting the whole world.

I’m so proud of our community for the progress we’ve made. Our community stands for giving every person a voice, for promoting understanding and for including everyone in the opportunities of our modern world.

A more open and connected world is a better world. It brings stronger relationships with those you love, a stronger economy with more opportunities, and a stronger society that reflects all of our values.

Thank you for being part of our community and for everything you’ve done to help us reach this milestone. I’m looking forward to seeing what we accomplish together.”

Facebook sees over 1 billion people using the service in a single day originally published by Gigaom, © copyright 2015.


Tech workers aren’t disloyal, they’re underappreciated

It’s become something of a cliché to note that tech workers bounce between jobs faster than a pinball rebounds against the confines of its rubber-lined machine. Common wisdom says this is because millennials are flaky, and that loyalty has no place in a job market where changing jobs can often lead to a higher income. But new research suggests this phenomenon has another motivator: appreciation.

TINYpulse (hereafter stylized as “TinyPulse”) recently surveyed more than 5,000 people who work at tech companies in the United States. It found that many tech workers who see themselves sticking with their current employer for at least a year are the same workers who feel like the company values them. And while “value” can sometimes mean “pay,” it can also mean other things, too.

The survey showed that the majority of workers at tech companies aren’t particularly happy at work, feel underappreciated, and don’t feel they’re provided with sufficient opportunities for growth or support in their careers. Many of these feelings were more pronounced in people who work in IT, but most workers offered negative or milquetoast responses to the survey.

“There’s widespread workplace dissatisfaction in the tech space, and it’s undermining the happiness and engagement of these employees,” TinyPulse said. “The problem goes beyond workplace satisfaction […] engagement is one of the key ingredients for employee innovation. If we aren’t engaging our IT workers, we aren’t setting them up to perform the way we need them to.”

Anyone who questions the effect feeling valued can have on someone changing careers should just look at Amazon. A New York Times report showed that the company’s offices — and its warehouses — are brutal. Researchers from the University of Kansas lent more evidence to that idea by showing Amazon’s workers have a worse work-life balance than workers at other tech companies.

And, surprise, surprise, workers aren’t willing to put up with that. As the New York Times said in its report:

Employees, human resources executives and recruiters describe a steady exodus. ‘The pattern of burn and churn at Amazon, resulting in a disproportionate number of candidates from Amazon showing at our doorstep, is clear and consistent,’ Nimrod Hoofien, a director of engineering at Facebook and an Amazon veteran, said in a recent Facebook post.

I wouldn’t be surprised if more reports like the one on Amazon start to appear. Maybe tech workers, especially young ones who are accused of lacking loyalty or belonging to a generation of frenzied dilettantes, are simply trying to find places to work that give them the appreciation they’re looking for. It’s not just about the money — it’s about finding a place that treats workers like human beings.

Not that the money hurts, of course.

Tech workers aren’t disloyal, they’re underappreciated originally published by Gigaom, © copyright 2015.


Wednesday, August 26, 2015

Facebook announces a new digital assistant, M

Facebook has announced a new utility that will assist Messenger users with booking appointments, shopping online, and performing other mundane tasks. The service, called M, will reportedly debut to “a few hundred” people in the Bay Area before it expands to everyone else.

M is billed as the first digital assistant that can actually help people with their daily lives. Most of its competitors — Cortana, Google Now, Siri, etc. — are limited to presenting their users with information. Facebook’s David Marcus says M was designed to be able to “actually complete tasks on your behalf.”

That’s a hell of a promise. And in its effort to fulfill it, Facebook seems to be playing it safe, whether it’s by limiting M to information it collects on its own or promising that the algorithms that dictate its behavior are overseen by humans. (They’re the ones who make sure M doesn’t mistakenly spend users’ money.)

Facebook users concerned about their privacy should know that M doesn’t use information shared with its parent service. It asks questions about what people want, and if it can’t perform well based on those answers, it asks follow-ups. Marcus told Wired that this could eventually change, but that Facebook would require users’ consent before it started spoon-feeding M their personal data.

That’s a far cry from Google Now’s seeming omniscience, or the amount of information Microsoft collects for its digital assistant, Cortana. There’s a slight weirdness factor given that real human beings know what you want M to do, but at least the service doesn’t seem to be gathering all kinds of private data.

Facebook is also playing it safe with M by limiting it to just a few hundred users. That’s a very, very small portion of its 1 billion users — and restricting M to that small an audience, at least at the beginning, makes sense. Better to get people excited about a toy they have to wait for than to force something that doesn’t work as well as people expect (obligatory Louis CK reference) into their hands.

A seemingly thoughtful approach to user privacy and a desire to get the product right instead of just shipping it to a few million people all at once? That’s much more cautious than Facebook was about new product rollouts in the past. It’s enough to make me wonder if the company is emphasizing quality over speed.

Facebook announces a new digital assistant, M originally published by Gigaom, © copyright 2015.


Seattle vs. San Francisco: Who is tops in the cloud?

In football, in city livability rankings — and now in the cloud — San Francisco and Seattle are shaping up as fierce rivals.

Who’s winning? Seattle, for now. It’s due mostly to the great work, vision and huge head start of Amazon and Microsoft, the two top dogs in the fast-growing and increasingly vital cloud infrastructure services market. Cloud infrastructure services, also called IaaS (Infrastructure as a Service), is that unique segment of the cloud market that enables dreamers, start-ups and established companies to roll out innovative new applications and reach customers anytime, anywhere, from nearly any device.

Amazon Web Services (AWS) holds a commanding 29 percent share of the market. Microsoft (Azure) is second, with 10 percent. Silicon Valley’s Google remains well behind, as does San Francisco-based Salesforce (not shown in the graph below).

[Chart: cloud infrastructure services market share leaders]

The Emerald City shines

I spoke with Tim Porter, a managing director for Seattle-based Madrona Venture Group. Porter told me that “Seattle has clearly emerged as the cloud computing capital.  Beyond the obvious influence of AWS and strong No. 2, (Microsoft) Azure, Seattle has also been the destination of choice for other large players to set up their cloud engineering offices.  We’ve seen this from companies like Oracle, Hewlett-Packard, Apple and others.”

Seattle is also home to industry leaders Concur, Chef, and Socrata, all of which can only exist thanks to the cloud, and to 2nd Watch, which exists to help businesses successfully transition to the cloud. Google and Dropbox have also set up operations in the Emerald City to take advantage of the region’s cloud expertise. Not surprisingly, the New York Times said “Seattle has quickly become the center of the most intensive engineering in cloud computing.”

Seattle has another weapon at its disposal, one too quickly dismissed in the Bay Area: stability. Washington enforces non-compete clauses more strictly than California does, preventing some budding entrepreneurs from leaving the mother ship to start their own company. The consequence of such laws can be larger, more stable businesses, with the same employees interfacing with customers over many years. In the cloud, dependability is key to customers, many of whom are still hesitant to move all their operations off-premise.

Job hopping is also less of an issue. Jeff Ferry, who monitors enterprise cloud companies for the Daily Cloud, told me that while “Silicon Valley is great at taking a single idea and turning it into a really successful company, Seattle is better for building really big companies.”

The reason for this, he said, is that there are simply more jobs for skilled programmers and computing professionals in the Bay Area, making it easier to hop from job to job, place to place. This go-go environment may help grow Silicon Valley’s tech ecosystem, but it’s not necessarily the best environment for those hoping to create a scalable, sustainable cloud business. As Ferry says, “running a cloud involves a lot of painstaking detail.” This requires expertise, experience, and stability.

San Francisco (and Silicon Valley)

The battle is far from over. The San Francisco Bay Area has a sizable cloud presence, and it’s growing. Cisco and HP are tops in public and private cloud infrastructure. Rising star Box, which provides cloud-based storage and collaboration tools, started in the Seattle area but now has its corporate office in Silicon Valley. E-commerce giant Alibaba, which just so happens to operate the largest public cloud services company in China, recently announced that its first cloud computing center would be set up in Silicon Valley.

That’s just for starters.

I spoke with Byron Deeter, partner at Bessemer Venture Partners (BVP), which tracks the cloud industry. He told me that the five largest “pure play” cloud companies by market cap are all in the Bay Area: Salesforce, LinkedIn, Workday, ServiceNow and NetSuite.

The Bay Area also has money. Lots of money. According to the National Venture Capital Association, nearly $50 billion in venture capital was invested last year. A whopping 57 percent went to California firms, with San Francisco, San Jose and Oakland garnering a rather astounding $24 billion. The Seattle area received only $1.2 billion.

[Chart: venture capital investment by region]

The Bay Area’s confluence of talent, rules and money will no doubt continue to foster a virtuous and self-sustaining ecosystem, one that encourages well-compensated employees to leave the nest, start their own business, and launch the next evolution in cloud innovation. If Seattle has big and focused, San Francisco has many and iterative.

The cloudy forecast

Admittedly, this isn’t sports. There’s no clock to run out and not everyone keeps score exactly the same. Just try to pin down Microsoft’s Azure revenues, for example. It’s also worth noting that the two regions do not compete on an even playing field. Washington has no personal or corporate income tax, and that is no doubt appealing to many — along with the mercifully lower price of real estate, both home and office.

The cloud powers healthcare, finance, retail, entertainment, and our digital lives. It is increasingly vital to our always-on, from-anywhere economy, and a key driver of technical and business model innovation. If software is eating the world, the cloud is where it all goes to get digested. Here’s hoping both cities keep winning.

Seattle vs. San Francisco: Who is tops in the cloud? originally published by Gigaom, © copyright 2015.


Tuesday, August 25, 2015

Politwoops shutdown raises questions about Twitter’s rules

Can social websites protect their users while still allowing outside groups to hold politicians and other public figures accountable for their statements?

That’s the question at the heart of a recent controversy between Twitter and Politwoops, a series of websites that archived politicians’ deleted tweets and whose access to Twitter’s public API was revoked without warning over the weekend.
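For context on what was cut off: trackers like Politwoops are usually described as listening for deletion notices from Twitter’s streaming API. The sketch below shows an even simpler polling approach under stated assumptions; fetch_current_tweet_ids is a hypothetical placeholder, not a real Twitter client call, and none of this is Politwoops’ actual code.

```python
import time

def fetch_current_tweet_ids(account):
    """Hypothetical placeholder: return the set of tweet IDs currently
    visible on the account's public timeline. A real tool would page
    through the timeline with an authenticated API client."""
    raise NotImplementedError

def watch_for_deletions(account, archive, interval=300):
    """Compare an archive of recently seen tweets ({id: text}) against the
    live timeline; anything that disappears was deleted by the author."""
    while True:
        visible = fetch_current_tweet_ids(account)
        for tweet_id, text in list(archive.items()):
            if tweet_id not in visible:
                print(f"Deleted by @{account}: {text}")
                del archive[tweet_id]
        time.sleep(interval)  # poll gently to respect rate limits
```

The comparison only works for tweets recent enough to still fall inside the fetched timeline window, and the whole sketch depends on the fetch step: revoking API access breaks it, which is why such a tool cannot simply keep running once Twitter pulls the plug.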

Twitter made a similar move earlier this year when it shut down the United States version of Politwoops. The many versions of this tweet-archiving tool were run by two separate groups — the Sunlight Foundation and the Open State Foundation — and both have expressed their concern over Twitter’s decision.

Both groups tell me there are no negotiations in place to restore Politwoops’ access to Twitter’s API. But Arjan el Fassad, the director of the Open State Foundation, did say that the group is “exploring a number of legal and technical options” to see if it can build a similar tool without access to Twitter’s API.

“We believe that what public officials, especially politicians, publicly say is a matter of public record,” Fassad told me. “Even when tweets are deleted, it’s part of parliamentary history. Although Twitter can restrict access to its API, it will not be able to keep deleted tweets by elected public officials in the dark.”

A spokesperson for the Sunlight Foundation said that group has no plans to rebuild Politwoops without access to Twitter’s data stream. Yet the group is no less damning in its stance on Twitter’s decision to revoke the tool’s access to information that was publicly available through multiple outlets before now.

“To prevent public oversight when our representatives try to discreetly change their messaging represents a significant corporate change of heart on the part of Twitter and a major move on their part to privatize public discourse,” they said.

“Imagine if the Washington Post printed a retraction of a story, would it demand that all copies delivered to the home with the original story be returned? When a public statement is made, no matter the medium, can it simply be deleted and claimed as a proprietary piece of information?”

Of course, there is a difference between the Washington Post trying to retrieve a physical object and Twitter cutting off access to its service. And “unpublishing” stories, whether it’s to appease advertisers or because they contained factual errors or were plagiarized from another source, happens at online publications.

For its part, Twitter says it’s merely trying to protect its users. A spokesperson said in a statement that the “ability to delete one’s Tweets – for whatever reason – has been a long-standing feature of Twitter for all users” and that it “will continue to defend and respect our users’ voices in our product and platform.”

I came to a similar conclusion when the U.S. version of Politwoops was shut down. As I wrote at the time:

Twitter isn’t only defending politicians; it’s protecting all of its users. I suspect there are more private citizens than politicians using the platform, so if having a reasonable expectation of privacy makes things harder for a site that collects politicians’ gaffes, well, I’m happy to bid Politwoops a fond, but prompt, adieu.

Both the Sunlight Foundation and Open State Foundation have said that they avoided this issue by focusing Politwoops on politicians. There should be a clear distinction between public figures and other Twitter users, both argued, and others have said that Twitter is either incompetent or capitulating to politicians.

This is a thorny issue without a clear solution. Twitter can be blamed for blocking Politwoops’ access to its service because each of these groups argues that they were holding politicians accountable; it could also be chastised for allowing these groups to break the rules meant to protect its users’ privacy.

Let me put it another way: If a restaurant had tinted windows to prevent outside observers from taking pictures of its diners, should it have to smash them whenever a politician or other public figure enters? Probably not. Its patrons, regardless of their status, expect to be afforded the same privacy.

It’s more troublesome that Twitter changed its mind about Politwoops. As the Sunlight Foundation notes in its blog post about this weekend’s shutdown:

In 2012, Sunlight started the U.S. version of Politwoops. At the time, Twitter informed us that the project violated its terms of service, but we explained the goals of the project and agreed to create a human curation workflow to ensure that the site screened out corrected low-value tweets, like typos as well as incorrect links and Twitter handles. We implemented this layer of journalistic judgment with blessings from Twitter and the site continued. In May, Twitter reversed course and shut down Sunlight’s version of Politwoops.

It seems that Twitter was fine with smashing its own windows for three years, provided that Politwoops only used its exceptional ability to ignore the rules governing its API for things that actually matter to the public. Why did it change its mind, and why did months pass between the shuttering of the U.S. version of Politwoops and the revocation of these international versions’ access?

Consistent rules can be lived with and worked around. Inconsistent rules, however, lend some credence to the idea that Twitter might not be wise enough to decide what outside groups can do with public tweets. The company should have either shut down Politwoops before or allowed it to run into perpetuity.

In a way, it’s a lot like the controversy created whenever Politwoops did catch deleted tweets that shamed the politicians who sent them. Many of those tweets would have been fine if they hadn’t been deleted; it was only when their senders tried to act like they never existed that problems arose. It’s hard not to appreciate the symmetry between that and Twitter’s current situation.

Politwoops shutdown raises questions about Twitter’s rules originally published by Gigaom, © copyright 2015.


Signily keyboard brings sign language to the emoji landscape

Attempting to create a universal glyph-based language for human beings in the digital age is a noble endeavor. But unfortunately, the entity behind our semi-ubiquitous emoji system, Unicode, sometimes comes up short — forcing us to push the language forward ourselves.

Though members of the deaf and signing communities can, and do, communicate digitally via text and e-mail, there are instances in which words don’t quite translate expressions in sign language. And so, one organization is stepping up to bring sign language into the emoji space in an effort to simplify digital communications with ASL. Enter Signily, a keyboard that includes basic American Sign Language handshapes.

Signily is a part of the efforts of ASLized, a nonprofit that focuses on advancing ASL within the digital landscape and in visual media by creating learning and teaching tools and preserving the history and culture of the language.


“For a long time I would type abbreviations in English to describe how I would sign in ASL,” says Suzanne Stecker, creator of Signily and lead for the project. For example, she would type “You 8585” to another deaf person to say “You’re so good at what you do!,” or she would get creative with existing characters like “\|,,|” (aka “I love you”).

“In ASL, there is only one handshape that represents those three words. That’s the beauty of ASL,” she adds.

Knowing that there must be a better way to communicate digitally with ASL than using numbered shortcodes or symbols to convey and approximate handshapes, Stecker created an emoji-like glyph system that displays common handshapes precisely.

The Signily keyboard includes handshapes for the alphabet, numbers 1-30, and some common phrases, such as “I love you”, “What’s up?”, and even “Live Long and Prosper” (you all know that one). The keyboard also allows you to change the skin tone of the hands and toggle between right and left handshapes.

The Signily keyboard includes A-Z, 1-30 and handshapes for common words and phrases


At their best, emoji keyboards that are intended to augment language help us communicate in ways that are more nuanced, more natural, and more personal than words on a screen. Signily is ushering a new language into the digital communication paradigm, helping the signing community use technology to replace the time-consuming process of using numbered shortcodes, typing makeshift handshape emoji, and making and sending videos in order to communicate with ASL.

Signily is more than just images of handshapes, though. These emoji are GIF-based, because movement is essential to ASL. The GIF-based Signily emoji symbols capture meanings in motion — the meanings and expressions that don’t always quite translate to written language. Sign language is rich and expressive, and now, a part of it lives within the world of emoji.

A standard for sign language emojis

However, even though Signily has helped fill a need within the signing community, ASLized is pushing Unicode for the inclusion of basic handshapes in the universal emoji set.

“One may wonder if ASLized is trying to make their own app obsolete as they push their handshapes into Unicode,” Stecker says. “The answer is no. Unicode Consortium will most likely incorporate basic ASL handshapes such as A-Z, 0-9 and ILY, but they also have other sign languages to consider as well.”

“Ever since Signily’s release, ASLized has been educating the signing community about the differences between emoji in Unicode and Signily’s GIF-based emoji,” Stecker says.

In the midst of overwhelming data collection, tailored ads, and branded content, it’s easy to look at new applications of technology and feel discouraged. We see advertisers everywhere, foisting their messaging into our communications uninvited. We see our authenticity compromised and co-opted by those who seek to sell us something, to “bundle” our essential us-ness into something marketable. But then, something like Signily emerges and reminds us that sometimes the intention of technology isn’t to sell, but to create progress and to make something like saying “I love you” simple for the signing community just because, well, it should be.

And truly, progress is never as beautiful as it is when technology meets us where we’re comfortable and brings a level of humanity to our culture of burgeoning innovation.

Signily keyboard brings sign language to the emoji landscape originally published by Gigaom, © copyright 2015.


Monday, August 24, 2015

Here’s why American students don’t learn computer science

America’s youth isn’t getting a decent education when it comes to the basics of technology, and now we’re seeing some data on why that’s the case.

A survey conducted by Google and Gallup shows that many Americans believe computer science should be taught between kindergarten and the 12th grade. Yet most schools don’t offer the courses due to budget constraints, a lack of teachers, and the need to focus more on subjects included in standardized tests.

The results are another mark against standardized tests, which have become a point of contention among parents, students, teachers, principals, and essentially anyone else who doesn’t profit off their continued existence. Yet these reviled constructs aren’t the only cause of computer science courses’ woes.

Another problem might be the lack of communication between administrators, parents, students, and teachers. The survey showed that 91 percent of parents want their children to learn computer science; less than 8 percent of principals thought demand for the courses was that high. That can’t be blamed on tests — it’s simply the byproduct of a good-ol’ fashioned breakdown in communication.

The rising number of low-income students also contributes to the problem. More students qualify for free or reduced-price meals at school (a sign of belonging to a low income family) than ever before. Yet the schools these children attend receive less than their fair share of state or federal funding, according to a 2011 report published by the US Department of Education.

That could help explain why many superintendents who responded to the survey said there isn’t enough money to train or hire a teacher (57 percent); nor a sufficient budget to purchase necessary equipment (31 percent) or software  (33 percent); nor enough equipment (20 percent) or software (27 percent) already in their schools for them to introduce computer science courses.

All those factors combine to create a system where computer science is limited to students privileged enough to attend schools that value the subject and have the equipment necessary to teach it, and who have reliable Internet access at home to complete any homework. The barriers to computer science being taught more widely don’t end with schools; they extend into students’ home lives, too.

None of these problems are unique to computer science. The influence of standardized tests, budget shortfalls, and a student’s lack of resources at home aren’t limited to this one aspect of education held near-and-dear by the tech industry’s top companies. They pervade every aspect of America’s education system — and that means introducing computer science courses shouldn’t necessarily be a goal unto itself, but should instead be another bullet point in any argument meant to overhaul much of this country’s education system.

Here’s why American students don’t learn computer science originally published by Gigaom, © copyright 2015.


50 years ago today the word “hypertext” was introduced

On August 24, 1965, Ted Nelson used the word “hypertext” (which he coined) in a paper he presented at the Association for Computing Machinery’s national conference. I was able to interview him earlier this month about the event and his early thoughts on the future of computing.

It is hard to know where to start when writing an introduction for Ted Nelson, because his interests and accomplishments have spanned so many areas across six decades. To get a sense of the breadth and depth of them, the best thing to do is to visit this page on his website.

Byron Reese: Well, we are coming upon the 50th anniversary of your presentation of your paper “A File Structure for the Complex, the Changing and the Indeterminate” at the Association for Computing Machinery where you introduced the world to the word “hypertext.” What do you recollect about that event?

Ted Nelson: First of all, remember that I was building up to it for years beforehand, thinking about hypertext and how to present the idea to the world.

So for me it was an important rollout, a rollout of my ideas. And I took it very seriously. And because of my partially theatrical background, I was very conscious of giving a good show.

This was in New York in midsummer?

Pittsburgh. I think it was hot, but we were in an air-conditioned hotel.

Tell us about what it was like to give that talk.

Well, from my point of view, I saw it as my major career rollout, daring and intense. I wasn’t so much scared as excited and keyed up. I was going to tell the world, from a literary and philosophical point of view, where interactive documents would go. I was about to tell a technical group that their whole world would be redefined.

That must have taken some amount of confidence! What made you think you were qualified?

I wasn’t a techie, I had an entirely different background. I knew something of literature, history, and the invention of media. I saw hypertext as the medium of the future, and I wanted to tell them that convincingly.

I was a media guy, already with a background in showbiz and publishing. I had won prizes for poetry and playwriting, I’d published a kite-shaped magazine, and I’d written the first rock musical.

I’d acted on television and summer stock stage, courtesy of my father. So I had no stage fright. [ed note: his father was the Emmy Award-winning director Ralph Nelson; his mother was the Academy Award-winning actress Celeste Holm].

Also, I could ad lib on technical issues. But that would come later.

Mainly I thought of myself as a philosopher and a filmmaker. I had majored in philosophy, and I had been led to believe by my professors that I was good at it, and I made my first film [“The Epiphany of Slocum Furlow”]. Which is, by the way, on the Net. You can see it on YouTube. It’s a half-hour comedy about loneliness at college, and I think it’s very good, but it’s very unusual and surrealistic and badly synced, and in black-and-white, so most people can’t handle it. You know the film director Wes Anderson?

Yes.

He’s the first other director I’ve seen with the same style of surrealistic comedy that I came up with. So you could say that Wes Anderson is, in some sense, my cinematic descendant.

Anyway, I called myself a philosopher and a filmmaker, and I believed that I was going to be a serious, multifaceted intellectual (like, say, Norman Mailer or Christopher Hitchens), and that I was going to get to Hollywood and direct films (as my father later did, to my surprise).

How did you get into computer science?

I never thought of myself as a computer scientist, till last year when they gave me a degree in it.

I went to graduate school because I still wanted to continue my education—which graduate schools don’t like. But then I took a course in computers, and that blew the lid off my head, because it became entirely clear that the public stereotype of computers was absolutely incorrect, that the computer was an all-purpose machine, and that you could put a screen on it.

Well, screens! I can do that, I’m a filmmaker! So, then the issue is what should be the conceptual unification, the design of such interaction? Well, I can do conceptual unification, I’m a philosopher! It’ll be a new medium, and I’m a media guy! I had, by chance, the ideal background to design this new world of the future.

The cosmic joke is that everybody has a different reason for thinking the same thing—I’m the one who’s perfectly qualified to design software.

So this epiphany was in 1960?

Yep. For the ensuing five years I was thinking and designing how computer screens should interact.

What kind of reaction did you get from others?

No one, absolutely no one that I met, could imagine interactive computer screens. Whereas I could see them with my eyes closed, practically touch them and make them respond. It was very sensual.

And all during the 1960s and 1970s I was trying to tell people what interactive screens would be like, in my writings and my talks. But no one got it.

My great-grandfather, for example, who was a very smart man, a science teacher—he couldn’t understand what I was talking about. No one I talked to could imagine what an interactive screen would be, whereas I saw and felt them sensually in my mind and at my fingertips. Yet to me this was an extension of literature as we had always known it.

But books aren’t interactive.

Of course they are! You turn the pages and see different things. Children’s books were often very interactive, with pages cut into strips you could recombine, clock dials you could turn, and the like.

And interaction was hardly a new concept. I’d been to penny arcades since the 1940s. Put in two pennies or a nickel, and you could shoot at things or knock them over. They were mechanical and electrical, but they challenged your coordination and could hold your interest for quite a few nickels.

So interaction on a screen was the logical next step?

Of course! It was just a matter of software [laughs].

That extended the range of possibility.

Infinitely. There were no electromechanical limitations. It would be entirely different, an extremely new and exciting possibility. But I couldn’t tell anybody about this, they wouldn’t listen. So, it all came out of my own head.

But they were working on screen interaction at different places …

A few. I knew that they were working on interactive screens for air defense—the SAGE system—and for air traffic control. But those were special purpose. When the public got interactive screens, they would be general purpose.

But how did you imagine they would get to the public?

I immediately believed that there would be a personal computer industry, and that there would be computer screens for the public. I didn’t know when, I didn’t know how long it would take, I thought it would come much sooner. But something like Moore’s Law was bruited about in my class, so that the falling prices were clear. There was nothing standing in the way of computers for the public except for imagination, it seemed to me, and so I was trying to supply that.

I planned a company that I called the General Creative Corporation, with a picaresque and crusading style—very like the pose Apple has taken. But in 1960, Steve Jobs was five years old. It took longer than I thought, and I never got leverage. Neither did a lot of other people; Jobs grabbed a brass ring and knew what to do with it.

But what about hypertext? What about electronic documents?

That’s what I was mostly thinking about—electronic documents, and what they would be like. But nobody could imagine it. They would almost always ask, “Is it like a tape?” I should just have said yes.

I was sure I knew how electronic documents would look and feel. Unfortunately I overemphasized the jump link, jumping from page to page, which is all the Web does. (Along with the regrettable emphasis on fonts and layout, foisted on the public by Simonyi and Warnock.) [Charles Simonyi, who oversaw Microsoft’s development of Word, and John Warnock, the co-founder of Adobe Systems.]

What else should electronic documents do?

We’ll get to that.

What did this have to do with making movies?

To me, the computer was just another kind of movie camera, another system of details to be dealt with.

A moviemaker has to understand about sprockets and footage, exposure and focus; he has to understand actors; he doesn’t have to play the violin or ride a horse, but he has to know how to arrange these matters with those who do.

So what did you see as the relationship between software and movies?

I still believe software is a branch of movies. Movies are events on a screen that affect the heart and mind of the viewer, right? And software—interactive software—is events on a screen that affect the heart and mind of the user, and interact, and have consequences. So understanding the theatrics (some say rhetoric, some say cascading) of interaction is the real issue, not just making the wheels go around.

So to me, computer technicalities, including programming, were just more technicalities of moviemaking.

Did you understand the technical issues?

Well enough to get three patents and independently invent ray tracing, if anyone is interested.

[Nelson’s patent application for ray-tracing hardware is available as a paperback.]

The real issue was selecting the appropriate technicalities. I came to see that the issues I was facing for electronic documents were not algorithms but data structures.

So all this was leading up to your ACM presentation, 50 years ago.

Right.

I had worked hard through the early ’60s, learning all I could about computers and electronics. Meanwhile, I got a two-year appointment to Vassar teaching sociology. The first year at Vassar I had to work hard preparing my courses, but the second year I had free time to start submitting papers. I submitted five papers to conferences that would meet in 1965; all were accepted! But the biggie was to the ACM National Conference.

I was a member of the ACM, so I knew the mindset. I’d been reading a lot of journal articles, so I knew how ACM people thought: they were interested in files and operating systems and the like. But I was going to be talking about revolutionary and radical ways of thinking.

And so I knew that I’d be facing an attentive but skeptical audience. And I knew it would be of vast significance for my career and my hopes, and I prepared carefully.

I spent a great deal of time and work on it. And, as I recall, my great-grandfather died while I was working on it, and that was a great sorrow to me, but I had to keep on it. He died on a Monday, and I got the news while I was playing the Mamas & the Papas as I worked, singing “Monday, Monday, can’t trust that day.” But there was no time to grieve. I had to keep going.

I think the deadline was June 15, but because of his death I was allowed to get it in later.

And they accepted the paper! It was refereed, it was peer-reviewed! But the peer review was light. I talked to a couple of guys on the phone, as I recall, and they were very enthusiastic, they thought the paper was radical and exciting. I made the few changes they asked for, and that was it.

So this became the rollout of all these ideas, told in the best way I could, given that I knew I was going to be addressing computer professionals. And while I respected them very much, I also thought that I was opening a new chapter into a new part of the world.

I was by no means modest. Although I wasn’t telling anybody about it, I thought hypertext would lead to a millennial system of changes, and so it has, but much less influenced by my own work—my designs and ideas—than I’d hoped.

The written paper is in academic style. It bears almost no relation to the oral presentation I gave, which was intended to be rousing. I was used to off-the-cuff public speaking, but I scripted this one tightly.

In those days a talk was accompanied by 35-millimeter slides, and I believe the last three slides had the same word on them: CHANGE. The first one said “CHANGE” in small letters, the next one said “CHANGE” in bigger letters, and the third one said “CHANGE” in really big letters. I told them we had to be prepared for ever-changing documents.

And my recollection is that I got thunderous applause.

How many people were in the audience?

It was a huge room, at least as I recall. I think I counted the seats and it was something like 600, but again, this is only my wild recollection now, and I have no access to those diaries. I know I have a tape recording of the talk, and I know I have the original artwork and slides, so if anyone wanted to put it all together and restore it to an audiovisual presentation, it could be done.

What happened then?

I thought my work would be a watershed, because I didn’t know that anyone else in the world was working on text-on-screens. It was only after the talk that Bob Taylor came up to me, whom I did not know, and asked me if I had heard of Douglas Engelbart [Engelbart was an early computer pioneer, best known for inventing word processing, multiple windows on a screen, and the mouse, all rolled out in his 1968 “Mother of all Demos”].

Taylor told me that Engelbart had been working on similar things, so I made a note to get in touch with Engelbart. But I had very few resources and no secretary, so that actually carrying on any correspondence was essentially beyond my capabilities. I only found out later that Taylor had been Engelbart’s principal backer through ARPA [Advanced Research Projects Agency of the Department of Defense, now called DARPA].

Ironically, when Taylor took over Xerox PARC in the 1970s, he dropped Engelbart, who was tragically out in the cold for the rest of his career.

Did you get in touch with Doug Engelbart?

Yes. The next year, 1966, I flew out to see him with William Jovanovich, head of Harcourt, Brace Publishers, where I worked at the time. He showed us the mouse, and I was instantly converted.

Eventually Doug Engelbart and I became close friends. In fact, he performed the marriage ceremony when I married Marlene in 2012.

So what happened after the presentation?

As I said, I believe I got thundering applause, and I also think that most of the computer scientists in the world were in the room. Those were the days when it was possible to get all the computer scientists in one room, but of course I don’t know. You could say it was the high-water mark of my career, just as Engelbart’s 1968 demo was the high-water mark of his.

But because I was unable to carry on any correspondence about it, and had to pay for the conferences out of my own pocket, I couldn’t stay in the computer-science swim.

I hardly understood academic politics. Underneath the handshakes and overt appreciation, everyone is backstabbing for the same money.

I was hoping to get backing for my work, but I was very naive about how backing worked, at that time, and the amazing thing was that I did get one approach—a very prestigious and amazing approach.

I got a call from the Central Intelligence Agency—at least the guy said he was from the Central Intelligence Agency—and he intimated that they might back me, and I said, “Sure.” That conversation went on for several years, but no backing appeared. I actually did go to McLean for a meeting once, so it had been an authentic call.

What was that meeting like?

The surprise at the meeting was that I was attacked by several Artificial Intelligence guys in the room. It was years before I understood that there was a dog-and-cat relation between AI and hypertext—AI guys thought we were stealing their rightful territory. (I was actually followed at one conference by a famous AI guy who began, “Hypertext is evil.”)

Was there no further interest in your work?

Like ripples in a pond, it died down quickly. But the idea of hypertext was out and about.

It’s been an uphill fight all these years. Not only did the AI guys hate hypertext, but it turns out that everybody has a different notion of what hypertext should be. For example, HyperCard on the early Macintosh. I couldn’t understand it then; I still don’t understand it now. A very strange system. But that just shows the kaleidoscopic variety of thoughts that these concepts engender.

You’ve coined a lot of words.

Yep. I think a dozen of them may be in the dictionary, or in use in some degree. Of course I’ve coined a lot of words that people aren’t using, but my score is good.

It’s almost Shakespearean. How does that come about? Do you just think, “I need a new word here,” and make one up?

Of course! My motto is, you can’t think new thoughts in old words.

But I always knew words were made up all the time. They either catch on or they don’t. I was a fan of Lewis Carroll, and I knew he’d put half a dozen words in the dictionary. The notion of inventing words was straightforward to me.

In what sense did you think “hypertext” was hyper?

“Hyper” in the sense of extended and generalized, as in “hypercube” and “hyperspace.” My father-in-law was a psychologist, and he was disturbed at the word because he thought “hyper” meant pathological and agitated.

Like a hyper child, or something.

Doctors and psychologists use “hyper” for sickness; mathematicians use it for generality.

The term seems tied to Xanadu, which goes back even before 1963, back to 1960. Can you talk about Xanadu?

I didn’t choose the name “Xanadu,” I don’t think, until ’66 or ’67, when I was at Harcourt, Brace Publishers. But there is an exact Xanadu model. If we had a whiteboard and a couple of hours, I could go through this with you in detail [laughs], but clearing up different notions takes a long time. For example, just the other evening I was chatting with a friend of 30 years’ standing, and I cleared up some misunderstandings that he’d had about it for 30 years. So, the Xanadu model. And again, one can say “the Xanadu model” only because I control that trademark. It’s actually a registered trademark, and so I can say exactly what it means, whereas everybody else is still guessing.

[Laughs] Right. Has it changed over the past 50 years?

The fundamental notions haven’t changed—parallel pages with visible connection.

Other people’s hypertext just uses jump links—that’s what the World Wide Web is, just jump links—whereas I consider it essential to see pages side by side, as in the Talmud, as in medieval manuscripts, as in any number of documents over the centuries. This is an essential part of the electronic document which we don’t have yet.

Okay.

The different instantiations of Xanadu have changed repeatedly, because of the resource issues and whoever was working on it, and what language we were working in. There have been a dozen different tragic stories of attempted implementations, and each of them was somewhat different. Notably the one we did in 1979. My team—I don’t take any credit for it, but the guys I was leading—came up with a brilliant system of addressing, based on what are now called tumblers. If you look up “tumblers” on Wikipedia, tumbler numbers, that’s what was used in the 1979 Xanadu system. That version is now called Xanadu Green.

Then we got backing from Autodesk, and unfortunately, due to a power shift, and the demotion of Roger Gregory, it became a debate about what Xanadu should be. (I was no longer in charge.) After four years the project delivered nothing and Autodesk shut it down, and in the meantime, in the last of those two years, Tim Berners-Lee created the Web. So we might well have been the hypertext system of the world if we had stuck with the original 1979 design.

The Xanadu concept has always remained the same, and it involves two visible relations, links and transclusions, which in fact you’ll see in that ACM paper. Transclusion—Xanadu transclusion, not the kinds other people have come up with—is based on the notion that you often want to compare things, and use the same material in two places, and you want to see that it’s the same material, so you want to have a visualization, saying, this is in fact that.
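
[For a concrete picture, here is a minimal sketch of the transclusion idea as described above: documents hold references into shared source material rather than copies, so software can show that two excerpts really are the same material and draw a visible bridge between them. It is only an illustration, written in Python; the names and structures are invented for this note and are not Xanadu’s actual design.]

    # A "span" points into a shared source instead of copying its text.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Span:
        source: str   # identifier of the original document
        start: int    # offset where the excerpt begins
        end: int      # offset where the excerpt ends

    sources = {"essay": "Everything is deeply intertwingled, always has been."}

    # Two documents; the first span of each transcludes the same material.
    doc_a = [Span("essay", 0, 34)]
    doc_b = [Span("essay", 0, 34), Span("essay", 36, 52)]

    def bridges(a, b):
        """Pairs of positions in a and b that show the very same span."""
        return [(i, j) for i, s in enumerate(a) for j, t in enumerate(b) if s == t]

    print(bridges(doc_a, doc_b))   # [(0, 0)] -> one visible connection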

How is this related to the 1965 design? You called it the Evolutionary List File, or ELF.

It showed parallel documents with visible connections, both links and transclusions! Except it was divided into paragraphs, which were actual objects. It would have been trivially implementable. It’s really a terrible design, but it was the best I could come up with at the time, and very oddly, it has essentially diverged into my two fundamental inventions now, one system I call Xanadu® and the other I call ZigZag®, and you can see them both there, in the proto-structure described in the paper.

So, the idea was that as a hypertext would evolve, it would consist of a number of side-by-side lists of this type. That’s still the general idea.
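
[Similarly, a rough sketch of the side-by-side lists described here, assuming one reading of the 1965 ELF idea: each list holds paragraph objects, and links are pairs of entries across lists, so the connections between the lists can be drawn visibly. Again, the names and fields are purely illustrative.]

    # Paragraphs are objects with identities, not just runs of text.
    paragraphs = {
        "p1": "Draft paragraph one.",
        "p2": "Draft paragraph two.",
        "n1": "A note commenting on paragraph one.",
    }

    # Two parallel lists: a draft and a column of notes beside it.
    draft = ["p1", "p2"]
    notes = ["n1"]

    # Links are simply pairs of entry identifiers across the two lists.
    links = [("p1", "n1")]

    # "Render" the two lists side by side with their visible connections.
    for left, right in links:
        print(f"{paragraphs[left]!r}  <-->  {paragraphs[right]!r}")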

Now, one of my fundamental notions, that you don’t see anywhere else, is the notion of a visible bridge between pages. That to me is absolutely fundamental. Actually, I’ve got one public instantiation, called “OpenXanadu.”

It works in the Web browser; you can see visible bridges of connection between each quotation and its source context. But it shows how unsuited the Web browser is to these concepts. Let others deal with JavaScript and HTML; I’m going to stay on higher ground.

You have always been a “high ground” kind of guy.

That’s how I see better.

“50 years ago today the word ‘hypertext’ was introduced” was originally published by Gigaom, © 2015.
