Sprint’s 8-antenna LTE tech moves out of the labs and into Chicago

Sprint's big Chicago event on Monday may have been all about smartphone exclusives, HD voice and fitness and family apps, but CEO Dan Hesse also provided an update on a new technology making its way into Sprint's LTE network. For the last year, Sprint has been testing a new antenna technology called 8T8R, which promises to boost capacity and coverage in its LTE network, and that technology is now deployed on a trial basis in Chicago.

8T8R stands for “eight transmitters/eight receivers,” and it's basically the smart antenna technology we now know as multiple input-multiple output (MIMO) on steroids. Today's LTE systems mostly have two transmit antennas and two receive antennas at the cell site. As its name implies, 8T8R quadruples those numbers.

Dan Hesse speaking at Sprint’s Chicago event (Photo: Kevin Fitchard)

If there are more antennas at the tower, more signals are flying at your phone or tablet, creating stronger and more resilient wireless links even at the fringes of a mobile cell, where reception is always spotty. On the return path to the tower, more antennas can help pick your phone’s generally weaker signals out of the airwaves.
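
How much the extra antennas help at the cell edge can be seen in a toy model. The sketch below simulates receive diversity with maximal-ratio combining over independently fading branches. It is a simplified stand-in for the 8T8R idea, not Sprint's actual implementation, and the fading model and outage threshold are illustrative assumptions:

# Toy model: more receive antennas make a fading link more reliable.
# Assumes independent Rayleigh fading (exponentially distributed branch
# SNR) and maximal-ratio combining (MRC); numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def combined_snr(n_antennas, mean_snr=1.0, trials=100_000):
    """Effective SNR after MRC across n_antennas fading branches."""
    branch_snr = rng.exponential(mean_snr, size=(trials, n_antennas))
    return branch_snr.sum(axis=1)  # MRC adds the per-branch SNRs

for n in (2, 4, 8):
    outage = (combined_snr(n) < 0.5).mean()  # link drops below threshold
    print(f"{n} antennas: outage probability ~ {outage:.4f}")

With two branches, a deep fade still knocks the link out fairly often; with eight, it almost never does, which is the evening-out effect described next.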

So 8T8R isn't going to boost the overall speed of the network, but it will help even out the wild peaks and dips in performance you usually experience when moving through the network. Basically it means you'll experience faster speeds in more places, instead of only when you're near a tower. Hesse said Sprint has seen huge improvements in mobile broadband coverage in its tests: between 20 percent and 25 percent in urban and suburban areas and up to 70 percent in flat areas like highways.

T-Mobile has done similar upgrades to its own networks, but instead of using eight antennas it's only using four (think 4T4R rather than 8T8R). It's managed to commercially launch its 4×2 MIMO technology in its LTE systems in several cities and plans to make it a key component of its new wideband LTE rollout, which will double the speeds and capacity of its network.

A rooftop loaded up with multiple wireless antennas (Source: Flickr / FlySi)

Sprint considered rolling out four-antenna rigs when it launched its Spark network last year, Hesse told me, but it decided to hold out for 8T8R's additional antennas in order to make Spark the most sophisticated 4G network of its kind when fully deployed. Hesse said all three of its network equipment suppliers — Nokia, Alcatel-Lucent and Samsung — have worked the kinks out of the technology, and it should be ready for commercial rollout later this year.

What Sprint is calling Spark is actually a combination of multiple technologies and three LTE networks on three different frequency bands, but not all of those components are yet in place. As it stands now, Spark is really only an average LTE network compared to the competition and it's only available in 27 markets.

But by the end of the year, Sprint plans to pile a lot more spectrum onto the network using an LTE-Advanced technique called carrier aggregation. Combine that with Sprint's planned antenna frenzy, and Spark will become a lot more impressive.

This post was updated at 6:45 PM PT to correct Sprint’s list of TD-LTE vendors.

Player FM relaunches with new UI and in-app indexing

Podcast discovery service Player FM is in the process of launching a new version of its Android app that features a completely revamped user interface as well as something that will make Google very happy: in-app indexing, which will allow the search engine to highlight Player FM's app within its mobile search results and further blur the lines between the web and native apps.

Player FM founder Michael Mahemoff told me Monday that the launch of the new 2.0 version of the app should be complete by Monday night. By that time, all upgraded users will have access to a new UI that comes with a card-like interface for podcast episodes, a full-screen mode during playback, a sleep timer and the ability to change the playback speed of a podcast, which I guess could be helpful if you're using podcasts to learn a foreign language.

Player FM's new card-based UI.

But one of the bigger changes is under the hood: Player FM is one of a number of Android apps that have added app indexing, which means that Google is capable of crawling and indexing the app's in-app content as if it were a website. Google is using in-app indexing to surface app content in its users' mobile search results. In the case of Player FM, this means that users who already have the app installed can search for a podcast title and find a link to the specific episode within the Player FM app directly within the search results.
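
The deep links behind this use Google's android-app:// URI scheme, which encodes the app's package name plus an in-app path. Here is a minimal sketch of how such a link decomposes; the Player FM package name and episode path below are invented for illustration:

from urllib.parse import urlparse

# App-indexing links follow the documented format:
#   android-app://{package_id}/{scheme}/{path}
# The package name and path here are hypothetical.
def parse_app_link(uri):
    parsed = urlparse(uri)
    scheme, _, path = parsed.path.lstrip("/").partition("/")
    return {"package": parsed.netloc, "scheme": scheme, "path": path}

link = "android-app://fm.player.example/playerfm/series/1234/episode/5678"
print(parse_app_link(link))
# {'package': 'fm.player.example', 'scheme': 'playerfm',
#  'path': 'series/1234/episode/5678'}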

For Google, these kinds of in-app results are a way to remain relevant in an era where online usage increasingly moves from desktops to mobile devices, and with that from the browser to dedicated apps. Google first announced in-app search at its Google I/O developer conference a year ago, and is likely going to update us on the development of the program at this year's Google I/O conference in San Francisco later this week.

Is time spent a better metric than pageviews? Upworthy says it is, and open-sources its code for attention minutes

At this point, almost everyone — online publishers and advertisers alike — agrees that raw pageviews are a poor measure of the value that a media outlet provides, but no one can figure out exactly what to replace them with. Some sites have chosen to focus on social sharing, while others are creating their own ways of measuring the actual time that a reader spends with a site’s content. Upworthy’s version of this metric is called “attention minutes,” and the company said Monday it has open-sourced the code so others can use it.

The need for a solid metric is fairly obvious: publishers of all kinds mostly rely on advertising, so they have to be able to show the brands, agencies and networks they deal with that they are reaching large numbers of the people they’re being paid to reach. The easiest ways to do that are pageviews and unique visitors, but both of these metrics are flawed to some extent — and so are raw clicks, likes and shares, as Upworthy points out in its blog post:

“Clicks and pageviews, long the industry standards, are drastically ill-equipped for the job. Even the share isn't a surefire measure that the user has spent any time engaging with the content itself. It's in everyone's interest — from publishers to readers to advertisers — to move to a metric that more fully measures attention spent consuming content. In other words, the best way to answer the question is to measure what happens between the click and the share.”

Many people see social as the new SEO, but focusing on shares is flawed because — as Chartbeat co-founder and CEO Tony Haile has pointed out — the data shows virtually no correlation whatsoever between whether people share a link to a piece of content and whether they have actually read it, since many users are apparently happy to share links to things they haven’t spent any time with at all.

Trying to measure actual reader attention

So Upworthy and Chartbeat both have their own metrics: Upworthy calls its version “attention minutes” and Chartbeat calls its measurement “engaged time.” Although they use somewhat different methods, both track how far readers get through a page of content or a video before they click away, and use other signals to determine whether a page is simply open in a tab or whether the reader is actually involved in reading or watching the content. Interestingly enough, Upworthy says that Twitter does very well as a source when measured by total visitors, but somewhat less so when measured by actual engaged time or attention minutes.
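
Upworthy's open-sourced code is client-side JavaScript, but the core idea is easy to sketch: the page sends a heartbeat every few seconds only while the tab is focused and the reader has recently scrolled, moved the mouse or kept a video playing, and attention minutes are simply the sum of those heartbeats. Here is a minimal sketch of the aggregation step, assuming a hypothetical five-second ping interval (not Upworthy's actual value):

HEARTBEAT_SECONDS = 5  # assumed client ping interval, for illustration

def attention_minutes(heartbeats_by_page):
    """Convert counts of 'engaged' heartbeats into attention minutes."""
    return {
        url: count * HEARTBEAT_SECONDS / 60
        for url, count in heartbeats_by_page.items()
    }

# A raw pageview counts both pages equally; attention minutes do not.
pings = {"/cute-otters": 36, "/longread-on-metrics": 144}
print(attention_minutes(pings))
# {'/cute-otters': 3.0, '/longread-on-metrics': 12.0}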

Other sites such as YouTube, Medium and the Financial Times also focus on total time spent rather than just measuring page loads or unique visitors — and Medium said recently that it has even started compensating some of its writers based on the amount of time readers spend with their stories. But not everyone believes that “time spent” is an effective metric: Gawker editorial director Joel Johnson, for example, told BuzzFeed recently that he would rather measure user satisfaction than just the amount of time readers spend on a page:

“Perhaps someday, but that's cart before horse, really; what we want to measure is user engagement through satisfaction. Maybe time-on-page will be part of that, maybe not.”

A Vox Media spokesperson told BuzzFeed that different types of content serve different purposes, and some may not be designed to hold a reader’s attention for a long time. “We don't really care if someone spends five minutes on a TV schedule, game-time post because that isn't the point… so time on site as an end-all, be-all metric doesn't really work at the moment.” Haile, however, pointed out that the service’s “engaged time” is an aggregate over a period of a week or month, so posts that take a short time or a long time both have a place.

Post and thumbnail images courtesy of Shutterstock / ollyy

Sponsored post: A practical guide to cloud onboarding

There are three critical areas of preparation that ensure successful cloud onboarding. The first step is workload analysis. Because workloads differ in terms of their importance and cost to your organization, a thorough workload analysis is critical to a successful onboarding process. For example, a highly optimized application may be relatively easy to migrate to the cloud, but the move may offer little or no additional benefit for the effort. As well as helping you identify and prioritize candidate workloads for cloud migration, this kind of assessment will help you detail the requirements for onboarding those applications that you do migrate.

The next stage is getting the application and associated services ready for onboarding. For example, preparing an application workload for forklifting into a cloud environment will require virtualization if the application is not already running in a virtualized environment. Associated databases may also need to be rationalized to work within, or with, the cloud environment. As you tackle all of this, there are three fundamental characteristics to pay attention to: scalability, resilience and security.

The final step is choosing the right provider. When migrating applications to a cloud environment, the specific environment you choose and the services offered by your cloud service provider will determine the work you need to do and the help you can get in doing it. Ideally, therefore, you should choose your provider in parallel with analyzing and preparing workloads for migration.

To learn more, download the cloud onboarding white paper from Interxion.

Ouch! Verizon pulls free LTE for Chromebook Pixel one year early

About 14 months ago, Google started selling the Chromebook Pixel. The flagship laptop cost $1,299 for a Wi-Fi model, or $150 more for one with integrated Verizon LTE. I opted for the latter, partially because Verizon was also throwing in 100 MB of free LTE service each month for two years. But somehow, the carrier thinks one year really equals two.

As JR Raphael reported on Monday at Computerworld, Verizon is only honoring the free LTE offer for the first 12 months of Pixel ownership. I didn’t even realize the free service had stopped, but Verizon told Raphael that enforcement of this policy started in April, one year after the Pixel with LTE was first introduced. I don’t recall any notice being sent informing customers of the change. Then again, Verizon appears to think this isn’t a change at all, based on Raphael’s chat with Verizon customer service:

“Verizon is telling customers that as far as it’s concerned, the plans were valid only for one year — and that’s why those initiated last spring are now expiring. I called the carrier’s customer service line and, after holding for 15 minutes and then talking in circles to an agent for another 10, was able to get through to a supervisor. That person politely told me he wasn’t aware of any two-year commitment and that — despite my pointing out official documentation to the contrary — there was nothing he could do to help me.”

I actually thought the LTE deal was good for three years, the same deal as the 1 TB of free Google Drive storage that comes with every Pixel purchase. I verified my storage expiration date — it’s April 2016. And Raphael has evidence showing the free LTE service was supposed to be valid for three years as well, although some cached versions of the offer show two years. Either way, none of the evidence says one year.

Frankly, 100 MB of LTE service isn’t much, but that’s not really the point. It’s enough to use in an emergency situation for a quick web search, email check or to handle some brief online issue; I’ve used the free service to edit or make corrections on blog posts, for example. The bigger issue here is Verizon reneging on a deal that may have had some influence on purchase decisions. It’s the same type of benefit, for example, that got me to buy an LTE version of Apple’s iPad Air: T-Mobile offers 200 MB of free data each month. And when I need more, I can purchase it. I took the same approach with the Pixel.

Unfortunately, I’m doubtful that Verizon will see the common sense behind reinstating the free offer for Pixel owners. Android Police scoured the Google Product Forums and found this entry from Joe Ellet:

“That was Verizon’s unilateral decision with no advance notice and no discussion with Google, HP or anyone else. You can contact Verizon but you’ll get the same answer they’ve given to thousands of people before you, which is basically “pay up”. Now that Verizon has reneged on their deal the only thing I can hope is that someone will be able to unlock the 3G and 4G radios so people can go elsewhere, but don’t hold your breath on that.”

Verizon should do the right thing here and honor the original agreement made with Chromebook Pixel owners. I still think it was supposed to be three years, but at this point I suspect people would be happy if Verizon met them in the middle and gave them 100 MB of free LTE for two years.

How to pick the best mobility management strategy for your business

Today’s mobility management services market is wide, varied and confusing. For businesses wanting to pursue a strategy here, what matters most is a clearly defined plan, the right partners and attention to a host of other interdependent factors.

Parking space apps driven out of San Francisco, and that’s a good thing

Why just drive away from a great parking spot when you could auction it off? It came as news to me, but people are already doing this in San Francisco thanks to iPhone apps like MonkeyParking and ParkModo, which let you sell your parking spot to the highest bidder.

The fun appears to be at an end, however, as San Francisco’s top lawyer has sent out a cease-and-desist telling MonkeyParking to stop renting out city property without permission.

“Technology has given rise to many laudable innovations in how we live and work — and MonkeyParking is not one of them,” City Attorney Dennis Herrera said in a statement, adding that he is also asking Apple to pull such apps from its store.

The head of MonkeyParking disagrees, and is framing the city’s decision as an example of overbearing regulation.

    "We believe that a new company providing value to people should be regulated and not banned. This applies also to companies like Airbnb, Uber and Lyft that are continuously facing difficulties while delivering something that makes users happy,” said CEO Paolo Dobrowolny in an email statement to the Wall Street Journal.

The legal case is interesting. San Francisco is arguing that the parking apps violate a city law that forbids anyone to “enter into a lease, rental agreement or contract of any kind…for the use of any street or sidewalk.” MonkeyParking, for its part, makes a clever counter-argument that people are simply selling information about their departure time and not, as the city says, renting real estate.

So who’s right? My own two cents is that San Francisco and other cities should smother these services as quickly as possible. Despite MonkeyParking’s attempt to frame its apps as part of the so-called “sharing economy,” the comparison doesn’t hold up.

While other “sharing economy” examples involve companies using technology to expand and better allocate existing inventory such as taxis (Uber) or apartments (Airbnb), the parking apps are predicated on introducing new scarcities. They create an incentive to introduce more cars to further limit parking, which will aggravate city stress — and in a way that favors the rich to boot.

“It’s as if Airbnb were paying people to burn down hotels,” wrote one wag on Hacker News, pointing to ParkModo’s reported practice of paying people $13 to sit in cars and rent out spots.

Update: it turns out there’s a peer-to-peer service called CARMAnation that lets users swap and sell private parking spaces. Co-founder Ashley Cummings wrote to say San Francisco gave the service a green light this week (which makes good sense, given that the site involves the leasing of private property and will increase parking resources).

Here’s a copy of the cease-and-desist and a press release with more details:

Monkey Cease Presskit

Intel promises bigger, faster chips and silicon photonics for HPC

Intel on Monday announced a brawny new HPC processor and a family of network fabric components that incorporate silicon photonics technology. The new chip, the next generation of the Xeon Phi family, comes with up to 16 gigabytes of high-performance memory and more than 60 computing cores. The fabric lineup, called Omni Scale, will include PCIe adapters, switches and software, as well as director switches that replace electrical transceivers with silicon photonics for improved speed and fewer cables. Intel is promising better performance and greater efficiency with the new tech — something it might need, considering the introduction of ARM-plus-GPU-based chips into the HPC market that Intel presently dominates.

Thanks to tablets and better TV tech, there are more gadgets but they’re using less energy

Americans are buying more gadgets than ever before, but those gadgets are not collectively using more energy. Why not? According to a new report from the research firm Fraunhofer USA, commissioned by the Consumer Electronics Association, more efficient TV technology and a switch from desktop and laptop computers to tablets delivered a sizable drop in the total energy used by gadgets in the U.S. in 2013.

That’s good news for the argument that technology and innovation can help solve some of the world’s most pressing problems, and it also shows that there’s considerable room for energy-efficiency upgrades in digital devices. A similar dynamic has been seen with the energy use of the world’s data centers. The CEA report is also an indicator that the push for efficiency standards by government programs like Energy Star is working, to some extent.

Broken, tossed TV, courtesy of sinada / Shutterstock.

According to the report, there were 3.8 billion consumer electronics devices in use in American homes in 2013, and they collectively consumed 169 TWh. In 2010 there were 2.9 billion such devices, and they collectively consumed 193 TWh, meaning the larger 2013 gadget population used about 12 percent less energy than its 2010 counterpart.
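
Worked through directly, the report’s figures amount to roughly 31 percent more gadgets drawing about 12 percent less energy:

# The report's headline numbers, compared directly.
gadgets_2010, twh_2010 = 2.9e9, 193
gadgets_2013, twh_2013 = 3.8e9, 169

print(f"gadgets: {gadgets_2013 / gadgets_2010 - 1:+.0%}")  # +31%
print(f"energy:  {twh_2013 / twh_2010 - 1:+.0%}")          # -12%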

The most widely owned consumer electronics device in the U.S. is the television. There are more TVs than people in the U.S., with 338 million owned in 2013. So it’s not surprising that TVs have long been the biggest energy consumers among our home gadgets, accounting for 30 percent of the total.

But the report estimates that the number of plugged-in TVs — that is, TVs currently in use, not just owned — is down for the first time in over a decade, by 50 million (or 14 percent) from 2010. The conclusion is that Americans are finally unplugging — or stashing in the garage or basement — their older cathode-ray-tube TVs. Those old sets are so dated, and so inferior to today’s widely used flat-panel displays, that they are simply being dumped or ignored.

Energy-efficient display tech is now the norm. Image courtesy of Amazon.

This shift from cathode-ray-tube TVs to LCD TVs led to a 20 percent drop in overall TV energy consumption, even though usage per TV actually went up over this period. Ninety percent of TVs sold in 2013 were LCDs.

Once all of the old TVs are ditched or unplugged, though, this trend could change over the coming years. More internet-connected TVs could actually raise the overall energy consumption of TVs, because active-mode use is much higher on internet-connected TVs than on non-connected ones — in other words, internet-connected TVs are “always on.” Currently, internet-connected TVs have only 15 percent penetration.

Together, TVs and computers accounted for 43 percent of the overall electricity use coming from U.S. gadgets in 2013. For computers, there was another sizable technology shift: the emergence of tablets.

Tablet ownership is up, image courtesy of Thinkstock.

While there’s been a 10-fold increase in the ownership of tablets — from 4 percent penetration in 2011 to 39 percent in 2013 — there’s been an overall decrease in the use of desktop and “portable” (laptops and netbooks) computers, as well as less use of monitors. Tablets are considerably more energy efficient than desktop and laptop computers, so overall collective computer energy use has dropped by 25 percent since 2010, thanks to tablets.

Tablets use 8.8 times less energy than portable PCs (per unit) but have an 11 percent higher installed base, says the report. As more and more computing is done on smartphones (which this report did not count as computing devices), that energy-efficiency trend will continue. The per-unit energy consumption of smartphones is lower than that of tablets.

Other consumer electronics saw energy-efficiency progress as well, thanks to technology gains. For video game consoles, recent versions of the PS3 and Xbox 360 use considerably less power than earlier versions (65 and 52 percent less, respectively). For networking devices (modems, routers), the first Energy Star energy specifications were due in September 2013. Set-top boxes — the third-largest energy consumers among home gadgets — should see lower energy consumption thanks to a voluntary agreement reached in December 2012. It’s progress, for sure.

Overall, the use of consumer electronics is growing quickly in the U.S., but these devices aren’t the main energy draw in our homes. Most household energy consumption comes from heating and cooling, water heating and lighting. Still, consumer electronics collectively consume more energy than refrigeration and cooking.

While the report was done by a third party, Fraunhofer, remember that the Consumer Electronics Association commissioned it, and the CEA has a considerable stake in getting more people to buy more gadgets. So keep that in mind if you read the entire thing.

OneDrive storage gets a whole lot cheaper, with 1TB for Office 365 home subscribers

Space on Microsoft's OneDrive cloud storage service just got a lot cheaper. Microsoft has more than doubled the base storage, from 7GB to 15GB, and it has cut the prices of paid storage: 100GB has been slashed from $7.49 per month to $1.99, and 200GB has gone from $11.49 a month to $3.99, meaning that, strangely, two 100GB subscriptions are now slightly cheaper than one 200GB plan.
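
The arithmetic behind that oddity:

# Two of the new 100GB plans narrowly undercut one 200GB plan.
monthly = {"100GB": 1.99, "200GB": 3.99}  # prices in USD per month
two_small = 2 * monthly["100GB"]
print(f"2 x 100GB = ${two_small:.2f}, 1 x 200GB = ${monthly['200GB']:.2f}")
# 2 x 100GB = $3.98, 1 x 200GB = $3.99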

The storage bundled with Office 365 has also been substantially upgraded. Office 365 Home, at $9.99 a month, Office 365 Personal, at $6.99 a month, and Office 365 University, at $79.99 for four years, will all come with 1TB of OneDrive space per user. The Office 365 Home subscription is good for up to five users, and as such it provides a total of 5TB.

Business users haven't been left out either. All business Office 365 schemes will now come with 1TB of OneDrive for Business per user. OneDrive for Business, though similarly named to OneDrive, is a very different service that's built on top of SharePoint.

Poorly anonymized logs reveal NYC cab drivers’ detailed whereabouts

In the latest gaffe to demonstrate the privacy perils of anonymized data, New York City officials have inadvertently revealed the detailed comings and goings of individual taxi drivers over more than 173 million trips.

City officials released the data in response to a public records request and specifically obscured the drivers' hack license numbers and medallion numbers. Rather than include those numbers in plaintext, officials filled the 20-gigabyte file with one-way cryptographic hashes generated using the MD5 algorithm. Instead of a record showing medallion number 9Y99 or hack number 5296319, for example, those numbers were converted to 71b9c3f3ee5efb81ca05e9b90c91c88f and 98c2b1aeb8d40ff826c6f1580a600853, respectively. Because they're one-way hashes, they can't be mathematically converted back into their original values. Presumably, officials used the hashes to preserve the privacy of individual drivers, since the records provide a detailed view of their locations and work performance over an extended period of time.

It turns out there's a significant flaw in the approach. Because both the medallion and hack numbers follow predictable patterns, it was trivial to run every possible value through the same MD5 algorithm and then compare the output to the data contained in the 20GB file. Software developer Vijay Pandurangan did just that, and in less than two hours he had completely de-anonymized all 173 million entries.
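
The attack is easy to reproduce in principle. Here is a minimal sketch for a single medallion pattern (a digit, a letter, and two digits, like the 9Y99 example above); Pandurangan's actual code enumerated every medallion and hack-number format the same way:

import hashlib
from itertools import product
from string import ascii_uppercase, digits

# Build a reverse-lookup table for one medallion pattern: only
# 10 * 26 * 10 * 10 = 26,000 candidates, hashed in well under a second.
rainbow = {}
for d1, letter, d2, d3 in product(digits, ascii_uppercase, digits, digits):
    medallion = f"{d1}{letter}{d2}{d3}"
    rainbow[hashlib.md5(medallion.encode()).hexdigest()] = medallion

# Inverting an "anonymized" record is now a dictionary lookup.
print(rainbow["71b9c3f3ee5efb81ca05e9b90c91c88f"])  # -> 9Y99

With so few candidates per pattern, an unsalted hash of a structured identifier offers essentially no anonymity at all.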

“Civilian casualties” authorized under secret US drone-strike memo

A secret Obama administration memo disclosed Monday outlines the legal justification for the government's targeted-killing drone program, a lethal strategy that authorizes the killing of innocents as collateral damage.

The memo (PDF), released by a US federal appeals court under a Freedom of Information Act request, describes the government's legal underpinnings for its so-called overseas targeted-killing program, in which drones fire missiles from afar at buildings, cars and people. The program began under the George W. Bush administration but was broadened under Obama and now includes the killing of Americans.

The Obama administration fought for years to keep the Justice Department Office of Legal Counsel memo from becoming public. The document says that lethal force is authorized under international war rules and the US war on terror. Rights groups, however, decried the 41-page document, saying that it amounted to a legal blueprint for other nations to follow.

Supercomputer slowdown: World’s fastest system sees no new challengers

The group that measures the world's Top 500 fastest supercomputers hasn't crowned a new champion in more than a year.

Tianhe-2, of China's National Super Computer Center, took over the top spot in June 2013 with a measured speed of 33.86 petaflop/s, and it held on to #1 in both the November 2013 list and the June 2014 list released yesterday.

The follow-up to Tianhe-1A, Tianhe-2 uses Ivy Bridge-based Intel Xeons and Intel Xeon Phi coprocessors for a total of 3.12 million cores. The computer draws 17,808 kilowatts of power, delivering 1.9 gigaflop/s per watt, and can theoretically hit speeds of up to 54.9 petaflops.
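
The efficiency figure follows directly from the other two numbers:

# 33.86 petaflop/s of measured speed over 17,808 kW of power draw.
flops = 33.86e15
watts = 17_808e3
print(f"{flops / watts / 1e9:.2f} gigaflop/s per watt")  # 1.90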
