Apple Acquires SensoMotoric, a German Firm Specialising in Eye-Tracking Tech

If Apple’s announcement of ARKit at WWDC 2017 was not proof enough of the company’s dedication to augmented reality, it has now acquired a computer vision company that specialises in the field. According to the latest reports, Apple has acquired SensoMotoric Instruments, a German company that provides eye-tracking solutions.

While a MacRumors report first suggested that the company had acquired SensoMotoric Instruments, the Cupertino-based company gave Axios its standard boilerplate statement: “Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans.” While Apple almost never officially discloses acquisitions to the public, this standard response can be seen as a confirmation.

As SensoMotoric Instruments provides eye-tracking technology, which plays a key role in the development of augmented reality applications, it seems likely that Apple will focus heavily on this area going forward.

However, this should not come as a surprise to anyone who has been tracking news about the company, currently headed by Tim Cook, over the last two years. Cook himself has spoken openly several times about the potential that augmented reality holds.

As we mentioned, Apple introduced ARKit alongside iOS 11 at WWDC 2017. ARKit provides fast and stable motion tracking, ambient light estimation and more, using the sensors already in iPhones and iPads. “This will make ARKit the largest AR platform in the world,” Craig Federighi said at the event.

As there is no official information from the company, it is currently unclear what Apple plans to do with its new acquisition. However, we can expect the company to make an announcement about its augmented reality efforts in the near future.

Uber knew fired engineer had information about Google’s self-driving car tech

Uber has acknowledged hiring a former Google engineer — now accused of stealing self-driving car technology — despite having received warnings that he was still carrying around some of his former employer’s property.

The admission, contained in a Thursday court filing, is the latest twist in a high-profile legal fight between the ride-hailing company and a Google spin-off, Waymo. Both companies are battling to build self-driving cars that could reshape the way people travel.

Waymo alleges that Anthony Levandowski, the former Google engineer at the crux of the case, ripped off its trade secrets before departing in January 2016 to found a robotic vehicle startup that Uber acquired seven months later.

The lawsuit maintains that Uber then transplanted the intellectual property allegedly stolen by Levandowski into its own fleet of self-driving vehicles — a charge that Uber has adamantly denied since Waymo filed its complaint in federal court four months ago.

In May, US district judge William Alsup ordered Uber to return the stolen files, writing that evidence indicated the company “knew or should have known that he possessed over 14,000 confidential Waymo files.”

Now, Uber has for the first time acknowledged that Levandowski informed its now-departed CEO, Travis Kalanick, that he had five disks filled with Google’s information five months before joining Uber. The disclosure, made in March 2016, lends credence to Waymo’s allegation that Levandowski downloaded 14,000 documents onto a computer before leaving Google.

Uber, though, says Kalanick told Levandowski not to bring any of the Google information with him to Uber. At that time, a deal had been reached for Uber to buy Levandowski’s startup, Otto, for $680 million, though the acquisition wasn’t completed until August 2016.

The filing asserts that Levandowski destroyed the disks containing Google’s material not long after Kalanick told him that Uber didn’t want the information on them.

Levandowski’s lawyers didn’t immediately respond to requests for comment. They have been advising Levandowski to assert his Fifth Amendment right against self-incrimination since Waymo filed its lawsuit.

Based on the evidence he has seen so far, Alsup has already referred the case to the Justice Department for a potential criminal investigation.

The scenario sketched by Uber comes a few weeks after the company fired Levandowski for refusing to relinquish his Fifth Amendment rights and cooperate with its efforts to defend itself against Waymo’s suit.

Kalanick resigned as Uber’s CEO on Tuesday after investors demanded he step down. The investors who have financed Uber’s growth had concluded Kalanick had to go following revelations of sexual harassment in the company’s offices, a federal investigation into company tactics used to thwart regulators, and the threat of even more trouble posed by the Waymo lawsuit.

Consumer tech should be cheaper by July, when import tariffs end

In a move meant to cut the cost of IT and consumer electronics products, the World Trade Organization is ending tariffs on imports including game consoles, TVs, GPS receivers and advanced chips from July 2016.

The tariffs—in some countries as high as 35 percent on products such as video cameras—will be phased out over seven years under an agreement finalized Wednesday that applies to all 192 member countries of the WTO.

The vast majority of the savings will take effect next July, when tariffs are abolished on around 130 categories of IT equipment, accounting for 88 percent of affected imports, WTO Director-General Roberto Azevêdo said Wednesday. By 2019, 95 percent of imports affected by the changes will be tariff-free, with all 201 product categories covered by the agreement being exempted within seven years, Azevêdo said.

Other products that will become tariff-exempt include touch screens, telecommunications satellites, tools for manufacturing printed circuits and some medical products, he said.

While cars aren’t directly affected by the trade deal, components to make them safer and more fuel-efficient are, as the multicomponent integrated circuits they contain were recently reclassified as semiconductors, one of the categories exempted from tariffs, according to the European Semiconductor Industry Association.

The agreement affects around 10 percent of the world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually, according to IT industry lobby group DigitalEurope. It expects a $190 billion boost to global GDP from the changes.

The deal doesn’t go far enough for DigitalEurope, which wants to see a similar agreement on trade in IT-related services introduced by the end of 2016.

Shopping tech firm Powa in major Chinese joint venture

Powa Technologies boss Dan Wagner

Powa Technologies, a British e-commerce tech firm, has formed a “strategic alliance” with China’s biggest payments processor, UnionPay Network Payments.

The joint venture could generate $5bn (£3.3bn) in revenues over three years, said Powa’s boss, Dan Wagner.

UnionPay Network Payments is owned by China UnionPay, which has 4.5 billion credit and debit card users worldwide.

Powa’s technology enables shoppers to pay for goods quickly in-store and online using their smartphones.

“This is undoubtedly a huge deal for Powa,” said electronic payments expert, Dave Birch of Hyperion Consulting.

The joint venture, PowaTag UnionPay, will launch first in Guangdong Province, targeting 400,000 retailers, the company says, before rolling out to one million by the end of 2016.

“We have a target to reach at least 50 million consumers regularly using the platform within one year from launch,” said PowaTag UnionPay’s chairman, Mr Hu Jinxiong.

China’s merchants – there are six million in total – will pay about 13p per transaction to the joint venture for access to the technology, said Mr Wagner.

‘We’ve trumped Apple Pay’

The PowaTag system relies on digital tags – quick response (QR) codes – that can be attached to physical goods or inserted into self-service checkout screens, emails, websites, posters, images – even the audio from TV ads.

Wherever Chinese shoppers see the PowaTag UnionPay symbol they will be able to buy products by scanning them with their phones and tapping the “buy now” button, the company says.
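
For illustration only, here is a minimal Python sketch of the kind of tag generation such a system implies: a QR code encoding a merchant and product reference that a payment app could resolve after scanning. The “powatag://” payload format and the identifiers in it are invented for this example (the article does not describe Powa’s actual encoding), and the third-party qrcode library is assumed.

```python
# Hypothetical sketch: generating a scannable product tag as a QR code.
# The payload format and identifiers below are made up for illustration.
import qrcode  # third-party: pip install qrcode[pil]

payload = "powatag://buy?merchant=DEMO123&sku=SKU-42&currency=CNY"
img = qrcode.make(payload)       # build the QR symbol
img.save("product_tag.png")      # tag to print on packaging, posters, receipts
```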

US retail giant Walmart recently launched a similar quick pay system for mobiles in its stores.

China’s Commerce Department says the “online to offline” market, whereby shoppers search for products online then complete the purchase in-store, grew 80% in the first half of 2015 and is worth about £31bn ($47bn).

“Why did China UnionPay decide to partner with a little British technology company?” said Mr Wagner. “We’ve trumped Apple Pay and the rest of the world here.”

‘Tap-and-go’

State-owned China UnionPay has been responding to the rapid take-up of smartphones across the country – about 68% of the population now has one.

On 12 December, it launched QuickPass – a “tap-and-go” payment system for mobile phones similar to Apple Pay and other digital wallets – in co-operation with more than 20 commercial banks.

QuickPass is already available at more than 10,000 locations in mainland China, says UnionPay, including at retailers such as Carrefour, McDonald’s, and Costa.

“The Chinese market is going mobile very quickly,” says Mr Birch. “And the integration of payment systems and messaging platforms such as WeChat is a very interesting development.”

This latest deal with Powa will give Chinese shoppers yet another way to shop using their mobiles.

‘X-Ray Vision’ Tech Uses Radio Waves to ‘See’ Through Walls

RF-Capture Tech

“X-ray vision” that can track people’s movements through walls using radio signals could be the future of smart homes, gaming and health care, researchers say.

A new system built by computer scientists at MIT can beam out radio waves that bounce off the human body. Receivers then pick up the reflections, which are processed by computer algorithms to map people’s movements in real time, they added.

Unlike other motion-tracking devices, however, the new system takes advantage of the fact that radio signals can pass through walls, which visible light cannot. This allowed the system, dubbed RF-Capture, to identify 15 different people through a wall with nearly 90 percent accuracy, the researchers said. The RF-Capture system could even track their movements to within 0.8 inches (2 centimeters).

Researchers say this technology could have applications as varied as gesture-controlled gaming devices that rival Microsoft’s Kinect system, motion capture for special effects in movies, or even the monitoring of hospital patients’ vital signs.

“It basically lets you see through walls,” said Fadel Adib, a Ph.D. student at MIT’s Computer Science and Artificial Intelligence Lab and lead author of a new paper describing the system. “Our resolution is still nowhere near what optical systems can give you, but over the last three years, we have moved from being able to detect someone behind a wall and sense coarse movement, to today, where you can see roughly what a person looks like and even get a person’s breathing and heart rate.”

The team, led by Dina Katabi, a professor of electrical engineering and computer science at MIT, has been developing wireless tracking technologies for a number of years. In 2013, the researchers used Wi-Fi signals to detect humans through walls and track the direction of their movement.

The new system, unveiled at the SIGGRAPH Asia conference held from Nov. 2 to Nov. 5 in Japan, uses radio waves that are 1,000 times less powerful than Wi-Fi signals. Adib said improved hardware and software make RF-Capture a far more powerful tool overall.

“These [radio waves used by RF-Capture] produce a much weaker signal, but we can extract far more information from them because they are structured specifically to make this possible,” Adib told Live Science.

The system uses a T-shaped antenna array the size of a laptop that features four transmitters along the vertical section and 16 receivers along the horizontal section. The array is controlled from a standard computer with a powerful graphics card, which is used to analyze data, the researchers said.

Because inanimate objects also reflect signals, the system starts by scanning for static features and removes them from its analysis. Then, it takes a series of snapshots, looking for reflections that vary over time, which represent moving human body parts.
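
As a rough illustration of that first step (not MIT’s actual code), the sketch below uses NumPy to estimate the static clutter as the time-average of the reflection snapshots and keep only what changes between frames, i.e. the moving body parts. The array shapes and data are invented for the example.

```python
# Minimal sketch of static-clutter removal via background subtraction.
import numpy as np

def moving_reflections(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) reflection snapshots taken over time."""
    background = frames.mean(axis=0)    # estimate of static clutter (walls, furniture)
    return np.abs(frames - background)  # keep only what varies: moving body parts

# Synthetic example: 50 snapshots of a 64x64 reflection map.
rng = np.random.default_rng(0)
print(moving_reflections(rng.normal(size=(50, 64, 64))).shape)  # (50, 64, 64)
```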

However, unless a person’s body parts are at just the right angle in relation to the antenna array, they will not redirect the transmitted beams back to the sensors. This means each snapshot captures only some of the body parts, and which ones are captured varies from frame to frame. “In comparison, with light, every part of the body reflects the signal back, and that’s why you can recover exactly what the person looks like using a camera,” Adib said. “But with [radio waves], only a subset of body parts reflect the signal back, and you don’t even know which ones.”

The solution is an intelligent algorithm that can identify body parts across snapshots and use a simple model of the human skeleton to stitch them together to create a silhouette, the researchers said. But scanning the entire 3D space around the antenna array uses a lot of computer power, so to simplify things, the researchers borrowed concepts from military radar systems that can lock onto and track targets. [6 Incredible Spy Technologies That Are Real]

Using a so-called “coarse-to-fine” algorithm, the system starts by using a small number of antennas to scan broad areas and then gradually increases the number of antennas in order to zero in on areas of strong reflection that represent body parts, while ignoring the rest of the room.
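
A toy version of that coarse-to-fine idea is sketched below, only as an illustration and under the assumption that the input is a 2D map of reflected power: score coarse blocks first, then search for exact maxima only inside the strongest blocks instead of scanning the whole map at full resolution.

```python
# Illustrative coarse-to-fine peak search over a reflection-power map
# (not the actual RF-Capture algorithm).
import numpy as np

def coarse_to_fine_peaks(power: np.ndarray, coarse: int = 8, keep: int = 3):
    """power: (H, W) map of reflected signal strength; returns peak coordinates."""
    H, W = power.shape
    # Coarse pass: average power inside each coarse x coarse block.
    blocks = power[:H - H % coarse, :W - W % coarse].reshape(
        H // coarse, coarse, W // coarse, coarse).mean(axis=(1, 3))
    # Fine pass: refine only inside the strongest blocks.
    strongest = np.argsort(blocks, axis=None)[::-1][:keep]
    peaks = []
    for by, bx in zip(*np.unravel_index(strongest, blocks.shape)):
        patch = power[by * coarse:(by + 1) * coarse, bx * coarse:(bx + 1) * coarse]
        py, px = np.unravel_index(patch.argmax(), patch.shape)
        peaks.append((by * coarse + py, bx * coarse + px))
    return peaks  # fine-grained locations of strong reflectors (body parts)

rng = np.random.default_rng(1)
print(coarse_to_fine_peaks(rng.random((64, 64))))
```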

This approach allows the system to identify which body part a person moved, with 99 percent accuracy, from about 10 feet (3 meters) away and through a wall. It could also trace letters that individuals wrote in the air by tracking the movement of their palms to within fractions of an inch (just a couple of centimeters).

Currently, RF-Capture can only track people who are directly facing the sensors, and it can’t perform full skeletal tracking as traditional motion-capture solutions can. But Adib said that introducing a more complex model of the human body, or increasing the number of arrays, could help overcome these limitations.

The system costs just $200 to $300 to build, and the MIT team is already in the process of applying the technology to its first commercial application — a product called Emerald that is designed to detect, predict and prevent falls among the elderly.

“This is the first application that’s going to hit the market,” Adib said. “But once you have a device and lots of people are using it, the cost of producing such a device immediately gets reduced, and once it’s reduced, you can use it for even more applications.”

The initial applications of the technology are likely to be in health care, and the team will soon be deploying the technology in a hospital ward to monitor the breathing patterns of patients suffering from sleep apnea. But as the resolution of the technology increases, Adib said, it could open up a host of applications in gesture control and motion capture.

“We still have a long path to go before we can get to that kind of level of fidelity,” he added. “There are a lot of technical challenges that still need to be overcome. But I think over the next few years, these systems are going to significantly evolve to do that.”

Virtual Reality Tech Lets You ‘Teleport’ Back in Time

The Teleport 3D camera attaches to most smartphones.

The feeling you got when you first saw your newborn’s face. That glorious moment when the entire family was laughing over dinner. The epiphany you had when you reached the peak of your favorite mountain. If only you could travel back and experience those instances again.

A group of engineers is hoping to do just that with a virtual reality (VR) system that lets you take 3D videos with your phone and an accompanying virtual reality headset that lets you experience those memories again, whenever you want.

“Family started the idea,” said Justin Lucas, one of the technology’s creators. “Viewing 2D videos is how we look back at past moments. We wanted to create a more immersed feeling when viewing those favorite past moments.”

And they wanted to do it on the cheap.

“We wanted to create something affordable that anyone with a smartphone can use,” said Lucas, adding that current technology to take 3D videos and then experience them through VR already exists. However, that existing technology costs thousands of dollars, he told Live Science.

Teleport VR headset.

Called Teleport, the new system includes an aluminum 3D camera with two lenses, each of which acts like one of your eyes to capture the images from a slightly different perspective. Like your brain, the camera then combines these two views into a 3D picture.
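
As a loose illustration of the same stereo principle (not Teleport’s software), the sketch below uses OpenCV’s block-matching stereo algorithm to turn a left/right image pair into a rough depth map; the file names and parameters are placeholders chosen for the example.

```python
# Illustrative stereo sketch: two offset views -> per-pixel disparity (depth).
import cv2

left = cv2.imread("left_eye.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("right_eye.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)   # larger disparity = closer object
depth_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", depth_vis)
```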

Once you clamp the small camera onto your phone — the system works with iPhones and Androids — an app lets you use your phone to snap and store the video from the clamped-on camera. Then, you can attach the phone to either a VR headset created by Lucas and his team or a Google cardboard VR headset, he said.

And you can, for instance, “fly” through the Toys ‘R’ Us in Times Square, coming face-to-face with Mr. Potato Head, or, as one team member did, watch your daughter eating a cream puff at a bakery in New York City.

For a limited time, Lucas and his team are selling the VR camera, with a free VR cardboard headset, for $49; the Teleport VR headset for $19; and the Teleport VR camera and Teleport headset for $69. The team is raising money for their project through Indiegogo. As of today (Dec. 12), the campaign had raised $62,242 in funding, with 43 days left.

Other affordable virtual reality systems are in the pipeline from the likes of Samsung, whose Gear VR, selling for $199, works with the Galaxy Note 4 and lets you experience apps and games in a 3D world.

‘Star Wars’ Tech: 8 Sci-Fi Inventions and Their Real-Life Counterparts

A long time ago in a studio far, far away, filmmaker George Lucas created one of the seminal works of science fiction: the “Star Wars” movie series.

Nearly 40 years later, the ideas introduced by the films are still staples of the genre, and with the theatrical premiere of the latest installment, “Star Wars: The Force Awakens,” just days away, fans will be pleased to see lightsabers, hyperdrives and speeders in abundance.

While the science and technologies behind the franchise are firmly rooted in fantasy, their enduring appeal has served as inspiration for many real-life scientists and engineers. Here are some of the most notable attempts to turn “Star Wars'” science fiction into science fact.

Exoplanets

Central to the plot of “Star Wars” is the existence of vast numbers of planets, all connected by a galaxy-wide trade network. But it wasn’t until 1995, nearly 20 years after the 1977 release of the first movie, that the first exoplanet — a planet located outside Earth’s solar system — was definitively detected.

“To me, the most important thing about ‘Star Wars’ is the idea that humans’ future is in space,” said British science communicator Mark Brake, who put on a show called “The Science of Star Wars” last year. “Effectively, what it’s all about is trade and imperial development in a series of solar systems, and actually now we are beginning to discover these solar systems.”

More than 2,000 exoplanets have now been found, and in 2011, NASA’s Kepler space telescope discovered the first planet orbiting around two suns, just like Luke Skywalker’s fictional home planet Tatooine. The planet, dubbed Kepler-16b, is an uninhabitable gas giant, but in 2012, the telescope was used to discover two more planets in binary star systems that are extremely close to the so-called habitable zone. (This is the region around a star in which liquid water could flow on a planet’s surface.)

Hyperspace

star wars hyperdrive

In the films, spaceships like Han Solo’s Millennium Falcon are able to jet between solar systems that are light-years apart. According to “Star Wars” canon, these “hyperdrive” propulsion systems let intergalactic travelers jump into a shadow dimension called “hyperspace,” which provides shortcuts between points in real space.

While the movies are hazy on the details, the idea of hyperspace and faster-than-light (FTL) travel has a basis in real science, said Eric Davis, a physicist at the Institute for Advanced Studies at Austin, in Texas, who researches the possibility of FTL travel.

While it’s impossible to travel faster than light, the curved nature of space-time proposed by Albert Einstein suggests space could be distorted to shorten the distance between two points. One way of doing this would be a warp drive that contracts space in front of a ship and expands it behind the vessel. Another would be to create a wormhole, or a section of space that curves in on itself to create a shortcut between distant locations. Creating these kinds of distortions would require exotic matter with so-called “negative energy,” Davis told Live Science, a phenomenon that has been demonstrated in the lab using the Casimir effect, which can be measured as the force of attraction or repulsion between two parallel mirrors that are placed just tiny distances apart in a vacuum.

Earlier this year, a lab called Eagleworks, based at NASA’s Johnson Space Center in Houston, Texas, claimed to have created a warp drive that appears to exploit this effect to create spatial distortions in a vacuum. But, sadly for sci-fi fans, the lab’s unpublished findings have been met with skepticism. And Davis, an FTL optimist, called the claims “bizarre and questionable.”

“These remain as speculative theoretical concepts at present because they remain under further theoretical study and also because there is no technology envisioned that can implement them,” he said. “It might take between 50 and 300 years to develop the technology that produces traversable wormholes or warp drives.”

Speeders

A less conceptually troublesome mode of transport featured in “Star Wars” is likely a lot closer to being realized. A number of firms are currently trying to create working versions of “hoverbikes,” known as “speeders” in the films.

Aerofex Hoverbike

Aerofex, a California-based startup company, developed the Aero-X vehicle, which is described as “a hovercraft that rides like a motorcycle,” and can fly at 45 mph (72 km/h) up to 10 feet (3 meters) off the ground. For speed demons, U.K.-based Malloy Aeronautics’ Hoverbike is projected to reach speeds of more than 170 mph (274 km/h) at the same altitude as a helicopter.

Both Aerofex and Malloy Aeronautics’ hoverbikes use standard gasoline, but environmentally conscious “Star Wars” fans could soon have futuristic transportation alternatives, too. Bay Zoltan Nonprofit Ltd., a Hungarian state-owned applied research institute, has created an electric battery-powered tricopter called the Flike. Before you get your hopes up, though, all three vehicles are still firmly in the design phase.

Droids

Another ever-present feature of the “Star Wars” universe is droids: robots that act as personal servants, pilots, technicians and even soldiers. Today, there is a growing number of real-world analogues, ranging from automated military drones to Google’s driverless cars and robotic surgical assistants.

R2D2 and C-3PO

This summer, robots competed in the U.S. Defense Advanced Research Projects Agency (DARPA) Robotics Challenge Finals. The humanoid robots tackled complex challenges, including driving a vehicle, opening a door, climbing steps and turning off a valve.

The majority of the bots performed well in the competition, but these machines were only semiautonomous, meaning a human operator was almost constantly in control of the robot. So, while the machinery of modern robotics can match the clunky “Star Wars” droids, there’s a long way to go in making real robots as smart as their fictional counterparts, said Jerry Pratt, an expert in algorithms for bipedal walking and co-leader of the Florida-based Institute for Human and Machine Cognition’s team, which competed in the Robotics Challenge Finals, winning second place.

“The hard part is the artificial intelligence,” Pratt told Live Science. “We’re getting to the point where sensory-input devices are nearly as good [as], if not better than human sight. But [having the robots] actually understanding what [they] are looking at is what’s difficult. It’s small things like being able to look at a cup and understand what a cup is and understand that it’s something you put liquid in. Unless it is hand-coded by a human, we are pretty much nowhere at this point, and it’s hard to say what needs to happen.”

Lightsabers

Kylo Ren Lightsaber

The most iconic piece of “Star Wars” technology is the lightsaber, but it’s also probably the most far-fetched, experts say. The photons that make up light have long been considered massless particles that don’t interact with each other, which makes the prospect of clashing beams of light in epic lightsaber duels unlikely.

In 2013, however, researchers from Harvard University and the Massachusetts Institute of Technology (MIT) demonstrated that when pairs of photons were fired through a cloud of supercooled atoms, the photons emerged as a single molecule. Talking about the interaction between the particles to the Harvard Gazette, Mikhail Lukin, a professor of physics at Harvard, said, “It’s not an inapt analogy to compare this to lightsabers.”

But Davis said re-creating the effect in real life is a whole other ball game. “Lightsabers are purely fictional and will never be developed,” he said. “Using the contraptions and cryogenic equipment to produce trapped quantum gases 2 feet [0.6 m] from the end of a lightsaber emitter is impractical.”

But all is not lost when it comes to light-based weapons: Scientists are close to developing weapons similar to the blaster guns featured in “Star Wars.” In fact, the U.S. Navy has already demonstrated a ship-based laser weapon capable of shooting drones out of the sky and disabling small boats. And this summer, the U.S. Air Force began testing another laser-based weapon that is five times as powerful as the Navy’s version, and small enough to be fitted to fighter jets and Humvees.

Tractor beams

Light could also help replicate another interesting technology from the “Star Wars” franchise: the tractor beam, which is an invisible energy field that can grab, trap and move objects. Since the early 2010s, scientists have been creating lasers with unusual beam-intensity profiles that allow them to attract and repulse tiny particles.

The Millennium Falcon in "Star Wars Episode IV: A New Hope."

Just last year, researchers from the Australian National University broke the distance record for tractor beams by using a doughnut-shaped laser to drag hollow glass spheres up to 7.8 inches (20 centimeters), roughly 100 times farther than in previous experiments.

And just a couple months ago, a team from the University of Bristol, in the United Kingdom, showed that sound could rival light as the source of future tractor beams. The researchers used a precisely timed sequence of sound waves from an array of tiny loudspeakers to create a region of low pressure that effectively counteracts gravity, levitating tiny balls of polystyrene in midair. The balls could then be pulled, pushed and spun using only sound waves.

Holograms

When you’re trapped in the tractor beam of an Imperial Star Destroyer and facing certain doom, there’s no better way of sending a mayday message than via hologram. But while specially designed glasses have been used to create the illusion of 3D images for decades, free-standing holographic videos have been hard to reproduce.

hologram

In recent years, an old stage trick invented by John Pepper in the 19th century to give the illusion of a ghostlike apparition on stage has been revived, most notably to seemingly resurrect deceased rapper Tupac Shakur at the Coachella music festival in 2012. The method relies on a superthin sheet of foil hung at a 45-degree angle from the stage that is invisible to the naked eye but reflects images from a projector. The trick gives the illusion of a 3D image but only if you are standing in front of it.

Closer to the mark is the Voxiebox “swept surface volumetric display” made by Voxon, the result of a merger between two groups of Australian and American inventors. 3D models are sliced into hundreds of horizontal cross sections before a superfast projector beams them onto a flat screen that rapidly moves up and down. The human eye blends these projections together to create a 3D image that can move and be viewed from any angle, just like during Princess Leia’s message to Obi-Wan Kenobi in “Star Wars: Episode IV – A New Hope.”

The Force

Binding the entire “Star Wars” universe together is the concept of the Force, which gives Jedi knights their magical powers and provides the backdrop for the battle between good and evil.

Luke Skywalker holds a lightsaber.

Earlier this year, researchers at the Large Hadron Collider announced they had discovered the first unequivocal evidence for the phenomenon, with a certain diminutive green spokesperson remarking, “Very impressive, this result is.” Sadly for those aspiring Jedi out there, it was an elaborate April Fools’ Day prank.

But, with the Force due to awaken later this month, there may be hope for them yet.

Tech tip: Create your own music streaming service

These days we don’t carry our music around. We YouTube it, or we stream it from services like Apple Music and Saavn. But the quality of the music, the sound per se, is questionable at best for various reasons. In some cases, the music is also censored. What if you want to listen to your own library, uncensored and at the highest sonic quality? What if you have a song that isn’t available to stream? What is your solution? The cloud is your friend again, because there are multiple ways to upload your entire library and access it remotely, regardless of which device you are using. It will not just act as a backup but will solve a bevy of issues. We show you how.

Things to Note

1. Backing up a music library can be a time-consuming process, as upload speeds are normally much lower than download speeds. It also depends on how big the library you’re trying to back up is. If you’re trying to upload 5GB it should take a few hours, but 50GB can take a few days; a rough estimate of the arithmetic is sketched just after this list.

2. You may need to pay for a cloud storage service like OneDrive, Dropbox, or Google Drive. If not, you may need a subscription to a music service that can match your library. With a matching service, your music isn’t actually uploaded to the cloud in most cases; the service matches the songs you have against its own library. In many cases, it will even upgrade the file if yours is of lower quality.

3. If it’s in the cloud, you don’t really own the space; you’re a tenant, so you will have access only as long as you keep paying for the storage service.
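
Here is that back-of-the-envelope estimate from point 1 as a small Python helper; the 2 Mbps upstream speed is only an example figure, not a measurement of any particular connection.

```python
# Rough upload-time estimate: library size and sustained upstream speed are inputs.
def upload_hours(library_gb: float, upload_mbps: float) -> float:
    bits = library_gb * 8 * 1000**3            # gigabytes -> bits (decimal units)
    return bits / (upload_mbps * 1_000_000) / 3600

for size_gb in (5, 50):
    print(f"{size_gb} GB at 2 Mbps ~ {upload_hours(size_gb, 2):.1f} hours")
# 5 GB -> ~5.6 hours; 50 GB -> ~55.6 hours, i.e. the better part of a few days
```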

iTunes Match method

Since iTunes is one of the most popular music management tools on the planet, it makes sense for its users to back up their music libraries with Apple’s companion service.

For iTunes users, iTunes Match is the best option. It costs around Rs.1,200 per year and allows users to sync all of their locally stored music with Apple’s library. If you happen to have a song Apple doesn’t, Apple lets you upload it yourself, up to a limit of 100,000 songs, a number that ought to be enough for most users.

If you opt for this path, make sure you aren’t relying on audio formats that iTunes doesn’t recognise, such as Ogg Vorbis or FLAC, because Apple’s Match service will not be able to detect those files. Instead, you will need to convert them to a format iTunes understands, such as MP3 or AAC.
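
If you need to do that conversion in bulk, a minimal sketch using ffmpeg from Python is shown below. It assumes ffmpeg is installed and on your PATH, and the folder path and bitrate are placeholders you would adjust to your own library.

```python
# Batch-convert FLAC files to 256 kbps AAC before importing them into iTunes.
import pathlib
import subprocess

music_dir = pathlib.Path("~/Music/lossless").expanduser()  # placeholder folder
for flac in music_dir.rglob("*.flac"):
    m4a = flac.with_suffix(".m4a")
    # -c:a aac selects ffmpeg's built-in AAC encoder; -b:a sets the bitrate
    subprocess.run(["ffmpeg", "-i", str(flac), "-c:a", "aac", "-b:a", "256k",
                    str(m4a)], check=True)
```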

Microsoft OneDrive

Microsoft’s OneDrive isn’t the most cost-effective cloud storage service out there, but it is a great way to back up music. This is particularly true for Office 365 subscribers (Rs.420 per month), who get 1TB of storage with a cap of 20,000 files. That means you can upload a lot of music to your personal cloud and stream or download it whenever you want.

CES 2016 TV tech: 4K yawns, high dynamic range dawns

Every few years at the massive Consumer Electronics Show in Las Vegas, the world gets introduced to a new buzzword in television technology. Not too long ago it was 3D, then it was OLED, then it was 4K and Ultra High Definition.

In January 2016, I’m betting the buzzword most uttered in the presence of 100-inch flat panels on the show floor will be “HDR,” which stands for high dynamic range.

TV reviewers — folks like me who try to help you decide what’s actually worth buying — look at HDR as a potentially more important improvement than 3D or 4K. Its benefits can be more visible than 4K and more broadly appealing than 3D.

And since 4K TVs are now basically mainstream, with dirt-cheap prices and correspondingly thin profit margins, the voracious capitalist beast of technological progress needs its next shiny new object. Something higher, in both price and ambition. And that’s why TV manufacturers, the ones who spend millions to power shows like CES, look at HDR as something that might convince you to spend more money on a bigger and better flat-panel.

Not just more pixels, but better pixels

First off, TV HDR is not the same as photo HDR. The “HDR” option on your phone or camera combines a few pictures at different exposures to compensate for the limited capabilities of the sensor.
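
To make that distinction concrete, here is a small sketch of roughly what the photo-style HDR toggle does, using OpenCV’s Mertens exposure fusion to merge three bracketed shots into one image. It is illustrative only, not any phone vendor’s actual pipeline, and the file names are placeholders.

```python
# Illustrative photo-HDR sketch: fuse several exposures of the same scene.
import cv2

exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(exposures)   # float image in [0, 1]
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```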

The high dynamic range we’re talking about, and the one you’ll hear about at CES this year, is a catch-all term for new, improved content, namely TV shows and movies, and the brand-new televisions that can display it. The most concise pitch I’ve heard is “not just more pixels, but better pixels.”

While regular 4K resolution increases only the pixel count (the physical number of the little dots that make up an image), HDR significantly expands the range of contrast and color that those pixels can show. Bright parts of the image can get much brighter, so the image seems to have more “depth.” Colors get expanded to show more bright blues, greens, reds and everything in between.

Just like you needed a new TV to watch a movie in 3D or 4K resolution, you’ll need a new TV to watch HDR. We expect most TV makers to introduce numerous TVs that can handle HDR content in 2016. But how well will they be able to handle it? In our review of one of the least expensive HDR TVs from 2015, the Samsung UNJS8500, it didn’t provide much improvement. On the other hand, the extremely expensive Samsung UNJS9500 delivered the HDR goods nicely.

HDR is neither cheap nor easy to implement well on a TV, and requires extra capabilities beyond mere 4K resolution (all HDR I’ve seen proposed is also in 4K). On the content side it requires cooperation from a broad range of industry interests — including Hollywood studios, distributors/producers like Netflix and Amazon Instant Video and TV makers themselves.

As a result, actual HDR content is likely to remain rare for a while. It will definitely be less common for the foreseeable future than 3D and 4K content, for example. At its best, HDR can look spectacular, and you’ll probably read about quite a few impressive demos from the show floor this year. How well those will translate into the living room remains to be seen.

For much more, check out “What is HDR for TVs and why should you care?”

Other CES TV trends to watch

HDR is the newest and potentially most exciting of the bunch, but it’s not alone. Here are a few more.

The ubiquity of 4K TVs: Nearly all of the TVs introduced at the show will have 4K resolution, and you’ll hear almost nothing about lower resolutions like 1080p. 4K TVs are cheap, plentiful and (finally) absolutely worth buying for many TV shoppers these days.

The slow introduction of 4K content: It’s not strictly a TV hardware trend, but the official debut of 4K Blu-rays will happen at the show. The Samsung UBD-K8500, first introduced at IFA in September, should ship in early 2016, and we wouldn’t be surprised to see another player or two announced. The first disc titles will also see the light of day, arriving in early 2016 from Sony and others. And if we’re lucky, perhaps someone will announce more 4K streaming content (note that YouTube and Netflix both have keynote addresses at the 2016 CES show).

LG OLED TVs vs. Samsung LCD TVs: LG enjoyed an unprecedented run in 2015 at the top of CNET’s Best TVs for Picture quality list, thanks to its monopoly on OLED TV technology. We don’t expect a major competitor like Samsung to announce OLED TVs, but maybe a brand like Panasonic, which sells an OLED TV in Europe, will enter the O ring worldwide.

In the meantime expect more messaging from companies like Samsung touting the advantages of LCD over OLED, namely improved light output, especially with HDR sources.

The platformization of Smart TV: Last year saw the first introduction of official Android TVs (from Sony and Sharp) and an expansion of Roku TV brands (including Sharp, Insignia and LG). Will platform-based Smart TV systems replace the in-house software common today? Don’t count on it from Samsung and LG, which have significant investments in Tizen and webOS respectively, but a smaller manufacturer might decide to go with a platform like Android or Roku for its smarts.

The proliferation of curved TVs and style: As much as Samsung is a trend-setter in general, we don’t expect many other TV makers — aside from LG with a few OLED models, and maybe Hisense — to jump on the curved TV bandwagon. But TV style will continue to be talked about, whether in the form of materials like glass or metal, ever-slimmer designs, or even smaller frames around the picture.

TV makers have always tried to differentiate their offerings with mini-brands — think Sony Trinitron and Panasonic Viera. Last year Samsung brought back the minibrand with a vengeance when it introduced SUHD, an LCD TV marketed as something more. Will another manufacturer try to emulate its success?

The marginalization of major brands: Making TVs is a tough business, and only Samsung, LG and Vizio (which doesn’t exhibit at CES) seem to enjoy strong mainstream success. Panasonic and Sony have faced trouble in recent years and pared down their offerings significantly (although Sony has improved somewhat after spinning off its TV business as a subsidiary), while Sharp has just been gobbled up by Hisense. Meanwhile, that Chinese giant and fellow China-based brand TCL continue to make inroads worldwide.

Big screens will be big

Maybe TVs don’t have the tech trend cachet of drones or Internet of Things or virtual reality, but they’ll still be everywhere at CES. It’s still largely (no pun intended) a TV show (seriously, not intended!) and we’ll be there, standing next to the biggest ones, to tell you about all the latest and greatest display technology. We may or may not employ gratuitous exclamation points.

So what do you want to see this year in terms of TV hardware and software?

Infiniti revitalizes its sporty Q50 sedan with new engines and tech

The Q50 sedan is Infiniti’s top seller in the United States, and the saying goes that you don’t mess with success. Of course, a car needs to evolve as the marketplace does, but Infiniti was smart in its approach to refreshing the Q50 — stylistically, the car remains the same. It’s the inside that counts.

Here, “inside” really refers to a whole slew of new powertrain options. Buyers will be able to choose from a 208-horsepower four-cylinder, a brand new six-cylinder with two turbos and outputs of 300 hp or 400 hp, and a 360-hp hybrid that relies on the old six-cylinder engine. No matter what engine you choose, your sole transmission option is a seven-speed automatic. All trim levels come standard with rear-wheel drive, but all-wheel drive is an available option.

The greatest number of aesthetic changes takes place when you select the new range-topping trim, the Q50 Red Sport 400. This performance-oriented variant includes fancy new alloy wheels, different exhaust tips and summer tires. Otherwise, the Q50’s appearance is very nearly the same as the car it replaces — again, it’s smart not to meddle with your best seller.

The interior soldiers on with the same dual-screen infotainment layout as before. Both screens are touch-sensitive and work with your standard smartphone-style gestures. The top screen is 8 inches, and the lower screen is just a touch smaller at 7 inches.

Infiniti Q50

For all but the four-cylinder models, Infiniti will roll out a new version of its steer-by-wire system, which replaces the traditional steering linkage with a decidedly digital setup. The automaker relied on ample customer feedback for the system’s second iteration, which promises more feedback and an experience that’s just a bit closer to traditional steering systems.

The Sport trim, which lies just under the Red Sport 400, also receives a new adaptive suspension setup that constantly adjusts to changing road conditions. It also has a mode switch, in the event you like your dampers as stiff as humanly possible, no matter how many potholes lie ahead.

The Q50 will go on sale in “selected markets” (Infiniti did not clarify beyond that) some time in 2016. The automaker did not divulge any pricing information at this time.