The Amazing Artificial Intelligence We Were Promised Is Coming, Finally

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world’s chess champion. That didn’t happen until 1997. And despite Marvin Minsky’s 1970 prediction that “in from three to eight years we will have a machine with the general intelligence of an average human being,” we still consider that a feat of science fiction.

The pioneers of artificial intelligence were surely off on the timing, but they weren’t wrong; AI is coming. It is going to be in our TV sets and driving our cars; it will be our friend and personal assistant; it will take the role of our doctor. There have been more advances in AI over the past three years than there were in the previous three decades.

Even technology leaders such as Apple have been caught off guard by the rapid evolution of machine learning, the technology that powers AI. At its recent Worldwide Developers Conference, Apple opened up its AI systems so that independent developers could help it create technologies that rival what Google and Amazon have already built. Apple is way behind.

The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe. You would give it specific rules on what move to make, and it would follow them. That is essentially how IBM’s Deep Blue computer beat chess Grandmaster Garry Kasparov in 1997: a supercomputer calculating possible moves faster than he could.
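
To make that contrast concrete, here is a minimal, hypothetical sketch of the old rule-based style, written in Python: every piece of the machine’s “intelligence” is a rule the programmer wrote by hand. The function and rule ordering are invented for illustration; a complete player would need more rules.

```python
# Hand-coded "intelligence": an explicit decision list for tic-tac-toe.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def next_move(board):
    """Pick a square for 'X' on a 9-cell board (cells are 'X', 'O', or None)."""
    # Rule 1: take a winning square. Rule 2: block the opponent's win.
    for mark in ('X', 'O'):
        for line in WIN_LINES:
            cells = [board[i] for i in line]
            if cells.count(mark) == 2 and cells.count(None) == 1:
                return line[cells.index(None)]
    # Rule 3: otherwise prefer the center, then corners, then edges.
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[i] is None:
            return i

print(next_move(['X', 'O', None, None, 'X', None, 'O', None, None]))  # -> 8
```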

Today’s AI uses machine learning, in which you give the computer examples of previous games and let it learn from them. The computer is taught what to learn and how to learn, and it makes its own decisions. What’s more, the new AIs are modeling the human mind itself, using techniques similar to our own learning processes. Before, it could take millions of lines of computer code to perform tasks such as handwriting recognition. Now it can be done in hundreds of lines. What is required is a large number of examples so that the computer can teach itself.
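
As a rough illustration of that shift, a task like handwritten-digit recognition can now be expressed in a handful of lines, provided you have enough labeled examples. This sketch assumes the scikit-learn library and its bundled digits dataset; the particular model is an arbitrary choice, not the method any specific product uses.

```python
# Learning from examples: no hand-written recognition rules anywhere.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                  # the computer teaches itself
print("test accuracy:", model.score(X_test, y_test))
```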


The new programming techniques use neural networks, which are modeled on the human brain: information is processed in layers, and the connections between these layers are strengthened based on what is learned. This is called deep learning because of the increasing numbers of layers of information processed by increasingly faster computers. These techniques are enabling computers to recognize images, voice, and text and to do human-like things.
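
A toy forward pass shows the idea of layered processing. The weights below are random stand-ins for the connection strengths that training would actually adjust; this is intuition only, not a trainable network.

```python
# Layers of processing: each layer transforms its input and hands the
# result to the next. Training (omitted here) would adjust the weights W.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    W = rng.normal(size=(x.shape[0], n_out))  # "connections" between layers
    return np.maximum(0.0, x @ W)             # ReLU: pass on positive signals

x = rng.normal(size=64)   # raw input, e.g. pixel intensities
h1 = layer(x, 32)         # first layer: low-level features (edges, strokes)
h2 = layer(h1, 16)        # deeper layer: combinations of those features
scores = layer(h2, 10)    # output layer: one score per possible class
print(scores.round(2))
```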

Google searches used to rely on a technique called PageRank to come up with their results. Using rigid proprietary algorithms, Google analyzed the text and links on Web pages to determine what was most relevant and important. It is replacing this technique in searches and most of its other products with algorithms based on deep learning, the same technology it used to defeat a human champion at the game Go. During that extremely complex game, even observers were confused as to why the computer had made the moves it had.
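
The core PageRank idea the article alludes to, that a page is important if important pages link to it, can be sketched as a power iteration over a tiny, hypothetical link graph. The production algorithm handled dangling pages, spam, and web scale very differently; this is only the intuition.

```python
# Toy PageRank via power iteration over a made-up three-page web.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # iterate until the ranks settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outlinks in links.items():
        for q in outlinks:
            new[q] += damping * rank[p] / len(outlinks)
    rank = new

print(rank)  # C collects links from both A and B, so it ranks highest
```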

In the fields in which it is trained, AI is now exceeding the capabilities of humans.

AI has applications in every area in which data are processed and decisions required. Wired founding editor Kevin Kelly likened AI to electricity: a cheap, reliable, industrial-grade digital smartness running behind everything. He said that it “will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now ‘cognitize.’ This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”

AI will soon be everywhere. Businesses are infusing AI into their products and using it to analyze the vast amounts of data they are gathering. Google, Amazon, and Apple are working on voice assistants for our homes that manage our lights, order our food, and schedule our meetings. Robotic assistants such as Rosie from “The Jetsons” and R2-D2 of “Star Wars” are about a decade away.

Do we need to be worried about the runaway “artificial general intelligence” that goes out of control and takes over the world? Yes, but perhaps not for another 15 or 20 years. There are justified fears that rather than being told what to learn and complementing our capabilities, AIs will start learning everything there is to learn and know far more than we do. Though some people, such as futurist Ray Kurzweil, see us using AI to augment our capabilities and evolve together, others, such as Elon Musk and Stephen Hawking, fear that AI will usurp us. We really don’t know where all this will go.

What is certain is that AI is here and making amazing things possible.

 

Everything You Think You Know About AI Is Wrong

Robots are coming for our jobs. Terminators will soon murder us all. There is no escape. Resistance is futile.

These doom-laden predictions probably sound familiar to anyone who’s read the news or seen a movie involving artificial intelligence lately. Sometimes they’re invoked with genuine alarm, as in the case of Elon Musk and Stephen Hawking warning against the danger of killer automatons. Other times, the anxiety comes across as a kind of detached, ironic humor masking the true depths of our dread, as if tweeting nervous jokes about #Skynet will somehow forestall its rise.

AI raises unsettling questions about our place in the economy and society; even if by some miracle 99 percent of employers agree not to use robots to automate labor, that still leaves many hardworking people potentially in the lurch. That’s why it’s important to talk about the impact AI will have on our future now, while we have a chance to do something about it. And the questions are complicated: Whose jobs will be at stake, exactly? How do we integrate those people back into the economy?

But the more I learn about artificial intelligence, the more I’ve come to realize how little most of us – myself included – really understand about how the technology is actually developing, which in turn has a direct impact on the way we experience AI in the real world. It’s one thing to get excited about Siri and chatbots. It’s something else entirely to hear that certain fields of AI research are progressing much more rapidly than others, with implications for the way that technology will shape our culture and institutions in the years to come.

 


Killer robots may be much further off than you think
For something like the Terminator to become reality, a whole bunch of technologies need to be sufficiently advanced at the same time. What’s really happening is that AI researchers are making much greater progress on some ideas such as natural-language processing (i.e., understanding plain English) and data analysis, and far less quickly on other branches of AI such as decision-making and deductive reasoning. Why? Because starting in the mid-to-late 2000s, scientists achieved a breakthrough in the way they thought about neural networks, or the systems that allow AI to interpret data.

Along with the explosion of raw data made possible by the Internet, this discovery allowed machine learning to take off at a near-exponential rate, whereas other types of AI research are plodding along at merely a linear pace, said Guruduth Banavar, an IBM executive who oversees the company’s research on cognitive computing and artificial intelligence.

“What is not talked about much in the media is that AI is really a portfolio of technologies,” said Banavar. “Don’t just look at one field and assume that all of the remaining portions of the AI field are moving at the same pace.”

This doesn’t mean scientists won’t make breakthroughs in those other AI fields that eventually make killer robots possible. But it does mean, for now, that the limits of our research may be putting important constraints on our ability to create the fully sentient machines of our nightmares. This is vital, because in the meantime, the other advances we’ve made are pushing us toward creating very specific kinds of artificial intelligence that do not resemble the Terminator robot at all.

For instance, consumers are already seeing our machine learning research reflected in the sudden explosion of digital personal assistants like Siri, Alexa and Google Now – technologies that are very good at interpreting voice-based requests but aren’t capable of much more than that. These “narrow AIs” have been designed with a specific purpose in mind: to help people do the things regular people do, whether it’s looking up the weather or sending a text message.

Narrow, specialized AI is also what companies like IBM have been pursuing. It includes, for example, algorithms to help radiologists pick out tumors much more accurately by “learning” all the cancer research we’ve ever done and by “seeing” millions of sample X-rays and MRIs. These robots act much more like glorified calculators – they can ingest way more data than a single person could hope to do with his or her own brain, but they still operate within the confines of a specific task like cancer diagnosis. These robots are not going to be launching nuclear missiles anytime soon. They wouldn’t know how, or why. And the more pervasive this type of AI becomes, the more we’ll understand about how best to build the next generation of robots.

So who is going to lose their job?
Partly because we’re better at designing these limited AI systems, some experts predict that high-skilled workers will adapt to the technology as a tool, while lower-skill jobs are the ones that will see the most disruption. When the Obama administration studied the issue, it found that as many as 80 percent of jobs currently paying less than $20 an hour might someday be replaced by AI.

“That’s over a long period of time, and it’s not like you’re going to lose 80 percent of jobs and not reemploy those people,” Jason Furman, a senior economic adviser to President Obama, said in an interview. “But [even] if you lose 80 percent of jobs and reemploy 90 percent or 95 percent of those people, it’s still a big jump up in the structural number not working. So I think it poses a real distributional challenge.”

Policymakers will need to come up with inventive ways to meet this looming jobs problem. But the same estimates also hint at a way out: Higher-earning jobs stand to be less negatively affected by automation. Compared to the low-wage jobs, roughly a third of those who earn between $20 and $40 an hour are expected to fall out of work due to robots, according to Furman. And only a sliver of high-paying jobs, about 5 percent, may be subject to robot replacement.

Those numbers might look very different if researchers were truly on the brink of creating sentient AI that can really do all the same things a human can. In this hypothetical scenario, even high-skilled workers might have more reason to fear. But the fact that so much of our AI research right now appears to favor narrow forms of artificial intelligence at least suggests we could be doing a lot worse.

How to live with your robot
The trick, then, is to move as many low-skilled workers as we can into higher-skilled jobs. Some of these jobs are currently held by people; other jobs have yet to be invented. So how do we prepare America’s labor force for work that doesn’t exist yet?

Part of the answer involves learning to be more resilient and flexible, according to Julia Ross, the dean of engineering and IT at the University of Maryland Baltimore County. We should be nurturing our children to interact with people from different backgrounds and to grapple with open-ended questions, teaching them how to be creative and how to think critically – and doing it all earlier and better.

“How do we get people to understand and embrace that concept?” said Ross at a recent event hosted by The Washington Post. “That you need to be a lifelong learner, that the things that you’re learning today may be obsolete in 5 years – and that’s okay? You can get comfortable with that idea if you’re comfortable with your capacity to learn. And that’s something we have to figure out how to instill in every student today.”

Soon, teachers themselves may come to rely on narrow AI that can help students get the most out of their educational experiences, guiding their progress in the way that’s best for them and most efficient for the institution. We’re already seeing evidence of this in places like Georgia Tech, where a professor recently revealed – much to the surprise of his students – that one of his teaching assistants was a chatbot he had built himself.

Making artificial intelligence easy for regular people to use and love depends on a field of research called human-computer interaction, or HCI. And for Ben Shneiderman, a computer science professor at the University of Maryland, HCI is all about remembering the things that make people human.

This means giving people some very concrete ways to interact with their AI. Large, high-definition touchscreens help create the impression that the human is in control, for example. And designers should emphasize choice and context over a single “correct” answer for every task. If these principles sound familiar, that’s because many of them are already baked into PCs, smartphones and tablets.

“People want to feel independent and like they can act in the world,” said Shneiderman, author of “Designing the User Interface: Strategies for Effective Human-Computer Interaction.” “The question is not ‘Is AI good or bad?’ but ‘Is the future filled with tools designed to supplement and empower people?'”

That’s not to say narrow AI is the only kind researchers are working on; indeed, academics have long been involved in a debate about the merits of narrow AI versus general artificial intelligence. But the point is that there’s nothing predetermined about general AI when so much of our current research effort is being poured into very specific branches of the field – buckets of knowledge that do more to facilitate the use of AI as a friendly helper rather than as a killer robot.

Telegram App Used in Saint Petersburg Bombing, Says Russia

Russia’s FSB security agency on Monday said the Telegram messaging service was used by those behind the Saint Petersburg metro bombing, the latest salvo by authorities after they threatened to block the app.

“During the probe into the April 3 terrorist attack in the Saint Petersburg metro, the FSB received reliable information about the use of Telegram by the suicide bomber, his accomplices and their mastermind abroad to conceal their criminal plans,” the FSB said in a statement.

They used Telegram “at each stage of the preparation of this terrorist attack,” it said.

Fifteen people were killed in the suicide bombing, which was claimed by the little-known Imam Shamil Battalion, a group suspected of links to Al-Qaeda.

Telegram is a free Russian-designed messaging app that lets people exchange messages, photos and videos in groups of up to 5,000. It has attracted about 100 million users since its launch in 2013.

But the service has drawn the ire of critics who say it can let criminals and terrorists communicate without fear of being tracked by police, pointing in particular to its use by Islamic State jihadists.

The FSB charged that “the members of the international terrorist organisations on Russian territory use Telegram”.

The app is already under fire in Moscow after Russia’s state communications watchdog on Friday threatened to ban it, saying the company behind the service had failed to submit company details for registration.

Telegram’s secretive Russian chief executive, Pavel Durov, who has previously refused to bow to government regulation that would compromise the privacy of users, had called that threat “paradoxical” on one of his social media accounts.

He said it would force users, including “high-ranking Russian officials”, to communicate via apps based in the United States, like WhatsApp.

The 32-year-old had previously created Russia’s popular VKontakte social media site, before founding Telegram in the United States.

Durov said in April that the app had “consistently defended our users’ privacy” and “never made any deals with governments.”

The app is one of several targeted in a legal crackdown by Russian authorities on the internet and on social media sites in particular.

Since January 1, internet companies have been required to store all users’ personal data at data centres in Russia and provide it to the authorities on demand.

Draft legislation that has already secured initial backing in parliament would make it illegal for messaging services to have anonymous users.

 

Samsung Smart Switch Website Updated, Makes It Simpler to Transfer Data

One would think that Samsung’s latest flagship Galaxy S8 and Galaxy S8+ smartphones, with their near bezel-less design and top-of-the-line hardware, would be enough to attract consumers, but Samsung also wants to make sure the transition to them is smooth and simple. The South Korean tech giant has updated its Smart Switch website, making it even simpler to transfer content from your old phone to your new Galaxy handset.

Smart Switch, which is compatible with Android, iOS, Windows, and BlackBerry handsets, lets you transfer content from your phone to a Galaxy device in three ways: directly over Wi-Fi using the Smart Switch app, by connecting your old and new devices via USB, or through the Smart Switch computer software. Once you’ve established a connection through any one of these options, Smart Switch will guide you through the content you wish to transfer and will do so quickly.

The desktop app also lets you back up your phone, and Samsung says that its interface makes it easy to restore your content on a new Galaxy smartphone. It also provides a simple way to perform firmware updates.

Keep in mind that Smart Switch will transfer content from any phone or OS to Galaxy devices (including tablets) ranging from the Galaxy S2 to the current Galaxy S8 smartphones, running Android 4.0 and above. Additionally, certain devices running Android 4.0 and above may not support wireless transfer, in which case you will need to use one of the other two options mentioned. Meanwhile, transfers from Android devices to Galaxy devices via USB cable require Android 4.3 or above as well as MTP (Media Transfer Protocol) USB support.

While Samsung updates its own Smart Switch app, Apple too has been trying to attract Android users to its iOS platform, having recently launched three new ad campaigns for its Move to iOS app, which works in a similar fashion to Smart Switch, helping you easily transfer essential files to an iOS device.

 

Google Chrome For Android Is Getting A Lot Faster With Chrome 59, Rolling Out Soon

Google Chrome is about to get a considerable speed boost on Android, as the latest Chrome 59 has started rolling out with some cool improvements.

As mobile use has increased tremendously in the past few years, we’ve become more reliant on smartphones and other mobile devices for a number of tasks — including surfing the web. In fact, it seems that we surf the web more on mobile devices now than we do on desktops, which means that our smartphones and tablets often handle great amounts of traffic. But are they properly equipped to do so?

Fast Mobile Browsing

Just having a powerful smartphone or tablet with a speedy processor and a solid amount of RAM may not suffice for a fast browsing experience. The browser plays an important role as well, and it can either speed things up or take ages to load webpages.

Google is well aware of this, especially since it handles a lot of that traffic from mobile devices. Its Chrome browser is notorious for hogging RAM, but Google vowed to improve it and the latest version of Chrome for Android aims to make the browser faster than ever.

Google Chrome 59 For Android

Google released Chrome 59 to Windows, Mac, and Linux first, then started the rollout for Android as well. The company touts that Chrome 59 uses less memory, loads pages faster, and packs a number of security patches and stability fixes.

The main highlight is that it loads pages faster, but just how much faster are we talking? Well, quite a lot, apparently. Google says that Android devices running the latest version of Chrome should see a major speed boost of up to 20 percent. That doesn’t mean that the browser will load all pages 20 percent faster all the time, as that speed peak will occur when all conditions are right, but it should still boast a minimum speed increase of roughly 10 percent in all cases.

It’s worth pointing out, however, that while Google announced the release on Tuesday, June 6, it also notes that Chrome 59 for Android should become available on Google Play “over the course of the next week.” This means that it will take another few days for the update to actually kick in for Android users, but it’s nonetheless coming soon.

 

Chrome V8 JavaScript Engine

The greater speed Chrome 59 brings to the table stems from improvements to the Chrome V8 JavaScript engine, which fuels many of the complex functionalities of various websites. At the same time, the fact that Chrome 59 will use less memory means that the browser should better handle even pages with heavy JavaScript content. This, in turn, should translate to a smoother and more responsive browsing experience.

Once the Chrome 59 version becomes ready to roll, Android devices should get the update automatically provided that automatic updates are enabled. Otherwise, users can also trigger the update process manually by heading over to the Google Play Store, accessing the “My Apps & Games” section from the menu, and grabbing the available updates.

 

Snapchat’s New Feature Snap Map Tells You Where Your Friends Are All The Time

Snapchat has just unveiled a new feature that encourages its users to get together in real life instead of viewing each other’s experiences through a screen.

Called Snap Map, the feature lets any user share their current location, which will then appear on friends’ maps and update accordingly when users open the app. The company has begun rolling out the feature to iOS and Android users.

“We’ve built a whole new way to explore the world! See what’s happening, find your friends, and get inspired to go on an adventure!” announces Snap.

Clearly, Snap appears to be positioning Snap Map as a way to call up nearby friends and go meet them. Social apps that encourage users to share their location aren’t new, but apps that actually convince users to go out are almost unheard of, so the way Snap appears to be marketing the feature is slightly refreshing.

Snap Map Was Based On Zenly, A Company It Reportedly Acquired

Snap actually based Snap Map on its secret acquisition of the social map app Zenly, for a reported $250 million to $350 million in a deal said to have closed in late May. Zenly similarly lets users see where their friends are on a map at any given time, at which point they can send messages to each other to plan trips or hangouts.

How To Use Snap Map On Snapchat

Using the new Snap Map feature should be pretty straightforward. Simply open Snapchat and elect to share your location with all friends, with select ones, or with no one at all via Ghost Mode. A user’s location won’t be shared if they’ve stopped using the app for several hours. Location sharing is turned off by default, and you may choose to turn it on at any time.

To access the Snap Map screen, you have to perform a pinch gesture — just like zooming in or out on a photo — on the Snapchat camera homescreen. The app will bring up a virtual map populated with icons, called Actionmoji, representing friends and their location. Tapping these icons will either show their Stories or give you the option to hit them up and make some traveling plans.

The app automatically picks Actionmojis for the user based on location, what day it is, or other metrics.

As one can imagine, the Snap Map also doubles as an alternative way to view people’s Stories beyond the regular Stories feed and the Story Search feature. In the map, a user can see “heat” colors indicating that there are a lot of Stories being uploaded in that area, possibly because there’s a concert, a major festival, or something along those lines.

The company says Snap Map focuses on enhancing connections between people and their closest friends. The way it’s designed certainly suggests so, especially with the ability to view areas where there’s high upload traffic for Stories.

In addition, Snap Map could be a tool to help the app grow and rake in more revenue by convincing more people to watch Stories. Snap says, however, that it won’t bring ads to Snap Map just yet. That said, ads will become a major component of the app moving forward, as the company’s deal with Warner closes.

We’re curious to know what you think of Snap’s new Snap Map feature. Feel free to sound off in the comments section below!

Unicode 10 To Come With 56 Emojis For iPhones Everywhere, Including A T-Rex And The Colbert Emoji

The Unicode Consortium on Tuesday, June 20, released version 10 of the Unicode Standard, adding emoji counterparts of a T-Rex and Stephen Colbert’s signature quizzical look, which sees one eyebrow raised as if to depict a kind of doubtfulness.

Unicode 10 Now Released

The release of Unicode 10 also means that the code points required for the new batch of emoji have now been finalized and are considered stable enough for major device manufacturers, including Apple, Google, Microsoft, and Samsung, to include in their software.

What Kind Of Emoji Are Included?

The promised batch contains 56 emoji in all, including a roster of new characters, beasts, zombies, vampires, fairies, and dinosaurs. Also included are emoji for breastfeeding women, a woman in a hijab, and a yoga pose. What else? Broccoli, a merman and a mermaid, a grasshopper, a vomiting face, a pretzel, a pie, and even a fortune cookie are all part of Unicode 10.

While not an emoji, Unicode 10 also brings support for the Bitcoin symbol, which looks like a regular uppercase “B” but with two vertical lines. In addition, Unicode 10 also adds support for “lesser-used languages and unique written requirements worldwide”.
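
For instance, once a platform ships Unicode 10 support, the new code point behaves like any other character. A small illustrative check in Python (note that the standard unicodedata module reflects whatever Unicode version your Python build bundles, so the name lookup is hedged with a default):

```python
import unicodedata

bitcoin = "\u20bf"        # U+20BF, the Bitcoin sign added in Unicode 10
print(bitcoin)            # prints the symbol, if your font supports it
print(hex(ord(bitcoin)))  # 0x20bf
# Older Python builds bundle a pre-Unicode-10 database, so supply a default:
print(unicodedata.name(bitcoin, "unknown to this Python build"))
```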

You can view all the new emoji in Emojipedia’s video.

Don’t Get Excited Yet, Though

It’s worth noting, however, that one shouldn’t expect all these characters to be available right away, since your device will most likely require a software update before Unicode 10 can take effect, and there’s a good chance that you might wait a while before that happens. Android users will probably have to wait until Google releases the stable version of Android O to use the new set of emoji, while iPhone, iPad, and iPod touch users will probably get it once iOS 11 officially launches.

According to the Emojipedia blog, the final emoji list for 2017 was announced in March, with the code points for many of these now finalized, as previously mentioned. The blog explains that vendors have had a few months’ notice with regard to which emoji are final, and the release of Unicode 10 simply makes sure that vendors can support them in any future software update.

Last summer, the Unicode Consortium made Unicode 9.0 available, containing 72 new emoji, including ones for bacon, a person taking a selfie, a clown face, and much more. The majority of those made it into iOS with the iOS 10.2 update, so it stands to reason that iOS 11 will support Unicode 10 once it arrives this fall.

There’s no doubt that the emoji library has been continuously diversifying the objects, events, and individuals it can represent, from animals to people of color to LGBT couples. That’s always a good thing, of course, since using emoji is akin to using a new type of language, and when everyone can understand a given language, the efficiency of communication improves.

Thoughts about the new emoji in Unicode 10? What’s your favorite out of the new batch? As always, you can hit up the comments section below to share your thoughts and opinions!

Look Out Spotify, Apple Music: Tesla Considering Launching Its Own Music Streaming Service

Spotify and Apple Music may soon find a new challenger in the music streaming service industry from an unlikely source: electric vehicle manufacturer Tesla.

According to reports, Tesla has been speaking with the music industry on the possibility of creating its own music streaming service that will be bundled with its electric vehicles.

Tesla To Enter Music Streaming Scene?

Sources in the music industry claim that Tesla has spoken with all the major music labels on licensing a music streaming service. The service will be bundled with the company’s vehicles, such as the electric sedan Model S, the electric SUV Model X, and the upcoming mass-market electric sedan Model 3.

The full scope of Tesla’s ambitions was not made clear, but sources believe that the company is looking to offer multiple tiers for the planned music streaming service. The tiers will start with a web radio service, such as the one offered by Pandora, which will be enabled by the internet connectivity already present in Tesla’s electric vehicles through their dashboards.

The whole plan is seemingly not yet fully formed, but Tesla is already doing its due diligence by asking about acquiring the rights to stream albums and songs from the top artists from all over the world.

Tesla CEO Elon Musk actually hinted that the company was exploring music products at the latest shareholder meeting of the company in June. He said that it was difficult to “find good playlists or good matching algorithms” for music that drivers want to hear while on the road, and that the company will be announcing how it will solve the problem within the year.

Why Will Tesla Challenge Spotify And Apple Music?

The big question is why Tesla is planning to go through the trouble of creating its own music streaming service when it could instead integrate Spotify or Apple Music into its electric vehicles. Tesla already has a deal in place to include Spotify in electric vehicles sold outside the United States, so such a setup could be extended if the company wanted to.

The labels will not turn down Tesla’s overtures if it pushes through with creating its own music streaming service, as it will be another source of revenue. From the comment of a Tesla spokesperson, it appears that the company is indeed serious about its plans.

“We believe it’s important to have an exceptional in-car experience so our customers can listen to the music they want from whatever source they choose,” the spokesperson said, adding that Tesla’s goal is to “achieve maximum happiness” for its customers.

While Tesla is considered the market leader in the burgeoning electric car industry, it will be jumping into a music streaming space currently dominated by Spotify, with 50 million premium subscribers, followed by Apple Music, with 27 million paid users, which is looking to pose a bigger challenge to Spotify by launching a $99 annual subscription option.

How Tesla’s music streaming service will stand up against these two remains to be seen, but it will have to offer something beyond the usual features if it wants to make a significant impression in the industry.

Asus Vivobook S With 15.6-Inch NanoEdge Display, Windows 10 Launched

Asus has launched its new Vivobook S laptop with up to a seventh-generation Intel Core processor at a starting price of $699 in the US. The highlight of the new Asus laptop is the thin bezels on its display, described as NanoEdge by the company.

The NanoEdge bezel gives the laptop an impressive 80 percent screen-to-body ratio, allowing the Vivobook S to fit a full-size 15.6-inch display into a 14-inch laptop frame, the company says on its website.

The new portable in the Asus Vivobook series runs Windows 10 and features a 15.6-inch WideView colour-rich display with full-HD (1920x1080 pixels) resolution and viewing angles of up to 178 degrees. The laptop is available with up to an Intel Core i7-7500U processor and 8GB of DDR4 RAM. Storage options go up to a 1TB HDD, with a combo option of a 128GB SSD plus a 1TB HDD as well.

The new Vivobook S includes a fingerprint sensor, while its connectivity options include USB 3.1 Type-C (Gen 1), USB 3.0, USB 2.0, Wi-Fi 802.11ac, and HDMI. The laptop is just 0.7 inches thick and weighs around 1.6kg. It measures 14.2x9.6x0.7 inches and comes with a 42WHr-rated 3S1P (3-cell) Li-ion battery. Asus claims that the battery can be charged to 60 percent of its capacity in just 49 minutes.

The Asus Vivobook S laptop is already available for purchase and can be bought either from Asus’ own online store or through third-party retailers like Newegg in the US.

Instagram Stories Gets Live Video Replay as It Hits 250 Million Daily Active Users

Photo-sharing app Instagram has announced it will introduce an option to share a replay of a user’s Live video to its ‘Stories’ feature for 24 hours. The feature comes with the v10.26 update, available on the App Store and Google Play for iOS and Android respectively.

“We’re also celebrating 250 million daily users on Instagram ‘Stories’, up from 200 million announced in April,” the company said in a statement.

Previously, all Live broadcasts disappeared once finished. That made them feel raw, spontaneous and urgent to watch.

Starting June 21, “we’re introducing the option to share a replay of your Live video to Instagram ‘Stories’. Now, more of your friends and followers can catch up on what they missed”, Instagram said.

When your broadcast has ended, you’ll be able to tap “Share” at the bottom of the screen to add your replay to Instagram ‘Stories’ for 24 hours.

You can also tap the toggle and choose “Discard” and your Live video will disappear from the app as usual.

When someone you follow shares a replay, you’ll see a play button under their profile photo in the stories bar. Tap it to watch the video and see comments and likes from the original broadcast. You can also tap the right or left side of the screen to go forward or back 15 seconds, or tap “Send Message” to reply.