Telegram App Used in Saint Petersburg Bombing, Says Russia

Russia’s FSB security agency on Monday said the Telegram messaging service was used by those behind the Saint Petersburg metro bombing, the latest salvo by authorities after they threatened to block the app.

“During the probe into the April 3 terrorist attack in the Saint Petersburg metro, the FSB received reliable information about the use of Telegram by the suicide bomber, his accomplices and their mastermind abroad to conceal their criminal plans,” the FSB said in a statement.

They used Telegram “at each stage of the preparation of this terrorist attack,” it said.

Fifteen people were killed in the suicide bombing, which was claimed by the little-known Imam Shamil Battalion, a group suspected of links to Al-Qaeda.

Telegram is a free Russian-designed messaging app that lets people exchange messages, photos and videos in groups of up to 5,000. It has attracted about 100 million users since its launch in 2013.

But the service has drawn the ire of critics who say it can let criminals and terrorists communicate without fear of being tracked by police, pointing in particular to its use by Islamic State jihadists.

The FSB charged that “the members of the international terrorist organisations on Russian territory use Telegram”.

The app is already under fire in Moscow after Russia’s state communications watchdog on Friday threatened to ban it, saying the company behind the service had failed to submit company details for registration.

Telegram’s secretive Russian chief executive, Pavel Durov, who has previously refused to bow to government regulation that would compromise the privacy of users, had called that threat “paradoxical” on one of his social media accounts.

He said it would force users, including “high-ranking Russian officials”, to communicate via US-based apps such as WhatsApp.

The 32-year-old had previously created Russia’s popular VKontakte social media site, before founding Telegram in the United States.

Durov said in April that the app had “consistently defended our users’ privacy” and “never made any deals with governments.”

The app is one of several targeted in a legal crackdown by Russian authorities on the internet and on social media sites in particular.

Since January 1, internet companies have been required to store all users’ personal data at data centres in Russia and provide it to the authorities on demand.

Draft legislation that has already secured initial backing in parliament would make it illegal for messaging services to have anonymous users.

Facebook, Microsoft, Twitter, YouTube Form Global Working Group to Combat Terror Content

Social media giants Facebook, Google’s YouTube, Twitter and Microsoft said on Monday they were forming a global working group to combine their efforts to remove terrorist content from their platforms.

Responding to pressure from governments in Europe and the United States after a spate of militant attacks, the companies said they would share technical solutions for removing terrorist content, commission research to inform their counter-speech efforts and work more with counter-terrorism experts.

The Global Internet Forum to Counter Terrorism “will formalise and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN,” the companies said in a statement.

The move comes on the heels of last week’s call from European heads of state for tech firms to establish an industry forum and develop new technology and tools to improve the automatic detection and removal of extremist content.

The political pressure on the companies has raised the prospect of new legislation at EU level, but so far only Germany has proposed a law fining social media networks up to EUR 50 million ($56 million) if they fail to remove hateful postings quickly. The lower house of the German parliament is expected to vote on the law this week.

The companies will build on existing technical work, such as a database created in December for sharing the unique digital fingerprints they automatically assign to videos or photos of extremist content.
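
How that fingerprint sharing works internally has not been published, so the Python sketch below is illustrative only: it assumes a plain SHA-256 digest, which matches only byte-identical copies, whereas the companies’ database reportedly relies on perceptual hashes that also survive re-encoding and cropping.

```python
import hashlib

# Illustrative sketch of a hash-sharing database; the real system's
# fingerprinting method is not public. SHA-256 matches only byte-identical
# files, while production systems reportedly use perceptual hashes.

shared_fingerprints: set[str] = set()  # stands in for the cross-company database

def fingerprint(data: bytes) -> str:
    """Compute the fingerprint a platform would assign to a file."""
    return hashlib.sha256(data).hexdigest()

def flag(data: bytes) -> None:
    """One platform flags extremist content: publish its fingerprint."""
    shared_fingerprints.add(fingerprint(data))

def is_flagged(data: bytes) -> bool:
    """Another platform checks a new upload against the shared set."""
    return fingerprint(data) in shared_fingerprints

flag(b"bytes of a flagged video")
print(is_flagged(b"bytes of a flagged video"))  # True: exact copy detected
print(is_flagged(b"re-encoded copy"))           # False: exact hashing misses edits
```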

They will also exchange best practices on content detection techniques using machine learning as well as define “standard transparency reporting methods for terrorist content removals.”
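
As a rough, invented illustration of what “content detection using machine learning” can mean, the toy classifier below scores short texts with scikit-learn. The companies’ real models, features, and training data are not public, and the four-example training set here is fabricated purely for demonstration.

```python
# Toy text classifier; nothing here reflects the companies' actual systems.
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = policy-violating, 0 = benign.
texts = [
    "join our violent cause today",
    "watch this attack propaganda video",
    "family holiday photos from the beach",
    "new phone review and unboxing",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Real systems would route high-scoring posts to human review rather than
# removing them automatically.
for post in ["new attack video", "beach photos from our holiday"]:
    score = model.predict_proba([post])[0][1]
    print(f"{post!r}: violation probability {score:.2f}")
```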

Earlier this month Facebook opened up about its efforts to remove terrorism content in response to criticism from politicians that tech giants are not doing enough to stop militant groups using their platforms for propaganda and recruiting.

Google announced additional measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube shortly thereafter.

Twitter suspended 376,890 accounts for violations related to the promotion of terrorism in the second half of 2016 and will share further updates on its efforts to combat violent extremism on its platform in its next Transparency Report.

The social media firms said they would work with smaller companies to help them tackle extremist content and organisations such as the Center for Strategic and International Studies to work on ways to counter online extremism and hate.

All four companies have initiatives to counter online hate speech and will use the forum to improve their efforts and train civil society organisations engaged in similar work.

Facebook Launches Online Civil Courage Initiative in the UK

Facebook is launching a UK programme to train and fund local organizations to combat extremist material online, as Internet companies attempt to clamp down on hate speech and violent content on their services.

Facebook, which outlined new efforts to remove extremist and terrorism content from its social media platform last week, will launch the Online Civil Courage Initiative in the UK on Friday, the company said in a statement.

The new initiative will train non-governmental organizations to help them monitor and respond to extremist content and create a dedicated support desk so they can communicate directly with Facebook, the company said.

“There is no place for hate or violence on Facebook,” said Sheryl Sandberg, Facebook’s chief operating officer. “We use technology like AI to find and remove terrorist propaganda, and we have teams of counterterrorism experts and reviewers around the world working to keep extremist content off our platform.”

The British government has stepped up attacks on Silicon Valley Internet companies for not acting quickly enough to take down extremist online propaganda and for fostering “safe places” where extremists can breed, following a string of attacks in recent months in London and Manchester.

Facebook, Alphabet Inc’s Google and Twitter have responded by saying they have made heavy investments and employed thousands of people to take down hate speech and violent content over the past two years. Security analysts say the efforts have dramatically reduced the use of these platforms for jihadist recruitment efforts, although more work needs to be done.

Prime Minister Theresa May has sought to enlist British public opinion to force the US Internet players to work more closely with the government rather than proposing new legislation or policies to assert greater control over the web.

Earlier this week, May urged fellow European Union leaders at a meeting in Brussels to join her in putting pressure on tech companies to ‘rid terrorist material from the internet in all our languages’.

She called for the Internet companies to shift from reactively removing content when they are notified of it, towards greater use of automatic detection and removal tools – and ultimately preventing it from appearing on their platforms in the first place.

WhatsApp Starts Allowing Sharing of All File Types on Android, iPhone, Windows Phone: Reports

WhatsApp has long been used as a medium to share photos, videos, and even Word docs. For files that weren’t supported on WhatsApp, users have tended to take the long route: uploading them to the cloud first, then sharing a link for download. There are several other workarounds, including third-party apps, to get an unsupported file from one person to another, but it appears WhatsApp no longer wants you to need them. The company is said to be testing support for all types of file transfers (including archives) on Android, iPhone, and Windows Phone with a limited number of users, removing a long-standing hindrance to file sharing on WhatsApp.

WABetaInfo spotted this rollout and says it is a phased one. While some users in countries such as India, Japan, Kuwait, and Sri Lanka report that the support has arrived, several others still do not see it. WhatsApp can be expected to roll the feature out to everyone in due course. The file-sharing limit is 128MB on iOS, 64MB on WhatsApp Web, and 100MB on Android, WABetaInfo reports.

The new feature will let you share videos in a wide variety of formats, MP3 songs, and even APK files on WhatsApp. As of now, it is not certain what kind of file-checking system WhatsApp has put in place to prevent the transfer of malicious or booby-trapped files. The other neat addition noted by the tipster is that the new sharing feature also lets you send uncompressed photos and videos with no loss of resolution, though the size ceiling is too low for high-quality video clips of any substantial length. Presumably, the cap has been enforced so as not to overwhelm WhatsApp’s servers with huge files.
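
Since WhatsApp’s actual screening is unknown, the Python sketch below is purely hypothetical: it simply combines the per-platform size caps reported by WABetaInfo with an invented extension blocklist of the sort messaging clients commonly apply.

```python
import os

# Hypothetical client-side check; WhatsApp's real validation is not public.
SIZE_CAPS_MB = {"ios": 128, "android": 100, "web": 64}  # caps reported by WABetaInfo
BLOCKED_EXTENSIONS = {".exe", ".bat", ".cmd", ".scr"}   # invented blocklist

def can_share(path: str, platform: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a candidate outgoing file."""
    ext = os.path.splitext(path)[1].lower()
    if ext in BLOCKED_EXTENSIONS:
        return False, f"extension {ext} is blocked"
    size_mb = os.path.getsize(path) / (1024 * 1024)
    cap = SIZE_CAPS_MB[platform]
    if size_mb > cap:
        return False, f"{size_mb:.0f}MB exceeds the {cap}MB {platform} cap"
    return True, "ok"

# Example: can_share("song.mp3", "android") -> (True, "ok") for a file under 100MB.
```
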
WhatsApp is also tipped to be working on a recall feature that lets you unsend a message, as well as on bringing the new Status feature to WhatsApp Web.

Yahoo Mail App Gets Caller ID, Photo Upload Features

Yahoo on Tuesday launched Caller ID and photo upload features for its Yahoo Mail app that will help users identify a caller from their email contact list and also access their phone camera roll on a desktop.

The new features are now available and users can update Yahoo Mail app in the App Store (iOS v4.13) and Google Play (Android v5.13).

The new Caller ID feature will show a user who is calling even if the number is not saved in the smartphone, the company said in a statement. Yahoo Mail uses contact information from emails.

As soon as a contact calls you, their name will surface with the call, and Yahoo Mail will also update names in your call history and when you dial.

To enable this feature, go to Settings, then Phone, then select “Call Blocking and Identification”, toggle the switch for Yahoo Mail and save the settings.
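
Yahoo has not published how the matching works, but conceptually it amounts to normalising phone numbers found in email contacts and looking incoming numbers up in that table. Here is a naive Python sketch; a real app would use a dedicated parsing library such as phonenumbers, since plain digit-stripping cannot reconcile numbers with and without country codes.

```python
import re

# Conceptual sketch only; Yahoo's implementation is not public.

def normalize(number: str) -> str:
    """Strip everything but digits so formatting differences still match."""
    return re.sub(r"\D", "", number)

# Hypothetical lookup table built from phone numbers seen in emails.
email_contacts = {
    normalize("+1 (415) 555-0123"): "Avery from Accounting",
    normalize("+1 415-555-0199"): "Dana (vendor)",
}

def identify_caller(incoming: str) -> str:
    """Resolve an incoming number to a name, as the Caller ID feature does."""
    return email_contacts.get(normalize(incoming), "Unknown caller")

print(identify_caller("+1 415 555 0123"))  # -> Avery from Accounting
```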

Once the new photo upload feature is enabled, your recent camera roll photos will be instantly available when accessing your Yahoo Mail account on desktop.

To enable this feature, users need to open the Yahoo Mail app on iOS or Android, go to Settings, then Photo upload, and tap the “Upload photos” toggle.

In a blog post, Yahoo VP of Product Management Michael Albers said, “Consider for a moment that our inboxes have become a place where all of our life’s details are captured and stored. With smarter contacts and better photo-sharing, we’re helping you take full advantage of your inbox!”

Employees can be sacked for social media use, even outside of work

After a long day at work keeping face and kissing butt, it can be a relief to get home, turn off the proverbial filter and relax.

But in the age of social media, public and private time is blurred.

A questionable tweet, post or comment while sitting on your couch at night can cost you your job – whether it is about work or not.

That was the experience this year of one man who publicly shared a screen shot of a woman’s Tinder profile with a snide remark.

After the post attracted nasty and threatening comments towards the woman, it went viral with the hashtag “sexual violence won’t be silenced” and ended with the man being fired.

The content was not related to any workplace, employer or company and was posted outside of work hours but Johnathan Mamaril, principal and director of employment law specialists NB Lawyers, says this does not matter.

“The main rationale behind the dismissal of (this man) would have been the ability to bring the company’s reputation into disrepute, whether it was realised or not,” Mamaril says.

“The mere perception has the damaging effect already.”

Avoid the shock and think before you post, comment or tweet. Source: iStock

Mamaril says all employers should have a social media policy, especially if they rely on or have a presence on social media, if they actively advertise through social media, if employees identify themselves on social media as working for the company, or if employees use social media as a marketing tool for their job.

“Employers can take action against an employee for inappropriate social media use as long as they have a social media policy in place and have some type of training regarding the policy,” he says.

Another recent case was hotel manager Michael Nolan who lost his job after calling feminist commentator Clementine Ford a “sl**” on Facebook.

Ford shared a screen shot of their interaction to her 80,000 Facebook followers and tagged Nolan’s employer.

Feminist commentator Clementine Ford names and shames men who harass her online. Source: Supplied

Fair Work Commission commissioner Leigh Johns says the rules are not new but rather old rules being applied in the social media context.

“If you had two work colleagues fighting with each other at a work social function or in private time and it might tarnish their employer, it might be caught by these rules,” he says.

“If you’re on social media saying nasty things about your boss, you can imagine that’s going to cause problems.

“You should imagine anything you post may end up in front of someone you don’t want to see it.”

Johns says most unfair dismissal cases he sees that stem from social media policies are unsuccessful.

“(There was) one case where the employee had some pretty terrible things on Facebook of an anti-Muslim nature but he was able to establish, because of his age as an older person, that he had no understanding of how Facebook worked and security settings,” Johns says.

“He had no idea this was a public forum and thought he was communicating between himself and his friends.”

But that defence would be difficult to use today, as Facebook has become so widespread.

“I wouldn’t hold that case up to give hope to people who post silly things on Facebook,” he says.

Most dismissals are a result of work-related content being posted online.

Johns recalls an employee who went on a Facebook rant after he was not paid the correct amount.

He broke the employer’s policy of always being polite and courteous, and was fired.

In another case, a young motor mechanic apprentice posted photos that were critical of his employer’s customer.

It got back to the customer and the employee was fired.

“He made a silly mistake and lost his job,” Johns says.

“We still see (social media cases) coming through and I’m still surprised people make these types of errors.”

Career coach Rebecca Fraser. Picture: Paul Loughnan

Social media is not only a tricky issue for workers – jobseekers need to be equally careful about what they post online.

A quick Google search will be part of the recruitment process for many future employers.

Career coach Rebecca Fraser says she Googles herself all the time.

“I want to know what others can find out about me and ensure that I am happy that this is what they are seeing,” she says.

“I look at all of the different selections, such as images and scholarly articles, just in case.

“I also specify my search down to my local region as well as globally.”

The founder of Rebecca Fraser Consulting shares her top tips for managing an online profile:

* Ensure all personal information remains private

* Be aware all opinions shared on open forums online are accessible long term, not just today

* Never appear derogatory, negative or rude towards past employers, colleagues or managers

* Be aware of what others are posting about you or images you are being tagged in online

* Regularly review yourself by Googling your name

* Keep your personal and professional lives separate

Fraser says people who find content about themselves online that may be detrimental to their career should contact the owner of the website and/or owner of the information and request to have it removed.

Brazilian court pulls the plug on WhatsApp

If you usually use WhatsApp to chat with friends and family in Brazil, you can forget about using the popular app for the next couple of days.

The Facebook-owned messaging and voice app was ordered shut down throughout the country for 48 hours by a Brazilian court on Wednesday, according to Reuters. The shutdown, which began at midnight local time (6 p.m. PT), was due to WhatsApp’s noncompliance in a criminal proceeding, according to a statement provided to Reuters by a Brazilian court in São Paulo.

The shutdown comes as Brazilian telecommunications companies have sought to curtail the meteoric growth of WhatsApp, which is used by people around the world to send texts without paying carrier fees. The companies claim the app undermines their own services, Reuters reported.

The messaging app is the most popular app in Brazil, used by about 93 percent of those surveyed by TechTudo, a Brazilian tech website. In April, WhatsApp reported it had 45 million users in the country, up from 38 million two months earlier.

The shutdown order came after the São Paulo State Justice Tribunal in São Bernardo do Campo determined WhatsApp had not complied with two earlier court orders issued this summer, Reuters reported. The nature of the case and the identity of the petitioner seeking the injunction were not immediately known.

Facebook declined to comment beyond a Facebook post by Jan Koum, the CEO of WhatsApp, that expressed disappointment with the decision.

“We are disappointed in the short-sighted decision to cut off access to WhatsApp, a communication tool that so many Brazilians have come to depend on, and sad to see Brazil isolate itself from the rest of the world,” Koum wrote.

Founded in 2009, WhatsApp started life as a basic text-messaging app but one that also offered the ability to leave voice messages. The app, which operates on just about every mobile platform, has also rolled out a voice calling feature, firing a shot across the bow of services like Skype and Viber.

WhatsApp has experienced consistent growth since it was acquired by Facebook last year for $19 billion — one of the largest deals in Silicon Valley history. In September, WhatsApp said it had more than 900 million monthly active users, twice the number of users it had 12 months earlier.