Cyber security analysts warn hackers are hiding computer viruses in movie subtitles

North Korean hackers need to help feed their population somehow.


Cyber security analysts warn that hackers can hide computer viruses inside online video subtitles, using the malicious code to take control of viewers’ computers.

The cyber security group Check Point discovered the flaw and says millions of people who use video software to stream films and TV shows on their computers could be at risk. The attack lets hackers take “complete control” of any device running the vulnerable software, including smart TVs. Four programs have been identified as vulnerable – VLC, Kodi, Popcorn Time and Stremio.

“We estimate there are approximately 200 million video players and streamers that currently run the vulnerable software, making this one of the most widespread, easily accessed and zero-resistance vulnerabilities reported in recent years,” Check Point said.
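
For a sense of how a player-side mitigation might look, here is a minimal, purely illustrative sketch (not Check Point’s research or any player’s actual code) of a pre-check a media player could run on a downloaded .srt file before parsing it. The size limits and suspicious-pattern list are assumptions made for the example.

```python
# Illustrative only: a conservative pre-check on a subtitle file before it is
# handed to the player's parser. Limits and patterns are assumed, not taken
# from the Check Point advisory.
import re
from pathlib import Path

MAX_FILE_BYTES = 512 * 1024   # assumed sane upper bound for a subtitle file
MAX_LINE_CHARS = 1000         # assumed sane upper bound for a single cue line
SUSPICIOUS = re.compile(r"<\s*script|javascript:|\\x[0-9a-fA-F]{2}", re.IGNORECASE)

def looks_safe(path: str) -> bool:
    """Reject subtitle files that are oversized or contain script-like payloads."""
    p = Path(path)
    if p.stat().st_size > MAX_FILE_BYTES:
        return False
    text = p.read_text(encoding="utf-8", errors="replace")
    for line in text.splitlines():
        if len(line) > MAX_LINE_CHARS or SUSPICIOUS.search(line):
            return False
    return True
```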

>> Continue Reading <<

 

FACEZAM app will scan your Facebook profile picture, giving ‘creepers’ your information!

You will never see my actual ‘face’ on Facebook again! -Blake

A British entrepreneur has developed a facial recognition application that identifies strangers by scanning a photograph.


Facezam can identify people by matching a photo of them with their Facebook profile. All users have to do is take a picture of someone on the street and run it through the app, which will tell them who it thinks the person in the photo is.

“Facezam could be the end of our anonymous societies,” said Jack Kenyon, founder of Facezam. “Users will be able to identify anyone within a matter of seconds, which means privacy will no longer exist in public society.”

Facezam scans billions of Facebook profile images a second, which it accesses through a database for developers, until it finds a match. It claims to be able to link most photos with a profile on the social network within 10 seconds.
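
Facezam has not published how its matching works, but services of this kind typically compare face embeddings. The sketch below is a generic illustration of that approach, not Facezam’s code: embed_face() is a hypothetical stand-in for whatever embedding model such an app would use, and the 0.8 threshold is an assumed value.

```python
# Generic face-matching sketch: find the profile whose embedding is most
# similar (by cosine similarity) to the embedding of a query photo.
import numpy as np

def embed_face(image_path: str) -> np.ndarray:
    """Hypothetical stand-in: return a fixed-length embedding for the face in the image."""
    raise NotImplementedError("plug in a real face-embedding model here")

def best_match(query_photo: str, profiles: dict, threshold: float = 0.8):
    """Compare a street photo against known profile embeddings; return (name, score)
    if the best score clears the threshold, otherwise (None, score)."""
    q = embed_face(query_photo)
    q = q / np.linalg.norm(q)
    best_name, best_score = None, -1.0
    for name, vec in profiles.items():
        score = float(np.dot(q, vec / np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```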

The app, which will launch on iOS on March 21, has been tested on more than 10,000 images to date with 70 per cent accuracy.

Facebook, which said Facezam violates its privacy policies, could delay the launch.

“This activity violates our terms and we’re reaching out to the developer to ensure they bring their app into compliance,” said Facebook.

Facebook reviews apps that use its data before they go live to check they adhere to its policies. Apps that collect users’ data or use automated technology to scan Facebook are forbidden from launching without permission from the social network.

Facezam disputed that the app violates Facebook’s terms. “We’ve looked into this, and are confident the app won’t be violating Facebook’s terms,” said Kenyon.

The technology could help reduce crime by making everyone identifiable, Kenyon said, adding that the public implications of the app couldn’t be predicted. “There may be a mix of positives and negatives,” he said.

‘The end of anonymous society’

Unfortunately there is no way for the privacy conscious to remove themselves from the app, which can use its identification software on anyone with a Facebook profile.

Its accuracy does, however, drop to 55 per cent when a person’s face is obscured in either the photo of them or their Facebook profile image. Factors that affect its success include hair covering the face, sunglasses, a large hat or an odd angle, Kenyon said.

The inspiration for Facezam comes from Shazam, the music lookup service that can tell users the name and artist of a song after hearing just a few bars. Facezam said its legal consultants weren’t concerned that the name infringed on Shazam’s copyright.

Facial recognition software is already used by internet giants such as Facebook and Google to group photos together and suggest who should be tagged in them. It is also used in some law enforcement databases and by companies such as Tesco to map customer demographics.

But Facezam’s launch marks the first time that the general public will be able to use Facebook data in this way. Facebook blocked the now defunct NameTag, a Google Glass recognition app, from using its data in a similar way. Google then banned the technology altogether from being applied to Glass.

A similar tool called Find Face lets users look up people online using a photo that it matches with images on VKontakte, a Russian social network. British augmented reality company Blippar recently launched a similar search tool but it can only scan faces on its database. These include public figures such as politicians and musicians, with users able to add their own faces if they want to.

Facebook artificial intelligence spots suicidal users…

‘Facebook’ now becoming a suicide counselor…

Facebook has begun using artificial intelligence to identify members who may be at risk of killing themselves.

Facebook said its algorithms would flag messages expressing suicidal thoughts

The social network has developed algorithms that spot warning signs in users’ posts and the comments their friends leave in response.

After confirmation by Facebook’s human review team, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help.
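
Facebook has not released its implementation, but the flag-then-review flow described above can be sketched in a few lines. The example below uses a toy keyword heuristic purely for illustration; the phrase list and threshold are assumptions, and the real system relies on trained classifiers over posts and friends’ comments before anything reaches human reviewers.

```python
# Toy sketch of "flag, then send to human review" - not Facebook's system.
from dataclasses import dataclass

RISK_PHRASES = ("want to end it", "can't go on", "no reason to live")  # assumed examples

@dataclass
class Post:
    user_id: str
    text: str

def risk_score(post: Post) -> float:
    """Toy scorer: fraction of assumed risk phrases present (a real system uses ML models)."""
    text = post.text.lower()
    return sum(phrase in text for phrase in RISK_PHRASES) / len(RISK_PHRASES)

def triage(posts, threshold: float = 0.3):
    """Forward only posts scoring above the threshold to the human review queue."""
    return [p for p in posts if risk_score(p) >= threshold]
```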

A suicide helpline chief said the move was “not just helpful but critical”.

The tool is being tested only in the US at present.

It marks the first use of AI technology to review messages on the network since founder Mark Zuckerberg announced last month that he also hoped to use algorithms to identify posts by terrorists, among other concerning content.

Facebook also announced new ways to tackle suicidal behaviour on its Facebook Live broadcast tool and has partnered with several US mental health organisations to let vulnerable users contact them via its Messenger platform. >> Continue Reading <<

Tech billionaire issues stark warning saying artificial intelligence could DESTROY human race which is already ‘part cyborg’ because of its dependence on smartphones

TECH billionaire Elon Musk believes artificial intelligence could be catastrophic for humanity, which he says is set to become a cyborg race and will have to grapple with 15 per cent of the global workforce being without a job.

The creative genius added that a ‘universal income’ would have to be introduced for the global population because robots will do everything.

Tech billionaire Elon Musk says artificial intelligence could be the end of the human race.

Speaking at the World Government Summit in Dubai, the entrepreneur also told the 4,000-strong conference he saw spaceflights to the far reaches of the solar system becoming as common as a plane ride within 50 years.

And self-driving cars were just 10 years away from usurping human-driven vehicles completely.

The business magnate, who was being interviewed by Mohammad Abdulla Alergawi, the Minister of Cabinet Affairs and the Future for the UAE, told the slightly perplexed crowd: “One of the most troubling questions is artificial intelligence. I don’t mean narrow A.I. – deep artificial intelligence, where you can have AI which is much smarter than the smartest human on earth. This is a dangerous situation.”

He also warned world governments: “Pay close attention to the development of artificial intelligence.

“Make sure researchers don’t get carried away – scientists get so engrossed in their work they don’t realise what they are doing.”

When asked if he thought A.I was a good or a bad thing Musk said: “I think it is both.

“One way to think of it is imagine you were very confident we were going to be visited by super intelligent aliens in 10 years or 20 years at the most.

“Digital superintelligence will be like an alien.”

He then joked: “It seems probable. But this is one of the great questions in physics and philosophy – where are the aliens?

“Maybe they are among us, I don’t know. Some people think I am an alien. Not true. Of course I would say that though, wouldn’t I?”

He went on: “If there are super intelligent aliens out there they are probably already observing us.” >> Continue Reading from: THE SUN <<

AI-Powered Body Cams Give Cops The Power To Google Everything They See

Taser has started its own in-house AI unit, laying the groundwork for police body cameras that record fully-searchable video evidence

Photo Illustration: Vocativ

The police body camera industry is the latest to jump on the artificial intelligence bandwagon, bringing new powers and privacy concerns to a controversial technology bolstered by the need to hold police accountable after numerous high-profile killings of unarmed black citizens. Now, that tech is about to get smarter.

Last week, Taser, the stun gun company that has recently become an industry leader in body-mounted cameras, announced the creation of its own in-house artificial intelligence division. The new unit will utilize the company’s acquisition of two AI-focused firms: Dextro, a New York-based computer vision startup, and Misfit, another computer vision company previously owned by the watch manufacturer Fossil. Taser says the newly formed division will develop AI-powered tech specifically aimed at law enforcement, using automation and machine learning algorithms to let cops search for people and objects in video footage captured by on-body camera systems.

Moreover, the move suggests that body-worn cameras, which are already being used by police departments in many major cities, could soon become powerful surveillance tools capable of identifying different objects, events, and people encountered by officers on the street — both retroactively and in real time.

The idea is to use machine learning algorithms to streamline the process of combing through and redacting hours of video footage captured by police body cameras. Dextro has trained algorithms to scan video footage for different types of objects, like guns or toilets, as well as recognize events, like a foot chase or traffic stop. The result of all this tagging and classifying is that police will be able to use keywords to search through video footage just like they’d search for news articles on Google, allowing them to quickly redact footage and zoom in on the relevant elements. Taser predicts that in a year’s time, its automation technology will reduce the total time needed to redact faces from one hour of video footage from eight hours to 1.5 hours.
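
Neither Taser nor Dextro has published code, but the keyword-search idea can be illustrated with a simple inverted index from detected labels to timestamps. In this sketch, detect_objects() is a hypothetical stand-in for the computer-vision model doing the tagging.

```python
# Illustrative sketch: index detected labels by timestamp so footage can be
# searched by keyword, roughly as described above. Not Taser's or Dextro's code.
from collections import defaultdict

def detect_objects(frame):
    """Hypothetical stand-in: return detected object/event labels for one frame."""
    raise NotImplementedError("plug in a real computer-vision model here")

def build_index(frames_with_timestamps):
    """Map each detected label to the timestamps (seconds) where it appears."""
    index = defaultdict(list)
    for timestamp, frame in frames_with_timestamps:
        for label in detect_objects(frame):
            index[label].append(timestamp)
    return index

def search(index, keyword: str):
    """Return the timestamps where the keyword was detected."""
    return list(index.get(keyword, []))
```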

A Dextro demonstration shows real-time classification of people and objects in video

>> Continue Reading from : Vocativ <<

CES 2017: Razer gaming laptop has not one but three screens

Gaming PC maker Razer has unveiled a concept laptop with three 4K screens at the CES tech show in Las Vegas.

The firm claims Project Valerie is the world’s first portable laptop of its kind.

Two additional screens slide out from the central display via an automatic mechanism.

One analyst praised the design, noting that gamers were increasingly splashing out on high-end laptops.

All three screens are 17in (43cm) in size.

When folded up and closed, the laptop is 1.5in thick. Razer said this was comparable to many standard gaming laptops, which tend to be chunkier than home and office devices.

“We thought, ‘This is crazy, can we do this?’,” a company spokesman told the BBC.

“The answer was: ‘Yeah, we are crazy enough, we can do it’.”

Project Valerie is still a prototype and Razer has not yet published a possible release date or price. >> continue reading <<