08 Aug Human Resource Crimes
Keeping it real in human resources now involves avoiding “deepfakes”. What was once rare technology is now spreading and could harm employers.
Blame Forrest Gump. The 1994 movie used new technology to edit Gump’s character into scenes to make it seem like he talked with John F. Kennedy or sat next to John Lennon — an editing magician’s trick that won the film accolades.
That technology has evolved into what is now referred to as “deepfake” technology: a mix of AI and machine learning that lets users alter video, audio, and photos in powerful ways.
One deepfake example: a widely shared video of Speaker of the House Nancy Pelosi that was doctored to slow her speech, creating the impression that Pelosi was impaired. Deepfakes can make it seem that someone said or did something they never did, and that can create a new kind of security woe for employers of all types.
Just because deepfakes haven’t shown up at your company doesn’t mean they’ll stay away forever, Randy Barr, chief information security officer for Topia, a global mobility management software company, told HR Dive: “We’re going to start to see a lot more of this as soon as technology is readily available for people to use and try.”
What can HR do now to ensure employees are safe?
It’s all fun and Photoshop until someone gets hurt
Deepfake technology can have positive purposes, such as in the creation of digital voices for those who have lost the ability to speak, or the David Beckham video that shows him explaining how people can protect themselves from malaria, using deepfake tech to look like he’s speaking in nine different languages.
But unlike the altered content in Forrest Gump or an Instagram filter, deepfakes are not meant to be recognized by the audience as manipulated.
On top of that, the technology is often used explicitly to create trouble, Niraj Swami, CEO of SCAD AI, an AI consultancy, told HR Dive. “It stems from leveraging controversial material…offensive content or offensive perspectives,” he said. When this material pops up in social media, it creates media confusion, he said, and many viewers react emotionally to the false information.
Some deepfake videos can be identified relatively easily, Barr said. “One of the simple ways of detecting it is if you look at the video, see how often that individual blinks, because [with] the current AI technology and deepfake, it’s hard to impose the face over a body if the eyes are closed,” he said. Other tips are to look for mismatched skin tone and odd placement of the eyebrows and chin, he added.
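The blink check Barr describes can be automated. A common approach in the research literature is the "eye aspect ratio" (EAR): from six landmark points around each eye, compute a ratio that drops toward zero when the eye closes, then count sustained dips as blinks. The sketch below is a minimal, hedged illustration of that idea; the landmark ordering, threshold, and frame counts are assumptions, and a real pipeline would obtain the landmarks per frame from a face-landmark detector rather than hard-coded points.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points ordered around one eye
    (p1..p6, as in the common 68-point facial-landmark scheme; assumed here).
    Returns a ratio that falls toward 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    v1 = dist(eye[1], eye[5])  # vertical opening, inner pair
    v2 = dist(eye[2], eye[4])  # vertical opening, outer pair
    h = dist(eye[0], eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks: runs of at least min_frames consecutive frames
    where the EAR falls below threshold. Threshold is an assumption."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# An unnaturally low blink count over a long clip is one (weak) deepfake signal.
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
print(eye_aspect_ratio(open_eye))                      # well above threshold
print(count_blinks([0.3, 0.3, 0.1, 0.1, 0.3, 0.1, 0.3]))  # one sustained dip
```

Note that, as L14 of the article anticipates, generators have since learned to blink convincingly, so this is one signal among several rather than a reliable test on its own.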
Just as deepfake technology is becoming more sophisticated, so is the technology used to identify altered media, with improvements on both sides expected to continue.
How deepfakes can harm employers
Although most deepfakes thus far have targeted politicians and celebrities, the technology has been seen in the work environment — and it may be used with increasing frequency, experts said.
Imagine a CEO placing an urgent call to a senior financial officer to request an emergency money transfer, except the CEO’s voice was deepfaked by criminals, as Axios reported has already happened at a number of companies. Deepfakes could be used to attack a company, Barr said: “[It] could be the evolution of how ransomware takes place.”
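The standard defense against this kind of voice fraud is out-of-band verification: no single channel, however convincing, is allowed to release funds on its own. The sketch below is a hypothetical illustration of that control, not any company's actual system; the class, field names, and dollar amount are all invented for the example.

```python
class TransferRequest:
    """Hypothetical sketch: a wire transfer is released only after
    confirmation arrives over a second, independent channel (e.g. a
    callback to a phone number already on file), so a deepfaked voice
    alone cannot move money."""

    def __init__(self, requester, amount):
        self.requester = requester
        self.amount = amount
        self.voice_approved = False      # the (spoofable) incoming call
        self.callback_confirmed = False  # verification over a trusted channel

    def approve_by_voice(self):
        self.voice_approved = True

    def confirm_by_callback(self):
        self.callback_confirmed = True

    def can_release(self):
        # Both independent channels must agree before funds move.
        return self.voice_approved and self.callback_confirmed

req = TransferRequest("ceo@example.com", 250_000)
req.approve_by_voice()
print(req.can_release())   # a deepfaked call alone is not enough
req.confirm_by_callback()
print(req.can_release())   # released only after independent confirmation
```

The design point is that the second channel is initiated by the finance team against contact details they already hold, so an attacker who controls the inbound call never controls both channels.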
Remote employees could use deepfake tech to disguise their identities and hand off work to subcontractors, Swami said. This could be concerning if the subcontractor is not supposed to be offshore or if the initial employee had a security clearance, but the subcontractor does not, he said.
For HR leaders, deepfakes could lead to tricky situations, Forman said. What happens if an employee finds an altered photo or video of themselves on social media that uses their company ID or picture? What obligation does the organization have to investigate? “It’s becoming more difficult. You have workplace morale issues, compliance issues with your policy and procedures that all jump up because of deepfakes,” he said.
Guarding against deepfakes
HR leaders are used to discerning fake information, from exaggerations on a resume to doctored emails, but as the technology improves, anticipating potential issues becomes more challenging. While HR is not expected to analyze media for alterations, leaders can take steps to protect employees and the company from being manipulated by deepfakes.
Review company technology policies, Forman said. New technologies raise the stakes for the workplace and the employer-employee relationship because of the increased risk of misconduct, he said. An employer may want to take existing anti-harassment, anti-retaliation, and anti-discrimination policies and make sure the guidelines address the new technology, he added.
Companies should decide how they would respond if a deepfake incident occurred, Forman advised. Although there may be no one right or wrong answer, being prepared to react to the threat is necessary.
“The biggest thing is awareness,” Swami said. If employers see an incendiary video presented as evidence of wrongdoing, they can’t have a knee-jerk reaction, he said. Managers may not be able to believe their eyes, so employers may need to ensure their managers gather more information. “You can’t have a single source of truth.”
SOURCE: DeLoatch, Pamela. (5 August 2019). “Keeping it real: What HR leaders need to know about deepfakes” (Web Blog Post). Retrieved from: https://www.hrdive.com/news/keeping-it-real-what-hr-leaders-need-to-know-about-deepfakes/559475/