We all know the past year has seen many challenges, including ongoing supply-chain disruptions, fallout from the Russian invasion of Ukraine, and widespread third-party breaches and security incidents. From healthcare to energy, no industry is immune to the attacks happening in our cyber world. New reports dig deeper into the trends playing out across industries today. It’s pretty darn scary, if you ask me.
As one example, Prevalent’s new report, The 2023 Third-Party Risk Management Study, offers deeper insight into the trends, hurdles, and programs shaping the work of third-party risk management practitioners. Let’s look at a few of its findings here.
The report reveals roughly 41% of companies experienced an impactful third-party breach in the last 12 months. What’s more, they still depend on overlapping tools and manual processes, which actually tend to slow incident response. The majority of respondents (71%) say their top concern about using third parties is a data breach or other security incident caused by poor vendor security practices.
Another trend is that nearly half of companies (48%) are still using spreadsheets to assess third parties. Further, there is a huge gap between tracking risks and remediating them across the lifecycle—and on average 20% of companies are still doing nothing at all. That means there is still a lot of work to be done.
The report suggests companies should:
- Automate incident response to reduce costs and risk exposure.
- Build a single source of truth to eliminate silos and extend risk visibility throughout the enterprise.
- Do away with spreadsheets and automate assessment and monitoring processes across the lifecycle.
- Remediate the risks they uncover.
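To make the spreadsheet point concrete, here is a minimal sketch of what a shared vendor-risk register could look like in code instead of a spreadsheet. Everything here—the record fields, the one-year assessment window, the vendor names—is an illustrative assumption, not something taken from the report:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VendorRecord:
    """One row in a shared vendor-risk register (replacing a spreadsheet row)."""
    name: str
    risk_level: str          # e.g., "low", "medium", "high" (assumed scale)
    last_assessed: date
    remediation_open: bool

def overdue_assessments(vendors, today, max_age_days=365):
    """Return the names of vendors whose last assessment is older than the window."""
    cutoff = today - timedelta(days=max_age_days)
    return [v.name for v in vendors if v.last_assessed < cutoff]

# Hypothetical register entries for illustration only.
vendors = [
    VendorRecord("Acme Hosting", "high", date(2022, 1, 15), True),
    VendorRecord("Paper Supplies Co.", "low", date(2023, 3, 1), False),
]
print(overdue_assessments(vendors, today=date(2023, 6, 1)))  # ['Acme Hosting']
```

A script like this can run on a schedule and open tickets automatically, which is the kind of lifecycle automation the report is pointing toward—something a spreadsheet can’t do on its own.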
Certainly, there are many other trends to watch as well. For example, since the launch of new AI tools, another survey suggests a much higher threat of passwords being hacked in the days ahead.
This survey comes from Password Manager and was conducted online on April 27, 2023. In total, 1,000 participants in the U.S. completed the full survey. All participants had to meet demographic criteria ensuring they were age 25 or older, currently self-employed or employed for wages, had a household income of $50,000 per year or more, and had a career in security, software, information, or scientific or technical services.
Here is what the survey found. Roughly one in six security experts say there is a high-level threat of AI (artificial intelligence) tools such as ChatGPT and Google’s Bard being used to hack passwords.
It’s not just passwords either. Roughly 52% say AI has made it easier for scammers to steal sensitive information, and 18% say AI phishing scams pose a high-level threat to both the average American individual user and company.
The threat reaches far and wide, with more than one-third saying AI tools pose a medium or high-level threat to both individuals and businesses.
In order to respond to these cyber threats, businesses need to be prepared—and need to prepare staff. Many AI-generated scams have been circulating, including:
- “Your voice is being processed out of sight by AI, making it a useful tool for scammers to trick people around you into sending money to ‘you’ online.”
- “Scammers could use AI language models to generate convincing phishing emails that are tailored to the recipient’s personal information and interests.”
- “I have seen fake currency trading platforms that claim to have developed a trading system with artificial intelligence predictive capabilities to attract investors, but no such system actually exists.”
- “I have seen them use artificial intelligence to steal other people’s information quickly, which is very convenient.”
At the end of the day, it comes down to having good business practices and training. Employees need to assume any unsolicited communication is a potential scam and that it is always safest to contact the organization directly rather than hitting reply.
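One simple, concrete check along these lines is to flag a message before anyone replies to it when the reply-to domain doesn’t match the sender’s, or when neither is on the organization’s known-good list. This is only a sketch—the domain names and the allowlist are hypothetical, and a real control would layer in much more:

```python
def looks_suspicious(from_addr: str, reply_to: str, trusted_domains: set) -> bool:
    """Flag a message when the reply-to domain differs from the sender's domain,
    or when the domain is not on the organization's trusted allowlist."""
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if from_domain != reply_domain:
        return True
    return from_domain not in trusted_domains

# Hypothetical allowlist; note the lookalike "examp1e" domain in the reply-to.
TRUSTED = {"example-bank.com"}
print(looks_suspicious("support@example-bank.com",
                       "help@examp1e-bank.com", TRUSTED))  # True
```

Automated checks like this don’t replace training—they just buy employees a moment of doubt before they hit reply, which is often all a scam needs to fail.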
Workers also need to know that basic bots are used for all types of solicitation, and they need training to recognize these scams. Voicemail messages, text exchanges, and even chat room conversations can be AI-generated to fool you into thinking you are communicating with a real person, with the goal of manipulating you into revealing personal information or sensitive data.
If we can all begin to see and know when we are interacting with AI, we can better respond and lead our businesses in a way that is safe and secure. What else would you recommend?
Want to tweet about this article? Use hashtags #IoT #sustainability #AI #5G #cloud #edge #futureofwork #digitaltransformation #green #ecosystem #environmental #circularworld