Lock and Code

Malwarebytes

About

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

104 episodes

Securing your home network is long, tiresome, and entirely worth it, with Carey Parker

Few words apply as broadly to the public—yet mean as little—as “home network security.” For many, a “home network” is an amorphous thing. It exists somewhere between a router, a modem, an outlet, and whatever cable it is that plugs into the wall. But the idea of a “home network” doesn’t need to intimidate, and securing that home network could be simpler than many folks realize. For starters, a home network can be simply understood as a router—which is the device that provides access to the internet in a home—and the other devices that connect to that router. That includes obvious devices like phones, laptops, and tablets, and it includes “Internet of Things” devices, like a Ring doorbell, a Nest thermostat, and any Amazon Echo device that comes pre-packaged with the company’s voice assistant, Alexa. There are also myriad “smart” devices to consider: smartwatches, smart speakers, smart light bulbs, and, don’t forget, smart fridges. If it sounds like we’re describing a home network as nothing more than a “list,” that’s because a home network is pretty much just a list. But where securing that list becomes complicated is in all the updates, hardware issues, settings changes, and even scandals that relate to every single device on that list.

Routers, for instance, provide their own security, but over many years, they can lose the support of their manufacturers. IoT devices, depending on the brand, can be made from cheap parts with little concern for user security or privacy. And some devices have scandals plaguing their past—smart doorbells have been hacked and fitness trackers have revealed running routes to the public online. This shouldn’t be cause for fear. Instead, it should help prove why home network security is so important.

Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with cybersecurity and privacy advocate Carey Parker about securing your home network. Author of the book Firewalls Don’t Stop Dragons and host of the podcast of the same name, Parker chronicled the typical home network security journey last year and distilled the long process into four simple categories: scan, simplify, assess, remediate. In joining the Lock and Code podcast yet again, Parker explains how everyone can begin their home network security path—where to start, what to prioritize, and the risks of putting this work off, while also emphasizing the importance of every home’s router:

“Your router is kind of the threshold that protects all the devices inside your house. But, like a vampire, once you invite the vampire across the threshold, all the things inside the house are now up for grabs,” Parker said.

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen...
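Parker’s first category, “scan,” boils down to taking an inventory of what is actually connected. As a rough, hypothetical illustration of that step (not something from the episode), the Python sketch below ping-sweeps a home subnet and prints the addresses that answer. The 192.168.1.0/24 range is an assumption to replace with your own router’s subnet, and devices that ignore ping, as many IoT gadgets do, will only show up in your router’s client list or a dedicated network scanner.

```python
#!/usr/bin/env python3
"""A minimal sketch of the "scan" step: ping-sweep the local subnet to list
devices that respond. Illustrative only; the subnet below is an assumption."""

import ipaddress
import platform
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1.0/24"  # assumption: a common default home subnet


def responds_to_ping(host: str) -> bool:
    """Send a single ping and report whether the host answered."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=2,  # don't wait long on silent addresses
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0


def main() -> None:
    hosts = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()]
    # Ping many addresses in parallel so the sweep finishes in seconds.
    with ThreadPoolExecutor(max_workers=64) as pool:
        for host, alive in zip(hosts, pool.map(responds_to_ping, hosts)):
            if alive:
                print(f"Responsive device: {host}")


if __name__ == "__main__":
    main()
```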

45m
Mar 25
Going viral shouldn't lead to bomb threats, with Leigh Honeywell

A disappointing meal at a restaurant. An ugly breakup between two partners. A popular TV show that kills off a beloved, main character. In a perfect world, these are irritations and moments of vulnerability. But online today, these same events can sometimes be the catalyst for hate. That disappointing meal can produce a frighteningly invasive Yelp review that exposes a restaurant owner’s home address for all to see. That ugly breakup can lead to an abusive ex posting a video of revenge porn. And even a movie or videogame can enrage some individuals into such a fury that they begin sending death threats to the actors and castmates involved.

Online hate and harassment campaigns are well-known and widely studied. Sadly, they’re also becoming more frequent. In 2023, the Anti-Defamation League revealed https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2023 that 52% of American adults reported being harassed online at some point in their life—the highest rate ever recorded by the organization and a dramatic climb from the 40% who responded similarly just one year earlier. When teens were asked about harm, 51% said they’d suffered from online harassment in just the 12 months prior to taking the survey—a radical 15% increase from what teens said the year prior.

The proposed solutions, so far, have been difficult to implement. Social media platforms often deflect blame—and are frequently shielded from legal liability—and many efforts to moderate and remove hateful content have either been slow or entirely absent in the past https://www.reuters.com/article/idUSKCN1GO2Q4/. Popular accounts with millions of followers will, without explicitly inciting violence, sometimes draw undue attention to everyday people. And the increasing need for teens to have an online presence—even classwork is done online now—makes it nearly impossible to simply “log off.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with Tall Poppy CEO and co-founder Leigh Honeywell about the evolution of online hate, personal defense strategies that mirror many of the best practices in cybersecurity, and the modern risks of accidentally going viral in a world with little privacy.

“It's not just that your content can go viral, it's that when your content goes viral, five people might be motivated enough to call in a fake bomb threat at your house,” said Honeywell.

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) LISTEN UP—MALWAREBYTES DOESN'T JUST TALK CYBERSECURITY, WE PROVIDE IT. Protect yourself

42m
Mar 11
How to make a fake ID online, with Joseph Cox

For decades, fake IDs had roughly three purposes: buying booze before legally allowed, getting into age-restricted clubs, and, we can only assume, completing nation-state spycraft for embedded informants and double agents. In 2024, that’s changed, as the uses for fake IDs have become enmeshed with the internet. Want to sign up for a cryptocurrency exchange where you’ll use traditional funds to purchase and exchange digital currency? You’ll likely need to submit a photo of your real ID so that the cryptocurrency platform can ensure you’re a real user. What if you want to watch porn online in the US state of Louisiana? It’s a niche example, but because of a law passed in 2022 https://www.politico.com/news/magazine/2023/08/08/age-law-online-porn-00110148, you will likely need to submit, again, a photo of your state driver’s license to a separate ID verification mobile app that then connects with porn sites to authorize your request.

The discrepancies in these end-uses are stark; cryptocurrency and porn don’t have too much in common with Red Bull vodkas and, to pick just one example, a Guatemalan coup https://en.wikipedia.org/wiki/1954_Guatemalan_coup_d%27%C3%A9tat. But there’s something else happening here that reveals the subtle differences between yesteryear’s fake IDs and today’s, which is that modern ID verification doesn’t need a physical ID card or passport to work—it can sometimes function with only an image.

Last month, the technology reporting outfit 404 Media investigated an online service called OnlyFake https://www.404media.co/inside-the-underground-site-where-ai-neural-networks-churns-out-fake-ids-onlyfake/ that claimed to use artificial intelligence to pump out images of fake IDs. By filling out some bogus personal information, like a made-up birthdate, height, and weight, OnlyFake would provide convincing images of real forms of ID, be they driver’s licenses in California or passports from the US, the UK, Mexico, Canada, Japan, and more. Those images, in turn, could then be used to fraudulently pass identification checks on certain websites.

When 404 Media co-founder and reporter Joseph Cox learned about OnlyFake, he tested whether an image of a fake passport he generated could be used to authenticate his identity with an online cryptocurrency exchange. In short, it could. By creating a fraudulent British passport through OnlyFake, Joseph Cox—or as his fake ID said, “David Creeks”—managed to verify his false identity when creating an account with the cryptocurrency market OKX.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cox about the believability of his fake IDs, the capabilities and limitations of OnlyFake, what’s in store for the future of the site—which went dark after Cox’s report https://www.404media.co/onlyfake-neural-network-fake-id-site-goes-dark-after-404-media-investigation/—and what other types of fraud are now dangerously within reach for countless threat actors.

“Making fake IDs, even photos of fake IDs, is a very particular skill set—it’s like a trade in the criminal underground. You don’t need that anymore,” said Cox.

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and 

36m
Feb 26
If only you had to worry about malware, with Jason Haddix

If your IT and security teams think malware is bad, wait until they learn about everything else. In 2024, the modern cyberattack is a segmented, prolonged, and professional effort, in which specialists create strictly financial alliances to plant malware on unsuspecting employees, steal corporate credentials, slip into business networks, and, for a period of days if not weeks, simply sit and watch and test and prod, escalating their privileges while refraining from installing any noisy hacking tools that could be flagged by detection-based antivirus scans. In fact, some attacks have gone so "quiet" that they involve no malware at all. Last year, some ransomware gangs refrained from deploying ransomware in their own attacks, opting instead to steal sensitive data and then threatening to publish it online if their victims refused to pay up—a method of extracting a ransom that is entirely without ransomware.

Understandably, security teams are outflanked. Defending against sophisticated, multifaceted attacks takes resources, technologies, and human expertise. But not every organization has that at hand. What, then, are IT-constrained businesses to do?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Haddix, the former Chief Information Security Officer at the videogame developer Ubisoft, about how he and his colleagues from other companies faced off against modern adversaries who, during a prolonged crime spree, plundered employee credentials from the dark web, subverted corporate 2FA protections, and leaned heavily on internal web access to steal sensitive documentation. Haddix, who launched his own cybersecurity training and consulting firm Arcanum Information Security this year, said he learned so much during his time at Ubisoft that he and his peers in the industry coined a new, humorous term for attacks that abuse internet-connected platforms: "A browser and a dream."

"When you first hear that, you're like, 'Okay, what could a browser give you inside of an organization?'" But Haddix made it clear: "On the internal LAN, you have knowledge bases like SharePoint, Confluence, MediaWiki. You have dev and project management sites like Trello, local Jira, local Redmine. You have source code managers, which are managed via websites—Git, GitHub, GitLab, Bitbucket, Subversion. You have repo management, build servers, dev platforms, configuration, management platforms, operations, front ends. These are all websites."

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) LLM Prompt Injection Game: https://gandalf.lakera.ai/ Overwhelmed by modern cyberthreats? ThreatDown can...

40m
Feb 12
Bruce Schneier predicts a future of AI-powered mass spying

If the internet helped create the era of mass surveillance, then artificial intelligence will bring about an era of mass spying. That’s the latest prediction from noted cryptographer and computer security professional Bruce Schneier, who, in December, shared a vision of the near future https://slate.com/technology/2023/12/ai-mass-spying-internet-surveillance.html where artificial intelligence—AI—will be able to comb through reams of surveillance data to answer the types of questions that, previously, only humans could.   “Spying is limited by the need for human labor,” Schneier wrote. “AI is about to change that.” As theorized by Schneier, if fed enough conversations, AI tools could spot who first started a rumor online, identify who is planning to attend a political protest (or unionize a workforce), and even who is plotting a crime. But “there’s so much more,” Schneier said. “To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people’s confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Bruce Schneier about artificial intelligence, Soviet era government surveillance, personal spyware, and why companies will likely leap at the opportunity to use AI on their customers. * > “Surveillance-based manipulation is the business model [of the internet] and anything that gives a company an advantage, they’re going to do.” Tune in today to listen to the full conversation. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) LISTEN UP—MALWAREBYTES DOESN'T JUST TALK CYBERSECURITY, WE PROVIDE IT. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners https://try.malwarebytes.com/lockandcode/.

26m
Jan 29
A true tale of virtual kidnapping

On Thursday, December 28, at 8:30 pm in the Utah town of Riverdale, the city police began investigating what they believed was a kidnapping. 17-year-old foreign exchange student Kai Zhuang was missing, and according to Riverdale Police Chief Casey Warren https://www.fox6now.com/news/kai-zhuang-utah-missing-teenage-foreign-exchange-student-forcefully-abducted, Zhuang was believed to be “forcefully taken” from his home, and “being held against his will.” The evidence leaned in police’s favor. That night, Zhuang’s parents in China reportedly received a photo of Zhuang in distress. They’d also received a ransom demand.

But as police in Riverdale and across the state of Utah would soon learn, the alleged kidnapping had a few wrinkles. For starters, there was no sign that Zhuang had been forcefully removed from his home in Riverdale, where he’d been living with his host family. In fact, Zhuang’s disappearance was so quiet that his host family was entirely unaware that he’d gone missing until police came and questioned them. Additionally, investigators learned that Zhuang had experienced a recent run-in with police officers nearly 75 miles away in the city of Provo. Just eight days before his disappearance in Riverdale, Zhuang caught the attention of Provo residents because of what they deemed strange behavior for a teenager: buying camping gear in the middle of a freezing winter season. Police officers who intervened at the residents’ request asked Zhuang if he was okay, he assured them he was, and a ride was arranged for the teenager back home. What Zhuang didn’t tell Provo police at the time was that he was already being targeted in an extortion scam. And when Zhuang started to push back against his scammers, it was his parents who became the next target. Zhuang—and his family—had become victims of what is known as “virtual kidnapping.”

For years, virtual kidnapping scams happened most frequently in Mexico and the Southwestern United States, in cities like Los Angeles and Houston. But in 2015, the scams began reaching farther into the US. The scams themselves are simple yet cruel attempts at extortion. Virtual kidnappers will call phone numbers belonging to affluent neighborhoods in the US and make bogus threats about holding a family member hostage. As explained by the FBI in 2017 https://www.fbi.gov/news/stories/virtual-kidnapping, virtual kidnappers do not often know the person they are calling, their name, their occupation, or even the name of the family member they have pretended to abduct:

“When an unsuspecting person answered the phone, they would hear a female screaming, ‘Help me!’ The screamer’s voice was likely a recording. Instinctively, the victim might blurt out his or her child’s name: ‘Mary, are you okay?’ And then a man’s voice would say something like, ‘We have Mary. She’s in a truck. We are holding her hostage. You need to pay a ransom and you need to do it now or we are going to cut off her fingers.’”

Today, on the Lock and Code podcast with host David Ruiz, we are presenting a short, true story from December about virtual kidnapping. Today’s episode cites reporting and public statements from the Associated Press https://apnews.com/general-news-7f44adee393945459e453b25eef46700, the FBI https://www.fbi.gov/news/stories/virtual-kidnapping, ABC4.com https://www.abc4.com/news/northern-utah/riverdale-police-chief-update-cyber-kidnapping/, Fox 6 Milwaukee https://www.fox6now.com/news/kai-zhuang-utah-missing-teenage-foreign-exchange-student-forcefully-abducted, and the

18m
Jan 15
DNA data deserves better, with Suzanne Bernstein

Hackers want to know everything about you: your credit card number, your ID and passport info, and now, your DNA. On October 1, 2023, on a hacking website called BreachForums, a group of cybercriminals claimed that they had stolen—and would soon sell—individual profiles for users of the genetic testing company 23andMe https://www.malwarebytes.com/blog/news/2023/10/23andme. 23andMe offers direct-to-consumer genetic testing kits that provide customers with different types of information, including potential indicators of health risks along with reports that detail a person’s heritage, their DNA’s geographical footprint, and, if they opt in, a service to connect with relatives who have also used 23andMe’s DNA testing service.

The data that 23andMe and similar companies collect is often seen as some of the most sensitive, personal information that exists about people today, as it can expose health risks, family connections, and medical diagnoses. This type of data has also been used to exonerate the wrongfully accused and to finally apprehend long-hidden fugitives. In 2018, deputies from the Sacramento County Sheriff’s department arrested a serial killer known as the Golden State Killer, after investigators took DNA left at decades-old crime scenes and compared it to a then-growing database of genetic information, finding the Golden State Killer’s relatives, and then zeroing in from there.

And while the story of the Golden State Killer involves the use of genetic data to solve a crime, what happens when genetic data is the target of a crime? What law enforcement agency, if any, gets involved? What rights do consumers have? And how likely is it that consumer complaints will get heard? For customers of 23andMe, those are particularly relevant questions. After an internal investigation from the genetic testing company https://www.malwarebytes.com/blog/news/2023/12/23andme-says-er-actually-some-genetic-and-health-data-might-have-been-accessed-in-recent-breach, it was revealed that 6.9 million customers were impacted https://www.google.com/search?q=23andme+hack+6.9+million&oq=23andme+&gs_lcrp=EgZjaHJvbWUqBggCEEUYOzIGCAAQRRg5MgYIARBFGDsyBggCEEUYOzIGCAMQRRg7MgYIBBBFGDwyBggFEC4YQNIBCDM1MTZqMGoxqAIAsAIA&sourceid=chrome&ie=UTF-8 by the October breach. What do they do?

Today on the Lock and Code podcast with host David Ruiz, we speak with Suzanne Bernstein, a law fellow at the Electronic Privacy Information Center (EPIC), to understand the value of genetic data, the risks of its exposure, and the unfortunate reality that consumers face in having to protect themselves while also trusting private corporations to secure their most sensitive data. “We live our lives online and there's certain risks that are unavoidable or that are manageable relative to the benefit that a consumer might get from it,” Bernstein said. “Ultimately, while it's not the consumer's responsibility, an informed consumer can make the best choices about what kind of risks to take online.”

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at

37m
Jan 01
Meet the entirely legal, iPhone-crashing device: the Flipper Zero

It talks, it squawks, it even blocks! The stocking-stuffer on every hobby hacker’s wish list this year is the Flipper Zero. “Talk” across low-frequency radio to surreptitiously change TV channels, emulate garage door openers, or even pop open your friend’s Tesla charging port without their knowing! “Squawk” with the Flipper Zero’s mascot and user-interface tour guide, a “cyber-dolphin” who can “read” the minds of office key fobs and insecure hotel entry cards. And, introducing this year, block iPhones running iOS 17! No, really, this consumer-friendly device can crash iPhones, and in the United States, it is entirely legal to own. The Flipper Zero is advertised as a “multi-tool device for geeks.” It’s an open-source tool that can be used to hack into radio protocols, access control systems, hardware, and more. It can emulate keycards, serve as a universal remote for TVs, and make attempts to brute force garage door openers. But for security researcher Jeroen van der Ham, the Flipper Zero also served as a real pain in the butt one day in October, when, aboard a train in the Netherlands, he got a popup on his iPhone about a supposed Bluetooth pairing request with a nearby Apple TV. Strange as that may be on a train, van der Ham soon got another request. And then another, and another, and another. In explaining the problem to the outlet Ars Technica https://arstechnica.com/security/2023/11/flipper-zero-gadget-that-doses-iphones-takes-once-esoteric-attacks-mainstream/, van der Ham wrote: * > “My phone was getting these popups every few minutes and then my phone would reboot. I tried putting it in lock down mode, but it didn’t help.” Later that same day, on his way back home, once again aboard the train, van der Ham noticed something odd: the iPhone popups came back, and this time, he noticed that his fellow passengers were also getting hit. What van der Ham soon learned is that he—and the other passengers on the train—were being subjected to a Denial-of-Service attack, which weaponized the way that iPhones receive Bluetooth pairing requests. A Denial-of-Service attack is simple. Essentially, a hacker, or more commonly, an army of bots, will flood a device or a website with requests. The target in these attacks cannot keep up with the requests, so it often locks up and becomes inaccessible. That can be a major issue for a company that is suffering from having its website attacked, but it’s also dangerous for everyday people who may need to use their phones to, say, document something important, or reach out to someone when in need. In van der Ham’s case, the Denial-of-Service attack was likely coming from one passenger on the train, who was aided by the small, handheld device, the Flipper Zero. Today, on the Lock and Code podcast, with host David Ruiz, we speak with Cooper Quintin, senior public interest technologist with Electronic Frontier Foundation—and Flipper Zero owner—about what the Flipper Zero can do, what it can’t do, and whether governments should get involved in the regulation of the device (that’s a hard “No,” Quintin said). “Governments should be welcoming this device,” Quintin said. “Every government right now is saying, ‘We need more cyber security capacity. We need more cyber security researchers. We got cyber wars to fight, blah, blah, blah,’ right?” Quintin continued: * > “Then, when you make this amazing tool that is, I think, a really great way for people to start interacting with cybersecurity and getting really interested in it—then you ban that?” Tune in today. 
You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, 

36m
Dec 18, 2023
Why a ransomware gang tattled on its victim, with Allan Liska

Like the grade-school dweeb who reminds their teacher to assign tonight’s homework, or the power-tripping homeowner who threatens every neighbor with an HOA citation, the ransomware group ALPHV can now add itself to a shameful roster of pathetic, little tattle-tales. In November, the ransomware gang ALPHV, which also goes by the name Black Cat, notified the US Securities and Exchange Commission (SEC) about the Costa Mesa-based software company MeridianLink, alleging that the company had failed to notify the government about a data breach. Under newly announced SEC rules, public companies are expected to notify the agency about “material cybersecurity incidents” within four business days of determining that such an incident could have impacted the company’s stock prices or any investment decisions from the public. According to ALPHV, MeridianLink had violated that rule. But how did ALPHV know about this alleged breach? Simple. They claimed to have done it.

“It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under Item 1.05 of Form 8-K within the stipulated four business days, as mandated by the new SEC rules,” wrote ALPHV in a complaint that the group claimed to have filed with the US government.

The victim, MeridianLink, disputed the claims. According to a MeridianLink spokesperson, while the company confirmed a cybersecurity incident, it pushed back on the incident’s severity. “Based on our investigation to date, we have identified no evidence of unauthorized access to our production platforms, and the incident has caused minimal business interruption,” a MeridianLink spokesperson said at the time. “If we determine that any consumer personal information was involved in this incident, we will provide notifications as required by law.”

This week on the Lock and Code podcast with host David Ruiz, we speak to Recorded Future intelligence analyst Allan Liska about what ALPHV could hope to accomplish with its SEC complaint, whether similar threats have been made in the past under other regulatory regimes, and what organizations everywhere should know about ransomware attacks going into the new year. One big takeaway, Liska said, is that attacks are getting bigger, bolder, and brasher. “There are no protections anymore,” Liska said. “For a while, some ransomware actors were like, ‘No, we won’t go after hospitals, or we won’t do this, or we won’t do that.’ Those protections all seem to have flown out the window, and they’ll go after anything and anyone that will make them money. It doesn’t matter how small they are or how big they are.”

Liska continued: “We’ve seen ransomware actors go after food banks. You’re not going to get a ransom from a food bank. Don’t do that.”

Tune in today to listen to the full conversation. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. 
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0...

35m
Dec 04, 2023
Defeating Little Brother requires a new outlook on privacy: Lock and Code S04E23

A worrying trend is cropping up amongst Americans, particularly within Generation Z—they're spying on each other more. Whether reading someone's DMs, rifling through a partner's text messages, or even rummaging through the bags and belongings of someone else, Americans are increasingly keeping tabs on one another, especially when they're in a relationship. According to recent research from Malwarebytes https://try.malwarebytes.com/everyone-is-afraid-of-the-internet-report-2023/?utm_source=podcast&utm_medium=social&utm_campaign=b2c_pro_acq_eaotir23_169584625403&utm_content=V1, a shocking 49% of Gen Zers agreed or strongly agreed with the statement: “Being able to track my spouse's/significant other's location when they are away is extremely important to me.”

On the Lock and Code podcast with host David Ruiz, we've repeatedly tackled the issue of surveillance, from the NSA's mass communications surveillance program exposed by Edward Snowden https://www.malwarebytes.com/blog/podcast/2023/07/of-sharks-surveillance-and-spied-on-emails, to the targeted use of Pegasus spyware against human rights dissidents and political activists https://www.malwarebytes.com/blog/podcast/2022/02/the-worlds-most-coveted-spyware-pegasus-lock-and-code-s03e04, to the purchase of privately-collected location data by state law enforcement agencies across the country https://www.malwarebytes.com/blog/podcast/2023/04/how-the-cops-buy-your-location-data-with-bennett-cyphers. But the type of surveillance we're talking about today is different. It isn't so much "Big Brother"—a concept introduced in the socio-dystopian novel 1984 by author George Orwell. It's "Little Brother."

As far back as 2010, in a piece titled “Little Brother is Watching,” author Walter Kirn wrote for the New York Times: “As the Internet proves every day, it isn’t some stern and monolithic Big Brother that we have to reckon with as we go about our daily lives, it’s a vast cohort of prankish Little Brothers equipped with devices that Orwell, writing 60 years ago, never dreamed of and who are loyal to no organized authority. The invasion of privacy — of others’ privacy but also our own, as we turn our lenses on ourselves in the quest for attention by any means — has been democratized.”

Little Brother is us, recording someone else on our phones and then posting it on social media. Little Brother is us, years ago, Facebook stalking someone because they’re a college crush. Little Brother is us, watching a Ring webcam of a delivery driver, including when they are mishandling a package but also when they are doing a stupid little dance that we requested so we could post it online and get little dopamine hits from the Likes. Little Brother is us, soothing our anxieties by watching the shiny blue GPS dots that represent our husbands and our wives, driving back from work. Little Brother isn't just surveillance. It is increasingly popular, normalized, and accessible surveillance. And it's creeping its way into more and more relationships every day. So, what can stop it?

Today, we speak with our guests, Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading, about the apparent "appeal" of Little Brother surveillance, whether the tenets of privacy can ever fully defeat that surveillance, and what the possible merits of this surveillance could be, including, as Stockley suggested, in revealing government abuses of power. "My question to you is, as with all forms of technology, there are two very different sides for this. So is...

45m
Nov 06, 2023
MGM attack is too late a wake-up call for businesses, says James Fair

In September, the Las Vegas casino and hotel operator MGM Resorts became a trending topic on social media... but for all the wrong reasons. A TikTok user posted a video taken from inside the casino floor of the MGM Grand—the company's flagship hotel complex near the southern end of the Las Vegas strip—that didn't involve the whirring of slot machines or the sirens and buzzers of sweepstake earnings, but, instead, row after row of digital gambling machines with blank, non-functional screens. That same TikTok user commented on their own post that it wasn't just errored-out gambling machines that were causing problems—hotel guests were also having trouble getting into their own rooms. As the user said online about their own experience: “Digital keys weren’t working. Had to get physical keys printed. They doubled booked our room so we walked in on someone.” The trouble didn't stop there. A separate photo shared online allegedly showed what looked like a Walkie-Talkie affixed to an elevator's handrail. Above the device was a piece of paper and a message written by hand: “For any elevator issues, please use the radio for support.”   As the public would soon learn, MGM Resorts was the victim of a cyberattack, reportedly carried out by a group of criminals called Scattered Spider, which used the ALPHV ransomware. It was one of the most publicly-exposed cyberattacks in recent history. But just a few days before the public saw the end result, the same cybercriminal group received a reported $15 million ransom payment from a separate victim situated just one and a half miles away. On September 14, Caesar’s Entertainment reported in a filing with the US Securities and Exchange Commission that it, too, had suffered a cyber breach, and according to reporting from CNBC, it received a $30 million ransom demand, which it then negotiated down by about 50 percent. The social media flurry, the TikTok videos, the comments and confusion from customers, the ghost-town casino floors captured in photographs—it all added up to something strange and new: Vegas was breached.  But how?  Though follow-on reporting suggests a particularly effective social engineering scam https://www.bloomberg.com/news/articles/2023-10-03/mgm-cyberattack-shows-how-hackers-deploy-social-engineering, the attacks themselves revealed a more troubling, potential vulnerability for businesses everywhere, which is that a company's budget—and its relative ability to devote resources to cybersecurity—doesn't necessarily insulate it from attacks.  Today on the Lock and Code podcast with host David Ruiz, we speak with James Fair, senior vice president of IT Services at the managed IT services company Executech, about whether businesses are taking cybersecurity seriously enough, which industries he's seen pushback from for initial cybersecurity recommendations (and why), and the frustration of seeing some companies only take cybersecurity seriously after a major attack.  * > "How many do we have to see? MGM got hit, you guys. Some of the biggest targets out there—people who have more cybersecurity budget than people can imagine—got hit. So, what are you waiting for?" Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you...

40m
Oct 23, 2023
AI sneak attacks, location spying, and definitely not malware, or, what one teenager fears online

What are you most worried about online? And what are you doing to stay safe? Depending on who you are, those could be very different answers, but for teenagers and members of Generation Z, the internet isn't so scary because of traditional threats like malware and viruses. Instead, the internet is scary because of what it can expose. To Gen Z, a feared internet is one that is vindictive and cruel—an internet that reveals private information that Gen Z fears could harm their relationships with family and friends, damage their reputations, and even lead to their being bullied and physically harmed.

Those are some of the findings from Malwarebytes' latest research into the cybersecurity and online privacy beliefs and behaviors of people across the United States and Canada this year. Titled "Everyone's afraid of the internet and no one's sure what to do about it https://try.malwarebytes.com/everyone-is-afraid-of-the-internet-report-2023/?utm_source=podcast&utm_medium=social&utm_campaign=b2c_pro_acq_eaotir23_169584625403&utm_content=V1," Malwarebytes' new report shows that 81 percent of Gen Z worries about having personal, private information exposed—like their sexual orientations, personal struggles, medical history, and relationship issues (compared to 75 percent of non-Gen Zers). And 61 percent of Gen Zers worry about having embarrassing or compromising photos or videos shared online (compared to 55% of non-Gen Zers). Not only that, 36 percent worry about being bullied because of that info being exposed, while 34 percent worry about being physically harmed. For those outside of Gen Z, those numbers are a lot lower—only 22 percent worry about bullying, and 27 percent worry about being physically harmed.

Does this mean Gen Z is uniquely careful to prevent just that type of information from being exposed online? Not exactly. They talk more frequently to strangers online, they more frequently share personal information on social media, and they share photos and videos on public forums more than anyone—all things that leave a trail of information that could be gathered against them.

Today, on the Lock and Code podcast with host David Ruiz, we drill down into what, specifically, a Bay Area teenager is afraid of when using the internet, and what she does to stay safe. Visiting the Lock and Code podcast for the second year in a row is Nitya Sharma, discussing AI "sneak attacks," political disinformation campaigns, the unannounced location tracking of Snapchat, and why she simply cannot be bothered about malware.

"I know that there's a threat of sharing information with bad people and then abusing it, but I just don't know what you would do with it. Show up to my house and try to kill me?" Sharma said.

Tune in today for the full conversation. YOU CAN READ OUR FULL REPORT HERE: "EVERYONE'S AFRAID OF THE INTERNET AND NO ONE'S SURE WHAT TO DO ABOUT IT. https://try.malwarebytes.com/everyone-is-afraid-of-the-internet-report-2023/?utm_source=podcast&utm_medium=social&utm_campaign=b2c_pro_acq_eaotir23_169584625403&utm_content=V1" You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at

47m
Oct 09, 2023
What does a car need to know about your sex life?

When you think of the modern tools that most invade your privacy, what do you picture? There are the obvious answers, like social media platforms including Facebook and Instagram. There are email and "everything" platforms like Google that can track your locations, your contacts, and, of course, your search history. There's even the modern web itself, rife with third-party cookies that track your browsing activity across websites so your information can be bundled together into an ad-friendly profile. But here's a surprise answer with just as much validity: cars.

A team of researchers at Mozilla, whose "Privacy Not Included" project has reviewed the privacy and data collection policies of various product categories for several years now, recently turned its attention to modern-day vehicles, and what it found shocked them. Cars are, to put it simply, a privacy nightmare https://foundation.mozilla.org/en/blog/privacy-nightmare-on-wheels-every-car-brand-reviewed-by-mozilla-including-ford-volkswagen-and-toyota-flunks-privacy-test/.

According to the team's research, Nissan says it can collect “sexual activity” information about consumers. Kia says it can collect information about a consumer's “sex life.” Subaru says passengers allegedly consent to the collection of their data by simply being in the vehicle. Volkswagen says it collects data like a person's age and gender and whether they're using their seatbelt, and it can use that information for targeted marketing purposes.

But those are just some of the highlights from the Privacy Not Included team. Explains Zoë MacDonald, content creator for the research team: "We were pretty surprised by the data points that the car companies say they can collect... including social security number, information about your religion, your marital status, genetic information, disability status... immigration status, race. And of course, as you said... one of the most surprising ones for a lot of people who read our research is the sexual activity data."

Today on the Lock and Code podcast with host David Ruiz, we speak with MacDonald and Jen Caltrider, Privacy Not Included team lead, about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter. We also explore the booming revenue stream that car manufacturers are tapping into by not only collecting people's data, but also packaging it together for targeted advertising. With so many data pipelines being threaded together, Caltrider says the auto manufacturers can even make "inferences" about you.

"What really creeps me out [is] they go on to say that they can take all the information they collect about you from the cars, the apps, the connected services, and everything they can gather about you from these third party sources," Caltrider said, "and they can combine it into these things they call 'inferences' about you about things like your intelligence, your abilities, your predispositions, your characteristics."

Caltrider continued: "And that's where it gets really creepy because I just imagine a car company knowing so much about me that they've determined how smart I am."

Tune in today. 

43m
Sep 25, 2023
Re-air: What teenagers face growing up online

In 2022, Malwarebytes investigated the blurry, shifting idea of “identity” on the internet https://www.malwarebytes.com/blog/news/2022/10/only-half-of-teens-agree-they-feel-supported-online-by-parents, and how online identities are not only shaped by the people behind them, but also inherited by the internet’s youngest users, children. Children have always inherited some of their identities from their parents—consider that two of the largest indicators for political and religious affiliation in the US are, no surprise, the political and religious affiliations of someone’s parents—but the transfer of online identity poses unique risks.

When parents create email accounts for their kids, do they also teach their children about strong passwords? When parents post photos of their children online, do they also teach their children about the safest ways to post photos of themselves and others? When parents create a Netflix viewing profile on a child's iPad, are they prepared for what else a child might see online? And are parents certain that a kid is ready to be online before they've been taught how to stay safe there?

Those types of questions drove a joint report that Malwarebytes published last year https://try.malwarebytes.com/forever-connected-report/, based on a survey of 2,000 people in North America. That research showed that, broadly, not enough children and teenagers trust their parents to support them online, and not enough parents know exactly how to give the support their children need. But stats and figures can only tell so much of the story, which is why last year, Lock and Code host David Ruiz spoke with a Bay Area high school graduate about her own thoughts on the difficulties of growing up online.

Lock and Code is re-airing that episode this week because, in less than one month, Malwarebytes is releasing a follow-on report about behaviors, beliefs, and blunders in online privacy and cybersecurity. And as part of that report, Lock and Code is bringing back the same guest as last year, Nitya Sharma. Before then, we are sharing with listeners our prior episode that aired in 2022 about the difficulties that an everyday teenager faces online, including managing her time online, trying to meet friends and complete homework, the tradeoffs between online interaction and in-person socializing, and what she would do differently with her children, if she ever started a family, in preparing them for the Internet.

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

36m
Sep 11, 2023
"An influx of Elons," a hospital visit, and magic men: Becky Holmes shares more romance scams

Becky Holmes is a big deal online. Hugh Jackman has invited her to dinner https://twitter.com/deathtospinach/status/1653418665167945728/photo/2. Prince William has told her she has "such a beautiful name." Once, Ricky Gervais simply demanded her photos ("I want you to take a snap of yourself and then send it to me on here...Send it to me on here!" he messaged on Twitter https://twitter.com/deathtospinach/status/1685637297251708928/photo/4), and even Tom Cruise slipped into her DMs (though he was a tad boring, twice asking about her health and more often showing a core misunderstanding of grammar). Becky has played it cool, mostly, but there's no denying the "One That Got Away"—Official Keanu Reeves. After repeatedly speaking to Becky online, convincing her to download the Cash app, and even promising to send her $20,000 (which Becky said she could use for a new tea towel), Official Keanu Reeves had a change of heart earlier this year: "I hate you," he said. "We are not in any damn relationship."

Official Keanu Reeves, of course, is not Keanu Reeves. And hughjackman373—as he labeled himself on Twitter—is not really Hugh Jackman. Neither is "Prince William," or "Ricky Gervais," or "Tom Cruise." All of these "celebrities" online are fake, and that isn't commentary on celebrity culture. It's simply a fact, because all of the personas online who have reached out to Becky Holmes are romance scammers.

Romance scams are serious crimes that follow similar plots. Online, an attractive stranger or celebrity—coupled with an appealing profile picture—will send a message to a complete stranger, often on Twitter, Instagram, Facebook, or LinkedIn. They will flood the stranger with affectionate messages and promises of a perfect life together, sometimes building trust and emotional connection for weeks or even months. As time continues, they will also try to move the conversation away from the social media platform where it started, taking it instead to WhatsApp, Telegram, Messages, or simple text. Here, the scam has already started. Away from the major social media and networking platforms, the scammer's persistent messages cannot be flagged for abuse or harassment, and the scammer is free to press on. Once an emotional connection is built, the scammer will suddenly be in trouble, and the best way out is money—the victim’s money. These crimes target vulnerable people, like recently divorced individuals, widows, and the elderly.

But when these same scammers reach out to Becky Holmes, Becky Holmes turns the tables. Becky once tricked a scammer into thinking she was visiting him in the far-off Antarctic. She has led one to believe that she had accidentally murdered someone and needed help hiding the body. She has given fake, lewd addresses, wasted their time, and even shut them down when she can by coordinating with local law enforcement.

And today on the Lock and Code podcast with host David Ruiz, Becky Holmes returns to talk about romance scammer "education" and their potential involvement in pyramid schemes, a disappointing lack of government response to protect victims, and the threat of Twitter removing its block function, along with some of the most recent romance scams that Becky has encountered online.

“There’s suddenly been this kind of influx of Elons. Absolutely tons of those have come about… I think I get probably at least one, maybe two a day,” Holmes said.

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, 

51m
Aug 28, 2023
A new type of "freedom," or, tracking children with AirTags, with Heather Kelly

"Freedom" is a big word, and for many parents today, it's a word that includes location tracking.  Across America, parents are snapping up Apple AirTags, the inexpensive location tracking devices that can help owners find lost luggage, misplaced keys, and—increasingly so—roving toddlers setting out on mini-adventures.  The parental fear right now, according to The Washington Post technology reporter Heather Kelly, is that "anybody who can walk, therefore can walk away."  Parents wanting to know what their children are up to is nothing new. Before the advent of the Internet—and before the creation of search history—parents read through diaries. Before GPS location tracking, parents called the houses that their children were allegedly staying at. And before nearly every child had a smart phone that they could receive calls on, parents relied on a much simpler set of tools for coordination: Going to the mall, giving them a watch, and saying "Be at the food court at noon."  But, as so much parental monitoring has moved to the digital sphere, there's a new problem: Children become physically mobile far faster than they become responsible enough to . Enter the AirTag: a small, convenient device for parents to affix to toddlers' wrists, place into their backpacks, even sew into their clothes, as Kelly reported in her piece for The Washington Post https://www.washingtonpost.com/technology/2023/07/26/tracking-kids-airtags/.  In speaking with parents, families, and childcare experts, Kelly also uncovered an interesting dynamic. Parents, she reported, have started relying on Apple AirTags as a means to provide , not restrictions, to their children.  Today, on the Lock and Code podcast with host David Ruiz, we speak with Kelly about why parents are using AirTags, how childcare experts are reacting to the recent trend, and whether the devices can actually provide a balm to increasingly stressed parents who may need a moment to sit back and relax. Or, as Kelly said: * > "In the end, parents need to chill—and if this lets them chill, and if it doesn't impact the kids too much, and it lets them go do silly things like jumping in some puddles with their friends or light, really inconsequential shoplifting, good for them." Tune in today.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

37m
Aug 13, 2023
How Apple fixed what Microsoft hasn't, with Thomas Reed

Earlier this month, a group of hackers was spotted using a set of malicious tools—which originally gained popularity with online video game cheaters—to hide their Windows-based malware from being detected. Sounds unique, right? Frustratingly, it isn't, as the specific security loophole that was abused by the hackers has been around for years, and Microsoft's response, or lack thereof, is actually a telling illustration of the competing security environments within Windows and macOS. Even more perplexing is the fact that Apple dealt with a similar issue nearly 10 years ago, locking down the way that certain external tools are given permission to run alongside the operating system's critical, core internals.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes' own Director of Core Tech Thomas Reed about everyone's favorite topic: Windows vs. Mac. This isn't a conversation about the original iPod vs. Microsoft's Zune (we're sure you can find countless 4-hour diatribes on YouTube for that), but instead about how the companies behind these operating systems can respond to security issues in their own products. Because it isn't fair to say that Apple or Microsoft is wholesale "better" or "worse" about security. Instead, they're hampered by their users and their core market segments—Apple excels in the consumer market, whereas Microsoft excels with enterprises. And when your customers include hospitals, government agencies, and pretty much any business over a certain headcount, well, it comes with complications in deciding how to address security problems without leaving those same customers behind.

Still, there's little excuse in leaving open the type of loophole that Windows has, said Reed: "Apple has done something that was pretty inconvenient for developers, but it really secured their customers because it basically meant we saw a complete stop in all kernel-level malware. It just shows you [that] it can be done. You're gonna break some eggs in the process, and Microsoft has not done that yet... They're gonna have to."

Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

40m
Jul 31, 2023
Spy vs. spy: Exploring the LetMeSpy hack, with maia arson crimew

The language of a data breach, no matter what company gets hit, is largely the same. There's the stolen data—be it email addresses, credit card numbers, or even medical records. There are the users—unsuspecting, everyday people who, through no fault of their own, mistakenly put their trust into a company, platform, or service to keep their information safe. And there are, of course, the criminals. Some operate in groups. Some act alone. Some steal data as a means of extortion. Others steal it as a point of pride. All of them, it appears, take something that isn't theirs.  But what happens if a cybercriminal takes something that may have already been stolen?  In late June, a mobile app that can, without consent, pry into text messages, monitor call logs, and track GPS location history, warned its users that its services had been hacked. Email addresses, telephone numbers, and the content of messages were swiped, but how they were originally collected requires scrutiny. That's because the app itself, called LetMeSpy, is advertised as a parental and employer monitoring app, to be installed on the devices of the people that LetMeSpy users want to track.  Want to read your child's text messages? LetMeSpy says it can help. Want to see where they are? LetMeSpy says it can do that, too. What about employers who are interested in the vague idea of "control and safety" of their business? Look no further than LetMeSpy, of course.   While LetMeSpy's website tells users that "phone control without your knowledge and consent may be illegal in your country" (it is illegal in the US and many, many other countries), the app also claims that it can hide itself from view of the person being tracked. And that feature, in particular, is one of the more tell-tale signs of "stalkerware."  Stalkerware is a term used by the cybersecurity industry to describe mobile apps, primarily on Android, that can access a device's text messages, photos, videos, call records, and GPS locations without the device owner knowing about said surveillance. These types of apps can also automatically record every phone call made and received by a device, turn off a device's WiFi, and take control of the device's camera and microphone to snap photos or record audio—all without the victim knowing that their phone has been compromised.  Stalkerware poses a serious threat—particularly to survivors of domestic abuse—and Malwarebytes has defended users against these types of apps for years. But the hacking of an app with similar functionality raises questions.  Today, on the Lock and Code podcast with host David Ruiz, we speak with the hacktivist and security blogger maia arson crimew about the data that was revealed in LetMeSpy's hack, the almost-clumsy efforts by developers to make and market these apps online, and whether this hack—and others in the past—are "good."  * > "I'm the person on the podcast who can say 'We should hack things,' because I don't work for Malwarebytes. But the thing is, I don't think there really is any other way to get info in this industry." Tune in today.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. 
Show notes and...

38m
Jul 17, 2023
Of sharks, surveillance, and spied-on emails: This is Section 702, with Matthew Guariglia

In the United States, when the police want to conduct a search on a suspected criminal, they must first obtain a search warrant. It is one of the foundational rights given to US persons under the Constitution, and a concept that has helped create the very idea of a right to privacy at home and online.  But sometimes, individualized warrants are never issued, never asked for, never really needed, depending on which government agency is conducting the surveillance, and for what reason. Every year, countless emails, social media DMs, and likely mobile messages are swept up by the US National Security Agency—even if those communications involve a US person—without any significant warrant requirement. Those digital communications can be searched by the FBI. The information the FBI gleans from those searches can be used to prosecute Americans for crimes. And when the NSA or FBI make mistakes—which they do—there is little oversight.  This is surveillance under a law and authority called Section 702 of the FISA Amendments Act.  The law and the regime it has enabled are opaque. There are definitions for "collection" of digital communications, for "queries" and "batch queries," rules for which government agency can ask for what type of intelligence, references to types of searches that were allegedly ended several years ago, "programs" that determine how the NSA grabs digital communications—by requesting them from companies or by directly tapping into the very cables that carry the Internet across the globe—and an entire, secret court that has only rarely released its opinions to the public.  Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Senior Policy Analyst Matthew Guariglia about what the NSA can grab online, whether its agents can read that information and who they can share it with, and how a database that was ostensibly created to monitor foreign intelligence operations became a tool for investigating Americans at home.  As Guariglia explains: * > "In the United States, if you collect any amount of data, eventually law enforcement will come for it, and this includes data that is collected by intelligence communities." Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

43m
Jul 03, 2023
Why businesses need a disinformation defense plan, with Lisa Kaplan: Lock and Code S04E13

When you think about the word "cyberthreat," what first comes to mind? Is it ransomware? Is it spyware? Maybe it's any collection of the infamous viruses, worms, Trojans, and botnets that have crippled countless companies throughout modern history.  In the future, though, what many businesses might first think of is something new: Disinformation.  Back in 2021, in speaking about threats to businesses, the former director of the US Cybersecurity and Infrastructure Security Agency, Chris Krebs, told news outlet Axios: “You’ve either been the target of a disinformation attack or you are about to be.” That same year, the consulting and professional services firm PricewaterhouseCoopers released a report on disinformation attacks against companies and organizations, and it found that these types of attacks were far more common than most of the public realized. From the report:  “In one notable instance of disinformation, a forged US Department of Defense memo stated that a semiconductor giant’s planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. In other incidents, widely publicized unfounded attacks on a businessman caused him to lose a bidding war, a false news story reported that a bottled water company’s products had been contaminated, and a foreign state’s TV network falsely linked 5G to adverse health effects in America, giving the adversary’s companies more time to develop their own 5G network to compete with US businesses.” Disinformation is here, and as much of it happens online—through coordinated social media posts and fast-made websites—it can truly be considered a "cyberthreat."  But what does that mean for businesses?  Today, on the Lock and Code podcast with host David Ruiz, we speak with Lisa Kaplan, founder and CEO of Alethea, about how organizations can prepare for a disinformation attack, and what they should be thinking about in the intersection between disinformation, malware, and cybersecurity. Kaplan said: * > "When you think about disinformation in its purest form, what we're really talking about is people telling lies and hiding who they are in order to achieve objectives and doing so in a deliberate and malicious way. I think that this is more insidious than malware. I think it's more pervasive than traditional cyber attacks, but I don't think that you can separate disinformation from cybersecurity." Tune in today.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

42m
Jun 19, 2023
Trusting AI not to lie: The cost of truth

In May, a lawyer who was defending their client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing before a court in Manhattan, New York, that listed several previous cases as support for their main argument to continue the lawsuit. But when the court reviewed the lawyer's citations, it found something curious: Several were entirely fabricated https://mashable.com/article/chatgpt-lawyer-made-up-cases.  The lawyer in question had gotten the help of another attorney who, in scrounging around for legal precedent to cite, utilized the "services" of ChatGPT.  ChatGPT was wrong. So why do so many people believe it's always right?  Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction amongst companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in.  The future could look eerily similar to an earlier change in translation services, said Stockley, who witnessed the rapid displacement of human workers in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse.  * > "That is an example of exactly this technology coming in and being treated as the arbiter of truth in the sense that there is a cost to how much truth we want." Tune in today.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

43m
Jun 05, 2023
Identity crisis: How an anti-porn crusade could jam the Internet, featuring Alec Muffett

On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States.  The changes are, ostensibly, over pornography.  In Louisiana, today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed https://www.theverge.com/23708180/united-kingdom-online-safety-bill-explainer-legal-pornography-age-checks that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID https://www.theguardian.com/australia-news/2023/may/16/age-verification-for-adult-websites-may-involve-australian-government-digital-id.  But the large problem with all these proposals is not that they would make a new Internet only for children, but that they would make a new Internet for everyone. Look no further than Utah.  On May 1, after new rules came into effect to make porn sites verify the ages of their users, the site PornHub decided to refuse to comply with the law and, instead, to block access to the site for anyone visiting from an IP address based in Utah. If you’re in Utah, right now, and connecting to the Internet with an IP address located in Utah, you cannot access PornHub. Instead, you’re presented with a message from adult film star Cheri Deville who explains that: “As you may know, your elected officials have required us to verify your age before granting you access to our website. While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.” Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he's seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology.  "The battle cry of these people has always been—either directly or mocked as being—'Could somebody think of the children?' And I'm thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity." Muffett continued: * > "I'm trying to protect that for her. I'd like to see more people grasping for that." Tune in today. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Additional Resources...

47m
May 22, 2023
The rise of "Franken-ransomware," with Allan Liska

Ransomware is becoming bespoke, and that could mean trouble for businesses and law enforcement investigators.  It wasn't always like this.  For a few years now, ransomware operators have congregated around a relatively new model of crime called "Ransomware-as-a-Service." In the Ransomware-as-a-Service model, or RaaS model, ransomware itself is not delivered to victims by the same criminals that make the ransomware. Instead, it is used almost "on loan" by criminal groups called "affiliates" who carry out attacks with the ransomware and, if successful, pay a share of their ill-gotten gains back to the ransomware’s creators. This model allows ransomware developers to significantly increase their reach and their illegal hauls. By essentially leasing out their malicious code to smaller groups of cybercriminals around the world, the ransomware developers can carry out more attacks, steal more money from victims, and avoid any isolated law enforcement action that would put their business in the ground, as the arrest of one affiliate group won't stop the work of dozens of others.  And not only do ransomware developers lean on other cybercriminals to carry out attacks, they also rely on an entire network of criminals to carry out smaller, specialized tasks. There are "Initial Access Brokers" who break into company networks and then sell that illegal method of access online. "You also have coders that you can contract out to," said this episode's guest, Allan Liska. "You have pen testers that you can contract out to. You can contract negotiators if you want. You can contract translators if you want." But as Liska explained, as the ransomware "business" spreads out, so do new weak points: disgruntled criminals.  "This whole underground marketplace that exists to serve ransomware means that your small group can do a lot," Liska said. "But that also means that you are entrusting the keys to your kingdom to these random contractors that you're paying in Bitcoin every now and then. And that, for example, is why the LockBit code got leaked—dude didn't pay his contractor." With plenty of leaked code now circulating online, some smaller cybercriminal gangs have taken to making minor alterations and then sending that new variant of ransomware out into the world—no affiliate model needed.  * > "Most of what we see is just repurposed code and we see a lot of what I call 'Franken-ransomware.'"  Today, on the Lock and Code podcast with host David Ruiz, Liska explains why Franken-ransomware poses unique challenges to future victims, cybersecurity companies, and law enforcement investigators.  Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

51m
May 08, 2023
Removing the human: When should AI be used in emotional crisis?

In January, a mental health nonprofit admitted that it had used Artificial Intelligence to help talk to people in distress.  Prompted first by a user's longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply, which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said: “I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone. There are people here who care about you and want to help you. I’m proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey.” This was experimental work from Koko, a mental health nonprofit that briefly integrated the GPT-3 large language model into its product. In a video demonstration https://www.loom.com/share/d9b5a26c644640ba95bb413147e41766 posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide "mental health support to about 4,000 people" across "about 30,000 messages." Though Koko pulled GPT-3 from its system after a reportedly short period of time, Morris said on Twitter that there are several questions left from the experience.  "The implications here are poorly understood," Morris said. "Would people eventually seek emotional support from machines, rather than friends and family?" Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications raise several concerns.  * > "It disturbed me to see AI using 'I care about you,' or 'I'm concerned,' or 'I'm proud of you.' That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it's partially because I think that they're going to lose power as a form of connecting to another human." But, importantly, Brown is not the only voice in today's podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share.  Tune in today as Ruiz and Brown explore the boundaries for deploying AI on people suffering from emotional distress, whether the "support" offered by any AI will be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human experiences.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0

41m
Apr 24, 2023
How the cops buy a "God view" of your location data, with Bennett Cyphers

The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects. Advertisers, obviously, want to send targeted ads to you and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't, or someone who's visited a high-end department store, or someone who, say, vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes.  But perhaps unexpected on this list are the police. According to a recent investigation https://www.eff.org/deeplinks/2022/08/inside-fog-data-science-secretive-company-selling-mass-surveillance-local-police from Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful.  * > "What [Fog Data Science] sells is, I would say, like a God view mode for the world... It's a map and you draw a shape on the map and it will show you every device that was in that area during a specified timeframe." Today, on the Lock and Code podcast with host David Ruiz, we speak to Cyphers about how he and his organization uncovered a massive location data broker that seemingly works only with local law enforcement, how that data broker collected Americans' data in the first place, where this data comes from, and why it is so easy to sell.  Tune in now.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
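To make that "God view" description concrete, below is a minimal, hypothetical sketch of the kind of geofence query Cyphers describes: filter a pile of advertising-ID location pings down to the devices seen inside a drawn box during a chosen time window. The data layout and field names are illustrative assumptions only; they are not Fog Data Science's actual schema, API, or code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ping:
    device_id: str   # advertising identifier tied to a phone (assumed format)
    lat: float
    lon: float
    seen_at: datetime

def devices_in_area(pings, south, north, west, east, start, end):
    """Return the set of device IDs seen inside a bounding box during a
    time window -- the essence of a geofence query over location pings."""
    return {
        p.device_id
        for p in pings
        if south <= p.lat <= north
        and west <= p.lon <= east
        and start <= p.seen_at <= end
    }

# Example: which devices were near a hypothetical location one afternoon?
pings = [
    Ping("ad-id-1", 30.267, -97.743, datetime(2022, 8, 1, 14, 5)),
    Ping("ad-id-2", 30.500, -97.900, datetime(2022, 8, 1, 14, 30)),
]
print(devices_in_area(pings, 30.26, 30.27, -97.75, -97.74,
                      datetime(2022, 8, 1, 13, 0),
                      datetime(2022, 8, 1, 17, 0)))
# {'ad-id-1'}
```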

46m
Apr 10, 2023
Solving the password’s hardest problem with passkeys, featuring Anna Pobletts

How many passwords do you have? If you're at all like our Lock and Code host David Ruiz, that number hovers around 200. But the important follow-up question is: How many of those passwords can you actually remember on your own? Prior studies https://www.malwarebytes.com/blog/news/2022/10/why-almost-everything-we-told-you-about-passwords-was-wrong suggest a number that sounds nearly embarrassing—probably around six.  After decades of requiring passwords, it turns out that the password has problems, the biggest of which is that when users are forced to create a password for every online account, they resort to creating easy-to-remember passwords that are built around their pets' names, their addresses, even the word "password." Those same users then re-use those weak passwords across multiple accounts, opening them up to easy online attacks that rely on entering the compromised credentials from one online account to crack into an entirely separate online account.  As if that weren't dangerous enough, passwords themselves are vulnerable to phishing attacks, where hackers can fraudulently pose as businesses that ask users to enter their login information on a website that looks legitimate, but isn't.  Thankfully, the cybersecurity industry has built a few safeguards around password use, such as multifactor authentication, which requires a second form of approval from a user beyond just entering their username and password. But, according to 1Password Head of Passwordless Anna Pobletts, many attempts around improving and replacing passwords have put extra work into the hands of users themselves: * > "There's been so many different attempts in the last 10, 20 years to replace passwords or improve passwords and the security around. But all of these attempts have been at the expense of the user." For Pobletts, who is our latest guest on the Lock and Code podcast, there is a better option now available that does not trade security for ease-of-use. Instead, it ensures that the secure option for users is also the easy option. That latest option is the use of "passkeys."  Resistant to phishing attacks, secured behind biometrics, and free from any requirement by users to create new ones on their own, passkeys could dramatically change our security for the better.  Today, we speak with Pobletts about whether we'll ever truly live in a passwordless future, along with what passkeys are, how they work, and which industries could see huge benefits from implementing them. Tune in now.  Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
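For readers who want a feel for why passkeys resist phishing and credential re-use, here is a minimal sketch of the public-key challenge-response idea that underlies them, written with the third-party Python cryptography package. It is a deliberate simplification and an assumption-laden stand-in, not the real thing: actual passkeys use the WebAuthn/FIDO2 protocol, bind signatures to the website's origin, and keep the private key locked behind a device's biometrics or PIN.

```python
# pip install cryptography
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a key pair, and only the PUBLIC key
# ever reaches the server -- there is no shared secret to phish or reuse.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a one-time random challenge...
challenge = os.urandom(32)

# ...and the device proves possession of the private key by signing it
# (on a real device this step is unlocked with a fingerprint or face scan).
signature = device_private_key.sign(challenge)

# The server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

Because nothing reusable ever leaves the device, a fake login page has nothing worth stealing, which is the property the episode highlights.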

38m
Mar 27, 2023
"Brad Pitt," a still body, ketchup, and a knife, or the best trick ever played on a romance scammer, with Becky Holmes

Becky Holmes knows how to throw a romance scammer off script—simply bring up cannibalism.  In January, Holmes shared on Twitter that an account with the name "Thomas Smith" had started up a random chat with her that sounded an awful lot like the beginning stages of a romance scam. But rather than instantly ignoring and blocking the advances—as Holmes recommends everyone do in these types of situations—she first had a little fun.  "I was hoping that you'd let me eat a small part of you when we meet," Holmes said https://twitter.com/deathtospinach/status/1616456307833192448/photo/2. "No major organs or anything obviously. I'm not weird lol."  By just a few messages later, "Thomas Smith" had run off, refusing to respond to Holmes' follow-up requests about what body part she fancied, along with her preferred seasoning (paprika).  Romance scams are a serious topic. In 2022, the US Federal Trade Commission reported https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2022/02/reports-romance-scams-hit-record-highs-2021 that, in the five years prior, victims of romance scams had reported losing a collective $1.3 billion. In just 2021, that number was $547 million, and the average amount of money reported stolen per person was $2,400. Worse, romance scammers themselves often target vulnerable people, including seniors, widows, and the recently divorced, and they show no remorse when developing long-lasting online relationships, all built on lies, so that they can emotionally manipulate their victims into handing over hundreds or thousands of dollars.  But what would you do if a romance scammer had contacted you and you, like our guest on today's Lock and Code podcast with host David Ruiz, had simply had enough? If you were Becky Holmes, you'd push back.  For a couple of years now, Holmes has teased, mocked, strung along, and shut down online romance scammers, much of her work in public view as she shares some of her more exciting stories on Twitter https://twitter.com/deathtospinach. There's the romance scammer who she scared by not only accepting an invitation to meet, but ratcheting up the pressure by pretending to pack her bags, buy a ticket to Stockholm, and research venues for a perhaps too-soon wedding. There's the scammer she scared off by asking to eat part of his body. And, there's the story of the fake Brad Pitt: * > "My favorite story is Brad Pitt and the dead tumble dryer repairman. And I honestly have to say, I don't think I'm ever going to top that. Every time ...I put a new tweet up, I think, oh, if only it was Brad Pitt and the dead body. I'm just never gonna get better." Tune in today to hear about Holmes' best stories, her first ever effort to push back, her insight into why she does what she does, and what you can do to spot a romance scam—and how to safely respond to one.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog https://www.malwarebytes.com/blog. And you can read our most recent report, the 2023...

48m
Mar 13, 2023
Fighting censorship online, or, encryption’s latest surprise use-case, with Mallory Knodel

Government threats to end-to-end encryption—the technology that secures your messages and shared photos and videos—have been around for decades, but the most recent threats to this technology are unique in how they intersect with a broader, sometimes-global effort to control information on the Internet. Take two efforts in the European Union and the United Kingdom. New proposals there would require companies to scan any content that their users share with one another for Child Sexual Abuse Material, or CSAM. If a company offers end-to-end encryption to its users, effectively locking the company itself out of being able to access the content that its users share, then it's tough luck for that company. They will still be required to find a way to essentially do the impossible—build a system that keeps everyone else out, while letting themselves and the government in.  While these government proposals may sound similar to previous global efforts to weaken end-to-end encryption, like the United States' prolonged attempt to tarnish end-to-end encryption by linking it to terrorist plots, they differ because of how easily they could become tools for censorship.  Today, on the Lock and Code podcast with host David Ruiz, we speak with Mallory Knodel, chief technology officer for Center for Democracy and Technology, about new threats to encryption, old and repeatedly revived bad proposals, who encryption benefits (everyone), and how building a tool to detect one legitimate harm could, in turn, create a tool to detect all sorts of legal content that other governments simply do not like.  "In many places of the world where there's not such a strong feeling about individual and personal privacy, sometimes that is replaced by an inability to access mainstream media, news, accurate information, and so on, because there's a heavy censorship regime in place," Knodel said.  "And I think that drawing that line between 'You're going to censor child sexual abuse material, which is illegal and disgusting and we want it to go away,' but it's so very easy to slide that knob over into 'Now you're also gonna block disinformation,' and you might, at some point, take it a step further and block other kinds of content, too, and you just continue down that path." Knodel continued: "Then you do have a pretty easy way of mass-censoring certain kinds of content from the Internet that probably shouldn't be censored." Tune in today.  You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
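As background for that conversation, the sketch below illustrates, under stated assumptions, what the simplest form of the proposed scanning amounts to: matching content against a list of known hashes. Because an end-to-end encrypted service never sees plaintext, a check like this would have to run on the user's own device before encryption, and whoever controls the list controls what gets flagged, which is the slippery slope Knodel describes. The list contents and function name here are invented purely for illustration.

```python
import hashlib

# Hypothetical list of SHA-256 hashes of known prohibited files.
# Whoever controls this list decides what gets flagged.
BLOCKLIST = {
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

def flag_before_encrypting(attachment: bytes) -> bool:
    """Client-side check: hash the plaintext and compare it against the
    list before the message is encrypted and sent."""
    digest = hashlib.sha256(attachment).hexdigest()
    return digest in BLOCKLIST

print(flag_before_encrypting(b"password"))  # True: its SHA-256 is on the list
print(flag_before_encrypting(b"hello"))     # False
```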

59m
Feb 27, 2023
What is AI ”good” at (and what the heck is it, actually), with Josh Saxe

In November of last year, the AI research and development lab OpenAI revealed its latest, most advanced language project: A tool called ChatGPT. ChatGPT is so much more than "just" a chatbot. As users have shown with repeated testing and prodding, ChatGPT seems to "understand" things.  It can give you recipes that account for whatever dietary restrictions you have. It can deliver basic essays about moments in history. It can be—and has been—used to cheat by university students who are giving a new meaning to plagiarism, stealing work that is not theirs. It can write song lyrics about X topic as though composed by Y artist. It can even have fun with language.  For example, when ChatGPT was asked to “Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR,” ChatGPT responded in part: “And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying ‘Oh Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’” Is this fun? Yes. Is it interesting? Absolutely. But what we're primarily interested in for today's episode of Lock and Code, with host David Ruiz, is where artificial intelligence and machine learning—ChatGPT included—can be applied to cybersecurity, because as some users have already discovered, ChatGPT can be used with some success to analyze lines of code for flaws. It is a capability that has likely further energized the multibillion-dollar endeavor to apply AI to cybersecurity. Today, on Lock and Code, we speak to Joshua Saxe about what machine learning is "good" at, what problems it can make worse, whether we have defenses to those problems, and what place machine learning and artificial intelligence have in the future of cybersecurity. According to Saxe, there are some areas where, under certain conditions, machine learning will never be able to compete. "If you're, say, gonna deploy a set of security products on a new computer network that's never used your security products before, and you want to detect, for example, insider threats—like insiders moving files around in ways that look suspicious—if you don't have any known examples of people at the company doing that, and also examples of people not doing that, and if you don't have thousands of known examples of people at the company doing that, that are current and likely to reoccur in the future, machine learning is just never going to compete with just manually writing down some heuristics around what we think bad looks like." Saxe continued:  "Because basically in this case, the machine learning is competing with the common sense model of the world and expert knowledge of a security analyst, and there's no way machine learning is gonna compete with the human brain in this context." Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)
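Saxe's point about hand-written heuristics beating machine learning when labeled examples are scarce can be made concrete with a toy rule. The sketch below is purely illustrative and assumes an invented event format; it simply flags users who move an unusual number of files outside business hours, the kind of "what we think bad looks like" rule an analyst can write with no training data at all.

```python
from collections import Counter

def flag_suspicious_movers(events, max_after_hours_moves=50):
    """events: iterable of dicts like
    {"user": "alice", "action": "file_move", "hour": 23}
    Flag users whose after-hours file moves exceed a threshold --
    a hand-written heuristic that needs no labeled training data."""
    moves = Counter(
        e["user"]
        for e in events
        if e["action"] == "file_move" and (e["hour"] >= 20 or e["hour"] < 6)
    )
    return [user for user, count in moves.items() if count > max_after_hours_moves]

# A rule like this is the baseline any machine learning model for
# insider threat detection would first have to beat.
events = [{"user": "mallory", "action": "file_move", "hour": 2}] * 120
print(flag_suspicious_movers(events))   # ['mallory']
```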

44m
Feb 13, 2023
A private moment, caught by a Roomba, ended up on Facebook. Eileen Guo explains how

In 2020, a photo of a woman sitting on a toilet—her shorts pulled half-way down her thighs—was shared on Facebook, and it was shared by someone whose job it was to look at that photo and, by labeling the objects in it, help train an artificial intelligence system for a vacuum. Bizarre? Yes. Unique? No.  In December, MIT Technology Review investigated the data collection and sharing practices of the company iRobot https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/, the developer of the popular Roomba robot vacuums. In their reporting, MIT Technology Review discovered a series of 15 images that were all captured by development versions of Roomba vacuums. Those images were eventually shared with third-party contractors in Venezuela who were tasked with the responsibility of "annotation"—the act of labeling photos with identifying information. This work of, say, tagging a cabinet as a cabinet, or a TV as a TV, or a shelf as a shelf, would help the robot vacuums "learn" about their surroundings when inside people's homes.  In response to MIT Technology Review's reporting, iRobot stressed that none of the images found by the outlet came from customers. Instead, the images were "from iRobot development robots used by paid data collectors and employees in 2020." That meant that the images were from people who agreed to be part of a testing or "beta" program for non-public versions of the Roomba vacuums, and that everyone who participated had signed an agreement as to how iRobot would use their data. According to the company's CEO in a post on LinkedIn https://www.linkedin.com/pulse/building-smart-robots-requires-responsible-colin-angle/: "Participants are informed and acknowledge how the data will be collected." But after MIT Technology Review published its investigation, people who'd previously participated in iRobot's testing environments reached out. According to several of them, they felt misled https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/.  Today, on the Lock and Code podcast with host David Ruiz, we speak with the investigative reporter behind the piece, Eileen Guo, about how all of this happened, and about how, she said, this story illuminates a broader problem in data privacy today. "What this story is ultimately about is that conversations about privacy, protection, and what that actually means, are so lopsided because we just don't know what it is that we're consenting to." Tune in today. You can also find us on Apple Podcasts https://podcasts.apple.com/us/podcast/lock-and-code/id1500049667, Spotify https://open.spotify.com/show/3VB1MCXNk76TSddNNZcDuo?si=b454MPzCTYWvvS5bOPdxcA, and Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2xvY2thbmRjb2RlL2ZlZWQueG1s, plus whatever preferred podcast platform you use. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com http://incompetech.com/) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com)

46m
Jan 30, 2023