Beware the legal risks of AI meeting agents

AI meeting agents are everywhere. They join Zoom calls, transcribe conversations, summarize action items, and promise to save employees hours of note-taking. From a business perspective, the upside is obvious: better documentation, fewer "I don't remember saying that" disputes, and cleaner follow-up.

But like most shiny tech, AI meeting agents come with real employment law and litigation risk—especially if you don't think through how (and when) you use them.

Start with wiretapping laws. Federal law requires the consent of only one party to record a conversation, but many states demand more: CA, CT, FL, IL, MD, MA, MI, MT, NV, NH, PA, and WA require all parties to consent to a recording. An AI agent that silently records or transcribes a meeting can easily violate those statutes. That's not just a technical foul. It can mean statutory damages, attorneys' fees, and—yes—class actions.

Then there's employee relations. If employees learn that every meeting might be recorded, transcribed, and stored indefinitely, candor dies fast. Performance conversations become stilted. Complaints may never get voiced. And if the AI summary is wrong, unreviewed, and uncorrected, congratulations—you've just created inaccurate evidence that will be blown up on a screen in front of a jury.

The real nightmare scenario, though, is forgetting to turn it off.

The meeting ends. People start chatting. Someone vents. Someone jokes. Someone says something they absolutely would not want memorialized in writing. The AI agent is still listening. Now you've captured informal comments that were never meant to be "on the record," and you've handed someone a litigation bombshell.

Bottom line: AI meeting agents can be useful—but only with clear policies, upfront consent, disciplined controls, and training that treats them like recording devices, not harmless assistants. Because when they go wrong, they don't just go wrong. They go off the rails.

Workplace investigations are hard. Until they’re not.

Workplace investigations are hard.

Witnesses forget. Memories conflict. Motives get murky. HR is left piecing together timelines, credibility, and intent from incomplete information, while everyone involved insists they did nothing wrong.

And then there are the easy ones.

Take the paramedic who now faces nearly two dozen criminal charges for allegedly urinating all over his workplace — on a supervisor's keyboard, into communal coffee creamer, an ice machine, orange juice, hand soap, ChapStick, canned vegetables, an air-conditioner vent, even a pot of chili. According to prosecutors, he didn't just do it. He filmed himself doing it. In uniform. Then allegedly posted the videos online to sell.

From an employment-law perspective, this is the unicorn investigation. The accused employee didn't just confess. He created high-definition evidence, complete with a thumbs-up to the camera.

No "he said/she said."
No credibility assessments.
No close calls about intent.

When an employee documents his own misconduct, the investigation largely writes itself. The employer's job shifts from "what happened?" to "how fast can we act, and how do we protect everyone else?"

That doesn't mean these cases are simple emotionally or operationally. Contaminated food, violated trust, horrified coworkers, reputational damage — those are real issues employers must address carefully. But the core investigative challenge? Solved by the employee himself.

The lesson isn't that investigations are usually this easy. They're not. The lesson is that employers need to be prepared for the extreme, the bizarre, and the unthinkable — and to respond decisively when clear evidence drops in their lap. At that point, HR's hardest job may be keeping a straight face while drafting the termination paperwork.

Federal court provides road map for lawful DEI programs

I keep getting asked how employers can legally maintain DEI programs in today's political climate. A federal judge just answered that question in a lawsuit the Missouri Attorney General brought against Starbucks—and in dismissing it, handed corporate America a roadmap.

The AG argued Starbucks' DEI policies were illegal because they "favored" BIPOC, women, and LGBTQ+ employees through mentorship, affinity groups, partnerships, and "quotas" tied to executive pay.

The court held that allegations without facts are just theories—and theories don't establish jurisdiction or liability.

The AG argued Missouri could sue to protect its citizens from discrimination.

The court held Missouri failed to identify even one injured Missourian. Without a concrete, particularized injury, there's no standing to sue—and without harm to a quasi-state interest, the state can't litigate private claims. The court also held that there was no reason individuals couldn't sue if discrimination actually occurred.

The AG argued maintaining "quotas" plus incentive compensation tied to meeting them necessarily meant discriminatory hiring and firing.

The court held discrimination still requires an adverse employment action taken because of race or sex. Starbucks' goals didn't show that. The company already exceeded many targets, retention incentives aren't zero-sum—keeping a diverse employee doesn’t mean a white or male employee was harmed—and demographic shifts ("more female and less white") don’t establish causation without facts tying them to discriminatory decisions.

The AG argued Partner Networks, mentorship programs, DEI partnerships, and executive incentives segregated employees or coerced discrimination.

The court held that the networks were open to all, that no one was alleged to have been excluded from mentorship, that participation in groups like the Board Diversity Action Alliance caused no actual harm, and that a company can't coerce itself.

The AG argued Starbucks "published" illegal job preferences in reports and proxy statements.

The court held those documents weren't job ads, didn't announce "preferred races get jobs," and didn't deter anyone from applying.

Want a durable DEI program? This opinion sketches the roadmap: aspirational goals, open-access affinity groups, mentorship without exclusion, and—most importantly—no employment decisions you can't tie to legitimate, non-discriminatory criteria. The AG brought politics. The court required facts.

WIRTW #788: the 'it's a beautiful day' edition

When I was a kid, Mr. Rogers' Neighborhood wasn't background noise. You sat on the floor. You watched. You waited for him to come through the door, change his shoes, and pull on that cardigan. Nothing flashy happened. No one was mocked. No one was humiliated. No one "won."

And yet, by the end, you felt steadier.

It took me years to understand why. Fred Rogers wasn't just entertaining children. He was teaching empathy—carefully, intentionally, and without irony. Which is why I keep coming back to this thought: we need a sociological study comparing the empathy of adults who grew up on Mr. Rogers' Neighborhood with those who didn't.

Because empathy feels like the missing muscle in American society right now.

Every episode opened with a simple, disarming truth:

"I like you just the way you are."

Not if you earn it.
Not if you agree.
Not if you fit in.

Just: you matter.

That idea once felt obvious. Today, it feels almost subversive. We sort people by usefulness, loyalty, productivity, and tribe. Empathy gets rationed. Compassion gets qualified. Caring about the “wrong” people is treated as a flaw.

Rogers never hedged.

"You are worth caring about."

That's where empathy begins. And when a society stops teaching that lesson, it doesn't become tougher or more realistic. It becomes colder—and easier to harden.

Rogers also understood that empathy requires emotional literacy. You can't recognize pain in others if you've been taught to deny it in yourself. On his show, he talked openly about fear, anger, sadness, and loss—not to inflame them, but to name them.

"When we're frightened, we tend to lash out at others."

That isn't politics. It's human behavior. When empathy erodes, fear rushes in. And fear doesn't make people wiser or stronger. It makes them reactive.

Rogers' answer wasn't suppression or denial. It was honesty.

"The thing that is mentionable becomes manageable."

That lesson applies to adults as much as children. Societies that can't talk honestly about discomfort don't become resilient. They become brittle.

Empathy also shapes how we see one another.

"The neighbors you find are the neighbors you look for."

If you go looking for threats, you'll find them. If you go looking for people, you'll find those too. One path leads to suspicion and withdrawal. The other leads to restraint and connection.

And empathy doesn't require unanimity.

"We don't have to think alike to love alike."

That sentence feels almost antique now, which should worry us. We increasingly treat disagreement as disqualification and difference as danger. Rogers offered a quieter alternative: coexistence without dehumanization.

He never framed empathy as weakness. He treated it as a civic skill—something to be taught, practiced, and protected. A society held together by empathy doesn't need as much fear or force to function.

Which brings us to where we are.

The erosion of empathy doesn't just harden people; it makes them easier to lead by fear. When compassion is framed as weakness, it leaves a vacuum. And something always rushes in to fill it.

So yes, fear still matters. But it's a consequence, not a cause. Fear is downstream of the deliberate erosion of empathy. When people are taught not to care, cruelty becomes easy. And when empathy disappears, bad ideas don't have to work very hard.

Fred Rogers never talked about politics. He didn't need to. He was doing something more basic: teaching children how to live with other people without losing their humanity.

America didn't lose its way because we cared too much.

We lost it because we stopped treating empathy as a strength.

Empathy isn't softness. It's social infrastructure. It's our superpower. And any culture that mocks it shouldn't be surprised when things start coming apart.



Here's what I read this week that you should read, too.

A Finger in the Constitutional Dike — via San Antonio Employment Law Blog

Are you using ChatGPT as your substitute lawyer? — via Improve Your HR by Suzanne Lucas, the Evil HR Lady

When a PIP becomes the retaliation claim — via Eric Meyer's Employer Handbook Blog

Left With Nothing But an Injunction: Fifth Circuit Vacates $75 Million Trade Secret Verdict After Plaintiff Fails to Apportion Damages — via Trading Secrets

Uncle Nearest Is Insolvent, Receiver's Explosive Affidavit Claims — via VinePair

Just Subpoena It.

This week, the EEOC sent a strong message to corporate America when it went to federal court to force Nike to turn over years of documents tied to allegations that its DEI programs discriminated against White employees.

The EEOC isn't suing Nike for discrimination—at least not yet. Instead, it has filed a subpoena enforcement action after Nike allegedly refused to fully comply with an investigation that reaches back to 2018. According to the agency, Nike's "DEI-related 2025 Targets" and other initiatives may have resulted in race-based decision-making in hiring, promotions, layoffs, internships, and mentoring and leadership-development programs.

Under EEOC Chair Andrea Lucas, the agency has made one thing unmistakably clear: Title VII is "colorblind." In fact, that has been the state of the law since its enactment in 1964. That means DEI programs are not immune from scrutiny—especially if they involve quotas, race-restricted programs, or employment decisions made even partly because of race.

The agency is asking Nike for information on layoff criteria, race and ethnicity tracking, whether executive compensation was influenced by workforce diversity metrics, and 16 allegedly race-restricted career development programs.

Nike says the subpoena is wildly overbroad, unduly burdensome, and a fishing expedition—and that it has already produced thousands of pages of documents. That procedural fight will play out in court. But zoom out, and the bigger picture is hard to miss.

Nike isn't just being investigated. It's being showcased. That's the point of targeting a big-name employer: to make an example of it and spotlight the issue.

Employers with DEI programs or goals: good intentions won't save you. DEI goals, standing alone, are not illegal. Their legality, however, depends on how you implement them. Numerical race-based targets, limiting opportunities by race, or factoring race into promotions, layoffs, or compensation decisions have always lived on thin legal ice. That ice just cracked.

If your DEI strategy includes hard targets tied to race, race-exclusive mentoring or leadership programs, or uses race as a factor in employment decisions, you should assume the EEOC is very, very interested. Employers should audit their DEI programs now—before the agency decides to do it for them.

© Jon Hyman