LLMBridge

ESSAY: This is an essay by a Noisebridger expressing their ideas.
LLM USED: Some of this page's text was generated with the help of an LLM. Where possible, a description of who used it, how, and how much follows this disclosure. (USAGE: Add {{LLM}} to any page with LLM text.)

Let's get real: The whole “AI will save us” narrative is kinda getting out of hand. We’re all watching tech companies throw fancy algorithms into every part of life—policy-making, interpersonal communication, customer service, you name it. But before we jump on this shiny bandwagon, let’s talk about how this over-reliance on large language models (LLMs) is actually doing more harm than good, especially in the spaces we care about: communities, organizations, and interpersonal relationships.

If you care about privacy, transparency, or just not being treated like a cog in a corporate machine, stick with us, because Noisebridge is about to break it down. The consequences of leaning on LLMs are real, and they're messing with trust in some serious ways. Let's start with the two big problems: interpersonal denial-of-service and idea inflation.

Interpersonal Denial-of-Service: AI That Talks, But Doesn't Listen

Let's talk about Interpersonal Denial-of-Service (or iDOS for short). It's a term we're coining here for when LLM-generated content doesn't actually serve the people it's supposed to, but instead overwhelms them with information. You know how, when you get an email, message, or policy update that's way too long and you're just like, "I don't have the time for this"? That reaction is the classic symptom of being on the receiving end of AI-generated content.

While LLMs can generate content super quickly, they don't understand how people process or engage with it. So what happens? You end up with a ton of information: long, verbose responses that take longer to read than they took to write. That's exactly the opposite of serving people. It's just a digital way of saying, "Here's a wall of text. Good luck."

This is where interpersonal trust starts to erode. Instead of a message that gets straight to the point and respects your time, you get bombarded with word soup. That isn't just inefficient; it's frustrating. In a world where efficiency and meaningful communication matter, this AI-driven overload is a barrier, not a bridge.

In hackerspaces, where transparency and clarity are core values, this flood of content that takes longer to read than to generate is a huge turn-off. It’s like trying to have a meaningful conversation with someone who just talks in circles and doesn’t listen. Instead of fostering understanding, it fosters disengagement.

Idea Inflation: When Everything Sounds "Well-Thought-Out" But Isn't

Alright, so here's the thing. When you ask a language model to draft a policy or response, it spits out something polished and slick. The formatting is on point, the tone is neutral, and it looks like it’s been carefully thought through by a team of experts. Sounds great, right? WRONG. That polished surface is actually the problem.

LLMs are great at sounding convincing. Too convincing. The content they churn out looks like it's been pre-vetted, researched, and validated by some superhuman team of experts. But here's the kicker: it hasn't. It's just stringing together sentences based on patterns in its training data, which means it can be confidently wrong or biased while sounding completely reasonable. When you give a machine extra credit just because it sounds good, you're essentially inflating ideas: making them seem more authoritative than they actually are.

This is especially dangerous in organizational or public policy-making, where real people are impacted by decisions. Just because an LLM-generated statement looks professional doesn't mean it's been fact-checked, considered from all angles, or that it's even accurate. But thanks to our biases, we tend to assume that polished = perfect. This idea inflation makes everything look more thought-out than it is, and it's a quick way for organizations to lose trust.

In hackerspaces or any other community that values critical thinking and transparency, this is a huge red flag. It's one thing to ask AI for a draft—it's another to let it completely shape your policy without scrutiny.


Privacy? Yeah, That's Out the Window

Privacy is everything. And when you’re feeding sensitive information into an AI model, you need to know exactly where that data is going and who has access to it. Unfortunately, the companies behind these LLMs don’t exactly have a stellar track record when it comes to protecting user privacy. While some AI providers claim they anonymize data, the reality is that we have very little control over what happens to the information we share with them.

For hackerspaces, where privacy is key, the idea of handing personal or organizational data over to a third-party AI company (which might be data-mining it or sharing it with who-knows-who) is an absolute nightmare. It's not just about keeping things private; it's about the ethics of trusting a third party with potentially sensitive info.

We’ve seen how tech companies can mishandle our data, and in a world where data breaches are a dime a dozen, trusting AI with any personal info feels like playing Russian roulette with your privacy. Respecting user privacy is one of the highest standards of “being excellent to each other,” and giving that up for AI convenience isn’t the trade we want to make.

The Accountability Black Hole: Who's Really Behind That Message?

One of the most annoying things about LLM-generated content is the accountability black hole. Who's really responsible for what's being said? When organizations rely on LLMs to draft public statements, policies, or even internal communications, it gets messy fast. Is the organization responsible for the content, or is it the human who typed in the prompt? Or is it the AI itself?

Let's say an organization releases a statement generated by an LLM and it goes south. The temptation is to blame the faceless algorithm that produced the content. But it isn't really the AI's fault; it's the fault of the people who didn't think about the consequences of using it in the first place. Without clear accountability, it's hard to trust what's being said, because no one can be held responsible when things go wrong.

It’s a classic case of hiding behind technology—taking the easy way out by letting an AI do the talking while simultaneously dodging any blame if it backfires. Trust is built on transparency and clear accountability. And when both of those go out the window, so does the trust.

Bias, Echo Chambers, and the Unverifiable "Truth"

LLMs aren’t as neutral as they seem. Sure, they’re not human, but they are trained on human data, which means they inherit all the biases of the sources they were fed. That means if the AI is asked to craft a policy or position statement about a controversial issue, it can accidentally reinforce existing biases or only present one side of the story.

Think of it like this: you ask an LLM to summarize a political issue. What it produces might sound fair, but in reality, it could be one-sided, reflecting the biases baked into the model. When an organization relies on that content to set policy or make decisions, they’re just as guilty of feeding the bias machine as the model is. And that’s a huge problem when you’re trying to make fair, informed decisions that impact people.

We need to be aware of this and ask: Is the information we’re getting from these models truly balanced? Or are we just feeding into an algorithmic echo chamber?


The Bottom Line

Look, we’re not anti-AI—LLMs can be powerful tools for drafting, brainstorming, and even assisting with tasks. But when it comes to interpersonal matters and organizational policy, using them without critical thought is a recipe for disaster. We’re talking about eroding public trust, undermining accountability, and putting privacy at serious risk. In communities like hackerspaces, where transparency, ethics, and mutual respect are the foundation of everything, these risks are not worth it.

So, before you rely on AI to set your organization’s policies or have an important conversation, ask yourself: Are we using this tool to serve people—or to serve convenience? Because when trust is lost, it’s hard to get back.


TL;DR

This took Elan less than 3 minutes to generate, from conception to posting (including fixing the wikitext). None of it has been fact-checked or proofread. None of the ideas it contains can be attributed to any Noisebridger, aside from (possibly) those used in the original prompt.

Generation

Model used: ChatGPT

Prompts:

```
Can you write a long article in wiki style describing how why LLM use for interpersonal matters and setting policy can lead to erosion of public trust in an organization. An incomplete list of ideas to expound on:
* Lack of accountability. How can we tell what is being attributed to the person prompting vs what is said by the individual.
* People put an overreliance on formatting so something that looks polished (as LLMs typically are) comes across as well thought out and often people assume things represent the organization and have been fact checked.
* Over reliance on unverifiable inputs and sensitive to bias - i.e. only telling one side of an event and taking it as true
* Interpersonal denial-of-service. Often LLM generated content can be longer to read than it is to generate.
* Not privacy respecting. It could be considered unexcellent to share details with an AI company.
Whenever possible please find sources to support your claims and create a citation section. Please write this for a hackerspace audience that strives to "be excellent to each other" and places a high premium on respecting privacy.
```


Refinement:

```
This is great, could you make it more edgy for a millennial+ gen Z audience and also lead with the ideas of "idea inflation" / interpersonal DOS?
```
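
For reference, the page above was produced through the ChatGPT web interface, not via an API. Purely as an illustration, here is a minimal sketch of how the same two-step flow (draft prompt, then a refinement in the same conversation) could be scripted; the OpenAI Python client, the model name, and every identifier below are assumptions made for the example, not a record of what was actually run.

```
# Illustrative sketch only: this page was generated via the ChatGPT web UI.
# The OpenAI Python client and the "gpt-4o" model name are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The full prompt texts are the ones quoted verbatim above.
draft_prompt = "Can you write a long article in wiki style describing ..."
refinement_prompt = "This is great, could you make it more edgy ..."

# First turn: ask for the draft article.
messages = [{"role": "user", "content": draft_prompt}]
draft = client.chat.completions.create(model="gpt-4o", messages=messages)
draft_text = draft.choices[0].message.content

# Second turn: keep the draft in the conversation and ask for the edgier rewrite.
messages.append({"role": "assistant", "content": draft_text})
messages.append({"role": "user", "content": refinement_prompt})
final = client.chat.completions.create(model="gpt-4o", messages=messages)

print(final.choices[0].message.content)
```

Nothing in the sketch changes the point of this essay: the output still lands unreviewed unless a human actually reads it before posting.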