As a young developer, there’s one thing I want above all: to learn, explore, and apply the full range of technologies available to me. But as a person, I also know I’m not alone in this world. With the skills I’ve gained—and those I’m still learning—in building digital products, I’m able to develop applications, websites, and even artificial intelligence systems that can impact the everyday lives of many people.
Often, the users of these products have little or no understanding of the underlying technologies and cannot fully grasp what’s happening in the complex web of data and algorithms. And if there’s no awareness of what’s technologically possible, users can’t even begin to express concrete concerns.
That’s why it’s partly my responsibility as a developer to create products that can be used without causing unpredictable consequences for the people who use them—such as unnecessarily disclosing personal data or combining their information with data from other sources in ways they can’t trace or understand. If I get to experiment and explore technology in the process, that’s just the cherry on top. But the key question remains: I can—but should I?
Whoever defines what's possible influences the decisions.
So how can the people responsible for building digital products ensure that everyone involved ends up with something truly valuable in their hands? There is a persistent tension between collecting data to generate insights that benefit users (e.g. helping me connect with my friends) and collecting data to serve a company's own economic interests (e.g. targeted advertising). But when building these products, there are several things developers can keep in mind to ensure the application actually provides value.
One of the most important:
A diverse team.

By now, the world is a tightly interwoven system of computers and humans. Prejudices that are deeply rooted in society are also reflected in machines. And when these machines are used, users are, in turn, influenced by the biases embedded within them.
There are numerous examples of this. One well-known case involved an automatic soap dispenser that failed to respond to the hands of a person of color. Another example came in the fall of 2019, when a Science magazine study revealed how systemic racial bias in the U.S. had found its way into an algorithm used to assess people’s healthcare needs. Women are also affected by flawed design decisions in software development. A UN study criticized the predominantly female voices of virtual assistants like Siri and Alexa as problematic. Why? Because they reinforce the still-prevalent stereotype of women in subservient or “helper” roles.
In all three cases, existing social imbalances are not only reproduced by technology, but amplified through their interaction with society. Only a diverse development team is equipped to recognize these inequalities, make them visible to the wider group, and actively reduce discrimination in the final product. Software is only as neutral and considerate as the way it is built and trained. Every developer should be aware of this responsibility.
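To make that responsibility a bit more tangible: one small, routine check a team can run is whether a model's errors are spread evenly across groups of users. The sketch below is a deliberately minimal Python example (the column names and data are invented) and is meant only to illustrate the kind of question worth asking, not a complete fairness audit.

```python
# Minimal sketch: auditing a model's error rate per demographic group.
# The column names and data here are invented, for illustration only.
import pandas as pd

# Hypothetical evaluation results: one row per person,
# with the model's prediction and the actual outcome.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 0],
    "actual":     [1, 0, 1, 1, 0, 1],
})

# Compare how often the model is wrong for each group.
results["error"] = results["prediction"] != results["actual"]
error_rates = results.groupby("group")["error"].mean()
print(error_rates)
# A large gap between groups is a signal to investigate the
# training data and design decisions before shipping.
```

A gap like this doesn't prove discrimination on its own, but it is exactly the kind of signal a diverse team is more likely to notice, name, and take seriously.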
Criminal Thinking

It’s crucial to estimate—early on and as thoroughly as possible—how much harm a product could potentially cause. One helpful approach is to consider what someone with different intentions than your own might do with the data being collected. Is the data still meaningful when taken out of the original context of the application? What conclusions could be drawn if this data were combined with other sources? What are the worst possible consequences this new application could lead to?
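One way to take these questions seriously is to try the combination yourself, before anyone else does. The sketch below (in Python, with invented records and column names) joins a supposedly anonymized dataset with a second, public source on a few quasi-identifiers; it is only meant to show how quickly data gains new meaning outside its original context, not to describe a real attack.

```python
# Minimal sketch: how "anonymized" data can be re-identified by combining
# it with another source. All names, columns, and records are invented.
import pandas as pd

# A released dataset with names removed, but quasi-identifiers kept.
released = pd.DataFrame({
    "zip_code":   ["10115", "10117", "20095"],
    "birth_year": [1990, 1985, 1990],
    "diagnosis":  ["asthma", "diabetes", "migraine"],
})

# A second, public source (e.g. a hypothetical membership list).
public = pd.DataFrame({
    "name":       ["A. Example", "B. Example"],
    "zip_code":   ["10115", "20095"],
    "birth_year": [1990, 1990],
})

# Joining on the quasi-identifiers re-attaches names to sensitive data.
reidentified = public.merge(released, on=["zip_code", "birth_year"])
print(reidentified)
```

If a simple join like this re-attaches names to sensitive data, the dataset was never really anonymous, and the answer to "what's the worst that could happen?" changes accordingly.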
In Future Ethics, Cennydd Bowles presents the example of Airbnb—a platform that allows users to rent out their homes and apartments to travelers. While this may seem harmless at first glance, when large numbers of homes are rented primarily to tourists, they are no longer available as long-term housing. This has significant consequences for the housing structures of entire cities. Such outcomes demand strict regulation to protect access to private housing.
Unintended doesn’t mean unforeseeable.
Security gaps can only be addressed if someone actively looks for them. Intentionally thinking through how a product might be misused—even before it is released—can prevent it from being exploited later. It’s like giving a chair you’ve just built a good shake to make sure it’s stable.
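In code, "shaking the chair" can be as simple as writing a test for the misuse case you hope never happens. The sketch below is a hypothetical Python example (the function and the rule it enforces are invented): the test deliberately tries to export another user's data and passes only if that attempt is blocked.

```python
# Minimal sketch of "shaking the chair": a test that deliberately tries
# a misuse case before release. The function and rule here are invented.
def export_profile(requesting_user_id: int, target_user_id: int) -> dict:
    """Return profile data only if a user requests their own profile."""
    if requesting_user_id != target_user_id:
        raise PermissionError("Users may only export their own data.")
    return {"user_id": target_user_id, "steps_last_week": 1000}

def test_cannot_export_someone_elses_data():
    try:
        export_profile(requesting_user_id=1, target_user_id=2)
    except PermissionError:
        return  # The misuse attempt was blocked, as intended.
    raise AssertionError("Export of another user's data was not blocked.")

if __name__ == "__main__":
    test_cannot_export_someone_elses_data()
    print("Misuse case is blocked.")
```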
Interplay of All Actors

There’s no definitive checklist. There’s no list of ethical obligations that developers can simply work through, tick off, and then consider themselves in the clear. It’s hard to say that certain technologies inherently guarantee safe data practices or a product with meaningful value. For example, from a privacy perspective, it may be better to store data locally on users’ devices. At the same time, there are clear advantages to centralized data storage—such as enabling immediate access to health data during a hospital visit or when switching doctors.
What’s essential is open dialogue—with experts and non-experts alike—to properly assess the consequences of our work. This includes transparency about how applications work and their underlying mechanics, so that potential users know what to question in the first place. But communication shouldn’t stop with the user side. Developers themselves need shared guidelines too. Just like journalists have a press code, maybe technologists need a unified technology code of ethics—an overarching code of conduct that defines a collective standard for ethical behavior in tech. In borderline cases, the law can offer guidance. Compared to many other regions, the EU, for example, has relatively strict regulations when it comes to data protection.
Regulation doesn’t have to be a limitation—it can be a form of protection. In Germany, I have the freedom to use a fitness app and track my health however I see fit, without worrying that my health insurance provider will cancel my policy if they find out I only walked 1,000 steps last week—even though I “should” hit that in a day. That’s thanks to the structure of our publicly regulated healthcare system.
But laws aren’t always morally sound. It wasn’t that long ago that Rosa Parks broke the law by refusing to give up her bus seat to a white passenger. It’s up to us to keep asking: Are our laws enough? Are they achieving their purpose? Because at the end of the day, ethical questions don’t come from machines. They come from us—the developers—who have the foresight to know where data ends up and how it can be used.
Doing all of this alone is incredibly hard. That’s why we need to hold each other accountable. We need to remind our colleagues of the responsibility we all share, and support each other in building products we can go to bed with a clear conscience about. (Well—not literally go to bed, of course... those square eyes from too much screen time are real.)
Because if just one Google employee had protested the company’s involvement in drone warfare, nobody would’ve noticed. But when many stand up together, that’s something else entirely.
Throwing in the digital towel?
Sadly, it’s not always easy to work on products that are fully and unambiguously ethical. The money and tech infrastructure often lie in the hands of large corporations—some of which intentionally manipulate users and treat personal data as currency.
Still, we as developers, designers, project managers, and beyond hold powerful skills.
Skills that influence what people click, what information they see, and how they interact with the digital world. And even if our impact starts small, we have the power to choose how and when we use those skills—and what we use them for.
Further Reading
Bowles, Cennydd. (2018). Future Ethics. NowNext Press. (Highly recommended!)
Peter Purgathofer (Find him on Twitter/X)
