In a world increasingly influenced by algorithms and automation, three St. Olaf College alumni working at companies at the forefront of the AI revolution share why critical thought is more important than ever.
Artificial Intelligence (AI) is reshaping how people think, work, and connect. From generative chatbots to immersive virtual realities, new tools are arriving at a pace that, at times, feels faster than we can keep up with — and raising as many questions as they answer. For three Oles working in the tech industry, the future of AI isn’t an amorphous eventuality, but something they build, test, and grapple with every day.
Cianna Bedford-Petersen ’14 works at Meta, where she uses AI to develop unique ways for users to express themselves. Photo by Fernando Sevilla.
“My job often involves researching ideas or needs of people for a product that doesn’t even exist tangibly yet.”
— Cianna Bedford-Petersen ’14
Cianna Bedford-Petersen ’14 is a user experience researcher at Meta, where she supports the development of the Metaverse across virtual reality and mobile settings. Her work, rooted in growing social connections across the world by helping people express their unique identity through virtual avatars and pets, reflects her long-standing interest in how humans present themselves in digital spaces.
That interest began at St. Olaf, in an 8 a.m. psychology class taught by Professor of Psychology Donna McMillan.
“I discovered I enjoyed exploring a side of psychology relating to who we are as people, and how we express ourselves in different environments,” she says.
That interest deepened as she found a passion for research while working in Associate Professor of Psychology Shelly Dickinson’s lab, which eventually led her to graduate school. After earning a Ph.D. in personality psychology at the University of Oregon, Bedford-Petersen now works on research that explores the role AI plays in user experience, including generative tools that allow users to customize their digital avatars.
“AI is playing a big role in the product we’re developing,” Bedford-Petersen says. “We’re utilizing it a lot to make creating new items within the Metaverse accessible to more than just very technically focused developers. For example, we’ve recently introduced a generative AI–based tool that helps people create different clothing items for their avatar. We provide the shape of the clothing, and people can go in and describe colors and patterns that they’d like to see on that item, and GenAI helps bring that vision to life. It’s been a hit so far, and is a great way to help foster creativity.”
But AI doesn’t just enhance products — it’s changing how research itself is conducted.
“AI automates some of the more repetitive tasks, expands our analysis capabilities, and increases the audiences we can reach,” Bedford-Petersen says. “The ethics of the technology we’re building is another extremely important role of a researcher in a tech environment. Technology is growing at such a rapid rate that legislation cannot always keep up. Researchers take their role very seriously as an advocate for the experiences of the public — we are often the line of defense that understands and advises on best practices for creating safe and healthy online environments.”
That ethical responsibility goes hand in hand with the experimental, future-facing nature of her work.
“I work on the very theoretical end of tech, so my job often involves researching ideas or needs of people for a product that doesn’t even exist tangibly yet,” Bedford-Petersen says. “That can feel very ambiguous and challenging in the beginning, but over time when those ideas become an actual product I see people using — it’s a wild feeling.”
Somang Han ’18 works at Amazon, where she utilizes AI to debug code and collect data. Photo by Fernando Sevilla.
“What keeps me going is my ultimate goal of understanding and advancing AI’s societal benefits, particularly through building initiatives that help marginalized populations who have limited access to education.”
— Somang Han ’18
As a data scientist at Amazon, Somang Han ’18 builds machine learning and AI-powered tools to solve complex business problems, from debugging code to collecting data. She is currently developing data-driven customer prioritization for Amazon Web Services using AI and machine learning models.
Han credits her time at St. Olaf — and particularly the course Algorithms for Decision Making with Professor Emeritus of Mathematics, Statistics, and Computer Science Matthew Richey — with giving her a strong foundation in critical thinking, problem-solving, and interdisciplinary collaboration.
“The liberal arts environment encourages curiosity and adaptability, which are essential qualities in a rapidly evolving field like data science, where we’re expected to work with different parts of an organization,” Han says.
That velocity can be both exhilarating and exhausting.
“It has certainly been challenging to keep up, but I try to remain dedicated to lifelong learning,” Han says. “To stay informed about current industry research and developments, I attend conferences, read scientific publications, and follow industry news and podcasts. While the rapid pace of advancement can be overwhelming at times, it also provides continuous opportunities for growth, and keeps the work exciting.”
Han is intrigued by advances in multimodal AI systems, which can perform physical tasks through better environmental understanding, and neuromorphic computing, which creates computer architectures that mimic biological neural networks and could enable more sophisticated edge AI applications. However, she is wary of potential misuse of the technology.
“A compelling analogy I heard from a podcast compares this to the early days of automobiles, when cars were introduced to the market without proper safety regulations and standards,” Han says. “Similarly, AI technology raises serious concerns, particularly around automated disinformation campaigns and information authenticity verification. Additionally, there are valid concerns about potential surveillance and social control, depending on who controls and implements these AI systems. Just as the automotive industry eventually developed comprehensive safety standards, we need to establish robust regulatory frameworks for AI development and deployment to ensure its responsible and ethical use.”
Despite the challenges, she stays grounded by a broader mission.
“Although there are ups and downs throughout my working days, what keeps me going is my ultimate goal of understanding and advancing AI’s societal benefits, particularly through building initiatives that help marginalized populations who have limited access to education,” Han says. “With the development of AI tools, motivated individuals now have greater access to information — but proper guidance and support systems are still essential.”
Charles Fyfe ’09 works as a software engineer at Meta, and uses the AI built into the tools he works with to unify systems across platforms. Photo by Samuel Gwin ’25.
“I think AI is going to change tech jobs — in the same way that most major developments in the field have.”
— Charles Fyfe ’09
Charles (McEachern) Fyfe ’09 is a software engineer on the messaging infrastructure team at Meta, currently working on projects that unify data systems between Facebook and Instagram. He is also a visiting professor of physics who teaches a hardware design course at St. Olaf. His focus isn’t on building new features, but on optimizing what’s already there.
“I’m now running a job that is deleting 400 terabytes of data, which consumes about half a million dollars in electricity a year. This frees up all that electricity and storage for use elsewhere,” Fyfe says. “My focus is really on infrastructure cleanup, data migration, the bookkeeping stuff, and AI is something that’s just becoming increasingly prevalent.”
AI is woven into the tools he uses daily, and he has an appreciation for the ways it can support his work — as well as a skepticism of its limits.
“Complex systems like this can fail, and when they fail, they don’t fail like a human — they fail in really bizarre ways, and we don’t always understand why,” Fyfe says. “You can take a machine learning algorithm, and you can show it a picture of a panda, and it’ll say ‘I’m 80 percent sure that’s a panda.’ If you go in and you tweak the pixels just a tiny bit — it still looks like a panda to a person — you can do that in such a way that the machine learning algorithm would say, ‘I’m 99 percent sure that this is a goldfish.’ So just because you’ve got the numbers to line up just wrong, you get these weird failure modes that just don’t make any sense to a person. It’s a reminder that these tools need constant scrutiny.”
Still, Fyfe sees AI’s potential as an interface — not a replacement.
“I think AI is going to change tech jobs, in the same way that most major developments in the field have,” Fyfe says. “Fifty years ago when people wrote software, they were writing in Assembly, which is incredibly tedious, horrible, and error prone. And then we developed a language called C, which lets you write words, and then under the hood the computer turns it into Assembly. And then later we developed a language called Python, which is nicer to work with, and under the hood it turns into C, and under the hood that turns into Assembly. I think AI might end up being another kind of layer like that, where it’s just a nicer way to interface with the computer.”
He’s also attuned to how AI may reshape education, from the perspectives of both the educator and the pupil.
“I think that this is maybe comparable to the development of the pocket calculator,” Fyfe says. “Before the pocket calculator, math class was learning how to multiply big numbers, and after, that’s just not a skill that people need anymore. We’re going to see that again for writing code and papers, reading and digesting text — there’s going to have to be a lot of thinking and trial and error to figure out what we want to teach, and what young people are going to want to learn.”
But he’s convinced Oles are uniquely positioned to meet the challenges ahead.
“I feel a strength of our campus right now is that we have a really diverse student body, both in terms of backgrounds and interests,” he says. “Students are coming to the classroom with multifaceted identities, wanting to cultivate varied areas of expertise.”
How a Liberal Arts Education Powers the Future
As the industry evolves, so does the way people interact with AI.
“With AI usage, the primary difference I’ve noticed is how different age groups approach AI as an evolved search engine versus a thought or skill partner,” says Bedford-Petersen.
Older users, she says, tend to treat AI like an advanced search engine, asking straightforward, factual questions. Middle generations often use it as a brainstorming tool to jump-start big projects.
“I see even younger generations adopting AI as a partner that fully supplements a skillset. An emerging example of this is ‘vibe coding,’ which is a programming approach where a person interacts with AI to describe a feature or product they’d like to build and AI takes the lead on writing the code. If you haven’t found an app that works the way you want, you turn to AI to supplement coding skills to build just the app you envisioned,” she says.
Even with the new tools and creative applications, the Oles are quick to point out that AI is far from flawless. Han warns against treating AI as a shortcut.
“Effectively deploying AI solutions requires a thoughtful approach and significant time investment,” she says. “It’s not just about implementing the technology; we need to carefully consider the specific use case, anticipated adoption rate, and potential business impact. We don’t want to apply a ‘big shovel’ to a ‘small problem.’”
She advocates for proactive ethics — testing systems for fairness, improving energy efficiency, and keeping human oversight at the center.
“I view AI as a tool to augment, rather than replace, human capabilities,” Han says. “It’s essential to maintain human oversight and ensure AI systems align with human values and societal needs. Regular ethical audits and staying engaged with evolving ethical frameworks in the field is key.”
“The ethics of the technology we’re building is another extremely important role of a researcher in a tech environment. Technology is growing at such a rapid rate that legislation cannot always keep up. Researchers take their role very seriously as an advocate for the experiences of the public — we are often the line of defense that understands and advises on best practices for creating safe and healthy online environments.”
— Cianna Bedford-Petersen ’14
Bedford-Petersen says she tries to remain open-minded, breaking down new technologies into their parts rather than judging them as a whole.
“Tech doesn’t always get new developments right on the first try, but they might be a stepping stone to building something that will really change or influence the way we interact with the world around us,” she says. “I’ve tried to be really discerning in both my career and day-to-day life about what tech can enhance our lives versus what might do more harm than good.”
Fyfe is blunter about the risks. He notes that AI is trained on massive internet databases, often containing personal or uncredited material, and that its complexity makes it difficult to explain why it works the way it does.
Still, he rejects alarmist predictions.
“To walk us back a bit from the precipice of doom, I don’t believe AI is going to replace all human jobs — we’re not all going to be broke and have no purpose or function — but imagine if you can figure out a way to give AI some elements of a person’s job and maybe put us back to a 32-hour work week? I think logistically and culturally, we’ve got some obstacles before that happens, but I do like to hope that we are in a Star Trek optimistic future, rather than a Star Wars pessimistic future.”
In the end, what stands out about these Oles is not the role they’re playing in tech now, but how they arrived in the field in the first place. For Bedford-Petersen, one of the biggest misconceptions about the tech industry today is that it’s made up entirely of software engineers. In reality, she says, these companies function like any other business, drawing on a wide range of expertise — from researchers and product managers to designers, lawyers, and marketing specialists.
“These companies are still full businesses that require a ton of different backgrounds and skillsets to function,” Bedford-Petersen says. “Many people who work in tech took really circuitous routes that help them bring a unique perspective to our work.”
With some tech leaders suggesting AI could replace entry-level engineers and coders, Fyfe says having a wide range of skills will be even more important to support the pipeline of employees the industry will need in the future.
“If you want senior engineers, you have to hire and train junior engineers,” Fyfe says. “So if you’re interested in computer science and technology, pursue what you love, but prepare for a reality where that isn’t enough — you need to have other skills besides being able to code.”
In Bedford-Petersen’s opinion, St. Olaf cultivated the versatility to keep up with such shifts.
“I am a big believer that a liberal arts education teaches you how to learn, and gives you the flexibility to tackle new challenges,” Bedford-Petersen says. “This is critical in a fast-paced industry like tech, where unexplored research questions are continually popping up. I lean on that liberal arts background to say, ‘I may not know everything about this topic yet, but I feel confident in my ability to learn about it and make connections to things I’ve done before.’”
AI-Generated Imagery
AI image generation uses complex algorithms to create new images from text prompts. To develop the image used at the beginning of this story, St. Olaf Magazine Creative Director Fernando Sevilla wanted to use a process that mirrored this article’s central question: How do we shape technology, and how does it, in turn, shape us? He set out to do this through an intentional collaboration between human creativity and AI. After feeding the article’s text into ChatGPT, he worked through a creative conversation to develop and update illustration prompts that captured its spirit. Those prompts were then used across several image-generation tools (ChatGPT, Google Gemini, and Adobe Firefly), allowing the technology to interpret and reimagine the ideas visually. The human touch was intentional, critical, and, yes … unnerving.
AI-Generated Prompt: A vertical illustration in the style of a hand painted woodcut or modern surrealist collage. A human figure stands at the center, their silhouette filled with flowing lines of code, books, and abstract patterns symbolizing knowledge. Around them, ghostly AI forms emerge shifting between gears, digital avatars, and glowing neural networks blending into organic elements like trees, hands, and faces. The composition should feel contemplative rather than futuristic, balancing technology with humanity. Rich textures, muted yet vibrant colors, and a timeless, almost allegorical tone, as if this were a 19th-century illustration updated to represent the modern spirit of artificial intelligence.
Updated Prompt: A vertical minimalist illustration with a clean, symbolic approach. At the center, a human silhouette is split in half: one side organic, with books, leaves, and warm textures; the other side digital, with circuits, glowing lines, and neural nodes. Above them, a simple sun-like halo radiates both natural rays and binary code. The color palette should be limited — deep indigo, gold, and ivory — evoking the restraint of printmaking while still representing the tension and harmony between humanity and AI.
Final Prompt: A vertical surrealist collage-style illustration using hand-painted textures and paper-cut layers. A human silhouette acts as a window, filled with fragments: binary numbers, avatars, gears, books, and leaves. Around them, AI forms morph playfully — a typewriter turning into a neural network, or a tree sprouting circuit branches. The palette is limited to indigo, ochre, forest green, and ivory, with a tactile, crafted feel that balances whimsy and depth.