I'm driven by curiosity, creativity, and passion
– with a love for strategic design.


Co-Worker


Momox


Telecommunication Agents


AI Shopping Assistant


Mechanical Sketcher


Co-Worker

Concept
Research
UX/UI
AI

Co-Worker is a concept we developed over one semester in the Invention Design course: an AI plugin for Figma that supports UX/UI designers throughout the entire design process. It knows all project data—from interviews to target groups—and never forgets anything.

Instead of letting designers waste time on repetitive tasks, the smart co-worker assists with design variants, UX tests, and decision-making—directly in Figma, tailored to each phase, from wireframe to final design.

Invention Design

— Prof. David Oswald (Lecturer)

Felix Strobel
Ismael Simoncini
Moritz Berendt

What is Co-Worker?

Co-Worker is a Figma plugin that opens just like any other third-party tool inside Figma.

Getting Started with Co-Worker

When the plugin is activated, Co-Worker opens as a window inside Figma. The user is greeted with a short onboarding and a project overview showing all projects previously created with Co-Worker. From there, they can simply select the project they want to work on.

Variant Agent

The user can switch between the Variant Agent and the UX Test Agent. Here, we're looking at the Variant Agent. To generate sub-variants, the user selects an existing base variant on their Figma board. Co-Worker then recognizes and analyzes the selected frame, highlighted in orange. After that, the user can generate new variants and adjust parameters such as focus, interactivity, and level of innovation.

Generating Variants

After a short processing time, Co-Worker places three variant types directly onto the Figma board:
- V-Base – rearranges existing elements.
- V-Next – intelligently replaces selected UI elements (e.g., dropdown → toggle).
- V-Smart – restructures layouts and introduces new ideas or features.
From each of these, the user can generate additional sub-variants to explore further directions.

Refining Variants

Once the user selects a variant, they can choose a variant type again to generate new sub-variants with their preferred parameters. Alternatively, they can switch to the UX Test Agent right away to start testing the chosen variant.

Sub-Variant Generation

Co-Worker now generates three new sub-variants based on the selected variant type.

Switching to UX Testing

If the user wants to evaluate a variant before making a decision, they can switch to the UX Test Agent using the toggle. Here, a selected variant can be tested against different UX heuristics to identify strengths and improvement areas.

Final Design

The finalized design solution with all details.


Project Overview

In the project overview, users can select from their existing projects or create a new one.


Project Dashboard

Within a project dashboard, users can switch between Wireframe and Hi-Fi status using a dropdown. They can also toggle between the Variant Agent and the UX Test Agent. Below, the dashboard displays the agent's activity history and the ongoing chat with Co-Worker.


Project Knowledge

Each project can be enriched with data so Co-Worker matches the user's level of context. Users can add a text-based project description and upload files such as usability tests, personas, target groups, interviews, or design systems via drag and drop. Unlike humans, Co-Worker retains all this information throughout the entire design process.


Working with the Variant Agent

With the Variant Agent, users can generate sub-variants for any selected base variant on the Figma board. The elements recognized and analyzed by Co-Worker are highlighted in orange and can be manually adjusted if needed. Below, users can set the generation parameters and define how many variants they want to create.


Chatting with Co-Worker

Users can chat with Co-Worker at any time, based on all project data and generated variants. Co-Worker occasionally provides insights or suggestions and can also recommend which generated variants might be worth exploring further.


After Generation

Once Co-Worker has generated variants, users can highlight changes using the Annotations toggle, create new sub-variants, or regenerate all variants at once.


Using the UX Test Agent

With the UX Test Agent, users select a variant they want to evaluate. Co-Worker automatically detects it, and a dropdown allows the user to choose which UX heuristic to test against. An optional checkbox enables an accessibility check to be included in the test as well.


From Testing to Optimization

After the test, an optimized variant can be generated with the Variant Agent based on the UX test results.


Iterating Further

From the optimized variant, users can again generate new sub-variants using the Variant Agent.

Design System

A simple, minimal design system — using color only where it serves a clear purpose. The purple-blue tone represents the user color, marking interactive elements and actions the user can perform. Orange is reserved for Co-Worker, highlighting everything that comes from or is recognized by the plugin.

Icon Style

A simple, clean icon set that represents all Co-Worker activities in a clear and lightweight way. The icons stay minimal and precise, using fine-line shapes to gently hint at actions without feeling loud or decorative.

Interaction Elements

Interaction elements are designed to feel intuitive and straightforward, making the plugin easy to navigate. Clear affordances, simple layouts, and consistent patterns help users understand actions at a glance.

Concept Mapping

The initial concept was mapped out in Figma, capturing the full flow of the design process and identifying key pain points in variant creation and decision-making.

Exploring AI in the Design Process

A first exploration of how an AI-powered plugin could support the design process in Figma — identifying where AI could step in, what tasks it could take over, and how it might streamline variant creation, testing, and decision-making.

Early Sketches & Ideation

Initial sketches and idea exploration based on the Double Diamond framework, mapping how an AI-driven workflow could fit naturally into each phase of the design process.

Wireframing the Dashboard

Development of wireframes for the Co-Worker dashboard, exploring different layouts, interaction patterns, and interface elements.

Navigation Architecture

Creating a clear, user-centered navigation structure that supports smooth orientation and makes key actions easy to access.

Momox

Redesign
UX/UI
Research
Mobile Application

In Application Design, we rethought the existing apps Momox, Medimops, and Momox Fashion — and merged them into one intuitive experience. The goal was simple: create an easy-to-use app where people can quickly sell and buy books, games, and clothing.

We focused on a clearer UI, a smoother user experience, and a stronger sustainability angle. One key challenge was keeping the original simplicity of Momox while introducing new features and a more streamlined flow.

Application Design

— Rebecca Götte (Lecturer)

Felix Strobel
Ismael Simoncini
Moritz Berendt

Home Screen Overview

We reorganized the app into three sections: Sell, Home, and Shop. The Home screen introduces the CO₂-based rewards system and the shared cart — a selling box on the left and a shopping bag on the right — keeping selling and buying clearly connected.

Sell Mode

Sell mode opens with a swipe from the left. Users scan books or clothing through the central lens, see their CO₂ score, and add newly scanned items straight into the selling box at the bottom.

Multiscan

With Multiscan, activated through a toggle in the lens, users can scan whole shelves instead of single items. The app then shows a color-coded price overview, making it easy to see which items are valuable and add them directly to the selling box.

Scanning Clothing

With a quick toggle, the lens switches from books to clothing. Users scan a label, get a price suggestion, and add the item to the selling box — all made possible by merging the apps.

Shop Mode

By swiping right into the Shop, users can spend the budget they earned through selling and CO₂ rewards. The flow mirrors Sell mode and creates a clear loop between selling and buying — all in one app.

Final Screens

A refreshed, modern UI with updated interaction patterns — still simple and easy to use, keeping the core Momox feeling intact while rethinking it in a cleaner, more intuitive way.

Home as the Hub

The Home area sits between Sell and Shop, acting as the link between both sides. It shows the shared cart and the collected CO₂ rewards, making the transition between selling and buying seamless.

Unified App Structure

Sell, Home, and Shop — previously separate apps — now live together in one unified interface. This creates a smoother experience without switching between apps and closes the loop between selling and buying in a simple, clear way.

Why We Redesigned Momox

Momox felt outdated — both in structure and UI. Books, fashion, and selling were split across three separate apps, even though they clearly belong together. So we explored how to unify them in one simple experience.

Merging the Three Apps

We combined all three apps — buying books, buying secondhand fashion, and selling items — into one unified experience. Everything now happens in a single app, closing the loop between selling and buying.

New Market Position

Our analysis showed a clear opportunity: shifting Momox from a single-purpose selling app to a combined sell-and-buy platform — while keeping its core strength as a place where users can quickly sell large quantities of items. This creates a more complete experience that competitors weren't offering.

User Research

Through interviews, we created personas that shaped our decisions — from students selling textbooks to parents passing on clothes and books from their kids.

Value Proposition Canvas

We mapped the current Momox customer jobs, pains, and gains, and used them to define our own gain creators, pain relievers, and core features for the redesigned app.

How Might We Questions

Based on the pain points, we defined our How-Might-We questions — the starting point for the redesign.

Navigation Architecture

We mapped out the navigation structure — defining the key sections, how they connect, and their hierarchy — as a foundation for the redesign.

Wireframes

We created wireframes with all essential elements and menu points to establish a clear base structure for the app.

Mid-Fi Screens

From the wireframes, we moved into mid-fi by gradually adding more detail and refining the structure of each screen.

Design Filters

Once the basic screens were set, we defined our design filters — built around the keywords familiar, enriching, and optimistic. For each of these, we selected matching adjectives and visual cues to guide the final design.

High-Fi Screens

We applied these design filters to our mid-fi frames and created the first high-fi versions, now including colors, icons, and a more polished visual system.

Visual Language

We assigned colors, icons, and visual elements to our three core themes — familiar, enriching, and optimistic — to translate them into a clear visual language for the app.

Color Variations

Each theme comes with a small set of tonal variations, giving us enough range to keep the visuals flexible while staying consistent across the app.

Key Screens

Finally, we arrived at our three key screens — the main frames where all design filters, research insights, and visual decisions come together, each aligned with our three core themes.

Visual Style

We developed a refreshed style guide with a new color palette, modernized visuals, an updated font, and a slightly adapted logo. The goal was to give the app a contemporary look that fits both the concept and our design filters.

Icon Set

To match the new concept, we created a clean, simple icon set for the redesign. The icons stay minimal and easy to read, keeping the app’s straightforward and familiar feel.

Components

We designed a set of consistent interaction components — from clear, glass-style toggles in the lens to simple, bold buttons and other essential UI elements. All components follow the same visual logic to keep the app easy to use and coherent throughout.

AI Agents in Telecommunications

UX/UI Design
SaaS
AI Interfaces

In this project, I supported the design of a white-label platform that allows companies — often large call centers — to create and manage their own AI support agents. My work focused on shaping the interface of the Agent Builder, where agents are created, configured, and organized.

We designed a clear setup flow, an overview where all agents can be managed, and transparent panels that show how each agent performs. Alongside the builder, we also worked on a Live Center that provides real-time insights into call center activity, making a complex AI system feel intuitive for a wide range of teams.

External Project

Felix Strobel
Moritz Strobel
Comdesk

Agent Setup Flow

We designed the full end-to-end flow that guides users through creating a new AI agent. Because the topic is technical, the goal was to make the setup feel clear and approachable. Users are led through each step of the configuration process, so they can build a functional telecommunications agent without needing deep technical knowledge.

Stepper Module

We created a stepper that leads users through five clear steps of building an AI agent — from creating a project to choosing templates, linking databases, and selecting the AI model. It turns a technical setup into a simple, structured flow.

Agent Overview

We designed an overview where users can see all their created AI agents at a glance — including each agent’s variants. From here, they can quickly create new variants, test them, and refine their setup. The main challenge was to present agents and their variations clearly while keeping actions fast and accessible.

Agent Insights

From the overview, users can dive into any agent or variant to see how it performs. Each detail view shows analytics, activity logs, and behavior insights — giving full transparency and control over how the agent works.

Agent Detail View

On the agent detail page, users can edit the agent, add more information, and even chat with it for quick testing. All core settings are available — from basic details like name and instructions to telephony options, integrations, statistics, and advanced expert configurations. This keeps the entire setup flexible and fully customizable.

Live Center Overview

We designed a live dashboard specifically for call centers, showing all essential activity in real time — available agents, active calls, completed calls, logged-in staff, and skill groups. The goal was to give a clear, structured view of call performance and agent capacity at a glance.

AI Shopping Assistant

Concept
UX/UI Design
MVP
AI Assistant

This was an external project where we developed an MVP to test the core concept. The idea was to create an AI-powered shopping assistant that lets users generate a product they have in mind — even if it doesn’t exist online yet.

The app allows users to describe their idea through a prompt, create a visual version together with an AI assistant, and fine-tune every detail through an editable prompt. Once the imagined product is visualized, it’s matched with real items from the web to find the closest possible version of the user’s “dream product.”

External Project

Felix Strobel
Moritz Strobel
Eidos

MVP Userflow

We mapped a simple flow for the MVP: onboarding, getting suggestions, and creating editable prompts. After sending a prompt, users can tweak key parameters through tag bubbles and dropdowns. The app then matches the generated product with real items available online.

Evolving the Userflow

We refined the userflow to focus fully on the chat with the assistant. The interface uses adaptive card elements that grow or shrink depending on the step. This mid-fi flow was an early version that continued to evolve.

Adaptive Product Card

We added a flexible product card at the top that always shows the current dream product and adjusts in size depending on the step. Below it, the chat stays active — whether you're writing the prompt, editing it, generating the image, or comparing matches.

Final Hi-Fi Style

For the final hi-fi flow, we developed a visual style that fits the brand and the shopping theme — with matching colors, icons, and interaction elements. We added a subtle AI-inspired look to give the interface a clear, modern visual language.

Style Guide

For the MVP, we added a simple style guide with a first logo, defined action colors, tag-bubble colors, secondary tones, and an AI gradient. The result is a clean color system that fits the shopping theme and feels comfortable to use.

Icon Set

We created a set of clear, simple icons for tasks like sneaker matching and AI assistant actions. They provide quick, readable visual cues that support the flow without adding noise.

Component Library

We built a small component library with the key elements — buttons, tag bubbles, prompt fields, input fields, and the dynamic product card in its different states. Everything follows a consistent interaction pattern.

Mechanical Sketcher

Robotics
Prototyping
Mechatronics

In a one-week mechatronics workshop, we built a two-joint drawing arm inspired by human movement. The idea was to create a device that doesn’t draw perfectly, but with a natural, slightly imperfect line quality. The arm could move freely and draw any shape with a hand-drawn feel.

Our concept imagined generating small postcards: users describe their mood, an AI creates a visual, and the arm draws it. We prototyped the system with an Arduino, stepper and servo motors, laser-cut wood parts, and a simple pen holder — a compact test of human-like machine drawing.

Mechatronics Workshop

— Olivier Brückner (Lecturer)

Felix Strobel
Jasper Schminke
Ismael Simoncini

Coordinated Movement

The two-joint drawing arm moves through predefined coordinates in space. For each coordinate, it knows the exact motor values for both the stepper and the servo. By adjusting these motors, the arm can reach any point in its drawing area and sketch any pattern mapped to those coordinates.
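
To make the idea concrete, a coordinate-to-motor mapping like this can be expressed as a simple array of waypoints. This is only an illustrative sketch: the struct, names, and values are assumptions, not the project's actual code.

```cpp
// Illustrative sketch (assumed names and values): each drawing
// coordinate is stored with the motor values needed to reach it.
struct Waypoint {
  float x, y;         // position in the drawing area (mm, assumed units)
  long  stepperSteps; // absolute step target for the stepper joint
  int   servoAngle;   // servo angle in degrees for the second joint
};

// A short example path; real values would come from calibration.
const Waypoint PATH[] = {
  { 10.0, 20.0, 120, 45 },
  { 15.0, 25.0, 160, 52 },
  { 20.0, 22.0, 210, 48 },
};
const int PATH_LEN = sizeof(PATH) / sizeof(PATH[0]);
```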

Natural Line Quality

Because the arm uses two human-like joints, it isn’t perfectly rigid. This slight instability creates a natural, imperfect line on the paper — every movement is a bit different. When patterns are repeated, these small variations overlap and form an organic texture, which was exactly the effect we aimed for.

Spatial Calibration

We first mapped the arm’s reachable area using test coordinates and motor values. With these reference points and a simple formula in the code, the arm can now calculate the needed motor settings for any coordinate — and move freely to draw any shape.
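
The portfolio doesn't show the formula itself; one plausible version for a two-joint arm is the standard law-of-cosines inverse kinematics sketched below. The link lengths, the function name, and the mapping to motor units are assumptions for illustration only.

```cpp
// Hypothetical inverse kinematics for a two-joint arm: given a target
// point (x, y), compute the two joint angles. Link lengths are assumed.
#include <math.h>

const float L1 = 100.0; // first segment length in mm (assumption)
const float L2 = 80.0;  // second segment length in mm (assumption)

// Returns true if the point is reachable; angles are written in radians.
bool solveIK(float x, float y, float &theta1, float &theta2) {
  // Law of cosines gives the elbow angle.
  float d = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2);
  if (d < -1.0 || d > 1.0) return false; // target is out of reach

  theta2 = atan2(sqrt(1.0 - d * d), d); // "elbow-down" solution
  theta1 = atan2(y, x) - atan2(L2 * sin(theta2), L1 + L2 * cos(theta2));
  return true;
}
```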

Final Prototype

The final prototype was built in one week using laser-cut wooden arm parts, motors, and an Arduino. Mounted on a simple wooden base, the arm can draw directly onto postcards placed beneath it.

Control Setup

We used an Arduino connected to the stepper and servo motors, running our custom code to control the arm’s movements.
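
A rough sketch of what such a setup can look like with the standard Arduino Stepper and Servo libraries. Pin numbers, speeds, and motor values here are assumptions, not the actual wiring or project code.

```cpp
// Minimal control sketch (assumed pins and values) using the
// standard Arduino Stepper and Servo libraries.
#include <Stepper.h>
#include <Servo.h>

const int STEPS_PER_REV = 200;                  // common for small steppers
Stepper baseJoint(STEPS_PER_REV, 8, 9, 10, 11); // driver pins (assumed)
Servo   elbowJoint;

long currentSteps = 0; // tracked absolute stepper position

void setup() {
  baseJoint.setSpeed(60); // rpm
  elbowJoint.attach(6);   // servo signal pin (assumed)
}

void loop() {
  // Walk through a few pre-computed points, then stop.
  const long steps[]  = { 120, 160, 210 };
  const int  angles[] = {  45,  52,  48 };
  for (int i = 0; i < 3; i++) {
    baseJoint.step(steps[i] - currentSteps); // relative move to target
    currentSteps = steps[i];
    elbowJoint.write(angles[i]);
    delay(500); // let the servo settle before the next point
  }
  while (true) {} // drawing finished
}
```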

Pen Mount

We used a simple Stabilo pen, mounted through a laser-cut hole at the tip of the arm. The pen can be removed and replaced easily — just like using a normal drawing tool.

Laser-Cut Components

To work fast, we laser-cut all structural parts from wood using our prepared vector paths. This gave us a quick and precise foundation for building the arm.

Motor Mounting

The motors were fitted into precisely laser-cut holes and held in place through simple press-fit connections — no glue, just clean, straightforward snap-in assembly.

Homing Sensor

We added a simple button-style homing sensor, similar to those used in 3D printers. It lets the arm reset to a defined starting position before moving through its coordinates.
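
Such a homing routine usually comes down to a few lines: step slowly toward the switch until it closes, then treat that position as zero. The pin number and direction below are assumptions.

```cpp
// Hypothetical homing routine: a limit switch on pin 2 pulls the
// input LOW when the arm reaches its reference position.
#include <Stepper.h>

const int HOME_PIN = 2;
Stepper baseJoint(200, 8, 9, 10, 11); // driver pins (assumed)

void setup() {
  pinMode(HOME_PIN, INPUT_PULLUP);
  baseJoint.setSpeed(30); // home slowly for accuracy

  // Step toward the switch one step at a time until it triggers;
  // from here on, this position counts as step 0.
  while (digitalRead(HOME_PIN) == HIGH) {
    baseJoint.step(-1);
  }
}

void loop() {}
```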

Reachable Area

We mapped the arm’s full drawing area and placed test coordinates inside it. These reference points helped the arm understand its space and calculate the correct motor settings for any position.

Radial Navigation

Using these reference points, the arm moves confidently from coordinate to coordinate, navigating the radial space we defined for it.

About

Get to know me


Hi, I'm Felix

I'm an Interaction Design student at HfG Schwäbisch Gmünd. I'm a curious, creative person who loves diving into projects and shaping ideas into something clear and intuitive. I enjoy working through things in a structured, strategic way and figuring out how people interact with products.

Before moving into interaction design, I studied architecture and urban planning in Stuttgart, which shaped my sense of form, space, and systems. I like learning new things, working with others, and talking about good ideas. I'm always open to connect.

Tools

Figma
Adobe Cloud
Notion
Blender
Rhino
WordPress
Coding

Skills

Usability Testing
UX Research
Interaction Mapping
User Research
User Interviews

AI Tools

ChatGPT
Cursor
Copilot
Perplexity
Lovable
UX-Pilot
Gemini

CV

05/2024 Freelancing alongside my studies
03/2024 University of Design Schwäbisch Gmünd | Interaction Design
09/2021 – 07/2023 University of Stuttgart | Architecture and Urban Planning
01/2021 – 06/2021 Talentstudio Stuttgart | Portfolio Course
2020 High School Diploma, Albert-Schweitzer-Gymnasium Leonberg
2020 Community Association Warmbronn | Design Work

My interests

Let's Talk

Contact me

I'm always open to new opportunities and contacts.
Feel free to reach out!

Currently looking for an internship starting March 2026 :)

Imprint

Information according to § 5 TMG

Felix Strobel
Weißensteiner Straße 24
73525 Schwäbisch Gmünd
Germany

Contact

Email: felix.strobel@hfg-gmuend.de
Phone: +49 152 59578453

Responsible for content according to § 55 Abs. 2 RStV

Felix Strobel
Weißensteiner Straße 24
73525 Schwäbisch Gmünd

Copyright

The content and works created by me on these pages are subject to German copyright law. Reproduction, editing, distribution and any kind of exploitation outside the limits of copyright law require my written consent. Downloads and copies of this page are only permitted for private, non-commercial use.

Disclaimer

As a service provider, I am responsible for my own content on these pages according to § 7 Para. 1 TMG in accordance with general laws. According to §§ 8 to 10 TMG, however, I am not obligated as a service provider to monitor transmitted or stored third-party information or to investigate circumstances that indicate illegal activity. Obligations to remove or block the use of information according to general laws remain unaffected by this.

My offer contains links to external third-party websites, the content of which I have no influence over. Therefore, I cannot assume any liability for this third-party content. The respective provider or operator of the pages is always responsible for the content of the linked pages.

Privacy Policy

The use of this website is generally possible without providing personal data. If personal data (for example, name, address or email addresses) is collected on these pages, this is done, as far as possible, always on a voluntary basis. This data will not be passed on to third parties without your express consent.