Case study
OutSystems pilot: Soft skills practice with developers
By WiseWorld · OutSystems pilot, developers, 2 weeks

A two-week WiseWorld pilot inside OutSystems put soft skills into the hands of developers, on their own time, on their own screens. Here's what the data tells us about engagement, the skills that came up, and how the team felt about it.
The story behind the data
The question we wanted to answer
Will developers, a famously skeptical, sprint-pressured audience, actually show up, day after day, to train soft skills if practice is short, AI-driven, and entirely self-directed?
Soft skills are the new bottleneck for technical communities. Career growth, leadership and client work all hinge on things no IDE can teach. We wanted to know if the right format could finally make voluntary practice stick.
How we set it up
Who
- Developers from the OutSystems community
- 10 active participants
How
- 2-week window, self-paced
- No scheduled sessions or homework
- WhatsApp + WiseWorld app
What we measured
- Engagement and habit
- Skills practiced and assessed
- Honest user feedback
What happened
The participants showed up and stayed. They built a daily habit, tracked their own streaks, and told us, in their own words, that it was worth it.
~0.85 episodes per active user, per day
Top streak: 14 days. Cohort average: 7.4 days
86% understood their own soft skills better
100% saw value for the wider community
The next sections show how each of those numbers came to be.
All numbers in this story are aggregated across all participants. No individual usernames, identifiers, or personal data are shared.
Engagement in numbers
In just two weeks, the participants built a meaningful body of practice. The numbers below aren't views or clicks; they are full conversations with AI characters, with real soft-skill assessments behind them.
- AI-generated worlds and scenarios entered
- Conversations completed inside those episodes
- Soft-skill data points captured by the AI
What that means in plain terms: on average, every dialogue produced almost two distinct soft-skill assessments. Practice wasn't cosmetic; every interaction generated learning data the team could see and act on.
Building a daily habit
The pilot was self-paced. Nobody was told to log in. And yet, the participants kept coming back, almost every workday. The strongest signal wasn't total activity. It was the streaks.
~0.85 episodes / day
Per active user. Nearly an episode every working day, without reminders or pressure; that kind of voluntary continuity is rare for any form of training.
14-day top streak
Two users kept a perfect streak across the full pilot window. Some wrote in unprompted when their streak broke, asking, Duolingo-style, if it could be recovered.
Top streaks per user
Consecutive active days, across all participants.
- User 01: 14 days
- User 02: 14 days
- User 03: 11 days
- User 04: 10 days
- User 05: 5 days
- User 06: 4 days
- User 07: 4 days
- User 08: 4 days
- User 09: 4 days
- User 10: 4 days
All identifiers anonymized. Cohort average: 7.4 days.
Where leadership showed up first
As the participants talked their way through scenarios, the AI assessed dozens of soft skills in the background. Across 2,548 assessments, three skills came up far more than any others, and two of them were leadership traits. Developers aren't just shipping code; they're leading.
times assessed
Initiative
Stepping up without being asked, proposing ideas, owning problems, moving things forward.
times assessed
Confidence
Speaking with conviction in code reviews, stand-ups and tough conversations.
times assessed
Coordination
Aligning work across people and timelines, the unsung skill behind shipping together.
Numbers = times each skill was detected and scored across all participants.
The story behind the numbers: developers don't just write code; they take initiative, speak up with confidence, and coordinate with others. WiseWorld made those leadership and teamwork moments visible for the first time.
Three learning journeys
When given the freedom to pick what to work on, the participants chose across three very human themes. Each one came with its own goal - and its own quiet barrier holding it back.
Inner Glow Up
Goal
Boost energy and confidence. Sharpen focus, let go of noise, and cultivate a positive mindset.
Common barrier
Overwhelm, self-doubt, overthinking, and fear of judgment.
Connection Superpowers
Goal
Expand networks, deepen conversations, improve social skills, and build meaningful relationships.
Common barrier
Reading social cues, fear of rejection, and keeping connections alive.
Career Rocket Fuel
Goal
Lead better, think bigger, communicate with confidence, and navigate career transitions.
Common barrier
Not feeling heard, lack of motivation, stress, and fear of failure.
What developers practiced
Behind every learning goal sits a specific skill the AI helps you build. These three came up most often as targets the participants wanted to actively grow.
times assessed
Social Perceptiveness
Reading the room, noticing what others feel and need before they say it.
times assessed
Persuasion
Getting buy-in for ideas without pushing, bringing people along, not over them.
times assessed
Confidence
Speaking up, owning a stance, and trusting your judgment under pressure.
Numbers = times each skill was selected as a practice target.
Notice the pattern: two of the top three are communication skills. Even in a developer-heavy team, the appetite for human, conversational growth is strong, if the format respects their time.
Making skills visible
Every conversation a developer had fed their personal PowerWheel - a visual map of 44 soft skills. As the pilot progressed, the wheel filled in. People could finally see their own progress, and team leaders could see strengths and gaps at a glance, without subjective scoring.
The wheel below shows the cohort's average PowerWheel, no individual scores, just the shared shape of the team's soft-skill profile after two weeks.
- Replaces vague self-assessment with concrete, visual data
- Each user owns their own wheel, privacy stays with the individual
- Sharing was opt-in: many developers chose to share theirs
- L&D teams see aggregated patterns, not individual scores
- Teamwork
- Communication
- Cognitive Abilities
- Work Ethic
- Problem Solving
- Leadership
Voice of the participants
We surveyed the participants at the end of the pilot. The numbers below are aggregated from anonymous responses, no individual answers, just the shared signal.
Better self-understanding
"Yes" or "Partially" (n = 7)
Felt actual improvement
In at least one skill (n = 8)
See value for the community
"Yes" or "Maybe", zero said no (n = 8)
Likelihood to recommend
3-week pilot (n = 8)
Modal score 8/10 with ratings clustered in the upper half.
The signal in plain terms: most developers felt they understood themselves better. A clear majority felt actual improvement. And every single respondent saw value in offering this to their wider community.
What developers loved
We asked the participants what stood out. The patterns were consistent - and they tell us a lot about what makes soft skills training actually stick with engineers.
World generation & early stages
Building their own world to learn in. The opening days felt fresh and full of possibility.
Skill discovery & visualization
Seeing exactly which skills each conversation touched, with the chart to back it up.
Creating a character
Bringing their own avatar to life made the learning personal, not corporate.
Watching the PowerWheel evolve
The most loved feature. Visible progress, day after day, was deeply motivating.
Takeaways for HR & L&D
Two weeks. Ten developers. A clear pattern of what works.
- Voluntary daily practice is possible, even with developers, even mid-sprint. ~0.85 episodes / day, sustained.
- Streaks turn use into ritual. Two users hit a perfect 14-day run; some wrote in unprompted when their streak broke.
- Visible progress (the PowerWheel) is the single biggest motivator, the most-loved feature among participants.
- Soft skills show up in engineering work too: initiative, confidence, coordination led the assessment signal.
- Participants signed off on the experience: 86% understood themselves better, 100% saw value for their community, modal recommendation 8/10.
- Aggregate data gives L&D real signal without compromising individual privacy.