
Avoiding the early feedback trap

Abstract:

This article explores how early positive signals in new projects, such as encouraging remarks from friends or an initial spike in signups, can mislead founders and teams through optimism, confirmation bias, and reliance on vanity metrics. Feedback from non-target users or small, unrepresentative samples often masks underlying problems; genuine user actions, such as willingness to pay, referrals, or detailed critiques, are far more reliable indicators of progress than surface-level praise. To counter these pitfalls, the article offers practical strategies: selecting the right users for feedback, favoring qualitative insights over metrics when samples are tiny, and using tools like anonymous surveys, devil's advocate reviews, and pre-set decision criteria to reduce bias. Real-world examples show early feedback cutting both ways, from a founder changing course after a single honest critique to teams misled by early buzz and later confronted with disengagement, and illustrate how structured frameworks and minimalist review tables help sort and interpret feedback. The article closes by stressing transparent communication, record-keeping, and ethical disclosure, especially when projects overlap with professional obligations, and leaves innovators with a toolkit for distinguishing fleeting excitement from genuine traction, so that early optimism leads to lasting success rather than disappointment.

Optimism often runs high in the early days of a new project. Every encouraging word feels like proof that things are headed in the right direction. But those first signals—whether a kind remark from a friend or a spike in early signups—can lead you astray. Reading too much into limited feedback is a common trap, and it’s easy to miss warning signs when excitement takes over. I’ve been there myself, both as a founder and as someone juggling side projects with a demanding CTO job.

This article explores why early signals can be so misleading, from confirmation bias to the risks of trusting simple metrics. I’ll share lessons I learned the hard way—like the time I launched a science popularization company and mistook polite encouragement for real demand. You’ll find practical strategies for reducing bias, choosing the right users for feedback, and understanding why genuine user actions matter more than surface-level praise. I’ll also show how simple frameworks and a few honest voices can help you make better decisions, even when the signals are confusing.

Here, you’ll find a toolkit for telling the difference between real progress and wishful thinking—so those first signs of traction help build lasting success, not just a quick flash of hope.

Why early signals can lead you astray

Hope, bias, and the early feedback trap

Optimism has its quirks. When you’re working on a new idea, it’s easy to spot signs of success everywhere—even in a single friendly comment or a nod from a colleague. Excitement makes every small approval feel like a green light. But it’s not just hope at play—our brains are wired to notice what supports our beliefs. This is confirmation bias, and it’s even more powerful when you care deeply about a project. You can become overconfident, seeing weak signals as solid proof.

Confirmation bias leads us to focus on feedback that matches our assumptions while brushing off criticism. If you have only a few early users, every positive remark feels important. This is risky, because in a small sample one nice comment can seem huge, and you might miss red flags. Optimism bias adds to this, especially when early users are being polite. Sometimes the feedback is simply too polite. I once thought a single “great job!” from a friend meant I was onto something big—turns out, he just liked the logo.

Optimism bias is common among founders looking for validation. Early users—often friends or colleagues—might mean well, but sometimes they just want to be supportive. It’s easy to see this as true interest, missing critical product flaws. It’s not unusual for early praise to hide real problems that only pop up later when engagement drops. These psychological traps are only part of the issue. Practical problems with early feedback can also throw you off.

Small samples and vanity metrics

Feedback from friends or colleagues rarely gives the full picture. Their support can create a false sense of security, making a project seem more promising than it actually is. It’s easy to feel encouraged by kind words, only to realize later it was just loyalty, not real demand. Even early numbers can trick you—a handful of signups or likes from familiar faces may feel like traction, but these don’t often last.

Vanity metrics like a spike in signups, a few compliments, or social attention are common in the first days. These numbers feel good but usually hide a lack of real engagement. It’s easy to celebrate early numbers, but without continued activity or real feedback, those numbers lose meaning fast.

Some common vanity metrics:
- Number of signups in the first week
- Social media mentions or shares
- General compliments from acquaintances

In my experience, a spike in early signups rarely translated to active users—one project saw 50 signups in the first week, but only two returned after the initial test. That was a tough pill to swallow, but it taught me to look beyond the numbers.

Examples show how quickly these signals fade. Many products launch to excitement but soon see engagement drop. Teams sometimes celebrate early buzz from launch sites or news, only to find the market wasn’t as interested as it seemed. To avoid these traps, it’s best to use frameworks that help separate real signal from noise—focusing on real user engagement over surface excitement.

Making sense of early signals

Choosing the right users for feedback

Who gives feedback can matter more than how many people give it. Usability research has long suggested that testing with just three to five well-chosen users will surface most major issues, provided those users really match your audience. It’s not about big numbers; it’s about depth and relevance. A small group of aligned users gives insights a big, random crowd can miss: early on, having the right people matters more than having a lot of them.

Why user alignment matters

Feedback from people who don’t match your core users can quickly take you off track. If your testers are just friends or whoever’s around, their views may not reflect your real market. This can lead to wasted time, chasing ideas that don’t fit. Focusing on users similar to your audience makes it easier to learn quickly and change direction if needed. When I was leading an IT services company in Beijing, I learned this the hard way—early feedback from expat friends made me think we were on the right path, but our real customers had very different needs. Teams that work this way often spot big issues early and avoid common mistakes from misunderstood feedback.

The value of rapid, iterative cycles

Short, focused feedback cycles with small, targeted groups lead to faster improvements and less risk. By gathering feedback, adjusting, and looping back with the same group, you learn what works before scaling. This method—used by companies like Microsoft and teams who favor Lean Startup ideas—gets you closer to meeting real user needs. With small groups, the kind of feedback you collect matters as much as who you collect it from.

What feedback to trust with tiny samples

If you have fewer than ten users, the numbers are not that helpful. The best insights are in qualitative feedback:
- Recurring themes
- Shared frustrations
- Unique ideas that keep coming up

If a few users mention the same problem, that’s worth checking out. One-off comments may be interesting but often don’t reflect the bigger picture. At this stage, patterns and stories matter more than statistics.

Why numbers can be misleading

Tiny groups make metrics swing wildly, which makes them easy to misread. Instead of chasing shaky numbers, it’s better to look at user stories, specific problems, and how people react while using the product. A happy sigh or “this finally works” can mean more than a spreadsheet of clicks. Simple tools like handwritten notes or sticky notes make it easy to organize feedback without overcomplicating things.

How to spot real patterns

Putting feedback into groups—like with sticky notes or basic lists—helps separate genuine signals from one-off noise. Sorting feedback is a bit like tending a garden—some ideas need pruning, others just need time to grow.

- Group comments by theme (e.g., confusion, praise, feature requests)
- Count how often each theme comes up
- Highlight feedback that challenges your current direction

For example, if three out of five users say a feature is confusing, that’s a clue to act on. Templates and checklists, common in product and design work, help track these patterns. Once you see main themes, look for real signs of excitement.
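
If you’d rather script this than shuffle sticky notes, the same sorting takes only a few lines. Here’s a minimal Python sketch, assuming each comment has already been hand-tagged with a theme; the sample data and the more-than-once threshold are just illustrations:

```python
from collections import Counter

# Hypothetical comments, each hand-tagged with a theme during review.
comments = [
    {"text": "I couldn't find the export button", "theme": "confusion"},
    {"text": "Love the clean layout!", "theme": "praise"},
    {"text": "Where do I export my data?", "theme": "confusion"},
    {"text": "Could you add a dark mode?", "theme": "feature request"},
    {"text": "The export flow lost me", "theme": "confusion"},
]

# Count how often each theme comes up.
theme_counts = Counter(c["theme"] for c in comments)

# Themes mentioned more than once are candidate patterns; the rest are one-offs.
for theme, count in theme_counts.most_common():
    label = "pattern" if count > 1 else "one-off"
    print(f"{theme}: {count} ({label})")
```

Running this on the sample data flags “confusion” as the only recurring theme, which is exactly the kind of signal worth acting on.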

Signs of real user enthusiasm

Real interest shows in action. When users offer to pay, sign up, or refer others, these count as stronger signals than polite praise. Other signs of real potential:
- Users describing their own pain points in detail
- Requests for updates or next steps
- Willingness to try new features or versions
- Referrals to friends or coworkers

Each of these is stronger than a basic compliment. The more concrete the commitment, the more reliable the signal.

What to look for in user behavior

Users who are truly interested may:
- Share specific problems your product solves for them
- Ask about more involvement or upcoming features
- Volunteer for more tests or pilots
- Refer others who might benefit

A simple checklist can keep things clear and help avoid reading too much into friendly feedback. Tracking willing payers, update requests, and referrals can show which signals truly matter.

Staying focused with a checklist

Recording concrete user actions—like readiness to pay, requests for updates, and referrals—helps sift out empty praise or curiosity. This focuses attention on feedback that actually matters for learning fit with your market. Even with these tools, bias can creep in, so practical steps are needed to stay honest.
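
One way to keep such a checklist from flattering you is to score only concrete commitments and let praise count for nothing. A minimal Python sketch along those lines; the action names and the flat scoring are illustrative assumptions, not a fixed taxonomy:

```python
# Hypothetical checklist of concrete actions; names are illustrative.
CONCRETE_ACTIONS = ("offered_to_pay", "requested_updates", "referred_someone", "joined_pilot")

# Per-user record of what each person actually did.
users = {
    "user_a": {"offered_to_pay": True, "referred_someone": True},
    "user_b": {"requested_updates": True},
    "user_c": {},  # only left a compliment, which scores nothing here
}

def commitment_score(actions: dict) -> int:
    """Count concrete commitments; praise alone scores zero by design."""
    return sum(1 for action in CONCRETE_ACTIONS if actions.get(action))

for name, actions in users.items():
    print(f"{name}: {commitment_score(actions)} concrete signal(s)")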

Practical ways to reduce bias in early feedback

Getting honest answers with anonymous feedback

Anonymous surveys and blind feedback channels help get real answers. When feedback can’t be traced back to the person who gave it, people are much more likely to be honest, even if their opinion isn’t what you want to hear. This brings hidden issues to the surface. Neutral interviewers can boost this effect by removing the pressure to be agreeable. It’s often surprising how much more direct people are when they know it’s private.

Using neutral facilitators or collecting feedback by email or recordings also helps users feel less pressure. When nobody is watching for a reaction, people are more thoughtful and real. This helps catch big issues early. The way you ask questions matters too.

Open-ended questions invite details and stories, not just ratings. Instead of asking, “Did you like the product?” try “What would you change?” or “Was anything confusing or frustrating?” These questions bring out strengths and weaknesses that simple scores miss. Beyond feedback collection, structured disagreement can test assumptions.

Challenging assumptions with devil’s advocate reviews

Bringing in someone to question your assumptions or doing a pre-mortem can help. Let someone argue against your current thinking or imagine reasons the project might fail. This highlights risks and weak spots. Scenario tests—like wondering what you’d do if your best feedback was a fluke—reveal if your direction is solid.

Scenario testing means asking, “If our top feedback vanished, what would that mean?” This exposes holes in your thinking and keeps you honest. Prompting yourself with questions like, “What would make us change our mind?” helps guard against only seeing what you want to see. Simple checklists of these questions can keep things balanced. Setting clear criteria ahead of time makes decisions safer.

Making decisions with clear, pre-set criteria

Setting specific goals before testing—like minimum signups or user actions—makes decisions more grounded. When hard choices come up, these rules stop you from moving the finish line. A practical approach is to write down the minimum behaviors or actions that count as progress and stick to them, even if results fall short of your hopes. These cutoffs make decisions easier.

With clear rules, it’s simpler to decide to continue, change course, or stop. Even when the result is unclear, having set criteria helps you act rather than stall or rationalize. Simple templates or tables can help track these decisions.

Basic review checklists—a table or spreadsheet listing criteria, outcomes, and next steps—keep the focus where it should be. Tracking these points helps you spot when goals are drifting and helps avoid interpreting data too optimistically. I learned this lesson after wasting weeks chasing a feature that only one user wanted—if I’d set clearer criteria, I could have saved myself a lot of time.
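
Writing the cutoffs down as data, before the test runs, makes them hard to renegotiate afterwards. A minimal sketch of that idea; the thresholds and decision labels are invented for illustration:

```python
# Pre-registered cutoffs, written down before the test starts.
CRITERIA = {
    "returning_users": 5,  # at least 5 users come back after week one
    "paying_intents": 2,   # at least 2 users ask how to pay
}

def decide(results: dict) -> str:
    """Compare observed results to the pre-set cutoffs instead of moving the finish line."""
    met = [name for name, threshold in CRITERIA.items() if results.get(name, 0) >= threshold]
    if len(met) == len(CRITERIA):
        return "continue"
    if met:
        return "pivot: dig into the criteria that failed"
    return "stop"

# Example: decent retention but almost no willingness to pay.
print(decide({"returning_users": 6, "paying_intents": 1}))  # -> pivot: dig into the criteria that failed
```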

When feedback changed everything

One user, big impact

Sometimes, one honest comment is more valuable than lots of polite compliments. When I launched my science popularization company, I spent months building based on assumptions. Then, our first real user sent a long, thoughtful critique. The feedback stung at first, but it saved us months on the wrong path and opened up better options. Sometimes, one honest voice is worth more than a room of applause.

Other times, unexpected validation means just as much. While leading a multicultural team in Beijing, we received an unsolicited testimonial from a local expert who truly understood the problem we were solving. This wasn’t just polite approval—it was a real sign that our product mattered to people outside our usual circle. That kind of feedback gave us the confidence to keep building, even when progress felt slow.

Not all early feedback means pushing on or pivoting. Sometimes, early feedback makes it clear that quitting is best. For instance, the first paying user might send tough critiques that reveal fundamental flaws. Instead of ignoring them, I’ve learned to step back and decide to quit before wasting resources. It might not be what you hope, but in the long run, it saves more effort. Of course, not all early signals are this obvious—some are more confusing or misleading.

Lessons from signals that led astray

Even well-known products have been fooled by early excitement. Some launches bring a burst of signups or news coverage, giving the impression of strong demand. But excitement often drops off, engagement fades, and the reality settles in—the early numbers didn’t show true interest. Taking a closer look at who sticks around and why gives a clearer picture. Real traction takes more than a good launch day.

Big companies face these problems too. Instagram started as a broader product, and Dropbox’s first users weren’t always its true market. Their teams learned that early users from the wrong crowd could hide the need for major changes. Only after checking feedback from real target users did they find the right path.

False positives aren’t the only danger—sometimes, early negative feedback also deserves another look. Superhuman’s team almost quit early because of mixed feedback, but looking at the comments more carefully revealed a few passionate users. That insight helped them focus and grow into a good fit for their market.

Leaving a stable CTO role in Berlin to pursue a side project was both thrilling and terrifying—especially when early feedback was mixed. I remember staring at a spreadsheet of signups, wondering if I’d made a huge mistake. Sometimes, a roadblock means it’s time to look deeper, not just turn back. To avoid confusion, using review templates and checklists makes it easier to see what feedback really means.

Checklist for confident next steps

Minimalist review table

Organizing and sorting feedback helps avoid misreading signals, especially when early feedback is mixed. A minimalist review table brings order. List columns like feedback description, where it came from, how often it comes up, if it matches your original idea, action needed, and notes. For example:

| Feedback | Source | Frequency | Hypothesis Alignment | Action | Notes/Opportunity |
|---|---|---|---|---|---|
| Feature X confusing | Beta user | 3 of 8 | Contradicts | Investigate UI | May need onboarding |
| Wants mobile version | Target user | 2 of 8 | Neutral | Consider scope | Potential feature add |
| “Great job!” | Friend | 1 of 8 | Supports | No action | Vanity signal |

Keeping things simple makes it easier to see patterns—good or bad—without being distracted by surface praise.

Start by gathering every comment, then:
- Group by feedback type (bug, feature request, praise, confusion)
- Count how often each theme comes up
- Mark feedback that challenges your current direction

This filters out noise, letting you see where real chances or problems may be hiding.

It also helps to note the source of each comment. Mark when feedback comes from non-target users or lines up too neatly with your own hopes. Ask yourself (the sketch after this list turns these checks into a rough ranking):
- Is the feedback from a target user?
- Does it challenge or support your main idea?
- Is it a one-off or a recurring issue?
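
These three questions can even drive a crude, automatic ordering of your review table, pushing vanity signals to the bottom. A minimal sketch over rows like the table above; the field names and weights are arbitrary illustrations, not a validated scoring model:

```python
# Hypothetical rows mirroring the review table above.
rows = [
    {"feedback": "Feature X confusing", "source": "beta user", "target": True,
     "count": 3, "alignment": "contradicts"},
    {"feedback": "Wants mobile version", "source": "target user", "target": True,
     "count": 2, "alignment": "neutral"},
    {"feedback": "Great job!", "source": "friend", "target": False,
     "count": 1, "alignment": "supports"},
]

def weight(row: dict) -> int:
    """Recurring feedback from target users that challenges the hypothesis ranks highest."""
    score = row["count"]
    score += 2 if row["target"] else -2  # discount non-target voices
    if row["alignment"] == "contradicts":
        score += 2                       # reward dissent as a bias check
    return score

for row in sorted(rows, key=weight, reverse=True):
    print(f"{weight(row):>3}  {row['feedback']} ({row['source']})")
```

On the sample rows, the friend’s “Great job!” lands last, which is exactly where a vanity signal belongs.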

When I first balanced a side project with my CTO role in Berlin, I set clear boundaries to avoid burnout—tracking not just user feedback, but also my own energy and time. If I noticed my evenings disappearing or my stress rising, I knew it was time to pause and reassess. Financial realities matter too: I kept a close eye on expenses and only invested what I could afford to lose.

Once you’ve reviewed your signals, clear communication of results is the final step.

Sharing ambiguous results

When early results are unclear, share them honestly and soon, using written records or shared notes for clarity. This avoids confusion, especially when you’re part of a team. Keeping records, even when things are not clear, supports transparency and protects your reputation.

For side projects, early disclosure to your employer is also smart. If your project might overlap with your job, or if a conflict could come up, it’s wise to share info early. During my time in Berlin, I learned the importance of double-checking my employment contract before launching a side project, especially when company policies around moonlighting were unclear. This prevents misunderstandings around ownership or company rules and shows respect for ethical collaboration.

Following policies and keeping all communications documented helps cover your bases. Saving emails, meeting notes, and forms protects your work and your position. With these habits, you can move forward with more confidence, even when early signals don’t give a clear answer.

Spotting early progress feels exciting, but it’s easy to let excitement and bias cloud your view. A few nice words or a jump in signups may look promising, but these signals rarely show the full picture. By focusing feedback on real target users, tracking real actions rather than just praise, and using simple checklists, you can make smarter decisions. Honest feedback channels and clear, pre-set goals keep things grounded and help projects stay on track. Every idea deserves a fair shot, especially once genuine early signals have been sifted from wishful thinking.

And if you ever feel overwhelmed by all the signals, remember the garden: some ideas need pruning, others just need time to grow. And sometimes you just need to step back and enjoy the process, maybe with a bit of gardening or carpentry on the weekend to clear your head.

