It takes more than great code
to be a great engineer.

Soft Skills Engineering is a weekly advice podcast for software developers.

The show's hosts are experienced developers who answer your questions about topics like:

  • pay raises
  • hiring and firing developers
  • technical leadership
  • learning new technologies
  • quitting your job
  • getting promoted
  • code review etiquette
  • and much more...

Soft Skills Engineering is made possible through generous donations from listeners. Support us on Patreon



Recent Episodes

Latest Episode

Episode 478: Can you coach self-awareness and my boss is an LLM


In this episode, Dave and Jamison answer these questions:

  1. Can you coach self-awareness? I manage someone who seems to believe their skill set is on par with their teammates’, despite the constant PR feedback they receive about the same issues over and over, the extra attention they are regularly given to help them overcome coding challenges, and the PIP they are currently on to address these issues (and others). What are some approaches I could take to help steer them toward a better understanding of their areas for growth when explicit measures don’t seem to get through?

  2. I work at a small 10-person startup. The company has absolutely nothing to do with AI, but one of the founders has gone full evangelist. He genuinely believes AGI is arriving this year and that there isn’t a single job, task, or process where an LLM isn’t the obvious tool.

    Day in, day out, he’s posting links to random AI products with captions like “looks interesting 👀”. It’s like Clippy got a16z funding, moved to Shoreditch, and now spends his days flogging us apps we didn’t ask for. He also insists we “use AI more in development,” despite not understanding development in the slightest.

    The routine is always the same:

    1. He asks the engineering team how to achieve some goal (always involving an LLM).
    2. We give a sensible answer, weighing complexity, cost, feasibility.
    3. He comes back with a massive pasted transcript: “here’s what ChatGPT thinks.”
    4. We pick out what’s actually useful, quietly bin the nonsense.
    5. He takes our response, shoves it straight back into ChatGPT, and returns with another transcript: “here’s what ChatGPT thinks.”

    This has been going on for months. At this point, he’s basically a human middleware layer for ChatGPT — no analysis, no original thought, just endless copy-paste recursion. I’m genuinely worried he’s outsourcing his entire thinking process to a chatbot and slowly losing the ability to engage with ideas on his own.

    How do I tell him — politely but firmly — that this is both rude and a bit tragic? And, half-serious: is there a prompt injection I can use to jailbreak my founder back into being an actual founder rather than a ChatGPT relay bot?


Episode 477: Four months and I already hate my job and grumpy and fuzzy


In this episode, Dave and Jamison answer these questions:

  1. Hey guys,

    I have been working for four months at my job and I already don’t like it.

    This is my first job out of college and I work as a C# backend engineer for a small B2B SaaS company. I really think this company is a dead end. There is a lot of technical debt, plenty of antipatterns, and no automated testing whatsoever. Most of our time is spent manually debugging, but no one wants to refactor.

    I’m already thinking about working somewhere else. However, it took me a while to get this job, and I don’t think the market has gotten any better since. I’m trying to decide whether I should focus on applying to jobs again or work on a bunch of side projects and open source to stand out. On one hand, I can learn new technologies on my own to make myself stand out for my next job; on the other hand, I feel like as long as I stay at this company I am wasting time, since I’m not learning from my job. I want to move toward distributed backend engineering in Java anyway, but I’m not sure how to go about it.

  2. Listener Ghani asks,

    “I’m a mid-level software engineer who has trouble communicating with my engineering manager and product manager when there is unclear or missing information about an assignment/story/project.

    They answer with a hostile or dismissive tone, or with a non-answer (e.g., it’s on the Jira card, the epic, etc.). Later, when they do have the information, they course-correct harshly.

    My impressions were:

    • they don’t have the information at the time
    • they expect engineers to make decisions
    • they expect engineers to know something they don’t (e.g., architecture, infrastructure, past decisions, plans, etc.)

    I really want to find a way for us to have a safe exchange of information. How can I do this?”


Episode 476: How much help is too much help and guarding against slop


In this episode, Dave and Jamison answer these questions:

  1. Two junior engineers recently joined my team, and I’ve been tasked with onboarding them. This is the first time I’ve been responsible for junior devs, and I’m struggling with how to coach them up. For context, we’re a small engineering team where self-sufficiency is highly valued; processes and overhead are minimal, and we have a real bias for action. As such, when they ask me for help, my intuition is often to respond “Keep looking, figure it out!”; in my mind, walking them to the answer would be antithetical to our culture and set the wrong expectation for how they should go about solving problems. This is especially the case when they throw their hands up and say “Help, I’m stuck, what do I do”. That said, I don’t want to be so unhelpful that it frustrates them or legitimately impedes their progress. I’ve also noticed them sometimes going “behind” me to ask other engineers for help, which makes me think I am being too unhelpful. The number one question I ask myself is: how much help should I be giving them? How do I find the right balance here?

  2. I’m seeing more and more AI slop in my org’s code base that I fear will have meaningful impact on the integrity and maintainability of the application we deliver to customers. Everyone talks the talk of “Ultimately, it’s the implementer’s responsibility to audit and understand the code they ship,” but few seem to walk the walk. How can I best work with my team to address this, especially in a context where leadership is prioritizing velocity?