
  • As a developer:

    • I can jot down a bunch of notes and have AI turn them into a reasonable presentation, documentation, or proposal
    • Zoom has an AI agent that’s pretty good at summarizing a meeting. It usually needs only minor corrections, and you can send it out much faster than if you took notes yourself
    • for coding I mostly use AI like autocomplete. Sometimes it’s able to complete entire code blocks
    • for something new I might have AI generate a class or similar, then use it as a first draft that I make work

  • Not really.

    A linter in the build pipeline is generally not useful, because most people won’t give its results time or priority. You usually can’t fail the build for lint issues, so all it does is fill logs. I usually configure a linter and prettifier in a pre-commit hook instead, to shift that left. People are more willing to fix their code in small pieces as they try to commit.
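
    Something like this minimal sketch of a plain git hook (the tool choices here, ruff and black, are placeholders for whatever linter and prettifier the repo actually uses):

    ```sh
    #!/bin/sh
    # .git/hooks/pre-commit (sketch): check only the staged files, so people
    # fix issues in small pieces as they commit instead of ignoring CI logs.
    # ruff/black are placeholder tools; swap in the repo's own linter/formatter.
    files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$')
    [ -z "$files" ] && exit 0          # nothing relevant staged
    ruff check $files || exit 1        # linter
    black --check $files || exit 1     # prettifier (check mode: fail, don't rewrite)
    ```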

    But this is also why SonarQube is a key tool. Its scanners are lint-like, and you can even import some lint output. The important part is that it tries to prioritize the findings, score them, and enforce a quality gate based on them. I usually can’t fail a build for lint errors, but SonarQube can: if there are too many, if they’re too high-priority, or if they’re security related.
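
    Concretely (a sketch assuming a Maven project with a configured SonarQube server; sonar.qualitygate.wait is the standard scanner parameter for this):

    ```sh
    # CI build step (sketch): run the analysis and block on the quality gate.
    # Without sonar.qualitygate.wait the scanner uploads results and returns
    # success immediately; with it, this step fails when the gate fails.
    mvn verify sonar:sonar -Dsonar.qualitygate.wait=true
    ```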

    But this is not the same as a code review. If an AI can use the code base as context, it should be able to check new code for consistency and maintainability with the rest of the code. For example, I had a junior developer blindly follow the AI into using a different mocking framework than the rest of the code, for no reason other than that it may have been more common in the training data. A code review AI should be able to notice that (sketched below).

    Maybe this is too advanced for current AI, but the same guy blindly followed AI into adding classes that already existed. They were just different enough that SonarQube didn’t flag them as duplicate code, but an AI ought to be able to summarize their functionality and realize they were the same. Or I wonder if AI could do code organization? Junior guys spew classes and methods everywhere without any effort to organize like with like so that someone can maintain it all. Or how about style? I hope to never revisit the style wars, but when you’re modifying code you really need to follow the style and naming of what’s already there. Maybe an AI code review can pick up on that.
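
    To make that mocking example concrete (hypothetical code, not from the actual project; OrderRepo is a made-up type): the suite already stubbed with Mockito, and the AI-suggested test dragged in EasyMock to do the exact same thing:

    ```java
    // Hypothetical test file showing the inconsistency an AI reviewer should
    // flag: the first test stubs with Mockito like the rest of the suite; the
    // AI-suggested second test pulls in EasyMock for no reason.
    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class OrderLookupTest {
        interface OrderRepo { String find(long id); }   // made-up type under test

        @Test
        void existingSuiteStyle_mockito() {
            OrderRepo repo = mock(OrderRepo.class);
            when(repo.find(42L)).thenReturn("order-42");
            // ... exercise code against the stubbed repo
        }

        @Test
        void aiSuggestedStyle_easymock() {   // same behavior, second framework
            OrderRepo repo = createMock(OrderRepo.class);
            expect(repo.find(42L)).andReturn("order-42");
            replay(repo);
            // ... exercise code against the stubbed repo
        }
    }
    ```

    Two dependencies, two sets of idioms, zero benefit. Exactly the kind of thing a reviewer with the whole repo as context should catch.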


  • Shame. There was a time when people dug out of their own messes; I think you learn more, faster.

    Yes, that’s how we became senior guys. But when you have deadlines that you’re both on the hook for and they’re just floundering, you can only give them so much opportunity. I’ve had too many arguments with management about letting them merge, and I’m not letting that ruin my code base.

    Speaking of meaningless metrics, how many people ask you for Lines Of Code counts, even today?

    We have a new VP collecting metrics on everyone, including lines of code, number of merge requests, times per day using AI, and days per week in the office vs. at home.



  • Code reviews seem like a good opportunity for an LLM; it’s the kind of task they should be good at. I’ve actually spent the last half hour googling for tools.

    I’ve literally spent a month in reviews with this junior guy on one stupid feature, and so much of it has been so basic. It’s a combination of him committing AI slop without understanding or vetting it, and being too junior to consider maintainability or usability. It would have saved so much of my time if an AI could have handled some of those review cycles without me.



    For some of us that’s more useful. I’m currently playing a DevSecOps role, and one of its defining characteristics is that I need to know all the tools. On Friday I was writing some Java modules, then some Groovy glue, then spent the afternoon writing a Python utility. While I’m reasonably good at jumping among languages and tools, those context switches are expensive. I definitely want AI help with that.

    That being said, AI is just a step up from search or autocomplete; it’s not magical. I’ve had the most luck with it generating unit tests, since they tend to be simple and repetitive. (That’s also a major place for the juniors to screw up: AI doesn’t know whether the slop it’s pumping out is useful. You need to guide it and understand it, and you really need to cull the dreck.)
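
    For the flavor of test I mean (an illustrative example; Slugifier is a made-up utility standing in for the real class under test):

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // Made-up utility standing in for the real class under test.
    class Slugifier {
        static String slugify(String s) {
            if (s == null) throw new IllegalArgumentException("null input");
            return s.trim().toLowerCase().replaceAll("\\s+", "-");
        }
    }

    // The simple, repetitive cases AI generates well; a human still vets them.
    class SlugifierTest {
        @Test void lowercasesInput() { assertEquals("hello", Slugifier.slugify("HELLO")); }
        @Test void replacesSpaces()  { assertEquals("a-b-c", Slugifier.slugify("a b c")); }
        @Test void trimsWhitespace() { assertEquals("x", Slugifier.slugify("  x  ")); }
        @Test void rejectsNull() {
            assertThrows(IllegalArgumentException.class, () -> Slugifier.slugify(null));
        }
    }
    ```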


  • I’m seeing exactly the opposite. It used to be that junior engineers understood they had a lot to learn. With AI, though, they confidently attempt entirely wrong changes. They don’t understand how to tell when the AI goes down the wrong path, they don’t know how to fix it, and it takes me longer to clean up after them.

    So far, AI overall creates more mess, faster.

    Don’t get me wrong, it can be a useful tool, but you have to think of it like autocomplete or internet search. Just like those tools, it provides results, but the human needs judgement to figure out how to apply the appropriate ones.

    My company wants metrics on how much time we’re saving with AI, but:

    • I have to spend more time helping the junior guys out of the holes AI dug for them, making it a net negative
    • it’s just another tool; there’s no defined task or set time it replaces. If you had to answer how much time autocomplete saved you, could you give any sort of meaningful answer?



  • NJ is fun to tease:

    • its shape seems perfect for the highway
    • it’s an ugly part of NYC
    • and an ugly part of Philadelphia
    • and Atlantic City went way downhill (admittedly I haven’t been there since Trump was bankrupting casinos)

    But actually, yes. One of my buddies from college was from a very nice part of NJ, exactly like you describe. Well worth visiting, and it really shows off what a great place NJ can be.