Ew. Most people I’ve seen w/ long nails type with the pads of their fingers, with absolutely terrible typing posture.
Mama told me not to come.
She said, that ain’t the way to have fun.
As someone who is turned off by long nails (indicates they may be high maintenance), I enjoy reading about what others see in them.
My SIL has long nails, and watching her use her phone and laptop is just fascinating. So many problems I just never even considered…
What are you eating, Kevlar?
Exactly. There are a ton of stupid products out there, and ecosystems around those stupid products, and I think that’s awesome. Variety is the spice of life after all. For example:
Yet each of those has facilitated variety. Cars are an expression of what we value, hair styles are a huge part of our identities, and plus-sized product lines can build confidence and have created a market all their own. I certainly won’t ever understand a ton of the products that exist, but I like that those products exist, because it means that there’s a ton of variety in how we live our lives.
So yeah, keep making weird solutions to unnecessary problems. But at the same time, let’s try to do it in a way that doesn’t destroy our planet.
But if you have the nails, then you need something to make it easier to type, assuming your job involves a lot of typing. Just because the need was created by fulfilling a want doesn’t make it less of a need, because at the end of the day, anything could be reduced down to wants instead of needs, and that’s not helpful.
Yup. I actually only take a 50% workload because half of my time is spent in random meetings telling people no, or giving obscenely high estimates that essentially amount to “no.” The other half of my time is fixing problems from when they didn’t listen when I said “no.”
Such is life I guess. But occasionally, I get to work on something new. And honestly, that’s fine, I’ve long since stopped caring about my name showing up on things.
Yeah, it makes no sense. AI is at best a replacement for junior devs and interns.
My boss comes to me saying we must finish feature X by date Y or else.
Me:
We’re literally in this mess right now. Basically, the product team set out some goals for the year, and we pointed out early on that feature X was going to have a ton of issues. Halfway through the year, my boss (the director) tells the product team we need to start feature X immediately or we risk missing the EOY goals. The product team gets all the pre-reqs finished about 2 months before EOY (our “year” ends this month), and surprise surprise, there are tons of issues and we’re likely to miss the deadline. The product team is freaking out about their bonuses, whereas I’m chuckling in the corner, pointing to the multiple times we told them it would have issues.
There’s a reason you hire senior engineers, and it’s not to wave a magic wand and fix all the issues at the last minute; it’s to tell you when your expectations are unreasonable. The process should be:
If you skip some of those steps, you’re going to have a bad time.
Yup, but with good headsets costing way more than good monitors and generally needing even better GPUs, I’m just not interested. Yeah, the immersion is cool, but at current prices and with the current selection of games, the value proposition just isn’t there. Add in the bulk, and it’ll probably sit on my wishlist for a while (then again, the Bigscreen VR headset looks cool; I’d just need a way to swap pads so my SO/kids can try it).
So yeah, maybe in 5-10 years it’ll make more sense. It could also happen sooner if consoles really got behind it, because they’re great at bringing down entry costs.
All software has bugs. I prefer the human-generated bugs, they’re much easier to diagnose and solve.
Can confirm. At our company, we have a tech debt budget, which is really awesome since we can fix the worst of the problems. However, we generate tech debt faster than we can fix it. Adding AI to the mix would just make tech debt even faster, because instead of senior devs reviewing junior dev code, we’d have junior devs reviewing AI code…
As a senior dev, this sounds like job security. :)
Eh, I’m a senior dev, and I don’t ban it (my boss, the director, does that for me lol; he’s worried about company secrets leaking).
In fact, we had an interview for a senior dev position, and the applicant asked if they could use AI. I told them to use whatever tools they normally would for development. It shouldn’t come as a surprise that they totally botched the programming challenge because of it (introduced the same bug twice, then said they were very confident in the correctness of the code…), and that made it much easier to filter them out of our hiring pool. If you’re going to use a tool in an interview, you’d better feel confident with it. We budget 30 minutes for our challenges, and our seniors generally finish in under 20; it took this applicant more than the allotted time to get the code to actually run properly (and that’s with us pointing out some of the mistakes the AI generated). If they had solved the problem significantly faster than our other applicants, I would’ve taken that to my boss and had the team experiment with it.
But no, I haven’t seen an actually productive use of AI for software development beyond searching for docs online (which you can totally do w/ Bing or Google w/o involving our codebase). You may feel more productive because more code is appearing on the screen, but the increase in bugs likely reduces overall productivity. We’re always looking for ways to improve, but when I can solve the same problem in my bare-bones editor (vim) faster than my more junior colleagues can with their fancy IDEs, I really don’t think AI is going to be the thing that improves our productivity; actually understanding the logic will. If someone demonstrates that AI does save time, I’ll try it out and campaign for it.
Anyway, that’s my take as someone who has been in the industry for something like 15 years. Knowing your tools is more important, IMO, than having more tools.
I interviewed someone who used AI (Copilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and the AI spat out the same mistake, which the developer again didn’t notice.
AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you’re doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It’ll do a decent job with certain prompts (e.g. “generate me a function/class that…”), but you’re going to need to go through it line by line and make sure it’s actually doing the right thing. I find writing code to be much faster than reading and correcting code, so I don’t bother w/ AI, but YMMV.
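To make that concrete, here’s a hypothetical illustration (my own made-up example, not the actual code from that interview) of the kind of plausible-looking bug an assistant can hand you in a basic algorithm, and why the line-by-line review matters:

```python
# Hypothetical example: a binary search that looks right at a glance
# but silently misses the last remaining candidate.
def binary_search_buggy(items, target):
    lo, hi = 0, len(items) - 1
    while lo < hi:  # bug: should be lo <= hi
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_fixed(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:  # inclusive bound so the final candidate gets checked
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search_buggy([1, 3, 5], 5))  # -1, silently wrong
print(binary_search_fixed([1, 3, 5], 5))  # 2
```

The buggy version passes a quick skim; you only catch it by reading the loop condition carefully, which is exactly the review work that eats the time the tool supposedly saved.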
An area where it’s probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say, “find the docs for a function in library X that does…” would be genuinely handy. I know what I want; I just may not remember the name or the module, and I certainly don’t remember the argument order.
I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn’t recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I’ve had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.
I absolutely think SW engineering is going to be a great career long-term, I just can’t encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you’re passionate about it, you’re going to ignore whatever I say anyway, and you’ll succeed. But if my recommendation changes your mind, then you probably aren’t passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.
I’m not worried at all about my job or anyone on my team; I’m worried for the next batch of CS grads who ChatGPT’d their way through their degree. “Cs get degrees” isn’t going to land you a job anymore; passion for the subject matter will.
VR
Yeah, I think it’s ripe for an explosion, provided it gets more accessible. Right now, your options are:
I’m unwilling to do either, so I’m sitting on the sidelines. If I can get a headset for <$500 that works well on my platform (Linux), I’ll get VR. In fact, I might buy 4 so I can play with my SO and kids. However, I’m not going to spend $2k just for myself. I’m guessing a lot of other people are the same way. If Microsoft or Sony makes VR accessible for console, we’ll probably see more interest on PC as well.
People are not upgrading because they don’t see the need
Exactly. I have a Ryzen 5600 and an RX 6650, and it basically plays anything I want to play. I also have a Steam Deck, and that’s still doing a great job. Yeah, I could upgrade and get a little more of everything, but I can already play basically everything I care about (hint: not many recent AAA games in there) on reasonable settings on my 1440p display. My SO has basically the same setup, but with an RX 6700 XT.
I’ll upgrade when either the hardware fails or I want to play a game that needs better hardware. But I don’t see that happening until the next round of consoles comes out.
non-gaming laptop
It’s perhaps more important for non-gaming laptops, because if you’re buying a gaming laptop, you’re probably also buying a higher-end monitor (so USB-C/Thunderbolt). For a regular laptop, having HDMI means you can connect to a TV and play a video, share a screen, etc. You’re more likely to do that with a more portable laptop than a bulky gaming laptop.
The alternative is needing to bring a dongle everywhere. On a non-gaming laptop, I only really need like three ports: USB-A for older stuff, USB-C for dock and power, and HDMI for TVs and monitors. An extra USB-A would be nice, but hardly necessary (I’d prefer an ethernet port, but I think that ship has sailed).
Here are the things I use most frequently:
So outside of charging and plugging into the dock at my desk, I have zero use for USB-C, which means a single USB-C port is enough. I have never used more than two USB-C ports at once (and that only happens at work, when I’m charging the laptop while plugged into the USB-C monitor), and only because my work monitor doesn’t provide enough power to charge my laptop.
Idk, if you put all of the blame on the scammer, there’s less reason to change your behavior to prevent it next time. That’s fine if it helps you get past the initial anxiety and start making progress toward a solution, but that feeling of being stupid really motivates me to change my behavior and protect myself better.
You shouldn’t blame yourself for causing the problem, but you should recognize the actions you should have taken to prevent the problem. If you don’t have at least some shame, why would you do something different the next time?
Ew, I would hate to be in charge of code reviews at an org like that.
The proper metric is the success of the actual product. We have our engineers give estimates, then hold them to those estimates and evaluate based on the consistency of on-time releases and the number of production bugs. At the end of the day, predictable, high-quality delivery is usually more valuable than faster time to market, unless you’re in a startup or something and just need to get early adopters on board. Judge QA by defects discovered in production, and judge devs by defects found by QA and in production. It’s really not that hard.
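As a rough sketch of how that split could be tallied (the release names and defect counts here are made up for illustration, not numbers from any real tracker):

```python
# Hypothetical data: (release, defects QA caught, defects that escaped to production)
releases = [
    ("2024.1", 12, 3),
    ("2024.2", 8, 1),
    ("2024.3", 15, 6),
]

for name, caught_by_qa, escaped_to_prod in releases:
    # Devs are measured on everything that got past them: QA finds plus production escapes.
    dev_defects = caught_by_qa + escaped_to_prod
    # QA is measured only on what slipped through to production.
    qa_defects = escaped_to_prod
    print(f"{name}: dev defects = {dev_defects}, QA escapes = {qa_defects}")
```

Pair that with whether each release shipped on the date the team itself estimated, and you’ve got the whole evaluation.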