Whenever I see someone say “I write my unit tests with AI” I cringe so hard.
In my defence, I manually verify every test/calculation by hand, but so far copilot has been nearly 100% accurate with the tests it generates. Unless you’re working with something particularly complex, if copilot doesn’t understand what a function does, you might want to check whether the function should be simplified or split up. Specific edge cases I still need to write myself, though, as copilot seems mostly focused on the happy paths it recognises.
I’m a bit of a TDD person. I’m not as strict about it as some people are, but the idea of just telling AI to look at your code and make unit tests for it really rubs me the wrong way. If you wrote the code wrong, it’s gonna assume it’s right. And sure, there are probably those golden moments where it realizes you made a mistake and tells you, but that’s not something unique to “writing unit tests with AI”; you could get the same catch from an ordinary review, or from the AI itself just by asking it to review the code.
I’m not dogmatic about test driven development, but seeing those failing tests is super important. Knowing that your test fails without your code but works with your code is huge.
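A minimal sketch of that red/green check in ExUnit, with made-up names (Cart and total/1 aren’t anyone’s real code from this thread): write the assertion, run it without the implementation to see it fail, then add the code and watch it pass.

    ExUnit.start()

    defmodule Cart do
      # Hypothetical function under test. Delete this module, run the file,
      # and watch the test fail first: that red run is what proves the test
      # actually exercises something.
      def total(items), do: items |> Enum.map(& &1.price) |> Enum.sum()
    end

    defmodule CartTest do
      use ExUnit.Case, async: true

      test "total/1 sums line item prices" do
        assert Cart.total([%{price: 3}, %{price: 4}]) == 7
      end
    end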
So many unit tests I see are so stupid. I think people sometimes just write them to get coverage. Like, I saw a test the other day that a coworker wrote for a function that gets back a date given a query. The test data was a list with a single date. That’s not really testing that it grabs the right one at all.
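Roughly the difference, sketched in ExUnit with hypothetical names (DateQuery.latest/1 is made up): the single-date case passes for any implementation that returns something from the list, while the multi-date case only passes if the right date is actually selected.

    ExUnit.start()

    defmodule DateQuery do
      # Hypothetical function under test: return the latest date in the list.
      def latest(dates), do: Enum.max(dates, Date)
    end

    defmodule DateQueryTest do
      use ExUnit.Case, async: true

      # Weak: with only one date, any implementation that returns *something*
      # from the list passes.
      test "latest/1 with a single date" do
        assert DateQuery.latest([~D[2024-06-15]]) == ~D[2024-06-15]
      end

      # Stronger: several candidates, so the test fails unless the right one
      # is picked.
      test "latest/1 picks the right date out of several" do
        dates = [~D[2022-12-31], ~D[2024-06-15], ~D[2023-01-01]]
        assert DateQuery.latest(dates) == ~D[2024-06-15]
      end
    end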
It’s just sort of a bigger problem I see with folks misunderstanding and/or undervaluing unit tests.
> If you wrote the code wrong, it’s gonna assume it’s right.
Yeah, that might be an issue if copilot bases the tests on the code. I only write tests for the core (pure) functions, so it’s fairly easy to just say what the inputs and expected outputs should be and let copilot have at it. Testing stateful functions is a can of worms that’s often better to design around if your toolset supports it.
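Something like this, as a sketch with a made-up pure function (Pricing.discounted_cents/2 isn’t from any real project here): the inputs and expected outputs are the whole specification, and the test body is mechanical.

    ExUnit.start()

    defmodule Pricing do
      # Hypothetical pure function: price in cents after a percentage
      # discount, rounded down to a whole cent.
      def discounted_cents(cents, percent), do: div(cents * (100 - percent), 100)
    end

    defmodule PricingTest do
      use ExUnit.Case, async: true

      # Inputs and expected outputs stated up front; nothing depends on state.
      @cases [
        {10_000, 10, 9_000},
        {1_999, 0, 1_999},
        {5_000, 100, 0}
      ]

      test "discounted_cents/2 returns the expected price for each case" do
        for {cents, percent, expected} <- @cases do
          assert Pricing.discounted_cents(cents, percent) == expected
        end
      end
    end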
I obviously don’t have any context for what sort of project you’re working on and I’m sure it’s very different from mine, but I’m currently working on a distributed system with Erlang/Elixir, and often all I want to check is that the happy path gives the expected output. Catching strange edge cases that happens in the shell module due to unexpected state is something I’m happy to just let fail, and have the supervisor clean up and restart to a known state. It’s quite freeing to not write defensive code.
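That “let it crash and restart to a known state” setup is roughly this shape; the module names below are made up, not the actual project:

    defmodule Demo.Worker do
      use GenServer

      def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)

      @impl true
      # Hypothetical worker: every (re)start begins from a known, clean state.
      def init(_arg), do: {:ok, %{}}
    end

    defmodule Demo.Supervisor do
      use Supervisor

      def start_link(arg), do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

      @impl true
      def init(_arg) do
        # If the worker crashes on some unexpected state, the supervisor
        # restarts it fresh instead of the worker defending against it.
        Supervisor.init([Demo.Worker], strategy: :one_for_one)
      end
    end

Starting the tree is just Demo.Supervisor.start_link([]); everything under it is disposable by design.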
What sort of test cases would you want to write for querying a date? Some ISO-8601 verification?