Wow, major props to the team for shipping this. It's one thing to promise AI-driven QA; it's another to deliver a product people might actually rely on. That said, I'm curious (and a little skeptical): how confident can teams be that the autogenerated tests will catch weird edge cases, or adapt when UI or flows change mid-sprint? If the AI gets brittle, do we end up debugging the tests more than our actual product? Anyway, this is the kind of push QA needs. Congratulations again.