Cal.com Goes Closed Source: The AI Reckoning for Open Software
On April 15, 2026, Cal.com — one of the most visible open source scheduling platforms on the planet — announced it was moving its core codebase behind closed doors. Five years of public repositories, community contributions, and a strong open source brand, gone in a single blog post. The reason, according to founder Peer Richelsen, is not a pivot to proprietary monetisation. It is a survival decision forced by AI.
The announcement landed eight days after Anthropic unveiled Mythos Preview, an internal model that finds and exploits zero-day vulnerabilities across every major operating system and browser. It came weeks after AI-driven scanners uncovered a 27-year-old vulnerability in the BSD kernel and generated working exploits within hours. For Cal.com's enterprise customers — hospitals, banks, government agencies — the math stopped working. If you ship sensitive scheduling infrastructure, publishing your source is publishing a blueprint.
This is not a one-off PR moment. It is the clearest signal yet that AI has rewritten the economics of software security, and every company shipping code needs to rethink what open means in 2026.
What Cal.com actually did
The company moved its main product repository to a private codebase. Cal.com for Teams, Cal.com for Enterprise, and the hosted product are now closed source. In parallel, they launched Cal.diy, a fully MIT-licensed fork aimed at hobbyists, self-hosters, and curious developers. The open project is a functional scheduling tool. The closed product is where the hardened auth, SOC 2 controls, and enterprise integrations live.
This is the hybrid pattern that is quietly becoming the default:
- A public "community edition" that keeps the brand, the contributors, and the long tail of self-hosted deployments
- A private "enterprise edition" that handles high-stakes data, compliance boundaries, and anything an AI scanner could weaponise
MongoDB, Elastic, HashiCorp, and Redis all walked versions of this path before AI was the driver. Cal.com is the first major player to say out loud that AI exploit scanners — not cloud hyperscalers — are the reason.
The AI scanner problem is real
The uncomfortable truth is that open source security has always relied on a tacit assumption: more eyes will find bugs faster than attackers will. That assumption worked when attackers were humans with limited time. It does not work when attackers are autonomous agents that can read an entire codebase in minutes.
Three data points from the last six months tell the story:
- Mythos Preview (Anthropic, April 2026) demonstrated end-to-end vulnerability discovery and exploit generation on production targets. Anthropic ran it as a defensive research tool. Adversaries will not.
- The litellm supply chain attack (March 2026) was traced back to an autonomous agent — hackerbot-claw — that compromised Trivy's CI pipeline, stole PyPI tokens, and poisoned a library downloaded 95 million times a month. Exposure window: under three hours. Blast radius: every environment that pulled the bad version.
- Hex Security's field data, cited in the Cal.com announcement, found that open source applications are five to ten times easier to exploit than closed equivalents when AI scanners are pointed at them.
The common thread is cost asymmetry. Offence has collapsed to near zero. Defence is still priced in human-hours.
Is this the end of open source?
Short answer: no. Longer answer: open source is splitting into tiers.
Infrastructure and primitives stay open. Linux, PostgreSQL, Kubernetes, React, FastAPI, and the thousands of libraries the industry depends on have too much accumulated review, too many eyeballs, and too much institutional backing to close. Closing them would break the internet before it would protect anyone.
Application-layer products handling regulated data are the ones going private. CRMs, schedulers, HR systems, healthcare platforms, fintech backends — anything where a single exploit means a breach notification, a GDPR fine, or a regulator walking in. Cal.com is early, not alone.
Developer tooling is the interesting middle. Cursor, Windsurf, and Claude Code are already mostly proprietary. The next wave of AI-native dev tools will ship closed by default, with optional open components (CLIs, runtimes, MCP servers) where ecosystem effects matter more than source visibility.
The open source foundation stays intact. What shrinks is the middle — the class of application companies that treated "open source" as a marketing stance rather than a structural commitment.
What this means for MENA businesses
For SMEs and startups across Tunisia, Saudi Arabia, and the broader MENA region, the Cal.com decision has four direct consequences worth internalising.
1. Your self-hosted stack is now an attack surface audit. If you run open source apps on your own infrastructure — GitLab, Nextcloud, Matomo, any MIT-licensed schedulers or CRMs — you inherit the AI-scanner risk. The mitigation is not "switch to closed source." It is patch discipline, sandboxing, and removing anything you are not actively using. Dependency count is attack surface.
2. Supply chain hygiene is no longer optional. Pin dependencies to exact versions with hash verification. Use Trusted Publishers on PyPI and npm. Run AI agents in isolated environments that cannot touch production credentials. The litellm blueprint works against anyone, in any region.
3. Open source licensing is shifting — read the fine print. Expect more products to follow the Cal.com pattern: a permissive community fork plus a commercial closed product. If your business depends on a self-hosted open source tool, track the license status of every major release. The version you deployed in 2024 may not be the version shipping in 2026.
4. "Closed" does not mean "secure." The byteiota and Hacker News reactions to Cal.com's announcement were pointed: closing code slows AI scanners, it does not stop them. Closed source with weak SDLC is still vulnerable. The real work — threat modelling, secrets management, least-privilege IAM, signed artefacts — does not change based on whether the repo is public.
The playbook for software companies in 2026
If you are building or running software that handles customer data, the Cal.com decision is a forcing function. A working 2026 playbook looks something like this:
- Classify your code. Mark every repo as "safe to publish," "publish with caveats," or "must stay private." Base the taxonomy on the blast radius if an AI scanner fully analysed the repo, not on current licensing habits.
- Adopt a hybrid licensing model early. A community edition builds brand. A commercial edition funds the company. Cal.diy is the template: same product heritage, different security posture.
- Invest in artefact signing and SBOMs. Sigstore, in-toto attestations, and signed SBOMs let customers verify what they are running. This is table stakes for enterprise contracts in the post-litellm world.
- Run defensive AI scanners against your own code before attackers do. GitHub Advanced Security, Semgrep Pro, and newer AI-native tools like Hex Security and Endor Labs close the offence-defence gap. The budget for them is smaller than the cost of one breach.
- Sandbox your AI coding agents. If Claude Code, Cursor, or Copilot runs in an environment with direct access to your cloud credentials, you are one compromised dependency away from a bad day. Run them in ephemeral containers with scoped credentials.
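The last bullet is easy to under-implement. A real sandbox means an ephemeral container with scoped credentials, but even the environment-scrubbing half of it is worth doing everywhere you launch a tool you do not fully trust. A hedged Python sketch (`scrubbed_env` and `run_tool` are hypothetical names, and the prefix list is illustrative, not exhaustive; this complements container isolation rather than replacing it):

```python
import os
import subprocess

# Prefixes that commonly mark cloud or registry credentials.
# Illustrative list -- extend it for your own stack.
SENSITIVE_PREFIXES = ("AWS_", "AZURE_", "GOOGLE_", "GITHUB_", "NPM_", "PYPI_")

def scrubbed_env() -> dict[str, str]:
    """Copy of the current environment with credential-like variables removed."""
    return {
        k: v
        for k, v in os.environ.items()
        if not k.startswith(SENSITIVE_PREFIXES)
    }

def run_tool(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run an untrusted tool so it never inherits ambient credentials."""
    return subprocess.run(cmd, env=scrubbed_env(), capture_output=True, text=True)
```

The design choice is deny-by-prefix rather than allow-by-list because agents often need benign variables (`PATH`, `HOME`, locale settings) to function at all; a stricter allowlist is safer still if you can enumerate what the tool genuinely needs.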
The bigger picture
Cal.com's move is not a betrayal of open source. It is the most visible admission yet that AI has shifted the security baseline, and that some products can no longer absorb the cost of full public transparency. The calculus that made "source available" the default for the last two decades was built for a human-scale threat model. That model is over.
The right question is not "should we close our code?" It is "what is the smallest meaningful perimeter we can defend, and how do we build the rest in the open?" For most MENA businesses, that means keeping your internal tooling, your infrastructure-as-code, and your public primitives open — while treating customer-facing application logic as something to guard, sign, and sandbox.
The Cal.com announcement will not be the last of its kind this year. It will probably not be in the top five by December. The companies that treat it as a wake-up call rather than an outlier are the ones that will still be shipping software in 2028.