Laravel 11 reached end of life on 12 March 2026. Bug fixes stopped on 3 September 2025. As of today every Laravel 11 application in production is running framework code that will receive no further security patches from the maintainers. The next critical CVE in the framework — or in one of its core dependencies — will require either an emergency upgrade under pressure, or a custom backport that no team in the region has the bandwidth to maintain.
This is the cleanest example of a category of vendor risk that we see repeatedly across MENA SME engagements: EOL stack left running silently because nobody internally is tracking the upstream support calendar, and the original developer who built the application has long since moved on.
The realistic plan for getting off Laravel 11 — or off any of the older Laravel versions still in production around the region — is the subject of this article. We focus on the SME case (single application, 20,000 to 80,000 lines, one or two queue workers, one MySQL or PostgreSQL database) because that is where the bulk of the regional exposure sits.
What "End of Life" Actually Means
When the Laravel maintainers stop releasing security updates for a major version, the framework code stops receiving patches for vulnerabilities discovered after that date. The application keeps running. Existing CVEs that were patched before EOL remain patched. New CVEs that emerge after EOL — whether in the framework itself or in core dependencies like the Symfony components, Doctrine, Monolog or Carbon — are not backported.
In practice the risk window opens slowly and closes painfully. For the first three to six months after EOL, the application is mostly fine — no critical CVEs land, the existing security posture holds. After six to eighteen months, a high-severity CVE emerges in Symfony or PHP that the Laravel 11 release line will not receive. At that point the team has three options: emergency upgrade under time pressure (the most expensive path), custom patch maintained internally (requires a senior PHP engineer with deep framework knowledge), or accept the risk (the path that ends in an incident).
The cheapest option, by a wide margin, is the planned upgrade before the CVE pressure arrives. Which is now.
The 14-Day Migration Plan
The plan we run for SME Laravel applications has four phases. Each phase has a clear exit criterion. The plan does not assume the application is in good shape — most are not — and includes the remediation steps that make the upgrade actually safe.
Day 1 — Audit and decision
The first day is read-only. The engineer clones the repository and produces four outputs:
- Current state assessment. PHP version, Laravel version, deployment target, queue driver, database driver, cache driver, mail driver, session driver. composer audit output. php artisan about output. Production environment variables (redacted). Documented or undocumented cron jobs. CI/CD pipeline state.
- Dependency audit. Full composer.json review against the Laravel 13 compatibility matrix for every package. Every package gets a label: ready (has a Laravel 13 version), needs minor update (compatible version exists but the composer constraint blocks it), needs replacement (package abandoned or no Laravel 13 support), or unknown (custom package or fork).
- Test coverage assessment. Pest or PHPUnit suite run with coverage. Anything below 40% line coverage on the controllers and services means QA needs to be heavier in phase 4.
- Path decision. Sequential upgrade (11 to 12 to 13) versus stand-up-and-port (clean Laravel 13 application, port code over). For applications under 30,000 lines with poor test coverage and significant package debt, port is often faster than upgrade. For mature applications with good test coverage and clean dependencies, sequential upgrade wins. The output of day 1 is the chosen path and the resourced plan for the remaining 13 days.
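The dependency-labelling step of the day-1 audit can be sketched as a small script. This is a minimal sketch under stated assumptions: COMPAT_MATRIX and ABANDONED are hand-maintained placeholders, not a real Packagist feed, and the version comparison is deliberately naive. A real audit would pull this data from Packagist metadata and package changelogs.

```python
import json

# Hypothetical compatibility map: package -> first version with Laravel 13
# support. In a real audit this comes from Packagist and package changelogs.
COMPAT_MATRIX = {
    "spatie/laravel-permission": "6.0.0",
    "barryvdh/laravel-debugbar": "4.0.0",
}
ABANDONED = {"fzaninotto/faker"}  # known-abandoned packages


def label_package(name: str, installed: str) -> str:
    """Assign one of the four day-1 audit labels to a package."""
    if name in ABANDONED:
        return "needs replacement"
    if name not in COMPAT_MATRIX:
        return "unknown"
    required = COMPAT_MATRIX[name]
    # Naive comparison on the semver major component only.
    if int(installed.split(".")[0]) >= int(required.split(".")[0]):
        return "ready"
    return "needs minor update"


def audit(lock_json: str) -> dict:
    """Label every package found in a composer.lock document."""
    lock = json.loads(lock_json)
    return {
        pkg["name"]: label_package(pkg["name"], pkg["version"].lstrip("v"))
        for pkg in lock.get("packages", [])
    }


sample_lock = json.dumps({"packages": [
    {"name": "spatie/laravel-permission", "version": "v6.4.0"},
    {"name": "barryvdh/laravel-debugbar", "version": "v3.9.2"},
    {"name": "fzaninotto/faker", "version": "v1.9.2"},
    {"name": "acme/internal-billing", "version": "2.1.0"},
]})
print(audit(sample_lock))
```

The point of the script is the output shape: every package leaves day 1 with exactly one of the four labels, which is what makes the day-2-to-4 work plannable.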
Days 2 to 4 — Dependency and package work
Before the framework moves, the dependencies have to be ready. The work in these days is unglamorous: bump composer constraints, replace abandoned packages, port custom packages to the new container resolution patterns, remove deprecated facades, and re-run composer audit until it is clean.
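The constraint bumps from this phase end up as a composer.json diff. An illustrative fragment; the package names and version constraints are assumptions for the example, not a recommended set, and each real constraint comes out of the day-1 compatibility labels:

```json
{
    "require": {
        "php": "^8.3",
        "laravel/framework": "^13.0",
        "spatie/laravel-permission": "^6.0",
        "maatwebsite/excel": "^3.1"
    }
}
```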
The packages that consistently cause MENA SME teams trouble are: payment gateway SDKs (Stripe, Paymee, HyperPay, MyFatoorah, Tap) that lag a few months behind framework releases; older Spatie packages that have been replaced by newer alternatives; abandoned admin panel packages that should have been replaced years ago; and legacy export packages that depended on PhpSpreadsheet versions that are themselves EOL.
The exit criterion for this phase is a clean composer audit, all production-critical packages either ready for Laravel 13 or with a documented replacement plan, and no fatal errors when running the test suite against the current Laravel 11 with the upgraded dependencies.
Days 5 to 10 — Framework upgrade work
The actual framework upgrade. The sequence depends on the chosen path.
For the sequential upgrade path, the team works through the official Laravel upgrade guides for 11 to 12 and 12 to 13 in order. Each upgrade is a discrete PR with the framework version bump, the deprecation fixes, the renamed methods, and the new container resolution patterns. Tests run after each upgrade.
For the stand-up-and-port path, the team scaffolds a fresh Laravel 13 application, ports the database migration set, ports the routes and controllers, ports the models and policies, ports the queue jobs and scheduled tasks, ports the views or API resources. Data migration is planned as a separate step at the end.
The most common breaking changes in either path are: container resolution changes (constructor injection where it used to be method injection, or vice versa), middleware constructor parameter handling, model casting and attribute mutator syntax updates, queue worker payload format changes (which can break in-flight jobs at cutover), and validation rule signature changes.
The exit criterion for this phase is the application running locally on Laravel 13 with all tests green, all manual smoke tests passing, and the production environment infrastructure (PHP version, extensions, queue worker config, supervisor config) verified compatible.
Days 11 to 14 — Staging, QA, cutover
The last phase is deployment discipline. A staging environment that mirrors production runs the new application against a copy of the production database. The team runs a full QA pass against the staging environment, replays a recent week of production traffic if traffic-replay tooling is available, and runs every payment, integration and scheduled task end-to-end.
The cutover plan is documented in detail: maintenance window, database migration sequence, queue worker drain and restart procedure, cache flush, rollback plan with explicit rollback decision criteria, and a 24-hour observation window with on-call coverage.
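The "explicit rollback decision criteria" are worth making machine-checkable before the window opens, so the on-call engineer is not deciding under pressure. A minimal sketch, assuming the team exports error rate, p95 latency and queue lag from its monitoring; every threshold below is an illustrative assumption, not a recommendation:

```python
from dataclasses import dataclass


@dataclass
class CutoverMetrics:
    error_rate: float      # fraction of requests failing (0.0 - 1.0)
    p95_latency_ms: float  # 95th-percentile response time
    queue_lag_s: float     # age of the oldest pending queue job, in seconds


def should_roll_back(m: CutoverMetrics, baseline: CutoverMetrics) -> bool:
    """True if observation-window metrics breach the rollback criteria.

    Thresholds are illustrative -- each team sets its own before cutover.
    """
    if m.error_rate > max(0.02, 3 * baseline.error_rate):
        return True  # error rate tripled, or above an absolute 2% floor
    if m.p95_latency_ms > 2 * baseline.p95_latency_ms:
        return True  # latency doubled against the pre-cutover baseline
    if m.queue_lag_s > 600:
        return True  # jobs stuck for more than 10 minutes
    return False


baseline = CutoverMetrics(error_rate=0.002, p95_latency_ms=180, queue_lag_s=5)
healthy = CutoverMetrics(error_rate=0.003, p95_latency_ms=210, queue_lag_s=12)
degraded = CutoverMetrics(error_rate=0.031, p95_latency_ms=240, queue_lag_s=40)
print(should_roll_back(healthy, baseline), should_roll_back(degraded, baseline))
```

Writing the criteria as a function forces the team to agree on baselines and thresholds in advance, which is most of the value.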
The exit criterion is the application in production on Laravel 13, all queue workers running cleanly, all scheduled tasks completing on time, and the rollback plan deactivated after the 24-hour observation window passes without incident.
The Disqualifiers — When You Should Not Run This Plan
The 14-day plan works for applications that meet a defined profile. There are situations where the right answer is not the plan above.
If the application is on Laravel 5, 6, 7 or 8, the plan does not apply. These versions predate too much of the modern Laravel architecture. The honest path is a controlled rewrite to Laravel 13 over 4 to 12 weeks, with the legacy application kept running behind a WAF in the meantime.
If the application has zero automated test coverage and the team cannot afford to write a baseline QA test plan, the plan does not apply. Without a test baseline the risk of undetected regression in production is too high. The first investment must be in QA tooling and a baseline test plan.
If the original developer is unreachable and no current team member has read the code, the plan does not apply. The first step is a handover assessment — what does the application actually do, what are the critical user flows, what are the integrations — before any framework work begins. This is its own engagement (typically 3 to 5 days).
If the application depends on a payment integration or government integration (ZATCA, INPDP submission, social security filing) whose vendor SDK does not yet support Laravel 13, the plan must wait for the SDK or include a vendored fork as a documented interim step.
If the application is mission-critical and revenue-generating and the team has no on-call capability for the cutover, the plan must include external on-call coverage — either from the engineering vendor running the upgrade or from a separate operations partner.
The Real Cost Comparison
The cost of running the planned migration at MENA rates is in the range of TND 12,000 to TND 25,000 for a Tunisian team, SAR 35,000 to SAR 70,000 for a Saudi team, AED 30,000 to AED 60,000 for a UAE team. Add 10 to 20 percent if QA is in-scope. Add infrastructure cost if the upgrade requires a PHP version bump that the current hosting plan does not support.
The cost of the alternative — running unsupported framework code in production and managing the next emergency — is harder to model but consistently higher. A single security incident under PDPL in Saudi Arabia carries administrative fines up to SAR 5 million per violation, plus civil liability and the operational cost of breach response and customer notification. A single ZATCA non-compliance incident carries fines per missed e-invoice. A ransomware incident carries the operational cost (typically USD 20,000 to USD 200,000 for a mid-market business in MENA) plus the reputational cost. A data breach under DIFC or ADGM jurisdiction carries fines and disclosure obligations.
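The comparison lends itself to a back-of-envelope expected-value model. In the sketch below every probability and incident cost is an assumption for illustration only; the upgrade figure is the mid-range of the Tunisian numbers above:

```python
# Expected-value comparison: planned upgrade now versus running EOL code
# for 18 months. All probabilities and incident costs are illustrative
# assumptions, not measured rates.
UPGRADE_COST_TND = 18_500  # mid-range of the TND 12,000-25,000 figure above

incident_scenarios = [
    # (probability over 18 months, cost in TND) -- assumed for illustration
    (0.15, 250_000),  # serious breach: response, notification, fines
    (0.25, 60_000),   # emergency upgrade under CVE pressure
    (0.30, 15_000),   # minor incident: downtime plus custom patching
]

expected_incident_cost = sum(p * cost for p, cost in incident_scenarios)
print(f"expected incident cost: TND {expected_incident_cost:,.0f}")
print(f"planned upgrade cost:   TND {UPGRADE_COST_TND:,}")
```

Even with deliberately conservative probabilities the expected incident cost lands well above the planned upgrade cost, which is the shape of the argument rather than a precise forecast.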
The unit economics consistently favour the planned upgrade. Most clients we work with on this find the upgrade is paid for by the avoidance of a single incident in the following 18 months.
What Noqta Does Differently
The version of this plan we run for clients has three reinforcements over the version above.
First, we work in our PMaaS pattern — every change is in a tracked GitLab issue, every PR has a documented test plan, every cutover is logged in the project dashboard, and the client sees the same view we see. The post-migration audit trail is as useful as the migration itself for the next review cycle.
Second, we use an AI-assisted dependency audit pass. Our internal pattern reads the composer.lock, cross-references against the Laravel 13 compatibility matrix and Packagist abandonment signals, and produces a labelled dependency report in minutes rather than the days a manual audit takes. This shortens day 1 substantially.
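The abandonment-signal part of such a pass can be sketched in a few lines. This assumes the Packagist metadata convention where an abandoned package carries an "abandoned" field that is either true or the name of a suggested replacement; the payload below is a simplified stand-in for a real API response, not the full schema:

```python
import json


def abandonment_signal(metadata_json):
    """Return a label if a package is flagged abandoned, else None.

    Assumes the Packagist convention: "abandoned" is either `true` or the
    name of the suggested replacement package. Payload shape simplified
    for illustration.
    """
    meta = json.loads(metadata_json)
    abandoned = meta.get("package", {}).get("abandoned")
    if abandoned is True:
        return "abandoned (no replacement suggested)"
    if isinstance(abandoned, str):
        return f"abandoned, replace with {abandoned}"
    return None


sample = json.dumps({"package": {
    "name": "fzaninotto/faker",
    "abandoned": "fakerphp/faker",
}})
print(abandonment_signal(sample))
```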
Third, we operate the cutover under our standard handover discipline — every credential rotated, every Git mirror verified current, every staging environment retained for 30 days post-cutover as a known-good rollback target. The migration is the moment when the client should also gain operational independence from the vendor that originally built the application, and we treat it as that.
The 7-Day Triage Path When You Cannot Wait
For clients who discover their Laravel 11 application is in production and want a defensive posture before the full migration can be resourced, there is a 7-day triage that buys runway.
The triage is not a migration. It is a CVE-reduction and observability pass: lock down the debug and introspection endpoints (Telescope, Debugbar, the Horizon dashboard), audit and rotate every credential, put the application behind Cloudflare or BunkerWeb with WAF rules, add Sentry or Flare error tracking with alert routing, audit the queue worker setup under supervisor, verify the backup restore drill, and document the incident-response runbook.
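The endpoint lockdown can be done at the edge web server. A minimal nginx sketch, assuming the stock route prefixes for Telescope, Horizon and Debugbar and an internal network range that is an assumption for the example; applications that renamed those prefixes need the paths adjusted:

```nginx
# Deny the default Laravel debug/introspection dashboards at the edge.
# Paths assume the stock route prefixes; check config/telescope.php and
# config/horizon.php for overrides before applying.
location ~ ^/(telescope|horizon|_debugbar) {
    allow 10.0.0.0/8;   # internal/VPN range only -- adjust to your network
    deny  all;
}
```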
After the triage the application is still on Laravel 11. But the most exploitable surface is reduced, the operational telemetry is present, and the team has the breathing room to resource the migration properly.
The triage is not a substitute for the migration. It is the bridge to the migration.
Related reading: the source-code escrow pattern, the vendor audit playbook, and the GitLab + AI PM dashboard pattern we use to keep the upgrade visible to the client throughout the engagement.