<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:hashnode="https://hashnode.com/rss"><channel><title><![CDATA[Desde mi Hamaca]]></title><description><![CDATA[Software Architect, Backend Developer and Former Head of Engineering. 
The key is to use the right tools in the right situations. 
Feel free to share and comment.
]]></description><link>https://blog.equationlabs.io</link><generator>RSS for Node</generator><lastBuildDate>Mon, 02 Dec 2024 20:39:45 GMT</lastBuildDate><atom:link href="https://blog.equationlabs.io/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="next" href="https://blog.equationlabs.io/rss.xml?page=2"/><atom:link rel="previous" href="https://blog.equationlabs.io/rss.xml"/><item><title><![CDATA[From PHP to Rust: Migrating a REST API between these two languages. (Part I)]]></title><description><![CDATA[Disclaimer
Before beginning, I want to say that I've been a huge fan of PHP for several years. It has not only allowed me to create great applications but also keeps food on my table <3.
However, Rust is gaining traction among the developer community. It...]]></description><link>https://blog.equationlabs.io/from-php-to-rust-migrating-a-rest-api-between-these-two-languages-part-i</link><guid isPermaLink="true">https://blog.equationlabs.io/from-php-to-rust-migrating-a-rest-api-between-these-two-languages-part-i</guid><category><![CDATA[Rust]]></category><category><![CDATA[PHP]]></category><category><![CDATA[DDD]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Thu, 29 Dec 2022 14:16:46 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;h2 id=&quot;heading-disclaimer&quot;&gt;Disclaimer&lt;/h2&gt;&lt;p&gt;Before beginning, I want to say that I&apos;ve been a huge fan of &lt;code&gt;PHP&lt;/code&gt; for several years. It has not only allowed me to create great applications but also keeps food on my table &amp;lt;3.&lt;/p&gt;&lt;p&gt;However, &lt;code&gt;Rust&lt;/code&gt; is gaining traction among the developer community. It&apos;s not for nothing that it has been the &lt;code&gt;most loved&lt;/code&gt; programming language for &lt;code&gt;the last 7 years&lt;/code&gt; in &lt;code&gt;Stackoverflow&apos;s Developer Survey&lt;/code&gt; (&lt;a target=&quot;_blank&quot; href=&quot;https://survey.stackoverflow.co/2022/#section-most-loved-dreaded-and-wanted-programming-scripting-and-markup-languages&quot;&gt;details here&lt;/a&gt;). 
With this blog post, I want to show you the difficulties a &lt;code&gt;PHP&lt;/code&gt; developer faces when learning &lt;code&gt;Rust&lt;/code&gt;, with a practical example: &lt;code&gt;the migration of a single API endpoint from PHP to Rust&lt;/code&gt;.&lt;/p&gt;&lt;h2 id=&quot;heading-whats-rust-language&quot;&gt;What&apos;s the Rust Language?&lt;/h2&gt;&lt;p&gt;Originally conceived as a &lt;code&gt;C-Language&lt;/code&gt; replacement, &lt;code&gt;Rust&lt;/code&gt; rapidly evolved beyond building systems software &lt;code&gt;(operating systems, IoT, and AI)&lt;/code&gt;: thanks to its big community, web frameworks began to grow in the ecosystem.&lt;/p&gt;&lt;p&gt;Why &lt;code&gt;Rust&lt;/code&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Blazingly fast and memory-efficient. With no runtime or garbage collector, it can power performance-critical services&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A strong type system and ownership model that guarantee memory-safe and thread-safe programming, enabling you to eliminate many classes of bugs at compile time&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Great documentation, a friendly compiler with useful error messages, top-notch tooling and much more.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The best-known web framework is &lt;code&gt;Actix-Web&lt;/code&gt; (&lt;a target=&quot;_blank&quot; href=&quot;https://actix.rs&quot;&gt;more here&lt;/a&gt;), and it is considered one of the &lt;code&gt;most performant web frameworks available on the market&lt;/code&gt;, capable of serving &lt;code&gt;552K requests per second&lt;/code&gt;. 
&lt;a target=&quot;_blank&quot; href=&quot;https://www.techempower.com/benchmarks/#section=data-r21&amp;amp;hw=ph&amp;amp;test=fortune&quot;&gt;(the benchmark can be viewed here)&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-the-application&quot;&gt;The Application&lt;/h2&gt;&lt;p&gt;If you&apos;ve read my previous blog posts, you know I always have a &lt;code&gt;&quot;demo application&quot;&lt;/code&gt; to use in all my tutorials: the &lt;code&gt;&quot;Availability API&quot;&lt;/code&gt;. This application is far from simple, but it contains the implementation of a minimal modern web application (REST API, DB access, hexagonal architecture design, etc.).&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1658159844888/jAGpW1FPk.png?auto=compress,format&amp;amp;format=webp&quot;&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671720082853/98221f94-6c2b-4b4f-90a3-797c10c0568f.jpeg&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The infrastructure architecture is not important here, but since we use a cluster-agnostic k8s setup, we can reuse it for demonstration purposes.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671720274056/6f21946a-e966-4131-aaeb-50e27f9a1364.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-the-migration&quot;&gt;The Migration&lt;/h2&gt;&lt;p&gt;As the first step, we want to write and configure the web server using the &lt;code&gt;actix-web framework&lt;/code&gt;. This includes setting up the routing and the controller to receive and respond to requests.&lt;/p&gt;&lt;h3 id=&quot;heading-file-structures&quot;&gt;File Structures&lt;/h3&gt;&lt;p&gt;As we said before, the &lt;code&gt;availability API&lt;/code&gt; was designed with a hexagonal architecture in mind, so we want to reach the same goal using &lt;code&gt;Rust&lt;/code&gt;. 
Let&apos;s see how it compares with PHP.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671723718440/9ab0c71a-f156-4f1b-8853-2df3b8e937d0.jpeg&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;heading-rust-main-entrypoint&quot;&gt;Rust &lt;code&gt;main&lt;/code&gt; entrypoint&lt;/h3&gt;&lt;p&gt;In &lt;code&gt;Rust&lt;/code&gt; you use &lt;code&gt;main.rs&lt;/code&gt;, the equivalent of the bootstrap file in &lt;code&gt;PHP&lt;/code&gt;. In this file, you have a &lt;code&gt;main()&lt;/code&gt; function, which is mandatory for any &lt;code&gt;Rust&lt;/code&gt; application, just like the main entrypoint in &lt;code&gt;Java&lt;/code&gt; or &lt;code&gt;Typescript&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Here is how my &lt;code&gt;main.rs&lt;/code&gt; file looks. Remember that I&apos;m using the &lt;code&gt;actix-web&lt;/code&gt; framework, so the &lt;code&gt;&quot;bootstrapping&quot;&lt;/code&gt; of the web server happens through &lt;code&gt;actix-web&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671724105096/290b724d-6ebf-430f-b3c7-1bd4d40bd51c.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;You also have the &lt;code&gt;availability-controller&lt;/code&gt;. 
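&lt;/p&gt;&lt;p&gt;A controller like this typically guards its input with small, self-validating value objects (the &lt;code&gt;PropertyAvailabilityRequest&lt;/code&gt; described below uses the same idea). As a rough, illustrative sketch in plain Rust (the &lt;code&gt;Adults&lt;/code&gt; type and its bounds are invented for the example, not taken from the real codebase):&lt;/p&gt;

```rust
// Illustrative sketch of a self-validating value object in plain Rust.
// The `Adults` type and its bounds are invented for this example; the real
// PropertyAvailabilityRequest DTO applies the same pattern to each field.
#[derive(Debug, PartialEq)]
pub struct Adults(u8);

impl Adults {
    /// Construction is the only way to obtain a value, so an `Adults`
    /// instance is valid by definition once it exists.
    pub fn new(n: u8) -> Result<Self, String> {
        if n == 0 || n > 10 {
            return Err(format!("adults must be between 1 and 10, got {n}"));
        }
        Ok(Self(n))
    }

    pub fn value(&self) -> u8 {
        self.0
    }
}

fn main() {
    // A valid value passes through; an invalid one becomes an error that a
    // controller could serialize into a JSON error response.
    assert_eq!(Adults::new(2).map(|a| a.value()), Ok(2));
    assert!(Adults::new(0).is_err());
    println!("ok");
}
```

&lt;p&gt;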
This controller will ask third-party &lt;code&gt;APIs&lt;/code&gt; and &lt;code&gt;databases&lt;/code&gt; whether the property is available.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671724442187/339489d0-51d4-4b2d-8162-5fc2861b3671.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Note that &lt;code&gt;PropertyAvailabilityRequest&lt;/code&gt; is a &lt;code&gt;DTO&lt;/code&gt; built from a series of &lt;code&gt;ValueObject&lt;/code&gt;s that validate themselves, so in case of an invalid &lt;code&gt;request&lt;/code&gt; &lt;code&gt;body&lt;/code&gt;, a serialized &lt;code&gt;JSON&lt;/code&gt; error is returned to the client.&lt;/p&gt;&lt;h3 id=&quot;heading-compile-and-run&quot;&gt;Compile and Run&lt;/h3&gt;&lt;p&gt;Now let&apos;s compile and try to run this first step:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cargo build &amp;amp;&amp;amp; cargo run&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1671725674114/9899be12-71c5-45e4-809b-4ddc77af9ad4.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Now we&apos;re ready to test our first very basic endpoint; the response must be equal to the request that we send to the API.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;curl --location --request POST &lt;span class=&quot;hljs-string&quot;&gt;&apos;http://localhost:8080/v1/property/availability&apos;&lt;/span&gt; --header &lt;span class=&quot;hljs-string&quot;&gt;&apos;Content-Type: application/vnd.api+json&apos;&lt;/span&gt; --header &lt;span class=&quot;hljs-string&quot;&gt;&apos;Accept: application/vnd.api+json&apos;&lt;/span&gt; --data-raw &lt;span class=&quot;hljs-string&quot;&gt;&apos;{  &quot;requestDates&quot;: {    &quot;checkin&quot;: &quot;1956-06-29T02:09:38.752Z&quot;,    &quot;checkout&quot;: &quot;1977-01-17T15:39:47.465Z&quot;  },  &quot;pax&quot;: [    {      &quot;adults&quot;: 2,      
&quot;childs&quot;: 1    }  ]}&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And with that, at least for this first part, we have migrated an endpoint from &lt;code&gt;PHP&lt;/code&gt; to &lt;code&gt;Rust&lt;/code&gt;. We still have a lot of work to do regarding the database connection, event dispatching and other pieces. In the next chapter, we will add middleware for custom headers, the database connection and so on.&lt;/p&gt;&lt;p&gt;Thank you for reading!&lt;/p&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support Me&lt;/h2&gt;&lt;p&gt;If you like what you just read and you find it valuable, you can buy me a coffee by clicking the link in the image below.&lt;/p&gt;]]&gt;</content:encoded><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1670513826170/aQ94Iuya9.jpg</hashnode:coverImage></item><item><title><![CDATA[Managing database migrations safely in a highly replicated k8s deployment.]]></title><description><![CDATA[So, you want to run migrations in a cloud native application running on a Kubernetes cluster, and not die trying, huh!
Well, you're in the right place!
After breaking some applications with database migrations in a multi-replica and con...]]></description><link>https://blog.equationlabs.io/managing-database-migrations-safely-in-high-replicated-k8s-deployment</link><guid isPermaLink="true">https://blog.equationlabs.io/managing-database-migrations-safely-in-high-replicated-k8s-deployment</guid><category><![CDATA[Databases]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Sun, 13 Nov 2022 12:49:04 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;So, you want to run migrations in a cloud native application running on a &lt;code&gt;Kubernetes&lt;/code&gt; cluster, and not die trying, huh!&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Well, you&apos;re in the right place!&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;After breaking some applications with database migrations in a multi-replica, concurrent deployment process, I want to give you some advice based on my mistakes on how you can run your migrations safely, with native &lt;code&gt;k8s&lt;/code&gt; specs and without hacks of any kind. 
(no &lt;code&gt;helm&lt;/code&gt;, no &lt;code&gt;external deployers&lt;/code&gt;, just a pure and plain &lt;code&gt;k8s&lt;/code&gt; process, well orchestrated)&lt;/p&gt;&lt;h2 id=&quot;heading-the-problem&quot;&gt;The Problem&lt;/h2&gt;&lt;p&gt;It&apos;s a very common situation: modern applications evolve fast, new features arrive from product to satisfy the final user, and with every new deploy it&apos;s common to need to alter your database in some form. You have many tools to manage the execution of migrations against your database, BUT not to control when they occur.&lt;/p&gt;&lt;p&gt;If you have an application pod with, let&apos;s say, 4 replicas, and you deploy it, all 4 will try to run the migrations at the same time, potentially causing data corruption and data loss, and nobody wants that.&lt;/p&gt;&lt;h2 id=&quot;heading-when-to-run-migrations-the-workflow&quot;&gt;When to run migrations &lt;code&gt;(the workflow)&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;In the old way of running migrations, we used to have something like this:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Put the application in &lt;code&gt;maintenance mode&lt;/code&gt; (divert traffic to a special page)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Run database migrations&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy the new code base&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Disable maintenance mode on the application&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Obviously, this isn&apos;t an acceptable approach if you want to achieve &lt;code&gt;zero-downtime&lt;/code&gt; deployments in today&apos;s always-on world. We need &lt;em&gt;(at least)&lt;/em&gt; the following to ensure that migrations and the application run safely:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Run migrations while the old version of the application is still running, and do the rolling update &quot;only&quot; when the migrations have run successfully.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Something like this:&lt;/p&gt;&lt;p&gt;&lt;img 
src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1668344068616/A3UYB5av5.jpg&quot; alt=&quot;Pipeline + cluster proposals-2.jpg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-job-initcontainer-and-rollingupdates&quot;&gt;&lt;code&gt;Job&lt;/code&gt;, &lt;code&gt;InitContainer&lt;/code&gt; and &lt;code&gt;RollingUpdates&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;So, after choosing our strategy to run &lt;code&gt;migrations&lt;/code&gt; on &lt;code&gt;k8s&lt;/code&gt;, we need to write our manifests to accomplish the defined workflow.&lt;/p&gt;&lt;p&gt;First, the migrations job itself:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;batch/v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Job&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;availability-api-migrations&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;ttlSecondsAfterFinished:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;60&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;backoffLimit:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;containers:&lt;/span&gt;        &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;migrations&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;availability-api-migrations&lt;/span&gt;          &lt;span 
class=&quot;hljs-attr&quot;&gt;command:&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;/bin/sh&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;-c&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos; bin/console doctrine:migrations:migrate --no-interaction -v&apos;&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;envFrom:&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;secretRef:&lt;/span&gt;                &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;protected-credentials-from-vault&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;restartPolicy:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Never&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now, in the deployment manifest of the application, we need to define 2 very important things to make our workflow work as expected:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The rolling update strategy&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The init container and command that block the deployment from starting until migrations are done&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apps/v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Deployment&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-string&quot;&gt;...&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;strategy:&lt;/span&gt;    &lt;span 
class=&quot;hljs-attr&quot;&gt;rollingUpdate:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;maxSurge:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;25&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;%&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;maxUnavailable:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;   &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;initContainers:&lt;/span&gt;          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;wait-for-migrations-job&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;bitnami/kubectl:1.25&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;command:&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;kubectl&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;args:&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;wait&apos;&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;--for=condition=complete&apos;&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;--timeout=600s&apos;&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;job/availability-api-migrations&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;What is the meaning of that snippet above:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;RollingUpdate&lt;/code&gt; 
strategy:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The rolling update allows us to define how many pod replicas we want to update with the new code at a time (you can also choose between other strategies like &lt;code&gt;recreate&lt;/code&gt;, &lt;code&gt;blue/green&lt;/code&gt; or &lt;code&gt;canary&lt;/code&gt;)&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;InitContainer&lt;/code&gt; migration job watcher&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Now we want to &lt;code&gt;forbid&lt;/code&gt;, in some way, the rollout deployment from beginning until the migrations job has finished with completed status. Fortunately, the &lt;code&gt;kubectl cli&lt;/code&gt; allows us to query and wait for the status of a &lt;code&gt;k8s&lt;/code&gt; component; we take advantage of the Kubernetes API here to &quot;block&quot; the deployment in some way&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl &lt;span class=&quot;hljs-built_in&quot;&gt;wait&lt;/span&gt; --for=condition=complete --timeout=600s job/availability-api-migrations&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We wait for the job &lt;code&gt;availability-api-migrations&lt;/code&gt; to reach complete status, with a &lt;code&gt;timeout&lt;/code&gt; of ten minutes; kubectl will keep polling until the timeout is reached. Obviously, if the migration job finishes in a few milliseconds, the wait loop ends immediately and allows the deploy to begin; otherwise, the deploy will fail. 
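&lt;/p&gt;&lt;p&gt;Putting the pieces together, the whole deploy step can also be driven from a CI pipeline with plain kubectl. A minimal sketch, assuming the Job and Deployment above live in &lt;code&gt;migrations-job.yaml&lt;/code&gt; and &lt;code&gt;deployment.yaml&lt;/code&gt; (file and resource names here are illustrative, not taken from a real pipeline):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of a CI deploy step; assumes the manifests shown above.
# File and resource names are illustrative.
set -eu

# Job specs are immutable, so remove any previous run before re-creating it.
kubectl delete job availability-api-migrations --ignore-not-found
kubectl apply -f migrations-job.yaml

# Block the pipeline until the migrations Job reports Complete
# (fail after 10 minutes, mirroring the initContainer's timeout).
kubectl wait --for=condition=complete --timeout=600s job/availability-api-migrations

# Only now roll out the new application version.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/availability-api --timeout=600s
```

&lt;p&gt;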
The timeout is the MAX allowed time to wait for ask for complete status on a job.&lt;/p&gt;&lt;p&gt;At least with this we can assure that data consistency is preserve, but it&apos;s importan follow some guidelines in terms on how to write and deploy migrations in you applications, below we&apos;ll cover that.&lt;/p&gt;&lt;h2 id=&quot;heading-safety-recommendations&quot;&gt;Safety recommendations&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Write your migrations always thinking in a fast execution, if you expect to run migrations that took more than 5 minutes, considering ask for a maintenance window to restrict traffic access to the application.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Do your migrations always thinking in retro compatibility of your current code running on production.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For example, do not alter a table adding a column NOT NULLABLE or without DEFAULT value.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you need to add a column and delete other, the best strategy to follow is to do 2 separates deployments.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run the first deploy adding the column, and validate that all is running smoothly in production.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Run a new deployment only deleting the old column from the database.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support Me&lt;/h2&gt;&lt;p&gt;If you like what you just read and you find it valuable, then you can buy me a coffee by clicking the link in the image below, I would be appreciated or you can become an sponsor :).&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;So, you want to run migrations in a cloud native application running on a &lt;code&gt;Kubernetes&lt;/code&gt; cluster, and don&apos;t die trying huh!&lt;/p&gt;&lt;p&gt;***Well you&apos;re in the right place!! 
***&lt;/p&gt;&lt;p&gt;After I break some applications in terms of database migrations in a multi replica and concurrent deployment process, I want to give you some advices based on my faults, on how you can run you migrations in a safely way, with native &lt;code&gt;k8s&lt;/code&gt; specs and without hacks of any kind. (no &lt;code&gt;helm&lt;/code&gt;, no &lt;code&gt;external deployers&lt;/code&gt;, pure and plain &lt;code&gt;k8s&lt;/code&gt; process well orchestrated)&lt;/p&gt;&lt;h2 id=&quot;heading-the-problem&quot;&gt;The Problem&lt;/h2&gt;&lt;p&gt;Is very common, modern application evolves faster, new features arise from product to satisfy the final user, and with every new deploy is too common the need to alter your database in some form and you have, many tools to allow you to manage the execution of the migrations against your database, BUT, not when they occur.&lt;/p&gt;&lt;p&gt;If you have an application pod, let say with 4 replicas, and you deploy it, the all 4 will try to run the migrations at the same time potentially causing data corruption and data loss, and nobody wants that.&lt;/p&gt;&lt;h2 id=&quot;heading-when-to-run-migrations-the-workflow&quot;&gt;When to run migrations &lt;code&gt;(the workflow)&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;In the &lt;code&gt;old-way&lt;/code&gt; of run migrations we used to have something like this:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Put the application in &lt;code&gt;maintenance mode&lt;/code&gt; (divert traffic to a special page)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Run database migrations&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy new base code&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Disable maintenance mode on application&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Obviously, this isn&apos;t acceptable approach if you want to achieve &lt;code&gt;zero-downtime&lt;/code&gt; deployments in the actual always-on world, we need to achieve &lt;em&gt;(at leats)&lt;/em&gt; the following steps to assure that migrations and 
application run in a safety way.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Run migrations while the old version of the application is still running, and do the rolling update &quot;only&quot; when migrations are successfully run.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Something like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1668344068616/A3UYB5av5.jpg&quot; alt=&quot;Pipeline + cluster proposals-2.jpg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-job-initcontainer-and-rollingupdates&quot;&gt;&lt;code&gt;Job&lt;/code&gt;, &lt;code&gt;InitContainer&lt;/code&gt; and &lt;code&gt;RollingUpdates&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;So after choose our strategy to run &lt;code&gt;migrations&lt;/code&gt; on &lt;code&gt;k8s&lt;/code&gt;, we need to write our manifests in order to accomplish the defined workflow.&lt;/p&gt;&lt;p&gt;First, the migrations job itself:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;batch/v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Job&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;availability-api-migrations&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;ttlSecondsAfterFinished:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;60&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;backoffLimit:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;containers:&lt;/span&gt;        &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span 
class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;migrations&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;availability-api-migrations&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;command:&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;/bin/sh&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;-c&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos; bin/console doctrine:migrations:migrate --no-interaction -v&apos;&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;envFrom:&lt;/span&gt;            &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;secretRef:&lt;/span&gt;                &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;protected-credentials-from-vault&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;restartPolicy:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Never&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now, in the deployment manifest of the application, we need to define 2 very important things to make our workflow work as expected:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The rolling update strategy&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The init container and command that block the deployment rollout until migrations are done.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apps/v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;Deployment&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-string&quot;&gt;...&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;strategy:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;rollingUpdate:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;maxSurge:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;25&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;%&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;maxUnavailable:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;   &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;initContainers:&lt;/span&gt;          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;wait-for-migrations-job&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;bitnami/kubectl:1.25&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;command:&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;kubectl&apos;&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;args:&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;wait&apos;&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;--for=condition=complete&apos;&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;--timeout=600s&apos;&lt;/span&gt;              
&lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;job/availability-api-migrations&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;What does the snippet above mean?&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;RollingUpdate&lt;/code&gt; strategy:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The rolling update lets us define how many pod replicas we want to update with the new code at a time (you can also choose other &lt;code&gt;k8s&lt;/code&gt; strategies like &lt;code&gt;recreate&lt;/code&gt;, &lt;code&gt;blue/green&lt;/code&gt; or &lt;code&gt;canary&lt;/code&gt;).&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;InitContainer&lt;/code&gt; migration job watcher:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;We want to prevent the deployment rollout from starting until the migrations job has finished with a completed status. Fortunately, the &lt;code&gt;kubectl&lt;/code&gt; CLI lets us query and wait for the status of a &lt;code&gt;k8s&lt;/code&gt; resource, so we take advantage of the Kubernetes API here to &quot;block&quot; the deployment.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;&lt;span class=&quot;hljs-variable&quot;&gt;$&lt;/span&gt; kubectl &lt;span class=&quot;hljs-built_in&quot;&gt;wait&lt;/span&gt; --for=condition=complete --timeout=600s job/availability-api-migrations&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We wait for the job &lt;code&gt;availability-api-migrations&lt;/code&gt; to reach the complete status, with a &lt;code&gt;timeout&lt;/code&gt; of ten minutes (600s); kubectl keeps polling until the timeout is reached. If the migration job finishes in a few milliseconds, the wait loop ends immediately and allows the deploy to begin; otherwise, once the timeout is hit, the deploy will fail. 
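One detail the manifests above leave implicit: the init container can only run `kubectl wait` successfully if the pod's service account is allowed to read Job status in the namespace. A minimal RBAC sketch of that permission could look like this (all names here are illustrative, not taken from the original setup):

```yaml
# Allow the deployment's service account to poll Job status from the
# wait-for-migrations-job init container. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migrations-job-reader
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: migrations-job-reader-binding
subjects:
  - kind: ServiceAccount
    name: default   # or a dedicated account set via spec.serviceAccountName
roleRef:
  kind: Role
  name: migrations-job-reader
  apiGroup: rbac.authorization.k8s.io
```

If you prefer not to grant this to the `default` service account, create a dedicated one and reference it from the deployment's pod spec.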
The timeout is the maximum time we are willing to wait for the job to reach the complete status.&lt;/p&gt;&lt;p&gt;With this in place we can ensure that data consistency is preserved, but it&apos;s important to follow some guidelines on how to write and deploy migrations in your applications; we cover them below.&lt;/p&gt;&lt;h2 id=&quot;heading-safety-recommendations&quot;&gt;Safety recommendations&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Always write your migrations with fast execution in mind; if you expect a migration to take more than 5 minutes, consider asking for a maintenance window to restrict traffic to the application.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Always write your migrations to be backward compatible with the code currently running in production.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For example, do not add a NOT NULL column without a DEFAULT value.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you need to add one column and delete another, the best strategy is to do 2 separate deployments.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run the first deployment adding the column, and validate that everything runs smoothly in production.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Run a second deployment that only deletes the old column from the database.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support Me&lt;/h2&gt;&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the link in the image below, I would appreciate it, or you can become a sponsor :).&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1668019261256/hRDQ2d9qy.png</hashnode:coverImage></item><item><title><![CDATA[How to build a CI/CD workflow with Skaffold for your application (Part III)]]></title><description><![CDATA[🔥 This is the third part (and last) of the series "Full CI/CD workflow with Skaffold for your 
application".
Let's recap: The Workflow
This is the workflow so far:
📣 You can check how to get to this point in the first two deliveries of the series.

Gitla...]]></description><link>https://blog.equationlabs.io/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-iii</link><guid isPermaLink="true">https://blog.equationlabs.io/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-iii</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Symfony]]></category><category><![CDATA[GitLab]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Wed, 09 Nov 2022 09:06:04 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;🔥 This is the third part (and last) of the series &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/series/workflow-with-skaffold&quot;&gt;&quot;Full CI/CD workflow with Skaffold for your application&quot;.&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-lets-recap-the-workflow&quot;&gt;Let&apos;s recap: &lt;code&gt;The Workflow&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;&lt;p&gt;&lt;em&gt;📣 You can check how to get to this point in the first two deliveries of the series.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg&quot; alt=&quot;2fl6qCIhG.png.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-gitlab-k8s-agent-and-security&quot;&gt;Gitlab &lt;code&gt;K8s agent&lt;/code&gt; and Security&lt;/h2&gt;&lt;p&gt;The main part of the integration between &lt;code&gt;k8s&lt;/code&gt; and &lt;code&gt;Gitlab&lt;/code&gt; is the &lt;code&gt;Gitlab K8s Agent&lt;/code&gt;, which is, in my experience, the best and easiest way I have found to integrate K8s with a DevOps platform like Gitlab.&lt;/p&gt;&lt;p&gt;Let&apos;s recap some steps:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;You need to add an Agent and then run a helm chart in your cluster to allow secure communication between the two.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The agent can be configured in 
2 ways:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;CI_ACCESS&lt;/code&gt;: Allows access from the project repository pipeline to the cluster, and you are then in charge of managing how to deploy to the cluster.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;GITOPS_ACCESS&lt;/code&gt;: Allows a full GitOps flow, like &lt;code&gt;ArgoCD&lt;/code&gt; for example, updating your cluster in a pull-based way in sync with the main branch of the repository.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;In my case I use the first one, &lt;code&gt;CI_ACCESS&lt;/code&gt;, since I want to manage the whole process in a more granular way with &lt;code&gt;skaffold&lt;/code&gt;, so my configuration is way simpler.&lt;/p&gt;&lt;p&gt;I have 2 repositories in an application group, one for the microservice itself and one for the agent (the agent could also live in the microservice repository, but if you want more granular access, or to share the agent/cluster between applications of the same stack, this is the best way).&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982028996/VdcNVOaUU.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.17.42.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;So in the K8s-agents repository we only have the declarative config.yaml file for every agent that we want to create (for this example I have 2, one for lower-envs/runner and one for production, since they are 2 different clusters).&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982211419/lgoKLKteB.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.23.17.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And in the config itself, I give access to all the projects that I want to be able to use the cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982282906/c40e50lXX.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.24.37.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And last, but not least, we need 
to link the agent with the cluster. For that, go to the k8s project, open the Kubernetes Cluster menu, and you will see an interface with instructions on how to link the agent and the cluster via a helm chart installed in the cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982484851/8zvk9zHf8.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.26.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982568682/jDSedEjlDe.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.27.05.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;After that, your cluster and Gitlab instance are linked, and your applications can use the cluster as Kubernetes-executor runners and also for dynamic review environments (dynamic QA instances, so to speak).&lt;/p&gt;&lt;h3 id=&quot;heading-deployment-and-safety-recommendations-for-k8s-agents&quot;&gt;Deployment and Safety Recommendations for &lt;code&gt;K8s Agents&lt;/code&gt;&lt;/h3&gt;&lt;p&gt;To restrict access to your cluster, you can use impersonation. 
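To make the impersonation idea concrete, here is a sketch of what an agent `config.yaml` granting CI access to one project while impersonating a dedicated account might look like. The project path and username are hypothetical, and the exact schema should be checked against the GitLab agent documentation:

```yaml
# Agent config.yaml sketch: grant CI access to one project and run
# CI requests as a dedicated account instead of the agent itself.
# Project path and username are hypothetical examples.
ci_access:
  projects:
    - id: my-group/availability-api
      access_as:
        impersonate:
          username: ci-deployer
```

The impersonated account then gets its effective permissions from the RBAC rules you define in the cluster, as described next.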
To specify impersonations, use the &lt;code&gt;access_as&lt;/code&gt; attribute in your Agent&apos;s configuration file, and use K8s RBAC rules to manage the impersonated account&apos;s permissions.&lt;/p&gt;&lt;p&gt;You can impersonate:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The Agent itself (default)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The CI job that accesses the cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A specific user or system account defined within the cluster&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Impersonation gives some benefits in terms of security:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;It allows you to leverage your K8s authorisation capabilities to limit what can be done through the CI/CD tunnel on your running cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;It lowers the risk of providing unlimited access to your K8s cluster through the CI/CD tunnel&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;It segments fine-grained permissions for the CI/CD tunnel at the project or group level&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;It controls permissions for the CI/CD tunnel at the username or service account level&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-provisioning-cluster-with-terraform&quot;&gt;Provisioning cluster with &lt;code&gt;terraform&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;As I said, the main goal of this tutorial is to get the same tooling for local development, pipeline and deployment, but (there&apos;s always a but) we have 2 sets of terraform configuration instructions. For example, for local development I want, as a developer, as much of the observability tooling that I have in production as possible, in case I need to test metrics, build dashboards in grafana, etc., but without the complexity of the production infrastructure architecture.&lt;/p&gt;&lt;p&gt;So, as a recap, here is the application diagram from the first part, showing what our local development stack looks like and how it&apos;s achieved with &lt;code&gt;terraform&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; 
href=&quot;http://www.plantuml.com/plantuml/png/RPDFSzem4CNl_XGg9p9juaiFcPxY6gRj55eFX4DFZ90Nq4H_t9L4OJhvxbqXrqGaXqpqPt_xtZxX1-Sv-g1LyKuQeK8BREzzvpwL9V8_Tplfzs4J7A2mneFnTyBgibFSHERM-LR9JLb_l6tYqMe-ApLt7f2ErhNLdJMHwMB_ObRz-hbwNC-g7vDbNJNJyKrHB4zKhTVJen-JW0iQy0CRrKeIDg9xnlgAppQObkDf_7JlgEBxlMDLroafk9VMi5g5A3kwON-9OOFq6E40wA11UpmHzuZ0j_9fHCj5kc7dAzBAC07evzpmNV9psKKoNifjb0QcpySwsMMlxVABoQgJ15S7BXNVI2NzYLNDDxO4F4W1iR6M0YrpwU3rBF49q2e5-4QVo3TV6_QUBInl5y4Om7ugmhYaxMH3SRJInU7Z_uW8BlQGacOBK5TvPOfeWuV8n1y88ObOJ_AmhWBlq1va2sSjUZfMBoONT9MDr7lhBT72YoJpJ7z5FWUrrU3t42BG39j8pS6Z5EwSQumWPySxv5loIeLVqkhCM2EzHMbsPMMuEdbga48Xcvd9JDW9v1qmdHIpQD9uWrY61PVoQBddh0y8YNhElWSuUa0oq_G5aPpsP_712RWsznQ2y3k0y__DqLYiIEQ63ov_im5nBvZY0KmRjFe7&quot;&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667247028660/_7nejmil9.png&quot; alt=&quot;application-diagram.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;So, for this application structure, i want to get in my local environment the cluster and it&apos;s &apos;pre-requisites&apos; for my architecture, understood as pre-requisites all the others components inside the cluster that no belongs to the application itself (monitoring stack, traefik, cert-manager, etc).&lt;/p&gt;&lt;p&gt;So for that, I write simple modules to install that dependencies inside the local cluster, and get them available to use when a run my application locally with &lt;code&gt;skaffold&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;My structure for infrastructure folder looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667983890051/d2r2cjn0I.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.49.38.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Every module has inside their &lt;code&gt;main.tf&lt;/code&gt; configuration file setting the desired state for my cluster after it&apos;s applied.&lt;/p&gt;&lt;p&gt;Lets take a look for one of this module (prometheus) main file:&lt;/p&gt;&lt;pre&gt;&lt;code 
class=&quot;lang-bash&quot;&gt;terraform {  required_providers {    kubernetes = {      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;hashicorp/kubernetes&quot;&lt;/span&gt;      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 2.13.1&quot;&lt;/span&gt;    }    helm = {      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;hashicorp/helm&quot;&lt;/span&gt;      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 2.7.0&quot;&lt;/span&gt;    }    kubectl = {      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;gavinbunney/kubectl&quot;&lt;/span&gt;      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 1.14.0&quot;&lt;/span&gt;    }  }}resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;kubernetes_namespace_v1&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;monitoring_namespace&quot;&lt;/span&gt; {  metadata {    name = var.monitoring_stack_namespace  }}resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;helm_release&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack&quot;&lt;/span&gt; {  name  = var.monitoring_stack_prometheus_name  repository = &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://prometheus-community.github.io/helm-charts&quot;&lt;/span&gt;  chart = &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus&quot;&lt;/span&gt;  version = var.monitoring_stack_prometheus_version_number  namespace = var.monitoring_stack_namespace  create_namespace = &lt;span class=&quot;hljs-literal&quot;&gt;false&lt;/span&gt;  values = [    file(&lt;span class=&quot;hljs-string&quot;&gt;&quot;&lt;span class=&quot;hljs-variable&quot;&gt;${path.module}&lt;/span&gt;/manifests/prometheus-override-values.yaml&quot;&lt;/span&gt;)  ]  depends_on = [    
kubernetes_namespace_v1.monitoring_namespace  ]}resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;kubectl_manifest&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack_ingress&quot;&lt;/span&gt; {  yaml_body = file(&lt;span class=&quot;hljs-string&quot;&gt;&quot;&lt;span class=&quot;hljs-variable&quot;&gt;${path.module}&lt;/span&gt;/manifests/prometheus-ingress.yaml&quot;&lt;/span&gt;)  depends_on = [    helm_release.prometheus_stack  ]}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And then, in the root &lt;code&gt;main.tf&lt;/code&gt; configuration file, you can wrap as many modules as you want, for my case, with my 4 modules was enough (prometheus, traefik, cert-manager, grafana)&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;module &lt;span class=&quot;hljs-string&quot;&gt;&quot;cert_manager_stack&quot;&lt;/span&gt; {  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/cert-manager&quot;&lt;/span&gt;}module &lt;span class=&quot;hljs-string&quot;&gt;&quot;traefik_stack&quot;&lt;/span&gt; {  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/traefik&quot;&lt;/span&gt;  depends_on = [    module.cert_manager_stack  ]}module &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack&quot;&lt;/span&gt; {  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/prometheus&quot;&lt;/span&gt;  depends_on = [    module.traefik_stack  ]}module &lt;span class=&quot;hljs-string&quot;&gt;&quot;grafana_stack&quot;&lt;/span&gt; {  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/grafana&quot;&lt;/span&gt;  depends_on = [    module.prometheus_stack  ]}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This also, has a handy target in our Makefile, allowing developers and 
operators to easily set up and remove the cluster pre-requisites.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667984415271/B6SI0SOw1.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.48.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667984399818/Yh-hM8j4L.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.49.06.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Setting up the pre-requisites takes about 3 minutes, and since it&apos;s not something you need to do all the time, you can set up your cluster today, work on your feature for some days, and then shut it down.&lt;/p&gt;&lt;p&gt;After all this, you will have a fully functional Local-To-Prod pipeline. (If you need to see what the Gitlab CI file looks like, it&apos;s in the second part of this series.)&lt;/p&gt;&lt;h2 id=&quot;heading-next&quot;&gt;Next&lt;/h2&gt;&lt;p&gt;This is the last delivery of the series, but from now on I&apos;ll write about the other tools that I use to address different challenges in my day-to-day work.&lt;/p&gt;&lt;p&gt;If you are interested, the next topics I&apos;ll write about are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Managing database migrations at scale in &lt;code&gt;Kubernetes&lt;/code&gt; for &lt;code&gt;PHP&lt;/code&gt; applications with the &lt;code&gt;symfony/migrations&lt;/code&gt; component&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;Istio&lt;/code&gt;, &lt;code&gt;Cert-Manager&lt;/code&gt; and &lt;code&gt;Let&apos;s Encrypt&lt;/code&gt;: Secure your &lt;code&gt;k8s&lt;/code&gt; clusters&apos; communication with automated generation and provisioning of SSL certificates&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Internal Developer Platform: A modern way to run engineering teams.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;Digital War Room&lt;/code&gt;, or how to get observability for Engineering Managers across applications and teams.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support 
me&lt;/h2&gt;&lt;p&gt;If you find this content interesting, please consider buying me a coffee :&apos;)&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;🔥 This is third part (and last) of the series &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/series/workflow-with-skaffold&quot;&gt;&quot;Full CI/CD workflow with Skaffold for your application&quot;.&lt;/a&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-lets-recap-the-workflow&quot;&gt;Let&apos;s recap: &lt;code&gt;The Workflow&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;&lt;p&gt;&lt;em&gt;📣 You can check how to get to this point in the firsts two delivery of the series.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg&quot; alt=&quot;2fl6qCIhG.png.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-gitlab-k8s-agent-and-security&quot;&gt;Gitlab &lt;code&gt;K8s agent&lt;/code&gt; and Security&lt;/h2&gt;&lt;p&gt;This main part in the integration of &lt;code&gt;k8s&lt;/code&gt; and &lt;code&gt;Gitlab&lt;/code&gt; with the &lt;code&gt;Gitlab K8s Agent&lt;/code&gt;, is, in my experience, the best and easy way I find to integrate K8s with a DevOps platform like Gitlab.&lt;/p&gt;&lt;p&gt;Let&apos;s recap some steps&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;You need to add an Agent and them run a helm chart into your cluster to allow the secure communication between both.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;the agent can be configured in 2 ways:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;CI_ACCESS&lt;/code&gt;: Allow access from the project repository pipeline to the cluster and then you are in charge to manage how to deploy in the cluster.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;GITOPS_ACCESS&lt;/code&gt;: This allow a full gitops flow like &lt;code&gt;ArgoCD&lt;/code&gt; for example, updating your cluster in a pull based way in sync with the 
main branch of the repository.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;In my case a use the first one &lt;code&gt;CI_ACCESS&lt;/code&gt; since I want to manage in a more granular way, the whole process with &lt;code&gt;skaffold&lt;/code&gt;, so mi configuration is way simpler&lt;/p&gt;&lt;p&gt;I have 2 repositories in a application group, one for the micro service itself and one for the agent (the agent also could be put in the micro service repository, but if you want more granular access or share the agent/cluster between applications of the same stack, this is the best way)&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982028996/VdcNVOaUU.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.17.42.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;So in the K8s-agents repository, we only have the declarative config.yaml file for every agent that we want to create (for this example I have 2, one for lower-envs/runner and one for production since they are 2 different clusters)&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982211419/lgoKLKteB.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.23.17.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And in the config itself, I give access to all the projects that I want to use in the cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982282906/c40e50lXX.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.24.37.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And last, bit no least, the need to link the agent with the cluster, for that, you should go to the k8s project, Kubernetes Cluster menu, and there you will see an interface where you will receive the instructions on how to link agent/cluster via a helm chart to be installed in the cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982484851/8zvk9zHf8.png&quot; alt=&quot;Screenshot 2022-11-09 at 
09.26.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667982568682/jDSedEjlDe.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.27.05.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;After that, you cluster and gitlab instance are now linked, and you applications can use the cluster as Kubernetes-Executor runners and also for dynamic review environments (aka dynamic QA instances to say so)&lt;/p&gt;&lt;h3 id=&quot;heading-deployment-and-safety-recommendations-for-k8s-agents&quot;&gt;Deployment and Safety Recommendations for &lt;code&gt;K8s Agents&lt;/code&gt;&lt;/h3&gt;&lt;p&gt;To restrict access to your cluster, you can use impersonation. To specify impersonations, use the &lt;code&gt;access_as&lt;/code&gt; attribute in your Agent&apos;s configuration file and use K8s RBAC rules to manage impersonated account permissions.&lt;/p&gt;&lt;p&gt;You can impersonate:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The Agent itself (default) = The CI job that accesses the cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A specific user or system account defined within the cluster&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Impersonation give some benefits in terms of security:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Allows you to leverage your K8s authorisation capabilities to limit the permissions of what can be done with the CI/CD tunnel on your running cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD tunnel&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Segments fine-grained permissions with the CI/CD tunnel at the project or group level&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Controls permissions with the CI/CD tunnel at the username or service account&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-provisioning-cluster-with-terraform&quot;&gt;Provisioning cluster with &lt;code&gt;terraform&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;As I said, the main goal of this tutorial is 
try to get the same tooling for local development, pipeline and deployment, but (always got a but), we have 2 sets of the terraform configuration instructions, of example for local development I want, as developer, to get as much of the observability tools that I have on production, in case I need to test metrics, build dashboards on grafana, etc, but without the complexity of production infrastructure architecture.&lt;/p&gt;&lt;p&gt;So in this case, we can give the application diagram for the first part here, as recap of &apos;how our local development stack&lt;code&gt;looks like, and how to achieved with&lt;/code&gt;terraform `.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;http://www.plantuml.com/plantuml/png/RPDFSzem4CNl_XGg9p9juaiFcPxY6gRj55eFX4DFZ90Nq4H_t9L4OJhvxbqXrqGaXqpqPt_xtZxX1-Sv-g1LyKuQeK8BREzzvpwL9V8_Tplfzs4J7A2mneFnTyBgibFSHERM-LR9JLb_l6tYqMe-ApLt7f2ErhNLdJMHwMB_ObRz-hbwNC-g7vDbNJNJyKrHB4zKhTVJen-JW0iQy0CRrKeIDg9xnlgAppQObkDf_7JlgEBxlMDLroafk9VMi5g5A3kwON-9OOFq6E40wA11UpmHzuZ0j_9fHCj5kc7dAzBAC07evzpmNV9psKKoNifjb0QcpySwsMMlxVABoQgJ15S7BXNVI2NzYLNDDxO4F4W1iR6M0YrpwU3rBF49q2e5-4QVo3TV6_QUBInl5y4Om7ugmhYaxMH3SRJInU7Z_uW8BlQGacOBK5TvPOfeWuV8n1y88ObOJ_AmhWBlq1va2sSjUZfMBoONT9MDr7lhBT72YoJpJ7z5FWUrrU3t42BG39j8pS6Z5EwSQumWPySxv5loIeLVqkhCM2EzHMbsPMMuEdbga48Xcvd9JDW9v1qmdHIpQD9uWrY61PVoQBddh0y8YNhElWSuUa0oq_G5aPpsP_712RWsznQ2y3k0y__DqLYiIEQ63ov_im5nBvZY0KmRjFe7&quot;&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667247028660/_7nejmil9.png&quot; alt=&quot;application-diagram.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;So, for this application structure, i want to get in my local environment the cluster and it&apos;s &apos;pre-requisites&apos; for my architecture, understood as pre-requisites all the others components inside the cluster that no belongs to the application itself (monitoring stack, traefik, cert-manager, etc).&lt;/p&gt;&lt;p&gt;So for that, I write simple modules to install that dependencies inside the 
local cluster, making them available when I run my application locally with &lt;code&gt;skaffold&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;My infrastructure folder structure looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667983890051/d2r2cjn0I.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.49.38.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Every module has inside its &lt;code&gt;main.tf&lt;/code&gt; configuration file the desired state for my cluster after it&apos;s applied.&lt;/p&gt;&lt;p&gt;Let&apos;s take a look at the main file of one of these modules (prometheus):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;terraform {
  required_providers {
    kubernetes = {
      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;hashicorp/kubernetes&quot;&lt;/span&gt;
      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 2.13.1&quot;&lt;/span&gt;
    }
    helm = {
      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;hashicorp/helm&quot;&lt;/span&gt;
      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 2.7.0&quot;&lt;/span&gt;
    }
    kubectl = {
      &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;gavinbunney/kubectl&quot;&lt;/span&gt;
      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;&amp;gt;= 1.14.0&quot;&lt;/span&gt;
    }
  }
}
resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;kubernetes_namespace_v1&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;monitoring_namespace&quot;&lt;/span&gt; {
  metadata {
    name = var.monitoring_stack_namespace
  }
}
resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;helm_release&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack&quot;&lt;/span&gt; {
  name  = var.monitoring_stack_prometheus_name
  repository = &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://prometheus-community.github.io/helm-charts&quot;&lt;/span&gt;
  chart = &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus&quot;&lt;/span&gt;
  version = var.monitoring_stack_prometheus_version_number
  namespace = var.monitoring_stack_namespace
  create_namespace = &lt;span class=&quot;hljs-literal&quot;&gt;false&lt;/span&gt;
  values = [
    file(&lt;span class=&quot;hljs-string&quot;&gt;&quot;&lt;span class=&quot;hljs-variable&quot;&gt;${path.module}&lt;/span&gt;/manifests/prometheus-override-values.yaml&quot;&lt;/span&gt;)
  ]
  depends_on = [
    kubernetes_namespace_v1.monitoring_namespace
  ]
}
resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;kubectl_manifest&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack_ingress&quot;&lt;/span&gt; {
  yaml_body = file(&lt;span class=&quot;hljs-string&quot;&gt;&quot;&lt;span class=&quot;hljs-variable&quot;&gt;${path.module}&lt;/span&gt;/manifests/prometheus-ingress.yaml&quot;&lt;/span&gt;)
  depends_on = [
    helm_release.prometheus_stack
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then, in the root &lt;code&gt;main.tf&lt;/code&gt; configuration file, you can wrap as many modules as you want; in my case, my four modules were enough (prometheus, traefik, cert-manager, grafana):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;module &lt;span class=&quot;hljs-string&quot;&gt;&quot;cert_manager_stack&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/cert-manager&quot;&lt;/span&gt;
}
module &lt;span class=&quot;hljs-string&quot;&gt;&quot;traefik_stack&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/traefik&quot;&lt;/span&gt;
  depends_on = [
    module.cert_manager_stack
  ]
}
module &lt;span class=&quot;hljs-string&quot;&gt;&quot;prometheus_stack&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/prometheus&quot;&lt;/span&gt;
  depends_on = [
    module.traefik_stack
  ]
}
module &lt;span class=&quot;hljs-string&quot;&gt;&quot;grafana_stack&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-built_in&quot;&gt;source&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;./module/grafana&quot;&lt;/span&gt;
  depends_on = [
    module.prometheus_stack
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This also has a handy target in our Makefile, allowing developers and operators to easily set up and remove the cluster pre-requisites:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667984415271/B6SI0SOw1.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.48.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667984399818/Yh-hM8j4L.png&quot; alt=&quot;Screenshot 2022-11-09 at 09.49.06.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Setting up the pre-requisites takes about 3 minutes, but it&apos;s something you only need to do from time to time: you can set up your cluster today, work on your feature for some days, and then shut it down.&lt;/p&gt;&lt;p&gt;After all this, you will have a fully functional Local-To-Prod pipeline. 
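&lt;/p&gt;&lt;p&gt;&lt;em&gt;As an illustration, the Makefile targets shown in the screenshots above could be sketched like this (the target and variable names here are hypothetical, not necessarily the project&apos;s real ones):&lt;/em&gt;&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-makefile&quot;&gt;INFRA_DIR ?= infrastructure

infra-up:    ## INFRA[terraform]: install cluster pre-requisites into the local cluster
    @terraform -chdir=$(INFRA_DIR) init -input=false
    @terraform -chdir=$(INFRA_DIR) apply -auto-approve

infra-down:  ## INFRA[terraform]: remove cluster pre-requisites from the local cluster
    @terraform -chdir=$(INFRA_DIR) destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;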
(If you need to see how the Gitlab CI file looks, it&apos;s in the second part of this series.)&lt;/p&gt;&lt;h2 id=&quot;heading-next&quot;&gt;Next&lt;/h2&gt;&lt;p&gt;This is the last delivery of the series, but I&apos;ll keep writing about the other tools I use to address different challenges in my day-to-day work.&lt;/p&gt;&lt;p&gt;If you are interested, the next topics I&apos;ll write about are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Managing database migrations at scale in &lt;code&gt;Kubernetes&lt;/code&gt; for a &lt;code&gt;PHP&lt;/code&gt; application with the &lt;code&gt;symfony/migrations&lt;/code&gt; component&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;Istio&lt;/code&gt;, &lt;code&gt;Cert-Manager&lt;/code&gt; and &lt;code&gt;Let&apos;s Encrypt&lt;/code&gt;: Secure your &lt;code&gt;k8s&lt;/code&gt; clusters&apos; communication with automated generation and provisioning of SSL certificates&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Internal Developer Platform: A modern way to run engineering teams.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;Digital War Room&lt;/code&gt;, or how to get observability for Engineering Managers across applications and teams.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support me&lt;/h2&gt;&lt;p&gt;If you find this content interesting, please consider buying me a coffee :&apos;)&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1667807537209/nefRHbKdK.webp</hashnode:coverImage></item><item><title><![CDATA[How to build a CI/CD workflow with Skaffold for your application (Part II)]]></title><description><![CDATA[🔥 This is the second part of the series "Full CI/CD workflow with Skaffold for your application".
Let's recap the Workflow
As you may remember - and if not, you can read the first release of this tutorial 😅 - my main idea is to implement one tool - skaffo...]]></description><link>https://blog.equationlabs.io/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-ii</link><guid isPermaLink="true">https://blog.equationlabs.io/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-ii</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Symfony]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Mon, 07 Nov 2022 08:45:42 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;🔥 This is the second part of the series &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/series/workflow-with-skaffold&quot;&gt;&quot;Full CI/CD workflow with Skaffold for your application&quot;&lt;/a&gt;.&lt;/p&gt;&lt;h2 id=&quot;heading-lets-recap-the-workflow&quot;&gt;Let&apos;s recap the &lt;code&gt;Workflow&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;As you may remember - and if not, you can read the first release of this tutorial 😅 - my main idea is to implement one tool - &lt;code&gt;skaffold&lt;/code&gt; - as a building block for my &lt;code&gt;CI/CD workflow&lt;/code&gt;, which should be managed by a single &lt;code&gt;makefile&lt;/code&gt; as the entrypoint for local development and pipelines - on &lt;code&gt;gitlab&lt;/code&gt; - and all of this should be deployed to a &lt;code&gt;K8s&lt;/code&gt; cluster in &lt;code&gt;GCP&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Also, for the simplicity of this tutorial, we use the &lt;code&gt;image&lt;/code&gt; and &lt;code&gt;artefact&lt;/code&gt; repository in the same &lt;code&gt;gitlab&lt;/code&gt; SaaS, but you can use whatever you want for this task (&lt;code&gt;Amazon S3&lt;/code&gt;, &lt;code&gt;Docker&lt;/code&gt; Registry, &lt;code&gt;Private Registries&lt;/code&gt;, &lt;code&gt;Azure&lt;/code&gt; Object 
Storage, etc.)&lt;/p&gt;&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;&lt;p&gt;&lt;em&gt;📣 The tool setup and local workflow were covered in the first delivery of this tutorial.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg&quot; alt=&quot;2fl6qCIhG.png.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-the-makefile&quot;&gt;The Makefile&lt;/h2&gt;&lt;p&gt;As said, the &lt;code&gt;makefile&lt;/code&gt; is the main entrypoint for commands executed by &lt;code&gt;developers&lt;/code&gt; when they do development work locally, and by &lt;code&gt;gitlab pipelines&lt;/code&gt; in their different stages (this may vary in your implementation). Those &lt;code&gt;makefile&lt;/code&gt; commands are mostly wrappers around &lt;code&gt;skaffold&lt;/code&gt; ones, with the difference that I need to pass dynamic values for those stages to work as I expect, so it&apos;s better to wrap them in a makefile target that receives the dynamic params and then runs the &apos;skaffold&apos; command itself. 
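&lt;/p&gt;&lt;p&gt;&lt;em&gt;As a minimal sketch of that idea (the target name here is hypothetical; the real targets are shown below), a wrapper simply receives the dynamic values and forwards them to skaffold instead of hard-coding them:&lt;/em&gt;&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-makefile&quot;&gt;deploy-review:    ## Hypothetical wrapper: forwards dynamic values to skaffold
    @skaffold run -f $(DEPLOY_DIR)/skaffold.yaml -p $(profile) -n $(namespace) --kube-context=$(kube_context)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;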
(later you will see why.)&lt;/p&gt;&lt;p&gt;As of the time of writing this article, my targets are:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667763214992/_MmsTbgzE.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.33.21.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Luckily, for local development we have a single command, since &lt;code&gt;skaffold run&lt;/code&gt; does the full pipeline cycle &lt;code&gt;(build, test, deploy, hot reload)&lt;/code&gt; for development. So why do I need to wrap this single command in a &lt;code&gt;makefile target&lt;/code&gt;? Mostly because I need it to work no matter what kind of local cluster technology the developer uses (docker-desktop, Minikube, etc.) and no matter the OS of the developer machine (MacOS, *nix). For that, I need to pass the &lt;code&gt;kube-context&lt;/code&gt; parameter to &lt;code&gt;skaffold&lt;/code&gt;, which in my case is &lt;code&gt;docker-desktop&lt;/code&gt; (docker-desktop already brings a pre-installed &lt;code&gt;k8s cluster for local development&lt;/code&gt;, which frees me from manually installing a cluster on my machine - a win for docker-desktop here).&lt;/p&gt;&lt;p&gt;So you will see that most of the &lt;code&gt;makefile&lt;/code&gt; wrappers reuse the same command in multiple stages (pipelines) based on the parameters received in the make execution, and also generate random seeds to prefix namespaces (because you will have more than one developer working in the same code base at once) and I want to avoid collisions between &lt;code&gt;gitlab pipelines&lt;/code&gt; and deploys in lower environments when N developers are working in the same code base.&lt;/p&gt;&lt;p&gt;We must focus on the &lt;code&gt;pipeline&lt;/code&gt; targets (all of them using the &lt;code&gt;skaffold&lt;/code&gt; tool), since the infrastructure ones, related to installing prerequisites in the local cluster to comply with the 
architecture design, aren&apos;t covered in this series but in another series I&apos;ll write in the coming weeks.&lt;/p&gt;&lt;p&gt;Here is how my &lt;code&gt;makefile&lt;/code&gt; looks at the time of writing this article for those stages (in technology everything evolves quickly - I always keep this in mind 😅):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-makefile&quot;&gt;&lt;span class=&quot;hljs-comment&quot;&gt;#------ Development and Pipeline targets ----------#&lt;/span&gt;&lt;span class=&quot;hljs-section&quot;&gt;run:    ## DEVELOPMENT[skaffold]: Up and running stack in development mode with hot reloading in the local machine&lt;/span&gt;    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _requirements    @skaffold dev -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p development -n &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt; --no-prune=false --cache-artifacts=false&lt;span class=&quot;hljs-section&quot;&gt;unit:    ## DEVELOPMENT[skaffold]: build, deploy and run unit tests =&amp;gt; FOR PIPELINE: `make unit profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=unit PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;&lt;span class=&quot;hljs-section&quot;&gt;integration: ## DEVELOPMENT[skaffold]: build, deploy and run integration tests =&amp;gt; 
FOR PIPELINE: `make integration profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=integration PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;&lt;span class=&quot;hljs-section&quot;&gt;functional: ## DEVELOPMENT[skaffold]: build, deploy and run functional tests =&amp;gt; FOR PIPELINE: `make functional profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=functional PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;&lt;span class=&quot;hljs-section&quot;&gt;build:    ## PIPELINE[skaffold]: build and push images to registry =&amp;gt; `make build tag=1.0.0|71dcab00 kube_context=docker-desktop`&lt;/span&gt;    @skaffold build -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p production 
-t &lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt; --kube-context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; --file-output=pipeline-artifacts.json&lt;span class=&quot;hljs-section&quot;&gt;render:    ## PIPELINE[skaffold]: render manifests and push to artifact registry =&amp;gt; `make render namespace=availability tag=1.0.0|71dcab00`&lt;/span&gt;    @skaffold render -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p production -n &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt; -a pipeline-artifacts.json -o &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt;-api-&lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt;-production.yaml&lt;span class=&quot;hljs-section&quot;&gt;deploy:    ## PIPELINE[skaffold]: apply hydrated manifests to desired namespace on cluster `make deploy tag=1.0.0|71dcab00 profile=production namespace=availability kube_context=docker-desktop`&lt;/span&gt;    @kubectl create namespace &lt;span class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt; --context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt;    @skaffold apply -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p &lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; -n &lt;span class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt; --kube-context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; --status-check=true &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt;-api-&lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt;-production.yaml || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span 
class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;As you can see, all targets are wrappers for &lt;code&gt;skaffold&lt;/code&gt;, but they allow me to pass dynamic data, contexts and profiles (we&apos;ll look at this in the &lt;code&gt;skaffold&lt;/code&gt; file explanation below). The complete file is larger, but with this snippet you can get an idea of how to build something like it.&lt;/p&gt;&lt;h2 id=&quot;heading-the-skaffold-file&quot;&gt;The &lt;code&gt;skaffold&lt;/code&gt; file&lt;/h2&gt;&lt;p&gt;The main &lt;code&gt;workflow orchestrator&lt;/code&gt; has a main config file, where we define how the application should be built and tested, how its manifests are rendered, and how it is deployed, so it&apos;s a vertebral part of this strategy; now I&apos;ll explain mine.&lt;/p&gt;&lt;p&gt;I have two &lt;code&gt;skaffold&lt;/code&gt; profiles, one called &lt;code&gt;development&lt;/code&gt; and the other &lt;code&gt;production&lt;/code&gt;, plus a common part shared between them (tag strategy, deploy strategy); both of them use the same &lt;code&gt;Dockerfile&lt;/code&gt; to build their images, pointing to the correct &lt;code&gt;target&lt;/code&gt;. (You can use separate Dockerfiles, but in my case I want to keep it as simple as possible, because the difference in dependencies between the two images is minimal.)&lt;/p&gt;&lt;p&gt;The skaffold file lives inside my deploy folder (do you remember my file organisation? 
You can re-check it in the first delivery of this series), and all the deployment-related files are stored there.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667568630626/BK1BkpCMP.png&quot; alt=&quot;Screenshot 2022-11-04 at 14.28.59.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;As you can see, the &lt;code&gt;skaffold.yaml&lt;/code&gt; file is in the root of the deploy folder, since it is my main &lt;code&gt;workflow orchestrator&lt;/code&gt;; in the other folders we have the main &lt;code&gt;k8s manifests&lt;/code&gt;, and inside overlays we have the yaml patches for every profile (in most cases the same as an environment) that we want to declare.&lt;/p&gt;&lt;h2 id=&quot;heading-the-gitlab-ciyml-pipeline&quot;&gt;The &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; Pipeline&lt;/h2&gt;&lt;p&gt;Since I use a trunk-based development strategy in microservices, we need to design two pipeline flows, one for feature branches and one for the main branch. Additionally, to be able to reach production we first need to tag a commit, so that&apos;s the real trigger for the Go-To-Prod pipeline.&lt;/p&gt;&lt;h3 id=&quot;heading-feature-branches-workflow-trigger-by-mergerequest-commit&quot;&gt;Feature Branches Workflow (triggered by a merge_request commit):&lt;/h3&gt;&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run all test suites&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Build an image with production dependencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy to a dynamic namespace in the same cluster where the pipeline runs (a dynamic QA, so to speak, allowing every developer to have &quot;their own&quot; QA server while they are working on a feature)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Destroy the dynamic deployment (remove review app and namespace)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759285752/uJZIk6dpo.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.27.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759296006/aVxZ8ellf.png&quot; alt=&quot;Screenshot 2022-11-06 at 15.46.45.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Also, we use gitlab environments to see in the Gitlab UI &quot;what is deployed in which environment&quot;, and to visualise the latest deployment status and artefacts for production and staging.&lt;/p&gt;&lt;p&gt;So, when a new &quot;dynamic QA&quot; environment is deployed, you&apos;ll see this in the environments page of your project:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667760771094/pZsqLNroN.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.52.14.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And on the MR view in Gitlab you&apos;ll see in which &quot;review&quot; environment that MR is deployed.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667760778935/DCwFN5n6x.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.52.36.png&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;heading-main-branch-workflow-trigger-by-a-tag&quot;&gt;Main Branch Workflow (triggered by a TAG):&lt;/h3&gt;&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run all test suites&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Build an image with production dependencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Create a Release Package (this is a gitlab feature, like Github&apos;s, to display release contents on a special page in gitlab)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy to Production&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759456750/KMqd5KDZa.png&quot; alt=&quot;Screenshot 2022-11-06 at 16.57.35.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img 
src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759490932/OG7wFHc0A.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.31.20.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;The main workflow has three more stages, including deploying to the staging cluster/namespace, but more importantly it has the creation of the release package and the deployment to production.&lt;/p&gt;&lt;p&gt;The only manual job is the deploy to production.&lt;/p&gt;&lt;p&gt;Gitlab Release&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667761248338/FXomvjxOy.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.00.37.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Production Environment after Deployment&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667761214382/VQzJafEOg.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.00.08.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Best of all, all of this is done automatically via the automated pipeline on Gitlab. You can view the skeleton of the pipeline in this snippet:&lt;/p&gt;&lt;p&gt;https://gitlab.com/playground-arena/api-symfony-roadrunner-cqrs/-/snippets/2448780&lt;/p&gt;&lt;h2 id=&quot;heading-bonus-kaniko-image-builder&quot;&gt;BONUS: &lt;code&gt;Kaniko&lt;/code&gt; image Builder&lt;/h2&gt;&lt;p&gt;Since version 1.24, &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/&quot;&gt;&lt;code&gt;Kubernetes&lt;/code&gt; has moved away from &lt;code&gt;dockershim&lt;/code&gt;&lt;/a&gt; as its container runtime, so I wanted a solution that:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Allows me to build containers inside a K8s cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lets me continue writing Dockerfiles as today&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Doesn&apos;t need to share or mount a socket into a pod (the main security reason not to use docker)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So last year I discovered &lt;code&gt;Kaniko&lt;/code&gt;; this is another tool in &lt;code&gt;Google Container Tools&lt;/code&gt; on &lt;code&gt;Github&lt;/code&gt; that allows us to build a container image from a Dockerfile without Docker. Marvellous.&lt;/p&gt;&lt;p&gt;It has some benefits:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;Kaniko&lt;/code&gt; doesn&apos;t depend on a &lt;code&gt;Docker daemon&lt;/code&gt; and executes each command within a Dockerfile completely in userspace (no need to mount a docker socket anymore)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;It provides a handy docker image to use in pipelines (a very lightweight one, so your jobs don&apos;t take much time to run)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;It provides a caching system in the same repository where the images are stored, so in every job that tries to build the image, &lt;code&gt;Kaniko&lt;/code&gt; will first check the repo&apos;s cache layers and download them, speeding up the build job.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;When I build an image I can check my repo and see the caching layers separated from the final images (see the images below)&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762313501/PjAyGdJLh.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.09.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Cache Layers&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762327292/bBwJblXs_.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.13.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Built and Tagged Images&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762342390/LI9SwKP31.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.21.png&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-deployment-safety-considerations&quot;&gt;Deployment safety considerations&lt;/h2&gt;&lt;p&gt;Everything in life has its tradeoffs, and this isn&apos;t the exception; however, it is possible to combine team autonomy with security and governance, not only in this example but in general 
in software development.&lt;/p&gt;&lt;p&gt;Some of my personal recommendations on this are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Secrets: always use a secret vault (in my case I use the Hashicorp one)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fine-grained permission control with the CI/CD tunnel via impersonation and the k8s agent:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Allows you to leverage your K8s authorization capabilities to limit what can be done with the CI/CD tunnel on your running cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD tunnel&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Segments fine-grained permissions with the CI/CD tunnel at the project or group level&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Controls permissions with the CI/CD tunnel at the username or service account level&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Make your higher environment namespaces immutable (at K8s namespace creation time)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fine-grained RBAC on Gitlab roles (I think this isn&apos;t available on the Free and Self-Managed versions)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-next-chapter&quot;&gt;Next Chapter&lt;/h2&gt;&lt;p&gt;In the next chapter I&apos;ll wrap up everything we covered in the first two releases of the series into a fully functional pipeline from local to production on a small k8s cluster, so you can see all the work in action, plus some final remarks to let you test it in your own projects.&lt;/p&gt;&lt;p&gt;Thanks for reading!&lt;/p&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support me&lt;/h2&gt;&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the link in the image below.&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;🔥 This is the second part of the series &lt;a target=&quot;_blank&quot; 
href=&quot;https://blog.equationlabs.io/series/workflow-with-skaffold&quot;&gt;&quot;Full CI/CD workflow with Skaffold for your application&quot;&lt;/a&gt;.&lt;/p&gt;&lt;h2 id=&quot;heading-lets-recap-the-workflow&quot;&gt;Lets recap the &lt;code&gt;Workflow&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;As you remind - and if not, you can read the first release of this tutorial 😅 - my main idea is to implement one tool - &lt;code&gt;skaffold&lt;/code&gt; - as a building block for my &lt;code&gt;CI/CD workflow&lt;/code&gt;, who should be managed by a single &lt;code&gt;makefile&lt;/code&gt; as entrypoint for local development and pipelines - on &lt;code&gt;gitlab&lt;/code&gt; - and all this should be deployed in a &lt;code&gt;K8s&lt;/code&gt; cluster in &lt;code&gt;GCP&lt;/code&gt;&lt;/p&gt;&lt;p&gt;And also, for the simplicity of this tutorial, we use the &lt;code&gt;image&lt;/code&gt; and &lt;code&gt;artefact&lt;/code&gt; repository in the same &lt;code&gt;gitlab&lt;/code&gt; SAAS, but you can use whatever you want for this task (&lt;code&gt;Amazon S3&lt;/code&gt;, &lt;code&gt;Docker&lt;/code&gt; Registry, &lt;code&gt;Private Registries&lt;/code&gt;, &lt;code&gt;Azure&lt;/code&gt; Object Storage, etc)&lt;/p&gt;&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;&lt;p&gt;&lt;em&gt;📣 The tool setup and local workflow were covered in the first delivery of this tutorial.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg&quot; alt=&quot;2fl6qCIhG.png.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-the-makefile&quot;&gt;The Makefile&lt;/h2&gt;&lt;p&gt;As said, the &lt;code&gt;makefile&lt;/code&gt;, is, the main entrypoint for commands to be executed by &lt;code&gt;developers&lt;/code&gt; when they do development work locally, and for &lt;code&gt;gitlab pipelines&lt;/code&gt; in their different stages (must vary on your implementation), and that &lt;code&gt;makefile&lt;/code&gt; 
commands mostly are wraps for &lt;code&gt;skaffold&lt;/code&gt; ones, with the difference, that i need to pass dynamic values to those stages to work I expect to, so it&apos;s better to wrap then in a makefile target that received that dynamic params and then run the &apos;skaffold&apos; command itself. (later you will now why)&lt;/p&gt;&lt;p&gt;As for the time I was writing this article, my targets are:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667763214992/_MmsTbgzE.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.33.21.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Hopefully for local development we have an unique command since &lt;code&gt;skaffold run&lt;/code&gt; do the full pipeline cycle &lt;code&gt;(build, test, deploy, hot reload)&lt;/code&gt; for development , so, why then i need to wrap this single command in a &lt;code&gt;makefile target&lt;/code&gt;?, mostly because I need that this work no matter what type/kind of local cluster technology the developer use (docker-desktop, Minikube, etc) and no matter what OS developer machine was (MacOS, *Nix) and for that, I need to pass the &lt;code&gt;kube-context&lt;/code&gt; parameter to &lt;code&gt;skaffold&lt;/code&gt; that in my case is &lt;code&gt;docker-desktop&lt;/code&gt; (since docker desktop already bring me a pre installed &lt;code&gt;k8s cluster for local development&lt;/code&gt; and that free me to the need to install manually a cluster in my machine - a win for docker-desktop here -.&lt;/p&gt;&lt;p&gt;So you will encounter that the majority of the &lt;code&gt;makefile&lt;/code&gt; wraps, come in the form of rehuse the same command in multiple stages (pipelines) based on the received parameter in the make execution, and also, generating random seeds to prefix namespaces (because you will have more than one developer working in the same code base at once) and I want to avoid collision between &lt;code&gt;gitlab pipelines&lt;/code&gt; and deploys in lower environments 
when N developers are working on the same code base.&lt;/p&gt;&lt;p&gt;We will focus on the &lt;code&gt;pipeline&lt;/code&gt; targets (all of them built on the &lt;code&gt;skaffold&lt;/code&gt; tool), since the infrastructure ones - related to installing prerequisites in the local cluster to comply with the architecture design - aren&apos;t covered in this series, but in another series that I&apos;ll write in the coming weeks.&lt;/p&gt;&lt;p&gt;Here is how my &lt;code&gt;makefile&lt;/code&gt; looks at the time of writing this article, for those stages (in technology everything evolves quickly, as I always remind myself 😅):&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-makefile&quot;&gt;&lt;span class=&quot;hljs-comment&quot;&gt;#------ Development and Pipeline targets ----------#&lt;/span&gt;
&lt;span class=&quot;hljs-section&quot;&gt;run:    ## DEVELOPMENT[skaffold]: Up and running stack in development mode with hot reloading in the local machine&lt;/span&gt;
    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _requirements
    @skaffold dev -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p development -n &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt; --no-prune=false --cache-artifacts=false

&lt;span class=&quot;hljs-section&quot;&gt;unit:    ## DEVELOPMENT[skaffold]: build, deploy and run unit tests =&amp;gt; FOR PIPELINE: `make unit profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;
    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=unit PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;

&lt;span class=&quot;hljs-section&quot;&gt;integration: ## DEVELOPMENT[skaffold]: build, deploy and run integration tests =&amp;gt; FOR PIPELINE: `make integration profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;
    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=integration PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;

&lt;span class=&quot;hljs-section&quot;&gt;functional: ## DEVELOPMENT[skaffold]: build, deploy and run functional tests =&amp;gt; FOR PIPELINE: `make functional profile=pipeline kube_context=cluster-gitlab-context`&lt;/span&gt;
    @&lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _run_test_suite SUITE=functional PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(PROFILE)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt; KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(KUBE_CONTEXT)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(DYNAMIC_NAMESPACE)&lt;/span&gt;

&lt;span class=&quot;hljs-section&quot;&gt;build:    ## PIPELINE[skaffold]: build and push images to registry =&amp;gt; `make build tag=1.0.0|71dcab00 kube_context=docker-desktop`&lt;/span&gt;
    @skaffold build -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p production -t &lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt; --kube-context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; --file-output=pipeline-artifacts.json

&lt;span class=&quot;hljs-section&quot;&gt;render:    ## PIPELINE[skaffold]: render manifests and push to artifact registry =&amp;gt; `make render namespace=availability tag=1.0.0|71dcab00`&lt;/span&gt;
    @skaffold render -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p production -n &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt; -a pipeline-artifacts.json -o &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt;-api-&lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt;-production.yaml

&lt;span class=&quot;hljs-section&quot;&gt;deploy:    ## PIPELINE[skaffold]: apply hydrated manifests to desired namespace on cluster `make deploy tag=1.0.0|71dcab00 profile=production namespace=availability kube_context=docker-desktop`&lt;/span&gt;
    @kubectl create namespace &lt;span class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt; --context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt;
    @skaffold apply -f &lt;span class=&quot;hljs-variable&quot;&gt;$(DEPLOY_DIR)&lt;/span&gt;/skaffold.yaml -p &lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; -n &lt;span class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt; --kube-context=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; --status-check=true &lt;span class=&quot;hljs-variable&quot;&gt;$(PROJECT_NAME)&lt;/span&gt;-api-&lt;span class=&quot;hljs-variable&quot;&gt;$(tag)&lt;/span&gt;-production.yaml || &lt;span class=&quot;hljs-variable&quot;&gt;$(MAKE)&lt;/span&gt; _cleanup KUBE_CONTEXT=&lt;span class=&quot;hljs-variable&quot;&gt;$(kube_context)&lt;/span&gt; PROFILE=&lt;span class=&quot;hljs-variable&quot;&gt;$(profile)&lt;/span&gt; NAMESPACE=&lt;span class=&quot;hljs-variable&quot;&gt;$(namespace)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;As you can see, all targets are wrappers around &lt;code&gt;skaffold&lt;/code&gt;, but they allow me to pass dynamic data, contexts and profiles (we&apos;ll look at this in the &lt;code&gt;skaffold&lt;/code&gt; file explanation below). The complete file is larger, but with this snippet you can get an idea of how to build something like it.&lt;/p&gt;&lt;h2 id=&quot;heading-the-skaffold-file&quot;&gt;The &lt;code&gt;skaffold&lt;/code&gt; file&lt;/h2&gt;&lt;p&gt;The main &lt;code&gt;workflow orchestrator&lt;/code&gt; has a main config file where we can define how the application should be built, tested, have its manifests rendered, and deployed, so it&apos;s the backbone of this strategy; now I&apos;ll explain mine.&lt;/p&gt;&lt;p&gt;I have two &lt;code&gt;skaffold&lt;/code&gt; profiles: one called &lt;code&gt;development&lt;/code&gt; and the other &lt;code&gt;production&lt;/code&gt;, plus a common part shared between them (tag strategy, deploy strategy), and both of them use the same &lt;code&gt;Dockerfile&lt;/code&gt; to build their images, pointing to the correct &lt;code&gt;target&lt;/code&gt;. (You could use separate Dockerfiles, but in my case I want to keep things as simple as possible, because the dependency differences between the two images are minimal.)&lt;/p&gt;&lt;p&gt;The &lt;code&gt;skaffold&lt;/code&gt; file lives inside my deploy folder (do you remember my file organisation?
You can re-check it in the first delivery of this series), and all the deployment-related files are stored there.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667568630626/BK1BkpCMP.png&quot; alt=&quot;Screenshot 2022-11-04 at 14.28.59.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;As you can see, the &lt;code&gt;skaffold.yaml&lt;/code&gt; file is in the root of the deploy folder, since it is my main &lt;code&gt;workflow orchestrator&lt;/code&gt;; in the manifests folder we have the main &lt;code&gt;k8s manifests&lt;/code&gt;, and inside overlays we have the yaml patches for every profile (in most cases the same as an environment) that we want to declare.&lt;/p&gt;&lt;h2 id=&quot;heading-the-gitlab-ciyml-pipeline&quot;&gt;The &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; Pipeline&lt;/h2&gt;&lt;p&gt;Since I use a trunk-based development strategy for micro services, we need to design two pipeline flows: one for feature branches and one for the main branch. Additionally, to be able to reach production we first need to tag a commit, so that tag is the real trigger for the Go-To-Prod pipeline.&lt;/p&gt;&lt;h3 id=&quot;heading-feature-branches-workflow-trigger-by-mergerequest-commit&quot;&gt;Feature Branches Workflow (triggered by a merge_request commit):&lt;/h3&gt;&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run all test suites&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Build an image with production dependencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy to a dynamic namespace in the same cluster where the pipeline runs (a dynamic QA, so to speak, allowing every developer to have &quot;their own&quot; QA server while they are working on a feature)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Destroy the dynamic deployment (remove the review app and namespace)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759285752/uJZIk6dpo.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.27.58.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759296006/aVxZ8ellf.png&quot; alt=&quot;Screenshot 2022-11-06 at 15.46.45.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;We also use gitlab environments to see in the Gitlab UI &quot;what is deployed in which environment&quot;, and to visualise the latest deployment status and artefacts for production and staging.&lt;/p&gt;&lt;p&gt;So, when a new &quot;dynamic QA&quot; environment is deployed, you&apos;ll see this in the environments page of your project:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667760771094/pZsqLNroN.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.52.14.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And in the MR view on Gitlab you&apos;ll see in which &quot;review&quot; environment that MR is deployed.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667760778935/DCwFN5n6x.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.52.36.png&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;heading-main-branch-workflow-trigger-by-a-tag&quot;&gt;Main Branch Workflow (triggered by a TAG):&lt;/h3&gt;&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Run all test suites&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Build an image with production dependencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Create a Release Package (this is a gitlab feature, similar to Github&apos;s, to display release contents on a special page in gitlab)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deploy to Production&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It looks like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759456750/KMqd5KDZa.png&quot; alt=&quot;Screenshot 2022-11-06 at 16.57.35.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;img
src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667759490932/OG7wFHc0A.png&quot; alt=&quot;Screenshot 2022-11-06 at 19.31.20.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;The main workflow has three more stages, including deployment to the staging cluster/namespace, but more importantly it includes the creation of the release package and the deployment to production.&lt;/p&gt;&lt;p&gt;The only manual job is the deploy to production.&lt;/p&gt;&lt;p&gt;Gitlab Release&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667761248338/FXomvjxOy.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.00.37.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Production Environment after Deployment&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667761214382/VQzJafEOg.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.00.08.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Best of all, all of this is done automatically via the automated pipeline on Gitlab. You can view the skeleton of the pipeline in this snippet:&lt;/p&gt;&lt;p&gt;https://gitlab.com/playground-arena/api-symfony-roadrunner-cqrs/-/snippets/2448780&lt;/p&gt;&lt;h2 id=&quot;heading-bonus-kaniko-image-builder&quot;&gt;BONUS: &lt;code&gt;Kaniko&lt;/code&gt; image Builder&lt;/h2&gt;&lt;p&gt;Since version 1.24, &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/&quot;&gt;&lt;code&gt;Kubernetes&lt;/code&gt; has moved away from &lt;code&gt;dockershim&lt;/code&gt;&lt;/a&gt; as its container runtime, so I wanted a solution that would:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Allow me to build containers inside a K8s cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Let me continue writing Dockerfiles as I do today&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Not need to share or mount a socket into a pod (that&apos;s the main security reason not to use docker)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So last year I discovered &lt;code&gt;Kaniko&lt;/code&gt;. This is another tool in the &lt;code&gt;Google Container Tools&lt;/code&gt; organisation on &lt;code&gt;Github&lt;/code&gt; that allows us to build a container image from a Dockerfile without Docker. Marvellous.&lt;/p&gt;&lt;p&gt;It has some benefits:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;Kaniko&lt;/code&gt; doesn&apos;t depend on a &lt;code&gt;Docker daemon&lt;/code&gt; and executes each command within a Dockerfile completely in userspace (no need to mount a docker socket anymore)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Provides a handy docker image to use in pipelines (a very lightweight one, so your jobs don&apos;t take much time to run)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Provides a caching system in the same repository where the images are stored, so in every job that builds the image, &lt;code&gt;Kaniko&lt;/code&gt; will first check the repo&apos;s cache layers and download them, speeding up the build job.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;When I build an image I can check my repo and see the caching layers separated from the final images (see images below).&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762313501/PjAyGdJLh.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.09.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Cache Layers&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762327292/bBwJblXs_.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.13.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Built and Tagged Images&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667762342390/LI9SwKP31.png&quot; alt=&quot;Screenshot 2022-11-06 at 20.18.21.png&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-deployment-safety-considerations&quot;&gt;Deployment safety considerations&lt;/h2&gt;&lt;p&gt;Everything in life has its tradeoffs, and this isn&apos;t the exception; however, it is possible to combine team autonomy with security and governance, not only in this example but in general in software development.&lt;/p&gt;&lt;p&gt;Some of my personal recommendations on this are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Secrets: always use a secret vault (in my case I use the Hashicorp one)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fine-grained permissions control with the CI/CD tunnel via impersonation and the k8s agent:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Allows you to leverage your K8s authorization capabilities to limit the permissions of what can be done with the CI/CD tunnel on your running cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD tunnel&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Segments fine-grained permissions with the CI/CD tunnel at the project or group level&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Controls permissions with the CI/CD tunnel at the username or service account level&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Make your higher environment namespaces immutable (at K8s namespace creation time)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fine-grained RBAC on Gitlab roles (I think this isn&apos;t available on the Free and Self-Managed versions)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-next-chapter&quot;&gt;Next Chapter&lt;/h2&gt;&lt;p&gt;In the next chapter I&apos;ll wrap up everything we covered in the first two installments of the series into a fully functional pipeline from local to production on a small k8s cluster, so you can see all the work in action, plus some final remarks to help you test this in your own projects.&lt;/p&gt;&lt;p&gt;Thanks for reading!&lt;/p&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support me&lt;/h2&gt;&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the link in the image
below.&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1667552026990/z8becxwLQ.webp</hashnode:coverImage></item><item><title><![CDATA[How to build a CI/CD workflow with Skaffold for your application (Part I)]]></title><description><![CDATA[Skaffold (part of the Google Container Tools) has been on the market since 2018, but it was in 2020 when, (at least for me), the tool reached a prod-grade maturity level.
And I was more than fascinated by how this tool can not only facilitate the dev...]]></description><link>https://blog.equationlabs.io/cicd-workflow-with-skaffold-for-your-application-part-i</link><guid isPermaLink="true">https://blog.equationlabs.io/cicd-workflow-with-skaffold-for-your-application-part-i</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[skaffold]]></category><category><![CDATA[Symfony]]></category><category><![CDATA[GitLab]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Mon, 31 Oct 2022 20:39:26 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;&lt;code&gt;Skaffold&lt;/code&gt; (part of the &lt;code&gt;Google Container Tools&lt;/code&gt;) has been on the market since 2018, but it was in 2020 when, (at least for me), the tool reached a prod-grade maturity level.&lt;/p&gt;&lt;p&gt;And I was more than fascinated by how this tool can not only facilitate the developer&apos;s work on local machines, but also act as a complete pipeline from the development to the production environment when used with a couple of other tools.&lt;/p&gt;&lt;p&gt;&lt;code&gt;Easy and Repeatable Kubernetes Development&lt;/code&gt;: no matter if you are a developer, lead, platform engineer, SRE or head of Engineering, we all agree on that 🙋🏽.&lt;/p&gt;&lt;p&gt;We want an easy, repeatable, reproducible development workflow that brings more autonomy to the teams, to bring more product value to the final user in a secure way.&lt;/p&gt;&lt;p&gt;I want to show you how I use &lt;code&gt;Skaffold&lt;/code&gt; as a building block for my micro service CI/CD pipeline from local to production.&lt;/p&gt;&lt;h2 id=&quot;heading-the-toolset&quot;&gt;The Toolset&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;skaffold&lt;/code&gt; cli (you can use the provided docker image or install it on your machine)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;Kustomize&lt;/code&gt;
(kustomize is part of the kubectl cli already)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;K8s&lt;/code&gt; cluster (local and remote) - if you use docker-desktop you already have one cluster installed by default to use for local development.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;This article isn&apos;t intended to show how to install a k8s cluster for local development; there are various alternatives like Minikube out there. Like I said, in my case, since I use docker-desktop and it comes with a k8s cluster by default, it&apos;s easier for me and for the development experience to work this way; however, if you don&apos;t use docker-desktop, you could use any of the other alternatives available on the market.&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 id=&quot;heading-the-workflow&quot;&gt;The Workflow&lt;/h2&gt;&lt;p&gt;The main idea is to use &lt;code&gt;skaffold&lt;/code&gt; as a building block from the local to the production environment, simplifying the tooling used by the developer and facilitating the integration with the actual &lt;code&gt;gitlab&lt;/code&gt; repository service.&lt;/p&gt;&lt;p&gt;The most complex part is the &lt;code&gt;integration&lt;/code&gt; and &lt;code&gt;functional&lt;/code&gt; tests: since they need the complete application and its dependencies running, a little more work needs to be done to accomplish that. However, it isn&apos;t as complex as it sounds, since I use a &lt;code&gt;Kubernetes gitlab runner&lt;/code&gt; to run the pipelines, so we can use the same runner to deploy the application into a special namespace, run our tests, and then remove the application from the runner.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667243854951/2fl6qCIhG.png&quot; alt=&quot;Screenshot 2022-10-31 at 20.17.18.png&quot; /&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Be aware that you need to run a cleanup process after each pipeline or stage run, to avoid leftover running processes
consuming capacity and space in your Kubernetes cluster and to avoid incurring unexpected operational costs.&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 id=&quot;heading-the-application&quot;&gt;The Application&lt;/h2&gt;&lt;p&gt;It&apos;s the simplest micro service you may know, but it&apos;s intended for demonstration purposes. It was made in &lt;code&gt;Symfony&lt;/code&gt; with &lt;code&gt;RoadRunner&lt;/code&gt; as the application server; it exposes metrics to a &lt;code&gt;prometheus&lt;/code&gt; metrics server, and then these metrics are fetched by the &lt;code&gt;monitoring stack&lt;/code&gt; to be used in a &lt;code&gt;grafana&lt;/code&gt; dashboard for monitoring and observability purposes.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;http://www.plantuml.com/plantuml/png/RPDFSzem4CNl_XGg9p9juaiFcPxY6gRj55eFX4DFZ90Nq4H_t9L4OJhvxbqXrqGaXqpqPt_xtZxX1-Sv-g1LyKuQeK8BREzzvpwL9V8_Tplfzs4J7A2mneFnTyBgibFSHERM-LR9JLb_l6tYqMe-ApLt7f2ErhNLdJMHwMB_ObRz-hbwNC-g7vDbNJNJyKrHB4zKhTVJen-JW0iQy0CRrKeIDg9xnlgAppQObkDf_7JlgEBxlMDLroafk9VMi5g5A3kwON-9OOFq6E40wA11UpmHzuZ0j_9fHCj5kc7dAzBAC07evzpmNV9psKKoNifjb0QcpySwsMMlxVABoQgJ15S7BXNVI2NzYLNDDxO4F4W1iR6M0YrpwU3rBF49q2e5-4QVo3TV6_QUBInl5y4Om7ugmhYaxMH3SRJInU7Z_uW8BlQGacOBK5TvPOfeWuV8n1y88ObOJ_AmhWBlq1va2sSjUZfMBoONT9MDr7lhBT72YoJpJ7z5FWUrrU3t42BG39j8pS6Z5EwSQumWPySxv5loIeLVqkhCM2EzHMbsPMMuEdbga48Xcvd9JDW9v1qmdHIpQD9uWrY61PVoQBddh0y8YNhElWSuUa0oq_G5aPpsP_712RWsznQ2y3k0y__DqLYiIEQ63ov_im5nBvZY0KmRjFe7&quot;&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667247028660/_7nejmil9.png&quot; alt=&quot;application-diagram.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Now that we have all the context clear, let&apos;s begin with the next steps: first of all, I need to set up our &lt;code&gt;repository skeleton&lt;/code&gt; and &lt;code&gt;directory structure&lt;/code&gt; in order to be as functional as possible for my intended workflow and development process.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Take into account that
this is how I set up my repositories, and you should fit this to your expectations and operational workflow.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Additionally, my principal goal when I began to work with this &lt;code&gt;GitOps&lt;/code&gt; approach was also to reduce the cognitive load and operational complexity for new and current colleagues, also reducing the onboarding time and the number of tools that we need to do our work.&lt;/p&gt;&lt;p&gt;If you need a more complex scenario for metrics scaling, I normally try to use &lt;code&gt;thanos&lt;/code&gt; for that job, since it allows me to easily scale &lt;code&gt;prometheus&lt;/code&gt; and get &lt;code&gt;long term storage&lt;/code&gt; in commonly known cloud object storage services, like &lt;code&gt;Amazon S3&lt;/code&gt; or &lt;code&gt;GCP Cloud Storage&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Below you can find a diagram of the same application but implementing thanos.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;http://www.plantuml.com/plantuml/png/ZLHDSzCm4BtxLuYSqe7M1pXqEDKudSAGW4cQ0oUF8cyIKLao-WWDJFyxkvQdYUqomodoQj--js-rkN6UMnzgbRoIMgXG0TjxtxZtQMhvhwkTzFkm2GwiCDg3zbV2r6cZk2RCfVELafiqVtTPK6YzcASrTnuiXihSr8tHX6ceVZBFldzTtvVpxCjibMV5xVGYILP7pAxBsqS_HG8NQh1ls2HN4c6Jq_q74tJ5xN7wSEtm_lErOrdJA2cubqQpN0KYdLomFmbZpxnJ2mUm3Wfh7ey8kxV0j_9XWiTbl67j5HATemHONzPSyrqKWv-B-4L8vnIZ3BabTd0jU2YJdyHbZKHKTk1IyOrKqXzPLdnY2ociSM0FKa3KtPE0PbkZ5DWNiAIY-5YmrsnfUBKCMbFhiO3sNEBdR8EzLz9Hf_HB4C757Z2F4fUW1kRq6Aq97beGlGN4H4Wv6tWpyBUnvY0h0iOHvSlP2RlkDTMfwqJXmOl8yvJqPk7tN1jNEYmhkAKPjW6sYW528diDVW_1iGLuAqKSoQY6m00NtfnLoRlG_zLfdXDwsOIj8u3HGDjX99rXemPwHRRWnPxmksMH8og2vZsctc2SiBm15kdw0ueUZtiT2XYJ9aylxbdPiNJ3N1WjiQ3Kk-6w3Ot-6S1AEBFvMmpyosJMAxApVCirnzoxU2BOYJpDD5T77wVpRDYGUTGqtHn7JgzFR2FjmSM7N77FMNpPr3AQrVlNWaSF5YKLNGRPTTl5scMzECyscnyWVEcyiNm7c3etwESzs9gjOemeLs_Jkxn8iz_1jWiRNzBvGtY9TUrEAzkw6Zli_bR7sse1ctN-7DCnZH_HI3UTm9qCxIEZYsFSU0uteAjGgxy0&quot;&gt;&lt;img
src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667247112713/y0Hhlyreq.png&quot; alt=&quot;application-diagram-extent.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Let&apos;s continue this guide with the simplified version of the application (the one with only the &lt;code&gt;prometheus&lt;/code&gt; and &lt;code&gt;Grafana&lt;/code&gt; components for monitoring).&lt;/p&gt;&lt;h2 id=&quot;heading-folder-structure-and-orchestration-process&quot;&gt;Folder Structure and Orchestration process&lt;/h2&gt;&lt;p&gt;Let&apos;s recap some concepts about the tooling we&apos;re implementing here, to fulfil the promise of a full workflow with skaffold from local to production.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;I deploy on a &lt;code&gt;K8s&lt;/code&gt; cluster with &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kustomize&lt;/code&gt; (kustomize is part of the kubectl bundle).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;I use &lt;code&gt;skaffold&lt;/code&gt; as the workflow building block through its cli command steps (build, test, render, deploy, verify).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;I build images locally with the &lt;code&gt;docker&lt;/code&gt; cli (a prerequisite for my workflow), and in &lt;code&gt;gitlab&lt;/code&gt; I have a couple of options alongside docker (&lt;code&gt;Kaniko&lt;/code&gt; or &lt;code&gt;Docker in Docker&lt;/code&gt; variations), but I&apos;ll cover that in the next steps.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;I use a &lt;code&gt;Makefile&lt;/code&gt; as the command &quot;collector&quot; entrypoint, not only for local development but also for the gitlab pipeline, to group commands into single-word ones (make run, build, unit, etc.).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;I use &lt;code&gt;terraform&lt;/code&gt; declarative configuration files to set the desired state of my working cluster (in this case the local one); this desired state includes some prerequisites needed by my architecture definition, like &lt;code&gt;cert-manager&lt;/code&gt;, &lt;code&gt;traefik&lt;/code&gt;, &lt;code&gt;prometheus&lt;/code&gt; and &lt;code&gt;grafana&lt;/code&gt;, just like my machines on staging and production.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;* deploy/
    manifests/  &lt;span class=&quot;hljs-comment&quot;&gt;# the place where k8s yaml resides&lt;/span&gt;
        - *k8s.yaml
        - kustomization.yaml
    overlays/ &lt;span class=&quot;hljs-comment&quot;&gt;# for every environment that you want, you should have an overlay&lt;/span&gt;
        development/
            - *.k8s.patch.yaml
            - kustomization.yaml
        production/
            - *.k8s.patch.yaml
            - kustomization.yaml
    - skaffold.yaml
* infrastructure/ &lt;span class=&quot;hljs-comment&quot;&gt;# terraform scripts to install cluster prerequisites: vault, cert-manager, traefik&lt;/span&gt;
* src/ &lt;span class=&quot;hljs-comment&quot;&gt;# all the source code of your application&lt;/span&gt;
- Dockerfile
- Makefile
- .gitlab-ci.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Another important component of this setup is the &lt;code&gt;Dockerfile&lt;/code&gt;. To be able to use the same dockerfile to build images for the development and production environments (with the dependencies of each of them), I build a &lt;code&gt;multi-stage&lt;/code&gt; dockerfile that gives me a target for development and a target for production, which we can point to in the &lt;code&gt;skaffold&lt;/code&gt; build phase.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667244719111/LWX13klyl.png&quot; alt=&quot;Screenshot 2022-10-31 at 20.31.39.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;code&gt;Kustomize&lt;/code&gt; files (the &lt;code&gt;kustomization.yaml&lt;/code&gt; ones) allow me to declare a &lt;code&gt;patch or merge&lt;/code&gt; of part of the main &lt;code&gt;k8s&lt;/code&gt; manifest, to apply the changes that I need in some environment without the necessity of
duplicating the entire YAML. So, for example, if I have the following &lt;code&gt;k8s&lt;/code&gt; manifest declaring an API with 1 replica, I can declare a patch to set that number to 4 replicas when the environment is production.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667210150883/uVgpZtVjf.png&quot; alt=&quot;Screenshot 2022-10-31 at 10.54.48.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;The following image shows you how a main &lt;code&gt;k8s&lt;/code&gt; manifest and its corresponding patch for production look. You can patch anything you want, adding all the data, metadata and other labels to every manifest in the environment overlays.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667148986552/Zj2sXJ9Oc.png&quot; alt=&quot;Screenshot 2022-10-30 at 17.54.36.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Then we have our &lt;code&gt;skaffold&lt;/code&gt; file, in charge of the orchestration process of the workflow itself:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667244800783/oBMDbsByO.png&quot; alt=&quot;Screenshot 2022-10-31 at 20.30.24.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Since &lt;code&gt;skaffold&lt;/code&gt; allows us to use &lt;code&gt;kustomization&lt;/code&gt; as a deployment strategy, I organise my profiles to do so; with this, the development team has a lot of room for manoeuvre to modify and deploy changes with zero effort.&lt;/p&gt;&lt;p&gt;Now I can run everything in one shot to see how this works; if everything is ok, I&apos;ll be able to access all the tools (via browser) and the API by requesting them.&lt;/p&gt;&lt;p&gt;To run skaffold you need to run the following command: &lt;code&gt;skaffold dev -p development&lt;/code&gt;, but since we use a &lt;code&gt;Makefile&lt;/code&gt; as a command entrypoint, you can see above that make run does the same job as we need to run skaffold in development mode.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667151507712/gsBwGFfET.gif&quot; alt=&quot;ezgif-4-f2a066ca18.gif&quot; /&gt;&lt;/p&gt;&lt;p&gt;I use my domain and a self-signed certificate to access all applications via a &lt;code&gt;FQDN&lt;/code&gt; over HTTPS (using &lt;code&gt;cert-manager&lt;/code&gt; and &lt;code&gt;traefik&lt;/code&gt; for that); now I&apos;ll be able to access all of them via those URLs (on the local machine these URLs point to the loopback &lt;code&gt;127.0.0.1&lt;/code&gt; in the &lt;code&gt;/etc/hosts&lt;/code&gt; file).&lt;/p&gt;&lt;p&gt;We should have at least these applications:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Grafana&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Traefik Dashboard&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;API / Application&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Let&apos;s see those applications running in this animated GIF:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1667153104081/m5Rk9v6Nx.gif&quot; alt=&quot;ezgif-4-162073e2b1.gif&quot; /&gt;&lt;/p&gt;&lt;p&gt;😅 With this, I already have a full development cycle for my local environment; the next milestone is to make my gitlab pipelines comply with this workflow and pave the way to the lower and prod environments.&lt;/p&gt;&lt;p&gt;Let&apos;s stop here for now. I&apos;ll prepare the material for the next blog entry.&lt;/p&gt;&lt;h2 id=&quot;heading-next-chapter&quot;&gt;Next Chapter&lt;/h2&gt;&lt;p&gt;In the next chapter of this tutorial, I&apos;ll implement this local workflow in a &lt;code&gt;GitLab&lt;/code&gt; pipeline, allowing me to use the &lt;code&gt;tests, build, render, deploy and verify&lt;/code&gt; skaffold stages in my entire pipeline and deploy the application to a &lt;code&gt;k8s&lt;/code&gt; cluster in &lt;code&gt;GCP&lt;/code&gt; in a full GitOps manner.&lt;/p&gt;&lt;p&gt;Thanks for reading and see you next week for more!
😃&lt;/p&gt;&lt;p&gt;A big KUDOS to the #skaffold team for a great job; if you want to know more, you can reach them on &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.slack.com/archives/CABQMSZA6&quot;&gt;slack&lt;/a&gt; or in their &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/GoogleContainerTools/skaffold&quot;&gt;repo&lt;/a&gt;.&lt;/p&gt;&lt;h2 id=&quot;heading-support-me&quot;&gt;Support me&lt;/h2&gt;&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the link in the image below.&lt;/p&gt;]]&gt;</content:encoded><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1666972538879/VViaAyMIo.png</hashnode:coverImage></item><item><title><![CDATA[A mini API to benchmark Symfony with RoadRunner (Part II)]]></title><description><![CDATA[This is the continuation of the first instalment, which you can read here -> https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-i 

TLDR; We built a small API (PHP 8.1 + Symfony v6.1) using R...]]></description><link>https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-ii</link><guid isPermaLink="true">https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-ii</guid><category><![CDATA[Symfony]]></category><category><![CDATA[PHP]]></category><category><![CDATA[performance]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Fri, 30 Sep 2022 06:01:55 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;blockquote&gt;&lt;p&gt;This is the continuation of the first instalment, which you can read here -&amp;gt; https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-i &lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;TLDR; We built a small API (&lt;code&gt;PHP&lt;/code&gt; 8.1 + &lt;code&gt;Symfony&lt;/code&gt; v6.1) using &lt;code&gt;RoadRunner&lt;/code&gt; as the application server, to run a small benchmark of how much improvement we get by replacing &lt;code&gt;Nginx&lt;/code&gt; with &lt;code&gt;RoadRunner&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;If you are wondering why I chose &lt;code&gt;RoadRunner&lt;/code&gt; instead of other SAPIs that do the same job (&lt;code&gt;SwoolePHP&lt;/code&gt;, &lt;code&gt;ReactPHP&lt;/code&gt;, etc.), it is because with it the way we develop stays mostly the same, varying only slightly in how we handle persistent connections, and because of its easy integration with &lt;code&gt;Symfony&lt;/code&gt; through its &lt;code&gt;Runtime&lt;/code&gt; component.&lt;/p&gt;&lt;hr /&gt;&lt;p&gt;So, what we have now is an &lt;code&gt;API&lt;/code&gt; with a single endpoint that always returns the same result, and we want to see (as a benchmark) how many requests it can handle and how fast, compared with the same API served the conventional way with &lt;code&gt;NGINX&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1664439498559/tyrfPkUw9.png&quot; alt=&quot;Screenshot 2022-09-29 at 10.16.13.png&quot; class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;To keep the benchmark as simple as possible, we will use &lt;code&gt;Vegeta&lt;/code&gt; - to learn more about Vegeta you can &lt;a href=&quot;https://github.com/tsenart/vegeta&quot;&gt;look here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The scenario is the same application with &lt;code&gt;PHP+FPM+NGINX&lt;/code&gt; on one side and &lt;code&gt;PHP+RoadRunner&lt;/code&gt; on the other; in both we will use the following vegeta command to run the load test:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cat src/tests/benchmark/target.txt | vegeta attack -rate 100 -duration=60s&lt;/code&gt;&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;Legend: Rate: 100 requests per time unit (the default unit is 1s, so 100 req/s). Duration: 60 seconds.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;The report will generate the data we need to determine the response percentiles of our requests throughout the test, so we can measure latency in both scenarios.&lt;/p&gt;&lt;h3 id=&quot;heading-php-fpm-nginx&quot;&gt;PHP + FPM + NGINX&lt;/h3&gt;&lt;hr /&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1664472152099/AC2neuAcQ.png&quot; alt=&quot;php-fpm-vegeta.png&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;heading-php-roadrunner&quot;&gt;PHP + ROADRUNNER&lt;/h3&gt;&lt;hr /&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1664472165019/WF4Ll9lew.png&quot; alt=&quot;Screenshot 2022-09-29 at 19.14.57.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;If we compare the data of the histograms resulting from the tests (Vegeta lets you export results that can later be plotted and cross-checked against other histograms), we get the following picture:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1664517413332/CwL5xd60Y.png&quot; alt=&quot;histirgram-comparison.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;The 95th percentile looks very close in both cases, but with RoadRunner it stays constant throughout the load test, so (for this test scenario) its degradation over time is minimal (the response time for users remains stable).&lt;/p&gt;&lt;p&gt;That&apos;s it for my load-testing experiment with PHP + RoadRunner; I hope you liked it, and if so, leave me a comment or a like on the post.&lt;/p&gt;]]&gt;</content:encoded><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1664438620336/6N3DtsfK_.png</hashnode:coverImage></item><item><title><![CDATA[A mini API to benchmark Symfony with RoadRunner (Part I)]]></title><description><![CDATA[PHP has come a long way since its beginnings, and that includes an ecosystem with powerful application servers such as RoadRunner. The latter is an application server, load balancer and process manager written in GoLang that, using GoRoutine...]]></description><link>https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-i</link><guid isPermaLink="true">https://blog.equationlabs.io/una-mini-api-para-hacer-benchmark-de-symfony-con-roadrunner-parte-i</guid><category><![CDATA[Symfony]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Benchmark]]></category><category><![CDATA[#prometheus]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Tue, 19 Jul 2022 07:03:14 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;PHP has come a long way since its beginnings, and that includes an ecosystem with powerful application servers such as RoadRunner. 
The latter is an application server, load balancer and process manager written in GoLang that, using &lt;strong&gt;GoRoutines&lt;/strong&gt; and &lt;strong&gt;multithreading&lt;/strong&gt;, keeps the PHP application in memory between requests, eliminating the boot-loading and code-loading process and thus reducing your application&apos;s latency, so it can serve more requests in less time. &lt;em&gt;(performance at its best)&lt;/em&gt;&lt;/p&gt;&lt;p&gt;Some interesting RoadRunner &quot;features&quot;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Production-ready PSR-7 compatible HTTP, HTTP2, FastCGI server&lt;/li&gt;&lt;li&gt;Framework agnostic&lt;/li&gt;&lt;li&gt;Built-in metrics server (prometheus)&lt;/li&gt;&lt;li&gt;Integrations for Symfony, Laravel, Slim, CakePHP, Zend Expressive, Spiral&lt;/li&gt;&lt;li&gt;and more...&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;It is worth noting that there is a variety of runtimes that can be used with PHP (Swoole, ReactPHP, Bref and even the ever-familiar PHP-FPM); in our case we will use RoadRunner for its simple installation, knowing in advance that Swoole can deliver higher performance today than RoadRunner.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;To try out how it works, we will implement a small API (using a bit of CQRS) with a single endpoint that receives a request, calls another external API (a weather-style one), receives the response, transforms it and returns it to the final client.&lt;/p&gt;&lt;p&gt;Here is a small sequence diagram to understand its flow&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1658159844888/jAGpW1FPk.png&quot; alt=&quot;sequence-diagram-api.png&quot; /&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;NOTE: All of this is available in the README of the repository linked at the end of this post.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Using RoadRunner with Symfony is quite simple; remember that you will not need Nginx or FPM for this, only the RoadRunner binary, your source code and its dependencies declared in the composer file, so the setup is very simple. In my case the Dockerfile is the following:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;FROM spiralscout/roadrunner:2.10.5 as rr
FROM php:8.1 as php

RUN apt-get update &amp;amp;&amp;amp; apt-get install -y libzip-dev unzip bash

# Copy Composer
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

# Source Code
ADD . .
RUN composer install -o

# Copy RoadRunner
COPY --from=rr /usr/bin/rr /usr/bin/rr&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;RoadRunner needs its configuration file &lt;strong&gt;&lt;em&gt;.rr.yaml&lt;/em&gt;&lt;/strong&gt; to work; in my case this file includes the Symfony-side class that will serve as the entrypoint (declared as an environment variable called APP_RUNTIME).&lt;/p&gt;&lt;p&gt;To make RoadRunner work with Symfony you need to use the symfony/runtime component and install the appropriate runtime, which in our case will be &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/php-runtime/roadrunner-symfony-nyholm&quot;&gt;https://github.com/php-runtime/roadrunner-symfony-nyholm&lt;/a&gt;.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;The symfony/runtime component decouples the bootstrapping logic from any global state to make sure the application can run with a variety of runtimes such as PHP-PM, ReactPHP, Swoole, RoadRunner, etc. without any change to your application; to learn more, see the official documentation here: https://symfony.com/doc/current/components/runtime.html&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Our RoadRunner configuration file (for this practical case) is the following:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;version: &quot;2.7&quot;
server:
  command: &quot;php public/index.php&quot;
  env:
    - APP_RUNTIME: Runtime\RoadRunnerSymfonyNyholm\Runtime
http:
  address: 0.0.0.0:8080
  middleware: [ &quot;gzip&quot; ]
  pool:
    num_workers: ${RR_NUM_WORKERS}
    max_jobs: ${RR_MAX_JOBS}
    supervisor:
      max_worker_memory: ${RR_MAX_WORKER_MEMORY}
metrics:
  address: 0.0.0.0:2112
logs:
  mode: production
  channels:
    http:
      level: error
    server:
      level: error
      mode: raw
    metrics:
      level: error&lt;/code&gt;&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;You can find more details on how to configure RoadRunner for each environment (dev, debug, production, etc.) at the following link: https://roadrunner.dev/docs/intro-config/2.x/en&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Additionally, in your docker-compose you must indicate in the startup command where the RoadRunner binary is and which configuration file you want to use; for example, it should look something like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;php:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - &quot;8080:8080&quot;
      - &quot;2112:2112&quot; # to expose the metrics of the embedded prometheus server that RR ships with
    env_file:
      - .env
    working_dir: /opt
    volumes:
      - ./:/opt
    command: [ &apos;/usr/bin/rr&apos;, &apos;serve&apos;, &apos;-c&apos;, &apos;.rr.yaml&apos; ]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;One of the things I like about RoadRunner is that it ships with an embedded Prometheus server by default, so it automatically exposes metrics to be consumed by a prometheus collector very easily at &lt;strong&gt;http://{host}:2112/metrics&lt;/strong&gt;, and it also lets you add your custom application metrics through a convenient interface
utilizando el mismo servidor sin necesidad tener que instalar libreras adicionales en tu aplicacin o un servidor adicional de prometheus para exponer mtricas.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Como plus handy, en la documentacin oficial tienes un dashboard de grafana para poder monitorear toda tu aplicacin que se este ejecutando sobre RoadRunner (workers, consumo de CPU, etc)&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1658214028357/9S7kT0v4z.png&quot; alt=&quot;Screenshot 2022-07-19 at 09.00.20.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Con esto ya tenemos la primera parte para poder comenzar nuestro pequeo benchmarking.&lt;/p&gt;&lt;p&gt;En la segunda entrega de este post, vamos a ir directo a las diferentes ejecuciones del benchmark comparando PHP-FPM contra RoadRunner utilizando una herramienta de HTTP Load llamada Vegueta. (puedes tambin utilizar Apache Bench o WRK tool) &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Puedes conseguir mas informacin de Vegeta en este enlace https://github.com/tsenart/vegeta&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Hasta el prximo post!&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;PHP ha avanzado mucho desde sus inicio, y eso incluye un ecosistema que ahora incluye poderosos application servers cmo lo es RoadRunner. Este ltimo es un application server, load balancer y process manager hecho en GoLang, que utilizando &lt;strong&gt;GoRoutines&lt;/strong&gt; y &lt;strong&gt;multithreading&lt;/strong&gt;, mantiene la aplicacin PHP en memoria entre requests eliminando la necesidad de boot loading y code loading process reduciendo as, la latencia de tu aplicacin pudiendo servir mas requests en menos tiempo. 
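To make the worker model concrete, this is roughly what a long-lived RoadRunner PHP worker looks like. It is an illustrative sketch only (it assumes the `spiral/roadrunner-http` and `nyholm/psr7` packages); with Symfony, the `symfony/runtime` integration used in this post writes this loop for you:

```php
<?php
// Illustrative RoadRunner worker (assumes spiral/roadrunner-http + nyholm/psr7).
// The process boots ONCE; the loop below then serves many requests from memory,
// which is what removes the per-request bootstrap cost described above.

use Nyholm\Psr7\Factory\Psr17Factory;
use Spiral\RoadRunner\Http\PSR7Worker;
use Spiral\RoadRunner\Worker;

require __DIR__ . '/vendor/autoload.php';

$factory = new Psr17Factory();
$worker  = new PSR7Worker(Worker::create(), $factory, $factory, $factory);

while ($request = $worker->waitRequest()) {
    try {
        $response = $factory->createResponse(200);
        $response->getBody()->write('Hello from a long-lived PHP worker');
        $worker->respond($response);
    } catch (\Throwable $e) {
        // Report the error to RoadRunner instead of killing the worker.
        $worker->getWorker()->error((string) $e);
    }
}
```

Anything cached in this process (container, connections, reflection data) survives across requests, which is also why worker memory must be supervised, as the `max_worker_memory` setting in the `.rr.yaml` below does.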
&lt;em&gt;(performance at its best)&lt;/em&gt;&lt;/p&gt;&lt;p&gt;Some interesting &quot;features&quot; of RoadRunner:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Production-ready PSR-7 compatible HTTP, HTTP/2 and FastCGI server&lt;/li&gt;&lt;li&gt;Framework agnostic&lt;/li&gt;&lt;li&gt;Built-in metrics server (Prometheus)&lt;/li&gt;&lt;li&gt;Integrations for Symfony, Laravel, Slim, CakePHP, Zend Expressive, Spiral&lt;/li&gt;&lt;li&gt;and more...&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;It is worth noting that there is a variety of runtimes that can be used with PHP (Swoole, ReactPHP, Bref, and even the ever-present PHP-FPM). In our case we will use RoadRunner for its simplicity of installation, knowing in advance that Swoole can deliver more performance today than RoadRunner.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;To try out how it works, we are going to implement a small API (using a bit of CQRS) with a single endpoint that receives a request, calls another external API (a weather-style one), receives the response, transforms it, and returns it to the final client.&lt;/p&gt;&lt;p&gt;Here is a small sequence diagram to understand its flow&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1658159844888/jAGpW1FPk.png&quot; alt=&quot;sequence-diagram-api.png&quot; /&gt;&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;NOTE: All of this is available in the README of the repository linked at the end of this post.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Using RoadRunner with Symfony is quite simple. Remember that you will need neither Nginx nor FPM for this: just the RoadRunner binary, your source code, and the dependencies declared in your composer file, so the setup is very simple. In my case the Dockerfile is the following:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;FROM spiralscout/roadrunner:2.10.5 as rr
FROM php:8.1 as php

RUN apt-get update &amp;amp;&amp;amp; apt-get install -y libzip-dev unzip bash

# Copy Composer
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

# Source Code
ADD . .
RUN composer install -o

# Copy RoadRunner
COPY --from=rr /usr/bin/rr /usr/bin/rr&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;RoadRunner needs its &lt;strong&gt;&lt;em&gt;.rr.yaml&lt;/em&gt;&lt;/strong&gt; configuration file to work. In my case this file includes the Symfony-side class that serves as the entrypoint (declared as an environment variable called APP_RUNTIME).&lt;/p&gt;&lt;p&gt;To make RoadRunner work with Symfony you need the symfony/runtime component plus the matching runtime package, which in our case is &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/php-runtime/roadrunner-symfony-nyholm&quot;&gt;https://github.com/php-runtime/roadrunner-symfony-nyholm&lt;/a&gt;.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;The symfony/runtime component decouples the bootstrapping logic from any global state to make sure the application can run with a variety of runtimes such as PHP-PM, ReactPHP, Swoole, RoadRunner, etc. without any change to your application. To learn more, see the official documentation: https://symfony.com/doc/current/components/runtime.html&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Our RoadRunner configuration file (for this practical case) is the following:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;version: &quot;2.7&quot;
server:
  command: &quot;php public/index.php&quot;
  env:
    - APP_RUNTIME: Runtime\RoadRunnerSymfonyNyholm\Runtime
http:
  address: 0.0.0.0:8080
  middleware: [ &quot;gzip&quot; ]
  pool:
    num_workers: ${RR_NUM_WORKERS}
    max_jobs: ${RR_MAX_JOBS}
    supervisor:
      max_worker_memory: ${RR_MAX_WORKER_MEMORY}
metrics:
  address: 0.0.0.0:2112
logs:
  mode: production
  channels:
    http:
      level: error
    server:
      level: error
      mode: raw
    metrics:
      level: error&lt;/code&gt;&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;You can find more details on how to configure RoadRunner for each environment (dev, debug, production, etc.) at the following link: https://roadrunner.dev/docs/intro-config/2.x/en&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Additionally, in your docker-compose you must tell the startup command where the RoadRunner binary is and which configuration file to use; it should end up looking something like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;php:
  build:
    dockerfile: Dockerfile
    context: .
  ports:
    - &quot;8080:8080&quot;
    - &quot;2112:2112&quot; # exposes the metrics of the Prometheus server embedded in RR
  env_file:
    - .env
  working_dir: /opt
  volumes:
    - ./:/opt
  command: [ &apos;/usr/bin/rr&apos;, &apos;serve&apos;, &apos;-c&apos;, &apos;.rr.yaml&apos; ]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;One of the things I like about RoadRunner is that it ships with an embedded Prometheus server by default, so it automatically exposes metrics to be scraped by a Prometheus collector in a very simple manner at &lt;strong&gt;http://{host}:2112/metrics&lt;/strong&gt;, and it also lets you add your own custom application metrics through a convenient interface, using that same server, without having to install additional libraries in your application or run an extra Prometheus server just to expose metrics.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;As a handy plus, the official documentation includes a Grafana dashboard to monitor everything in your application running on RoadRunner (workers, CPU consumption, etc.)&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1658214028357/9S7kT0v4z.png&quot; alt=&quot;Screenshot 2022-07-19 at 09.00.20.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;With this we have the first part in place to start our small benchmark.&lt;/p&gt;&lt;p&gt;In the second installment of this post we will go straight to the different benchmark runs comparing PHP-FPM against RoadRunner using an HTTP load tool called Vegeta (you can also use Apache Bench or the wrk tool).&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;You can find more information about Vegeta at this link: https://github.com/tsenart/vegeta&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;See you in the next post!&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1658156312373/TDHCLCg7z.png</hashnode:coverImage></item><item><title><![CDATA[Trunk Based Development, Kubernetes and GCP: a pipeline with the help of GitLab, Kaniko, Skaffold and Terraform]]></title><description><![CDATA[This is the second installment of the article "k8s desde dev hasta prod con Gitlab, Skaffold, Kustomize y Kaniko", where I built the base skeleton for this second part. As with the previous one, at the end of the article you will find the link to the repository with all ...]]></description><link>https://blog.equationlabs.io/trunk-based-development-kubernetes-y-gcp-un-pipeline-con-la-ayuda-de-gitlab-kaniko-skaffold-y-terraform</link><guid
isPermaLink="true">https://blog.equationlabs.io/trunk-based-development-kubernetes-y-gcp-un-pipeline-con-la-ayuda-de-gitlab-kaniko-skaffold-y-terraform</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Tue, 31 Aug 2021 00:17:12 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;blockquote&gt;&lt;p&gt;This is the second installment of the article &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomiza-y-kaniko&quot;&gt;k8s desde dev hasta prod con Gitlab, Skaffold, Kustomize y Kaniko&lt;/a&gt;, where I built the base skeleton for this second part. As with the previous one, the link to the repository with all the code will be at the end of the article.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;premisa&quot;&gt;Premise&lt;/h3&gt;&lt;p&gt;We already have a pipeline that performs the following steps:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;code&gt;Test&lt;/code&gt;: test suites run by &lt;code&gt;PHPUnit&lt;/code&gt;&lt;/li&gt;&lt;li&gt;&lt;code&gt;Build&lt;/code&gt;: with &lt;code&gt;Kaniko&lt;/code&gt; we build the images, tag them, and push them to our image registry (we will use GitLab&apos;s own registry in this case); &lt;code&gt;Kaniko&lt;/code&gt; takes care of storing a cache layer in the same registry.&lt;/li&gt;&lt;li&gt;&lt;code&gt;Deploy&lt;/code&gt;: with &lt;code&gt;Skaffold&lt;/code&gt; the manifests are built dynamically together with the images produced in the previous step, generating a final manifest that is applied to the cluster&lt;/li&gt;&lt;li&gt;&lt;code&gt;Destroy&lt;/code&gt;: with Skaffold the currently applied manifest can also be deleted.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;What we want now is to remove the &lt;code&gt;Destroy&lt;/code&gt; stage and add a couple more steps for infrastructure provisioning with &lt;code&gt;Terraform&lt;/code&gt;; the cycle will look more or less like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630105679957/PyC35fH_1.jpeg&quot; alt=&quot;emr-base.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;vamos-a-ello&quot;&gt;Let&apos;s get to it&lt;/h2&gt;&lt;p&gt;Since this is the &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomize-y-kaniko&quot;&gt;continuation of the first part&lt;/a&gt;, I will not bore you with creating the pipeline from scratch; instead we will focus on improving it and on the &lt;code&gt;Terraform&lt;/code&gt; instruction files.&lt;/p&gt;&lt;h3 id=&quot;estructura-de-archivos&quot;&gt;File Structure&lt;/h3&gt;&lt;p&gt;We are going to create a new folder &lt;code&gt;infrastructure&lt;/code&gt; with a subfolder &lt;code&gt;terraform&lt;/code&gt; at the root-dir level, where all the &lt;code&gt;terraform&lt;/code&gt; files will live&lt;/p&gt;&lt;p&gt;After that we will create the declarative files to provision a cluster on our infrastructure (in my case I am using &lt;code&gt;Google Cloud&lt;/code&gt;). The main idea is that, in addition to the &lt;code&gt;permanent clusters (staging and production)&lt;/code&gt;, these terraform files take care of creating &lt;code&gt;temporary clusters&lt;/code&gt; so that teams can dynamically get a cluster to test their features, and that this temporary cluster can be deleted once that branch is merged into the repository&apos;s main branch.&lt;/p&gt;&lt;p&gt;For this we will create the &lt;code&gt;terraform&lt;/code&gt; files and, separately, set up a &lt;code&gt;webhook&lt;/code&gt; in &lt;code&gt;GitLab&lt;/code&gt; at merge-request time to delete that temporary cluster and the temporary GitLab environment.&lt;/p&gt;&lt;h3 id=&quot;archivos-de-skaffold-y-kustomize-para-ambientes-review&quot;&gt;Skaffold and Kustomize files for &lt;code&gt;review&lt;/code&gt; environments&lt;/h3&gt;&lt;p&gt;In the meantime, since we are using &lt;code&gt;skaffold&lt;/code&gt; for the CI/CD cycles, we will quickly create a new profile called &lt;code&gt;review&lt;/code&gt; for our temporary/dynamic-environment applications to handle the &lt;code&gt;kubernetes&lt;/code&gt; manifests. For this we can copy the &lt;code&gt;staging&lt;/code&gt; folder inside &lt;code&gt;deployments/k8s/environments&lt;/code&gt;, taking care to change the keys from &lt;code&gt;staging&lt;/code&gt; to &lt;code&gt;review&lt;/code&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630344435498/k94KHqj1f.png&quot; alt=&quot;Screen Shot 2021-08-30 at 14.27.04.png&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;vamos-con-terraform&quot;&gt;On to Terraform&lt;/h3&gt;&lt;p&gt;Instead of the typical single &lt;code&gt;main.tf&lt;/code&gt; file, in this case and for better readability we will split the files by function.
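On disk, the split described in this section would look something like the layout below (illustrative only; the file names are the ones introduced in this article):

```
infrastructure/
├── configs/
│   └── service-account.json    # GCP service-account key, written by the pipeline (never committed)
└── terraform/
    ├── 0-provider.tf           # "google" provider block + required_providers
    ├── 1-cluster.tf            # VPC network + GKE cluster resources
    ├── vars.tf                 # variable declarations
    └── terraform.tfvars        # values and per-branch placeholders
```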
&lt;/p&gt;&lt;p&gt;Let&apos;s start with &lt;code&gt;0-provider.tf&lt;/code&gt; where, as its name suggests, we generate the instructions so that &lt;code&gt;terraform&lt;/code&gt; knows which cloud provider we are working with (in our case &lt;code&gt;Google Cloud&lt;/code&gt;). You will need to have created your service-account credentials beforehand so you can talk to your provider (&lt;strong&gt;GCP -&amp;gt; IAM &amp;amp; Admin &amp;gt; Service Accounts&lt;/strong&gt;, then click &lt;strong&gt;Create Service Account&lt;/strong&gt;).&lt;/p&gt;&lt;p&gt;The &lt;code&gt;var.*&lt;/code&gt; values will be defined later in a &lt;code&gt;terraform&lt;/code&gt; file known as &lt;code&gt;tfvars&lt;/code&gt;; this lets us write the dynamic variables in that file before initializing the plan and applying it.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Google Cloud Platform Provider
# https://registry.terraform.io/providers/hashicorp/google/latest/docs
provider &quot;google&quot; {
  credentials = file(&quot;../configs/service-account.json&quot;)
  project     = var.project_id
  region      = var.region
}

terraform {
  required_providers {
    google = {
      source  = &quot;hashicorp/google&quot;
      version = &quot;3.5.0&quot;
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To try things out locally you can run the following command in your console to initialize &lt;code&gt;terraform&lt;/code&gt;; this will download the first dependencies, install them into a &lt;code&gt;.terraform&lt;/code&gt; folder in the same working directory, and create a &lt;code&gt;.lock.hcl&lt;/code&gt; file that, as good practice, you should commit to your repo.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;$: terraform init&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In addition we will create a &lt;code&gt;1-cluster.tf&lt;/code&gt; file with the &lt;code&gt;terraform&lt;/code&gt; instructions to create a cluster on our &lt;code&gt;Google Cloud Platform&lt;/code&gt; infrastructure, together with the instruction to create a VPC for this cluster (note: it references &lt;code&gt;var.network_name&lt;/code&gt;, matching the variable declared below).&lt;/p&gt;&lt;pre&gt;&lt;code&gt;resource &quot;google_compute_network&quot; &quot;vpc_network&quot; {
  name = var.network_name
}

resource &quot;google_container_cluster&quot; &quot;gke-cluster&quot; {
  name               = var.cluster_name
  network            = var.network_name
  location           = var.region
  initial_node_count = 1
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then come the last two files, &lt;code&gt;vars.tf&lt;/code&gt; and &lt;code&gt;terraform.tfvars&lt;/code&gt;, holding the global variables with their values and placeholders so they are accessible to &lt;code&gt;terraform&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;variable &quot;project_id&quot; {
  type = string
}
variable &quot;region&quot; {
  type = string
}
variable &quot;cluster_name&quot; {
  type = string
}
variable &quot;network_name&quot; {
  type = string
}&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code&gt;project_id   = &quot;YOUR_GCP_PROJECT_ID&quot;
region       = &quot;us-central-1c&quot;
cluster_name = &quot;PLACEHOLDER_CLUSTER&quot;
network_name = &quot;PLACEHOLDER_NETWORK&quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After this, what remains is to complete our &lt;code&gt;GitLab&lt;/code&gt; pipeline with the &lt;code&gt;jobs&lt;/code&gt; we need to add for &lt;code&gt;terraform&lt;/code&gt; (three jobs — validate, plan and apply — with &lt;code&gt;terraform init&lt;/code&gt; run in the shared before_script) and for the &lt;code&gt;deploy&lt;/code&gt; of the dynamic environment, keeping in mind that these &lt;code&gt;jobs&lt;/code&gt; should only be created if the branch is a &lt;code&gt;feature branch&lt;/code&gt; and is not the
&lt;code&gt;main&lt;/code&gt;. &lt;/p&gt;&lt;p&gt;If you look closely, we have created a &lt;code&gt;GitLab&lt;/code&gt; CI/CD variable containing the Google Cloud &lt;code&gt;service-account&lt;/code&gt; credentials (&lt;strong&gt;SERVICE_ACCOUNT_GCP&lt;/strong&gt;) so that they are accessible to the pipeline and can be passed to the terraform jobs as a JSON file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;.terraform:
  image:
    name: hashicorp/terraform:1.0.5
    entrypoint:
      - &apos;/usr/bin/env&apos;
      - &apos;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&apos;
  before_script:
    - cd infrastructure &amp;amp;&amp;amp; mkdir -p configs &amp;amp;&amp;amp; cd configs
    - echo $SERVICE_ACCOUNT_GCP | base64 -d &amp;gt; service-account.json
    - cd ../terraform
    - sed -i &quot;s/PLACEHOLDER_CLUSTER/$CI_COMMIT_REF_SLUG/&quot; terraform.tfvars
    - sed -i &quot;s/PLACEHOLDER_NETWORK/$CI_COMMIT_REF_SLUG/&quot; terraform.tfvars
    - terraform init
  cache:
    key: terraform-cache
    paths:
      - .terraform
  rules:
    - if: &apos;$CI_COMMIT_BRANCH =~ /feature/&apos;
    - if: &apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;

validate:
  extends: .terraform
  stage: provision
  script:
    - terraform validate

plan:
  extends: .terraform
  stage: provision
  script:
    - terraform plan -out plan.tfplan
  needs:
    - validate
  artifacts:
    paths:
      - plan.tfplan

apply:
  extends: .terraform
  stage: provision
  script:
    - terraform apply -input=false plan.tfplan
  needs:
    - plan
  rules:
    - if: &apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;
      when: manual

apply:
  extends: .terraform
  stage: provision
  script:
    - cp $CI_PROJECT_DIR/planfile ./ &lt;span
class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apply&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-input=false&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;planfile&quot;&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;when:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;manual&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;deploy:feature:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.skaffold&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;deploy&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;environment:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;$CI_COMMIT_REF_NAME&quot;&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;url:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_SHORT_SHA.features.equatonlabs.io&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;build:api&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;skaffold&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;deploy&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-f&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;deployment/skaffold.yaml&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;feature&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-a&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;latest-build.json&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;--status-check=true&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;when:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;manual&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;destroy:feature:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;destroy&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;environment:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_REF_NAME&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;url:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_SHORT_SHA.features.equatonlabs.io&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;action:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;stop&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;destroy&lt;/span&gt;  &lt;span 
class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_MERGE_REQUEST_APPROVED&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;ahora-juntemos-todo&quot;&gt;Now let&apos;s put it all together&lt;/h3&gt;&lt;p&gt;Once all our files are in place, it&apos;s time to &lt;code&gt;commit &amp;amp; push&lt;/code&gt; to the repository and follow the &lt;code&gt;GitLab&lt;/code&gt; pipeline execution and its output to make sure everything is in order.&lt;/p&gt;&lt;p&gt;We would end up with this scenario:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Two new stages, &lt;code&gt;provision&lt;/code&gt; and &lt;code&gt;deploy&lt;/code&gt;, when the branch name starts with &lt;code&gt;feature/&lt;/code&gt;&lt;/li&gt;&lt;li&gt;Our &lt;code&gt;cluster&lt;/code&gt; and &lt;code&gt;vpc&lt;/code&gt; are dynamically named after the runtime variable &lt;code&gt;CI_COMMIT_REF_SLUG&lt;/code&gt; (see more &lt;a target=&quot;_blank&quot; href=&quot;https://docs.gitlab.com/ee/ci/variables/predefined_variables.html&quot;&gt;here&lt;/a&gt;)&lt;/li&gt;&lt;li&gt;&lt;code&gt;GitLab&lt;/code&gt; will create a dynamic &lt;code&gt;environment&lt;/code&gt; based on the &lt;code&gt;CI_COMMIT_REF_NAME&lt;/code&gt; variable&lt;/li&gt;&lt;li&gt;Once the &lt;code&gt;MR&lt;/code&gt; to the main branch is approved, a &lt;code&gt;job&lt;/code&gt; is triggered that deletes the temporary cluster used to develop and test the &lt;code&gt;feature&lt;/code&gt;, so we &lt;strong&gt;don&apos;t incur unnecessary costs&lt;/strong&gt;.&lt;/li&gt;&lt;li&gt;Once the MR to the &lt;code&gt;main&lt;/code&gt; branch is approved, the environment created in &lt;code&gt;GitLab&lt;/code&gt; will also be deleted&lt;/li&gt;&lt;li&gt;The &lt;code&gt;provision&lt;/code&gt; stage should not appear on the &lt;code&gt;main&lt;/code&gt; branch, since the clusters it creates are not meant to be permanent and the long-lived clusters are managed outside of this &lt;code&gt;pipeline&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;strong&gt;Pipeline on feature branches&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1633026544204/M7bt9gBHF.png&quot; alt=&quot;Screenshot 2021-09-30 at 20.27.40.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Dynamic environment and deploy dashboard&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630368132018/V6MhRgpwD.png&quot; alt=&quot;Screen Shot 2021-08-30 at 21.01.53.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Google Cloud with the created cluster&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630363525738/JRS2hQ-7X.png&quot; alt=&quot;Screen Shot 2021-08-30 at 19.45.14.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Now, to close the loop, we still need to verify that approving the &lt;code&gt;MR&lt;/code&gt; runs the &lt;code&gt;destroy:feature&lt;/code&gt; step, which removes both the cluster and the environment from the &lt;code&gt;GitLab&lt;/code&gt; deploy dashboard&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Pipeline with the MR approved&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1633534583973/9_dgH4g76.png&quot; alt=&quot;Screenshot 2021-10-06 at 17.35.05.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And that&apos;s all for now, folks. As always, share away: sharing does you good. Below you&apos;ll find the repo with all the files we covered today so you can look through them in more detail.&lt;/p&gt;&lt;hr /&gt;&lt;div class=&quot;embed-wrapper&quot;&gt;&lt;div class=&quot;embed-loading&quot;&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;/div&gt;&lt;a class=&quot;embed-card&quot; href=&quot;https://gitlab.com/equationlabs/stacks/emr/endovelicus&quot;&gt;https://gitlab.com/equationlabs/stacks/emr/endovelicus&lt;/a&gt;&lt;/div&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;blockquote&gt;&lt;p&gt;This is the second installment of the article &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomiza-y-kaniko&quot;&gt;k8s desde dev hasta prod con Gitlab, Skaffold, Kustomize y Kaniko&lt;/a&gt;, where I built the base skeleton for this second part. As with the previous one, you&apos;ll find the link to the repository with all the code at the end of the article.&lt;/p&gt;&lt;/blockquote&gt;&lt;h3 id=&quot;premisa&quot;&gt;Premise&lt;/h3&gt;&lt;p&gt;We already have a pipeline that covers the following steps:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;code&gt;Test&lt;/code&gt;: test suites run by &lt;code&gt;PHPUnit&lt;/code&gt;&lt;/li&gt;&lt;li&gt;&lt;code&gt;Build&lt;/code&gt;: with &lt;code&gt;Kaniko&lt;/code&gt; we build the images, tag them, and push them to our image registry (we&apos;ll use GitLab&apos;s own registry in this case); &lt;code&gt;Kaniko&lt;/code&gt; also stores a cache layer in that same registry.&lt;/li&gt;&lt;li&gt;&lt;code&gt;Deploy&lt;/code&gt;: with &lt;code&gt;Skaffold&lt;/code&gt; the manifests are rendered dynamically together with the images built in the previous step, producing a final manifest that is applied to the cluster&lt;/li&gt;&lt;li&gt;&lt;code&gt;Destroy&lt;/code&gt;: Skaffold can also remove the currently applied manifest.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;What we want now is to remove the &lt;code&gt;Destroy&lt;/code&gt; stage and add a couple more steps for infrastructure provisioning with &lt;code&gt;Terraform&lt;/code&gt;; the cycle would look roughly like this:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630105679957/PyC35fH_1.jpeg&quot; alt=&quot;emr-base.jpeg&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;vamos-a-ello&quot;&gt;Let&apos;s get to it&lt;/h2&gt;&lt;p&gt;Since this is the &lt;a target=&quot;_blank&quot; href=&quot;https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomize-y-kaniko&quot;&gt;continuation of the first part&lt;/a&gt;, I won&apos;t bore you with creating the pipeline from scratch; instead we&apos;ll focus on improving it and on the &lt;code&gt;Terraform&lt;/code&gt; instruction files.&lt;/p&gt;&lt;h3 id=&quot;estructura-de-archivos&quot;&gt;File structure&lt;/h3&gt;&lt;p&gt;We&apos;ll create a new &lt;code&gt;infrastructure&lt;/code&gt; folder with a &lt;code&gt;terraform&lt;/code&gt; subfolder at the root dir level, where all the &lt;code&gt;terraform&lt;/code&gt; files will live&lt;/p&gt;&lt;p&gt;After that we&apos;ll create the declarative files to provision a cluster in our infrastructure (in my case I&apos;m using &lt;code&gt;Google Cloud&lt;/code&gt;). The main idea is that, in addition to the &lt;code&gt;permanent clusters (staging and production)&lt;/code&gt;, these terraform files take care of creating &lt;code&gt;temporary clusters&lt;/code&gt;, so teams dynamically get a cluster to test their features, and that same temporary cluster can be deleted once the branch is merged into the repository&apos;s main branch.&lt;/p&gt;&lt;p&gt;For this we&apos;ll create the &lt;code&gt;terraform&lt;/code&gt; files, and we&apos;ll also set up a &lt;code&gt;webhook&lt;/code&gt; in &lt;code&gt;GitLab&lt;/code&gt; on merge request to delete that temporary cluster and the temporary gitlab environment.&lt;/p&gt;&lt;h3 id=&quot;archivos-de-skaffold-y-kustomize-para-ambientes-review&quot;&gt;Skaffold and Kustomize files for &lt;code&gt;review&lt;/code&gt; environments&lt;/h3&gt;&lt;p&gt;In the meantime, since we&apos;re using &lt;code&gt;skaffold&lt;/code&gt; for the CI/CD cycles, let&apos;s quickly create a new profile called &lt;code&gt;review&lt;/code&gt; for our temporary/dynamic environment applications to handle the &lt;code&gt;kubernetes&lt;/code&gt; manifests. For this we can copy the &lt;code&gt;staging&lt;/code&gt; folder inside &lt;code&gt;deployments/k8s/environments&lt;/code&gt;, remembering to change the &lt;code&gt;staging&lt;/code&gt; keys to &lt;code&gt;review&lt;/code&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630344435498/k94KHqj1f.png&quot; alt=&quot;Screen Shot 2021-08-30 at 14.27.04.png&quot; /&gt;&lt;/p&gt;&lt;h3 id=&quot;vamos-con-terraform&quot;&gt;On to Terraform&lt;/h3&gt;&lt;p&gt;Instead of the typical single &lt;code&gt;main.tf&lt;/code&gt; file, in this case, and for better readability, we&apos;ll split the files by function.
&lt;/p&gt;&lt;p&gt;Let&apos;s start with &lt;code&gt;0-provider.tf&lt;/code&gt; where, as its name suggests, we generate the instructions so that &lt;code&gt;terraform&lt;/code&gt; knows which cloud service provider we&apos;re working with (in our case &lt;code&gt;Google Cloud&lt;/code&gt;). You&apos;ll need to have created your service account credentials beforehand so you can talk to your provider (&lt;strong&gt;GCP -&amp;gt; IAM &amp;amp; Admin &amp;gt; Service Accounts&lt;/strong&gt;, then click &lt;strong&gt;Create Service Account&lt;/strong&gt;).&lt;/p&gt;&lt;p&gt;The &lt;code&gt;var.*&lt;/code&gt; values will be defined later in a &lt;code&gt;terraform&lt;/code&gt; file known as &lt;code&gt;tfvars&lt;/code&gt;; that file lets us set the dynamic variables before initializing the plan and applying it.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-comment&quot;&gt;# Google Cloud Platform Provider&lt;/span&gt;
&lt;span class=&quot;hljs-comment&quot;&gt;# https://registry.terraform.io/providers/hashicorp/google/latest/docs&lt;/span&gt;
&lt;span class=&quot;hljs-attribute&quot;&gt;provider&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;google&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-attribute&quot;&gt;credentials&lt;/span&gt; = file(&lt;span class=&quot;hljs-string&quot;&gt;&quot;../configs/service-account.json&quot;&lt;/span&gt;)
  project     = var.project_id
  region      = var.region
}

terraform {
  &lt;span class=&quot;hljs-section&quot;&gt;required_providers&lt;/span&gt; {
    &lt;span class=&quot;hljs-attribute&quot;&gt;google&lt;/span&gt; = {
      &lt;span class=&quot;hljs-attribute&quot;&gt;source&lt;/span&gt;  = &lt;span class=&quot;hljs-string&quot;&gt;&quot;hashicorp/google&quot;&lt;/span&gt;
      version = &lt;span class=&quot;hljs-string&quot;&gt;&quot;3.5.0&quot;&lt;/span&gt;
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To try things out locally, you can run the following command in your console to initialize &lt;code&gt;terraform&lt;/code&gt;. This downloads the initial dependencies, installs them into a &lt;code&gt;.terraform&lt;/code&gt; folder in the same working directory, and creates a &lt;code&gt;.lock.hcl&lt;/code&gt; file that, as a good practice, you should keep in your repo.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;$: terraform init&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In addition to this we&apos;ll create a &lt;code&gt;1-cluster.tf&lt;/code&gt; file with the &lt;code&gt;terraform&lt;/code&gt; instructions to create a cluster in our &lt;code&gt;Google Cloud Platform&lt;/code&gt; infrastructure, along with the instruction to create a VPC for that cluster.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;google_compute_network&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;vpc_network&quot;&lt;/span&gt; {
  name = &lt;span class=&quot;hljs-keyword&quot;&gt;var&lt;/span&gt;.network_name
}

resource &lt;span class=&quot;hljs-string&quot;&gt;&quot;google_container_cluster&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;gke-cluster&quot;&lt;/span&gt; {
  name               = &lt;span class=&quot;hljs-keyword&quot;&gt;var&lt;/span&gt;.cluster_name
  network            = &lt;span class=&quot;hljs-keyword&quot;&gt;var&lt;/span&gt;.network_name
  location           = &lt;span class=&quot;hljs-keyword&quot;&gt;var&lt;/span&gt;.region
  initial_node_count = &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then come the last two files, &lt;code&gt;vars.tf&lt;/code&gt; and &lt;code&gt;terraform.tfvars&lt;/code&gt;, which hold the global variables, their values, and placeholders so that &lt;code&gt;terraform&lt;/code&gt; can access them.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;variable &lt;span class=&quot;hljs-string&quot;&gt;&quot;project_id&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-keyword&quot;&gt;type&lt;/span&gt; = &lt;span class=&quot;hljs-keyword&quot;&gt;string&lt;/span&gt;
}

variable &lt;span class=&quot;hljs-string&quot;&gt;&quot;region&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-keyword&quot;&gt;type&lt;/span&gt; = &lt;span class=&quot;hljs-keyword&quot;&gt;string&lt;/span&gt;
}

variable &lt;span class=&quot;hljs-string&quot;&gt;&quot;cluster_name&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-keyword&quot;&gt;type&lt;/span&gt; = &lt;span class=&quot;hljs-keyword&quot;&gt;string&lt;/span&gt;
}

variable &lt;span class=&quot;hljs-string&quot;&gt;&quot;network_name&quot;&lt;/span&gt; {
  &lt;span class=&quot;hljs-keyword&quot;&gt;type&lt;/span&gt; = &lt;span class=&quot;hljs-keyword&quot;&gt;string&lt;/span&gt;
}&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-attr&quot;&gt;project_id&lt;/span&gt;   = &lt;span class=&quot;hljs-string&quot;&gt;&quot;YOUR_GCP_PROJECT_ID&quot;&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;region&lt;/span&gt;       = &lt;span class=&quot;hljs-string&quot;&gt;&quot;us-central-1c&quot;&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;cluster_name&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;PLACEHOLDER_CLUSTER&quot;&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;network_name&lt;/span&gt; = &lt;span class=&quot;hljs-string&quot;&gt;&quot;PLACEHOLDER_NETWORK&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After this, what&apos;s left is to complete our &lt;code&gt;GitLab&lt;/code&gt; pipeline with the &lt;code&gt;jobs&lt;/code&gt; we need to add for &lt;code&gt;terraform&lt;/code&gt; (validate, init, plan and apply) and for the &lt;code&gt;deploy&lt;/code&gt; of the dynamic environment, keeping in mind that those &lt;code&gt;jobs&lt;/code&gt; should only be created when the branch is a &lt;code&gt;feature branch&lt;/code&gt; and is not 
&lt;code&gt;main&lt;/code&gt;. &lt;/p&gt;&lt;p&gt;If you look closely, we&apos;ve created a &lt;code&gt;GitLab&lt;/code&gt; CI/CD variable holding the Google Cloud &lt;code&gt;service-account&lt;/code&gt; credentials (&lt;strong&gt;SERVICE_ACCOUNT_GCP&lt;/strong&gt;) so they&apos;re available to the pipeline and can be passed to the terraform jobs as a json file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-string&quot;&gt;.terraform:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;hashicorp/terraform:1.0.5&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;entrypoint:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;/usr/bin/env&apos;&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&apos;&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;before_script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;infrastructure&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;configs&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;configs&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;echo&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;$SERVICE_ACCOUNT_GCP&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;|&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;base64&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;service-account.json&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;../terraform&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;sed&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;s/\PLACEHOLDER_CLUSTER/$CI_COMMIT_REF_SLUG&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform.tfvars&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;sed&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;s/\PLACEHOLDER_NETWORK/$CI_COMMIT_REF_SLUG&quot;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform.tfvars&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;init&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;cache:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;key:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform-cache&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;paths:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span 
class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH =~ /feature/&apos;&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;validate:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;provision&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;validate&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;plan:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;provision&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-out&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan.tfplan&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;validate&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;artifacts:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;paths:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; 
&lt;span class=&quot;hljs-string&quot;&gt;plan.tfplan&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apply:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;provision&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apply&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-input=false&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan.tfplan&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;when:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;manual&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apply:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;provision&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_PROJECT_DIR/planfile&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;./&lt;/span&gt;    &lt;span 
class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apply&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-input=false&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;planfile&quot;&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;plan&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;when:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;manual&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;deploy:feature:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.skaffold&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;deploy&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;environment:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;$CI_COMMIT_REF_NAME&quot;&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;url:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_SHORT_SHA.features.equatonlabs.io&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;needs:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;build:api&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;skaffold&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;deploy&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-f&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;deployment/skaffold.yaml&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;feature&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-a&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;latest-build.json&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;--status-check=true&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;
    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;when:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;manual&lt;/span&gt;

&lt;span class=&quot;hljs-attr&quot;&gt;destroy:feature:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;extends:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;.terraform&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;stage:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;destroy&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;environment:&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_REF_NAME&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;url:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_COMMIT_SHORT_SHA.features.equationlabs.io&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;action:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;stop&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;script:&lt;/span&gt;
    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;terraform&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;destroy&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;
    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;$CI_MERGE_REQUEST_APPROVED&lt;/span&gt;
    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;if:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;ahora-juntemos-todo&quot;&gt;Now let&apos;s put it all together&lt;/h3&gt;&lt;p&gt;Once we have all our files in place, it&apos;s time to &lt;code&gt;commit &amp;amp; push&lt;/code&gt; to the repository and follow the &lt;code&gt;GitLab&lt;/code&gt; pipeline run and its output to make sure everything is in order.&lt;/p&gt;&lt;p&gt;We end up with this scenario:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Two new stages, &lt;code&gt;provision&lt;/code&gt; and &lt;code&gt;deploy&lt;/code&gt;, appear when the branch name starts with &lt;code&gt;feature/&lt;/code&gt; &lt;/li&gt;&lt;li&gt;Our &lt;code&gt;cluster&lt;/code&gt; and &lt;code&gt;vpc&lt;/code&gt; are dynamically named after the runtime variable &lt;code&gt;CI_COMMIT_REF_SLUG&lt;/code&gt; (see more &lt;a target=&quot;_blank&quot; href=&quot;https://docs.gitlab.com/ee/ci/variables/predefined_variables.html&quot;&gt;here&lt;/a&gt;)&lt;/li&gt;&lt;li&gt;&lt;code&gt;GitLab&lt;/code&gt; creates a dynamic &lt;code&gt;environment&lt;/code&gt; based on the &lt;code&gt;CI_COMMIT_REF_NAME&lt;/code&gt; variable&lt;/li&gt;&lt;li&gt;Once the &lt;code&gt;MR&lt;/code&gt; to the main branch is approved, a &lt;code&gt;job&lt;/code&gt; is triggered that deletes the temporary cluster used to develop and test the &lt;code&gt;feature&lt;/code&gt;, so we &lt;strong&gt;do not incur unnecessary costs&lt;/strong&gt;.&lt;/li&gt;&lt;li&gt;Once the MR to the &lt;code&gt;main&lt;/code&gt; branch is approved, the environment created in &lt;code&gt;GitLab&lt;/code&gt; is deleted as well&lt;/li&gt;&lt;li&gt;The &lt;code&gt;provision&lt;/code&gt; stage should not appear on the &lt;code&gt;main&lt;/code&gt; branch, since that cluster is long-lived and is managed outside this &lt;code&gt;pipeline&lt;/code&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;strong&gt;Pipeline on feature branches&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1633026544204/M7bt9gBHF.png&quot; alt=&quot;Screenshot 2021-09-30 at 20.27.40.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Dynamic environment and deploy dashboard&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630368132018/V6MhRgpwD.png&quot; alt=&quot;Screen Shot 2021-08-30 at 21.01.53.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Google Cloud with the cluster created&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1630363525738/JRS2hQ-7X.png&quot; alt=&quot;Screen Shot 2021-08-30 at 19.45.14.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;Now, to close the loop, we need to verify that approving the &lt;code&gt;MR&lt;/code&gt; runs the &lt;code&gt;destroy:feature&lt;/code&gt; step, which deletes the cluster and removes the environment from the &lt;code&gt;GitLab&lt;/code&gt; deploy dashboard&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Pipeline with the MR approved&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1633534583973/9_dgH4g76.png&quot; alt=&quot;Screenshot 2021-10-06 at 17.35.05.png&quot; /&gt;&lt;/p&gt;&lt;p&gt;And that&apos;s all for now, folks. As always, share; sharing is good. 
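&lt;/p&gt;&lt;p&gt;As a reference, here is how the deploy job that creates this dynamic environment can be linked to the &lt;code&gt;destroy:feature&lt;/code&gt; job above through &lt;code&gt;environment:on_stop&lt;/code&gt;, so &lt;code&gt;GitLab&lt;/code&gt; knows which job tears the environment down. This is a minimal sketch: the &lt;code&gt;deploy:feature&lt;/code&gt; job name is an assumption, since only &lt;code&gt;destroy:feature&lt;/code&gt; appears in this post.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# Hypothetical deploy job (sketch); only destroy:feature is shown in this post
deploy:feature:
  stage: deploy
  environment:
    name: $CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_SHORT_SHA.features.equationlabs.io
    on_stop: destroy:feature  # GitLab runs destroy:feature when this environment is stopped
  script:
    - skaffold deploy -f deployment/skaffold.yaml -p feature -a latest-build.json --status-check=true
  rules:
    - if: &apos;$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH&apos;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With &lt;code&gt;on_stop&lt;/code&gt; in place, stopping the environment (for example, when the MR is merged or the branch deleted) triggers the teardown job automatically.&lt;/p&gt;&lt;p&gt;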
Below is the repo with all the files we covered today so you can browse them in more detail.&lt;/p&gt;&lt;hr /&gt;&lt;div class=&quot;embed-wrapper&quot;&gt;&lt;div class=&quot;embed-loading&quot;&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;/div&gt;&lt;a class=&quot;embed-card&quot; href=&quot;https://gitlab.com/equationlabs/stacks/emr/endovelicus&quot;&gt;https://gitlab.com/equationlabs/stacks/emr/endovelicus&lt;/a&gt;&lt;/div&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1630105446862/gBkXdewix.jpeg</hashnode:coverImage></item><item><title><![CDATA[k8s from dev to prod with GitLab, Skaffold, Kustomize and Kaniko]]></title><description><![CDATA[The proof of concept

We will work with the following:

Repository on GitLab with a default Symfony application
An active Kubernetes cluster; you can have it on AWS or GCP, and locally on Mac use Docker-Desktop or minikube.
kubectl CLI of...]]></description><link>https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomize-y-kaniko</link><guid isPermaLink="true">https://blog.equationlabs.io/k8s-desde-dev-hasta-prod-con-gitlab-skaffold-kustomize-y-kaniko</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Git]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Raul Castellanos]]></dc:creator><pubDate>Wed, 25 Aug 2021 22:15:36 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;h3 id=&quot;la-prueba-de-concepto&quot;&gt;The proof of concept&lt;/h3&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1629916807662/y4QaYpwPd.jpeg&quot; alt=&quot;ci-ce-release-cycle.jpeg&quot; /&gt;&lt;/p&gt;&lt;p&gt;We will work with the following:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;A repository on &lt;code&gt;GitLab&lt;/code&gt; with a default &lt;code&gt;Symfony&lt;/code&gt; application&lt;/li&gt;&lt;li&gt;An active &lt;code&gt;Kubernetes&lt;/code&gt; cluster; you can have it on &lt;code&gt;AWS&lt;/code&gt; or &lt;code&gt;GCP&lt;/code&gt;, and locally on Mac you can use &lt;code&gt;Docker-Desktop&lt;/code&gt; or &lt;code&gt;minikube&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;&lt;code&gt;kubectl&lt;/code&gt;, the official Kubernetes CLI, to interact with your cluster&lt;/li&gt;&lt;li&gt;&lt;code&gt;Skaffold&lt;/code&gt; to handle the push and deploy cycle of your pipeline&lt;/li&gt;&lt;li&gt;&lt;code&gt;Kaniko&lt;/code&gt; to handle the build cycle of your images&lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;instalando-todos-los-requisitos&quot;&gt;Installing all the requirements&lt;/h3&gt;&lt;p&gt;This is based on macOS; however, each tool&apos;s site details how to install it on other operating systems.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;kubectl&lt;/strong&gt;: in your terminal run &lt;code&gt;brew install kubectl&lt;/code&gt;, then verify it is correctly installed with &lt;code&gt;kubectl version --client=true&lt;/code&gt;&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Skaffold&lt;/strong&gt;: likewise, in your terminal run &lt;code&gt;brew install skaffold&lt;/code&gt; &lt;/li&gt;&lt;/ul&gt;&lt;h3 id=&quot;ahora-manos-a-la-obra&quot;&gt;Now: let&apos;s get to work!&lt;/h3&gt;&lt;p&gt;We will use the following &lt;strong&gt;file structure&lt;/strong&gt; for this tutorial&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-attr&quot;&gt;project_root:&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;api:&lt;/span&gt; &lt;span class=&quot;hljs-comment&quot;&gt;# Symfony files and Dockerfile&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;deployment:&lt;/span&gt; &lt;span class=&quot;hljs-comment&quot;&gt;# CI/CD files&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;base:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;configs:&lt;/span&gt;
            &lt;span class=&quot;hljs-attr&quot;&gt;nginx:&lt;/span&gt;
                &lt;span class=&quot;hljs-string&quot;&gt;default.conf&lt;/span&gt;
        &lt;span class=&quot;hljs-string&quot;&gt;_registry-secret.yaml&lt;/span&gt;
        &lt;span class=&quot;hljs-string&quot;&gt;api-app.yaml&lt;/span&gt;
        &lt;span class=&quot;hljs-string&quot;&gt;api-load-balancer-service.yaml&lt;/span&gt;
        &lt;span class=&quot;hljs-string&quot;&gt;kustomization.yaml&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;environments:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;dev:&lt;/span&gt;
            &lt;span class=&quot;hljs-string&quot;&gt;api-app.patch.yaml&lt;/span&gt;
            &lt;span class=&quot;hljs-string&quot;&gt;kustomization.yaml&lt;/span&gt;
    &lt;span class=&quot;hljs-string&quot;&gt;skaffold.yaml&lt;/span&gt;
    &lt;span class=&quot;hljs-string&quot;&gt;.build-template.json&lt;/span&gt;
&lt;span class=&quot;hljs-string&quot;&gt;.gitlab-ci.yaml&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;Skaffold&lt;/code&gt; is a command-line tool developed by &lt;code&gt;Google&lt;/code&gt; that makes local development with &lt;code&gt;k8s&lt;/code&gt; a breeze. Given how cumbersome it is to manage all the &lt;code&gt;k8s&lt;/code&gt; yaml manifests, build, push, and so on, this tool works as &lt;code&gt;hot reloading&lt;/code&gt; for local development: on every change to your code it &lt;strong&gt;builds, reassembles the yamls and deploys&lt;/strong&gt; so you can keep developing without manually touching the yamls.&lt;/p&gt;&lt;p&gt;&lt;code&gt;Kustomize&lt;/code&gt; is a command-line tool, now bundled by default with &lt;code&gt;k8s&lt;/code&gt;, that lets you &lt;strong&gt;&quot;templatize&quot;&lt;/strong&gt; your &lt;code&gt;k8s&lt;/code&gt; manifests so that, starting from a base manifest, you can &lt;strong&gt;&quot;patch&quot;&lt;/strong&gt; &lt;code&gt;k8s&lt;/code&gt; manifests for each of your environments.&lt;/p&gt;&lt;p&gt;Let&apos;s now look at our &lt;code&gt;Skaffold&lt;/code&gt; configuration file:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;skaffold/v2beta21&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Config&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;api-app&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;build:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;artifacts:&lt;/span&gt;
    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;api&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;context:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;api&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;docker:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;dockerfile:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Dockerfile&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;sync:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;infer:&lt;/span&gt;
          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;**/*.php&apos;&lt;/span&gt;
          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&apos;**/*.js&apos;&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;profiles:&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;development&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;build:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;local:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;push:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;false&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;deploy:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;kubeContext:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;docker-desktop&lt;/span&gt; &lt;span class=&quot;hljs-comment&quot;&gt;# or your local k8s cluster context like minikube&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;kustomize:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;paths:&lt;/span&gt;
          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;deployment/k8s/environments/dev&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;At this point, all that remains is to put together our manifests based on the services and containers our application needs and, with &lt;code&gt;kustomize&lt;/code&gt;, adjust the variations for each &lt;code&gt;stage&lt;/code&gt;; in the repository you can see in more detail how these manifests are built.&lt;/p&gt;&lt;p&gt;After this we can make a first attempt at running skaffold locally and changing our code, and with that our local development cycle is complete.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;{} ~ skaffold dev -p development -f deployment/skaffold.yaml&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;ahora-pipelines-en-gitlab-y-cluster-en-google-cloud&quot;&gt;Now: pipelines in GitLab and a cluster on Google Cloud&lt;/h3&gt;&lt;p&gt;The most important step at this stage is building our &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file, which orchestrates all the steps our pipeline runs; for this tutorial there will be 4 steps (jobs):&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;code&gt;Test&lt;/code&gt;: test suites run with &lt;code&gt;PHPUnit&lt;/code&gt; &lt;/li&gt;&lt;li&gt;&lt;code&gt;Build&lt;/code&gt;: with Kaniko we build the images, tag them and push them to our image registry (we will use GitLab&apos;s own registry in this case); &lt;code&gt;Kaniko&lt;/code&gt; takes care of storing a cache layer in the same registry.&lt;/li&gt;&lt;li&gt;&lt;code&gt;Deploy&lt;/code&gt;: with &lt;code&gt;Skaffold&lt;/code&gt; the manifests are built dynamically together with the images built in the previous step, generating a final manifest that is applied to the cluster&lt;/li&gt;&lt;li&gt;&lt;code&gt;Destroy&lt;/code&gt;: with Skaffold you can also delete the currently applied manifest.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1629983077083/0RwoE6goV2.png&quot; alt=&quot;GitLab Pipeline&quot; /&gt;&lt;/p&gt;&lt;p&gt;The nice thing about building your images with &lt;code&gt;Kaniko&lt;/code&gt; &lt;em&gt;(at least for me)&lt;/em&gt; is that it stores a cache layer in the same image registry, so subsequent builds are much faster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1629982879941/Qi4tfze-j.png&quot; alt=&quot;GitLab registry with Kaniko&apos;s cache layer&quot; /&gt;&lt;/p&gt;&lt;p&gt;After this, we do need to make sure our &lt;code&gt;K8s&lt;/code&gt; cluster is integrated with the project in &lt;code&gt;GitLab&lt;/code&gt;; it is actually quite a simple step and is detailed &lt;a target=&quot;_blank&quot; href=&quot;https://docs.gitlab.com/ee/user/project/clusters/add_existing_cluster.html&quot;&gt;here&lt;/a&gt; &lt;/p&gt;&lt;p&gt;Once your cluster is integrated with your GitLab project, the pipeline&apos;s &lt;code&gt;deploy&lt;/code&gt; step with &lt;code&gt;skaffold&lt;/code&gt; will be able to promote the k8s manifest straight to your &lt;code&gt;Google Cloud&lt;/code&gt; or &lt;code&gt;Amazon Web Services&lt;/code&gt; cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1629983273581/f7GcLhHXt.png&quot; alt=&quot;Deploy to GCP k8s cluster&quot; /&gt;&lt;/p&gt;&lt;hr /&gt;&lt;p&gt;Links to resources&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Kaniko -&amp;gt;  &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/GoogleContainerTools/kaniko&quot;&gt;https://github.com/GoogleContainerTools/kaniko&lt;/a&gt; &lt;/li&gt;&lt;li&gt;Skaffold -&amp;gt;  &lt;a target=&quot;_blank&quot; href=&quot;https://skaffold.dev/&quot;&gt;https://skaffold.dev/&lt;/a&gt; &lt;/li&gt;&lt;li&gt;K8s CLI aka kubectl -&amp;gt;  &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/tasks/tools/&quot;&gt;https://kubernetes.io/docs/tasks/tools/&lt;/a&gt; &lt;/li&gt;&lt;/ul&gt;&lt;hr /&gt;&lt;div 
class=&quot;embed-wrapper&quot;&gt;&lt;div class=&quot;embed-loading&quot;&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;div class=&quot;loadingRow&quot;&gt;&lt;/div&gt;&lt;/div&gt;&lt;a class=&quot;embed-card&quot; href=&quot;https://gitlab.com/rcastellanosm/gitops-example-with-kaniko-and-skaffold&quot;&gt;https://gitlab.com/rcastellanosm/gitops-example-with-kaniko-and-skaffold&lt;/a&gt;&lt;/div&gt;]]&gt;</content:encoded><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1629991891121/S8GlKusClV.jpeg</hashnode:coverImage></item></channel></rss>