<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Syaifuddin’s Growing Space]]></title><description><![CDATA[A personal documentation of what I've learned. Mostly about tech-related stuff.]]></description><link>https://blog.sya.my.id</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 11:36:39 GMT</lastBuildDate><atom:link href="https://blog.sya.my.id/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[[EN] Set Up Amazon ECR Pull-Through Cache for Docker Hub]]></title><description><![CDATA[The Problem
The developers are actively building and testing their projects. There are dozens of projects that can be built in a day. This sometimes causes the disk of the CI/CD system (e.g., Jenkins, self-]]></description><link>https://blog.sya.my.id/en-set-up-amazon-ecr-pull-through-cache-for-docker-hub</link><guid isPermaLink="true">https://blog.sya.my.id/en-set-up-amazon-ecr-pull-through-cache-for-docker-hub</guid><category><![CDATA[AWS]]></category><category><![CDATA[ecr]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Wed, 22 Apr 2026 06:45:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/631dd8693e8d6f3497ad63e7/1ca35a5a-6303-4a86-badb-91961cf65694.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>The Problem</h1>
<p>The developers are actively building and testing their projects; dozens of builds can run in a day. This sometimes fills up the disk of the CI/CD system (e.g., Jenkins, a self-hosted runner), so we delete unused images and build cache after every build. When a project needs to be rebuilt, it has to pull its base images from Docker Hub again. But Docker Hub enforces a <a href="https://docs.docker.com/docker-hub/usage/">rate limit</a>: 100 pulls per 6 hours for unauthenticated clients (per IPv4 address) and 200 pulls per 6 hours for authenticated free accounts. If we exceed this, the build pipeline will fail.</p>
<h1>The Solution</h1>
<p>Because I use AWS, I can utilize the ECR pull-through cache: ECR pulls images from Docker Hub, stores them in ECR, and from then on I can pull them from ECR without worrying about Docker Hub limits. There is no dependency on Docker Hub uptime or rate limits; even if Hub goes down, you can still pull (assuming the image you need is already cached in ECR). The trade-off is ECR storage cost for the cached images, which you can mitigate with lifecycle policies that evict old or unused tags.</p>
<h2>Create a Docker Hub personal access token</h2>
<p>This token will be used by ECR to log in to Docker Hub; a token is used instead of your Docker Hub password.</p>
<ol>
<li><p>Log into Docker Hub</p>
</li>
<li><p>Navigate to Account Settings &gt; Settings &gt; Personal access tokens</p>
</li>
<li><p>Click the "Generate new token" button</p>
</li>
<li><p>Fill in the token description, set the expiration date, and choose the permissions. I usually grant read-only permission.</p>
</li>
<li><p>Write down the generated PAT; you will need it for the AWS configuration in the next step.</p>
</li>
</ol>
<img src="https://cdn.hashnode.com/uploads/covers/631dd8693e8d6f3497ad63e7/9a5ce307-4ec1-49f7-8961-2b8ae537b4a0.png" alt="" style="display:block;margin:0 auto" />
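<p>Before moving on, you can optionally verify that the token works as a login credential. This is just a sanity check; the placeholders below are your own values:</p>

```shell
# Log in to Docker Hub using the PAT instead of your password
echo "<your-docker-access-token>" | docker login --username <your-docker-username> --password-stdin
```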

<h2>Setting up the ECR pull-through rule</h2>
<ol>
<li><p>First, you need to store your Docker Hub credentials in AWS Secrets Manager with a specific naming convention. Use this AWS CLI command:</p>
<pre><code class="language-shell">aws secretsmanager create-secret \
    --name "ecr-pullthroughcache/docker-hub" \
    --description "Docker Hub credentials for ECR pull through cache" \
    --secret-string '{
        "username": "&lt;your-docker-username&gt;",
        "accessToken": "&lt;your-docker-access-token&gt;"
    }' \
    --region &lt;region&gt;
</code></pre>
<p><strong>Important:</strong> The secret name must use the <code>ecr-pullthroughcache/</code> prefix and be in the same account and region as your pull-through cache rule.</p>
</li>
<li><p>Proceed with creating the pull-through rule. You can use this AWS CLI command:</p>
<pre><code class="language-shell">aws ecr create-pull-through-cache-rule \
    --ecr-repository-prefix docker-hub \
    --upstream-registry-url registry-1.docker.io \
    --credential-arn &lt;secretsmanager-arn-you-got-from-the-previous-command&gt; \
    --region &lt;region&gt;
</code></pre>
</li>
<li><p>Or, if you prefer to use the console, navigate to <strong>ECR &gt; Private Registry &gt; Pull through cache &gt; Add Rule</strong>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/631dd8693e8d6f3497ad63e7/9a8daa73-34cf-48e7-8a0b-6f6dc6d4268d.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>Set the <strong>Upstream registry</strong> to Docker Hub.</p>
</li>
<li><p>In the Authentication section, select <strong>Use an existing AWS secret</strong> and choose the secret created in step 1. Note that if you use the console, you can also skip step 1 and create the secret from here.</p>
</li>
<li><p>In <strong>Step 3: Specify namespaces,</strong> configure the prefix for both the cache namespace and the upstream namespace. I usually select a specific prefix for the cache namespace and no prefix for the upstream namespace.</p>
</li>
<li><p>On <strong>Step 4: Review and create</strong>, review your configuration and choose <strong>Create</strong>.</p>
</li>
</ol>
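<p>Whichever method you used, you can confirm that the rule exists with the AWS CLI:</p>

```shell
# List the pull-through cache rules configured in this registry
aws ecr describe-pull-through-cache-rules --region <region>
```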
<h2>Pull the image through the cache</h2>
<pre><code class="language-shell"># Login to the ECR private registry
aws ecr get-login-password --region &lt;region&gt; | docker login --username AWS --password-stdin &lt;your-acc-id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com

# Pull an image from docker hub:
docker pull aws_account_id.dkr.ecr.region.amazonaws.com/docker-hub/library/image_name:tag
</code></pre>
<p>Notice that the <code>library</code> segment in the repository URL is specific to <strong>Docker Hub’s official images</strong> and reflects how Docker Hub organizes its repositories internally.</p>
<pre><code class="language-shell"># Original Docker Hub command:
docker pull nginx:latest
docker pull alpine:3.23.4

# ECR pull-through cache equivalent:
docker pull aws_account_id.dkr.ecr.region.amazonaws.com/docker-hub/library/nginx:latest
docker pull aws_account_id.dkr.ecr.region.amazonaws.com/docker-hub/library/alpine:3.23.4
</code></pre>
<p>When you pull these images, ECR creates repositories with these exact names:</p>
<ul>
<li><p><code>docker-hub/library/nginx</code></p>
</li>
<li><p><code>docker-hub/library/alpine</code></p>
</li>
</ul>
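<p>Because the mapping from a Docker Hub reference to its cached equivalent is purely mechanical, a small helper can save some typing. This is a hypothetical sketch (the function name is mine, and it assumes the <code>docker-hub</code> rule prefix used above):</p>

```shell
# Hypothetical helper: rewrite a Docker Hub image reference into its
# ECR pull-through cache equivalent, assuming the "docker-hub" rule prefix.
ecr_cache_ref() {
  local registry="$1" image="$2"
  case "$image" in
    # References that already contain a namespace (e.g. bitnami/postgresql)
    */*) echo "${registry}/docker-hub/${image}" ;;
    # Official images (e.g. nginx) live under the "library/" namespace
    *)   echo "${registry}/docker-hub/library/${image}" ;;
  esac
}

ecr_cache_ref 123456789012.dkr.ecr.us-east-1.amazonaws.com nginx:latest
# → 123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-hub/library/nginx:latest
```

<p>You could then run <code>docker pull "$(ecr_cache_ref <registry> nginx:latest)"</code> instead of spelling out the full URL every time.</p>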
<h2>Setting up the Repository Policy</h2>
<p>I personally use a repository creation template to automatically apply a policy when ECR creates a Docker Hub pull-through cache repository, for example, to keep only the last 6 images.</p>
<ol>
<li><p>Navigate to <strong>ECR &gt; Private Registry &gt; Features &amp; Settings &gt; Repository creation templates &gt; Configure &gt; Create Template.</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/631dd8693e8d6f3497ad63e7/674ea0b8-e6c5-498f-adc9-e22c37cca55d.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>On <strong>Step 1: Define template &gt;</strong> <strong>Template details &gt; Applied for</strong>, select <code>Pull through cache</code>. Because I used a prefix in the previous step, I select <code>A specific prefix</code> and set the prefix there.</p>
</li>
<li><p>On <strong>Step 2: Add repository creation configuration &gt; Repository lifecycle policy,</strong> you can select predefined templates in the <strong>Lifecycle policy examples.</strong></p>
</li>
<li><p>On <strong>Step 4: Review and create</strong>, review your configuration and choose <strong>Create</strong>.</p>
</li>
</ol>
<p>Now, whenever you pull a Docker Hub image through ECR, the lifecycle policy will be applied to the new repository automatically.</p>
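<p>For reference, a “keep only the last 6 images” lifecycle policy looks roughly like the sketch below. This is an assumption based on the standard ECR lifecycle policy format, so compare it with the predefined template before relying on it:</p>

```shell
# Write a sample lifecycle policy that expires everything beyond 6 images
cat > lifecycle-policy.json <<'EOF'
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the last 6 images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 6
      },
      "action": { "type": "expire" }
    }
  ]
}
EOF

# Sanity-check that the policy is well-formed JSON before pasting it anywhere
python3 -m json.tool lifecycle-policy.json > /dev/null && echo "policy OK"
```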
]]></content:encoded></item><item><title><![CDATA[[EN] Track progress of MySQL Import/Export process using PV]]></title><description><![CDATA[The problem
Today, I need to export a MySQL database and then import it to a new server. The database is not too big, just a few GBs. But it was taking so long, and I wonder when this will be finished? By default, when you run mysqldump command to ex...]]></description><link>https://blog.sya.my.id/en-track-progress-of-mysql-importexport-process-using-pv</link><guid isPermaLink="true">https://blog.sya.my.id/en-track-progress-of-mysql-importexport-process-using-pv</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Fri, 21 Nov 2025 19:35:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/jf1EomjlQi0/upload/a7ee07f61c1dc2ad71b4cf2bb4523765.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-the-problem">The problem</h1>
<p>Today, I needed to export a MySQL database and then import it into a new server. The database is not too big, just a few GBs, but it was taking so long that I kept wondering when it would finish. By default, when you run the <code>mysqldump</code> command to export a database and then <code>mysql</code> to import it, neither shows any progress, so I couldn’t tell whether the process was stuck or still running.</p>
<pre><code class="lang-bash">root@server:~<span class="hljs-comment"># mysqldump some_database &gt; backup.sql</span>

<span class="hljs-comment"># Nothing to see here.</span>
<span class="hljs-comment"># The terminal shows nothing until the process is finished.</span>
</code></pre>
<h1 id="heading-the-solution">The solution</h1>
<p>Then I discovered <a target="_blank" href="https://www.ivarch.com/programs/pv.shtml">Pipe Viewer (PV)</a>. According to the website, Pipe Viewer is:</p>
<blockquote>
<p>Pipe Viewer - is a terminal-based tool for monitoring the progress of data through a pipeline and modifying its flow. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion. Data flow rate, error handling strategy, buffer size, and cache interaction can all be adjusted.</p>
</blockquote>
<p>Let’s see how it works:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Install the pv package</span>
root@47be583c9aaf:/<span class="hljs-comment"># apt update &amp;&amp; apt install -y pv</span>

<span class="hljs-comment"># Let's export a database.</span>
<span class="hljs-comment"># Using mysqldump, there’s no "total expected size".</span>
<span class="hljs-comment"># PV can show a progress percentage only when it knows the total size.</span>
<span class="hljs-comment"># I know the DB size. So I tell PV using the "-s" option.</span>
root@localhost:~<span class="hljs-comment"># mysqldump some_database | pv  --size 3G &gt; some_database.sql</span>
36MiB 0:00:09 [65.8MiB/s] [========&gt;                           ] 17% ETA 0:00:42

<span class="hljs-comment"># Now let's import the database</span>
<span class="hljs-comment"># This time PV knows the size of the DB dump, so I don't need the "-s" option.</span>
root@47be583c9aaf:~<span class="hljs-comment"># pv some_database.sql | mysql -p some_database</span>
5.09GiB 0:06:47 [12.8MiB/s] [==================================&gt;] 100%
</code></pre>
<p>Now I don’t have to worry: I can see the transfer speed and when the process will finish.</p>
<p>PV is not exclusive to MySQL-related activity; it can also be used for other use cases, such as monitoring the progress of file compression and decompression, copying a file, and many more.</p>
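<p>For example, outside MySQL the same trick looks like this (file names are illustrative):</p>

```shell
# Watch a plain file copy
pv source.iso > /mnt/backup/source.iso

# Watch compression: pv reports throughput as tar streams the data
tar cf - /var/log | pv | gzip > logs.tar.gz

# Watch decompression of an archive whose size pv can detect on its own
pv archive.tar.gz | tar xzf -
```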
]]></content:encoded></item><item><title><![CDATA[[EN] Lesson learned from using the wrong AWS ElastiCache Redis endpoint]]></title><description><![CDATA[A couple of days ago, I learned the hard way that using the wrong endpoint in AWS ElastiCache for Redis can take your app down. I didn’t pay enough attention to the Primary Endpoint, Reader Endpoint, and the Node’s Endpoint. Here’s what happened.
Cur...]]></description><link>https://blog.sya.my.id/en-lesson-learned-from-using-the-wrong-aws-elasticache-redis-endpoint</link><guid isPermaLink="true">https://blog.sya.my.id/en-lesson-learned-from-using-the-wrong-aws-elasticache-redis-endpoint</guid><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Wed, 13 Aug 2025 03:50:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/emolMCqnKfg/upload/c7eb8197eb9ef632459ae6612b861cc6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A couple of days ago, I learned the hard way that using the <em>wrong</em> endpoint in AWS ElastiCache for Redis can take your app down. I didn’t pay enough attention to the <em>Primary Endpoint, Reader Endpoint,</em> and the <em>Node’s Endpoint</em>. Here’s what happened.</p>
<h1 id="heading-current-setup">Current setup</h1>
<p>I have a Redis OSS instance on AWS ElastiCache configured like this:</p>
<ul>
<li><p><strong>Cluster mode</strong>: Disabled (single shard)</p>
</li>
<li><p><strong>Nodes</strong>: 1</p>
</li>
<li><p><strong>Auto-failover</strong>: Disabled</p>
</li>
<li><p><strong>Multi-AZ</strong>: Disabled</p>
</li>
</ul>
<p>In short, it’s basically a <strong>standalone Redis instance</strong> that is enough for my application. For some reason (my laziness + “it’s working so why change it” + lack of reading documentation), my app was connecting directly to the <strong>Node Endpoint</strong> instead of the <strong>Primary Endpoint</strong>.</p>
<h1 id="heading-a-small-change-started-the-issue">A small change started the issue</h1>
<p>I changed some of the settings, including turning on <strong>Encryption in Transit</strong> (set to <strong>Preferred</strong>), and checked that I was still able to connect to Redis (somehow I could still connect using the old DNS name). Seemed simple enough, right? <strong>No, it’s not.</strong></p>
<h1 id="heading-the-realization">The realization</h1>
<p>What I didn’t realize is that changing this setting <strong>forces AWS to change</strong> the <strong>Node Endpoint DNS</strong>. The AWS <a target="_blank" href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Endpoints.html">documentation here</a> clearly says:</p>
<blockquote>
<p>Unlike the primary endpoint, node endpoints resolve to specific endpoints. If you make a change in your cluster, such as adding or deleting a replica, you must update the node endpoints in your application. There is a difference depending upon whether or not In-Transit encryption is enabled.</p>
</blockquote>
<p><strong>In-transit encryption not enabled</strong></p>
<pre><code class="lang-plaintext">clusterName.xxxxxx.nodeId.regionAndAz.cache.amazonaws.com:port

example: redis-01.7abc2d.0001.usw2.cache.amazonaws.com:6379
</code></pre>
<p><strong>In-transit encryption enabled</strong></p>
<pre><code class="lang-plaintext">master.clusterName.xxxxxx.regionAndAz.cache.amazonaws.com:port

example: master.ncit.ameaqx.use1.cache.amazonaws.com:6379
</code></pre>
<p>The result was predictable: my app was still trying to connect to the <em>old</em> node hostname, but Redis was no longer reachable at that address. And so the downtime occurred.</p>
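<p>A quick way to spot this class of problem is to resolve the endpoints before and after the change. The hostnames below are the examples from the AWS documentation:</p>

```shell
# What does the old node endpoint resolve to? Empty output means it is gone.
dig +short redis-01.7abc2d.0001.usw2.cache.amazonaws.com

# And the new TLS-enabled primary endpoint:
dig +short master.ncit.ameaqx.use1.cache.amazonaws.com
```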
<h1 id="heading-which-endpoints-to-use-with-valkey-or-redis-oss">Which endpoints to use with Valkey or Redis OSS?</h1>
<p>The AWS documentation says:</p>
<blockquote>
<p>For a <strong>standalone node</strong>, use the node's endpoint for both read and write operations.</p>
</blockquote>
<p>AFAIK, your Redis instance is considered standalone if there is no replica, cluster mode is disabled, and encryption in transit is disabled <em>(correct me if I’m wrong)</em>. As you can see here, there is only a reader endpoint, so you'd better connect directly to the node endpoint.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755055634827/3192ab3b-d7c7-45b5-9f3b-c97bfbd79e87.png" alt class="image--center mx-auto" /></p>
<hr />
<blockquote>
<p>For <strong>Valkey or Redis OSS (cluster mode disabled) clusters</strong>, use the Primary Endpoint for all write operations. Use the Reader Endpoint to evenly split incoming connections to the endpoint between all read replicas. Use the individual Node Endpoints for read operations (In the API/CLI these are referred to as Read Endpoints).</p>
</blockquote>
<p>If you enable encryption in transit, AWS will add a Primary Endpoint. AWS suggests using the Primary Endpoint for write operations and the Reader Endpoint, or the individual Node Endpoints, for read operations. But I personally prefer connecting only to the Primary Endpoint for both write and read operations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755056036512/0c93e37b-3758-4665-a28f-3047c07ee2a5.png" alt class="image--center mx-auto" /></p>
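<p>For example, once in-transit encryption is on, a connection test against the Primary Endpoint looks like this (a sketch; the hostname is a placeholder, and your <code>redis-cli</code> must be built with TLS support):</p>

```shell
# redis-cli needs the --tls flag once in-transit encryption is enabled
redis-cli -h master.<clusterName>.<xxxxxx>.<region>.cache.amazonaws.com \
          -p 6379 --tls ping
# A healthy connection answers: PONG
```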
<hr />
<blockquote>
<p>For <strong>Valkey or Redis OSS (cluster mode enabled) clusters</strong>, use the cluster's <em>Configuration Endpoint</em> for all operations that support cluster mode enabled commands. You must use a client that supports either Valkey Cluster, or Redis OSS Cluster on Redis OSS 3.2 and above. You can still read from individual node endpoints (In the API/CLI these are referred to as Read Endpoints).</p>
</blockquote>
<p>I haven’t tried this yet, but I believe you can figure it out yourself.</p>
<h1 id="heading-lesson-learned">Lesson learned</h1>
<ul>
<li><p>Make your client resilient: retry, reconnect, and respect DNS TTLs.</p>
</li>
<li><p>Test thoroughly and pay attention to details before you make changes to the production for real.</p>
</li>
<li><p>Plan disruptive changes (like enabling TLS) with a maintenance window.</p>
</li>
<li><p>Always read the official documentation.</p>
</li>
<li><p>Connect to the appropriate endpoints depending on your use case. I prefer using the Primary Endpoint for both write and read operations. This may mean read operations are not distributed evenly across all replicas, but I don’t have any replicas, and cluster mode is disabled.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[[EN] My experience taking the KCNA certification]]></title><description><![CDATA[Last year, I bought a KCNA + CKA certification voucher while there was a Black Friday event. But I only dared to take the KCNA exam in the middle of this year. I'm the type of person who is well-prepared and wants to understand things thoroughly from...]]></description><link>https://blog.sya.my.id/en-my-experience-taking-the-kcna-certification</link><guid isPermaLink="true">https://blog.sya.my.id/en-my-experience-taking-the-kcna-certification</guid><category><![CDATA[KCNA Exam]]></category><category><![CDATA[Certification]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Thu, 07 Aug 2025 06:45:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KXwPJtAJLfU/upload/3deba4b52e1b8e442179a495944ccb9e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last year, I bought a KCNA + CKA certification voucher while there was a Black Friday event. But I only dared to take the KCNA exam in the middle of this year. I'm the type of person who is well-prepared and wants to understand things thoroughly from the basics (a bit of a perfectionist, too). I finally dared to schedule the exam after I was about 90% confident in what I had studied. Here, I'll share my journey to passing the certification.</p>
<h1 id="heading-what-is-kcna-certification">What is KCNA certification?</h1>
<p>The Kubernetes and Cloud Native Associate (KCNA) certification is an <strong>entry-level</strong> credential designed to validate foundational knowledge of Kubernetes, containers, and the broader cloud native ecosystem. It’s ideal for beginners, offering a solid introduction to core concepts like pods, deployments, container runtimes, and CNCF projects.</p>
<p>People generally take this certification after completing the CKAD or CKA certifications because they already understand the basics, making it easier to pass the KCNA. Or, for those new to Kubernetes, this certification can be a starting point for learning Kubernetes and Cloud Native.</p>
<p>The exam consists of 60 multiple-choice questions that must be completed within 90 minutes. <strong>You will pass if you achieve a score of 75% or above. One retake is included, so don’t worry if you fail once.</strong></p>
<h1 id="heading-my-background">My background</h1>
<p>Maybe you need to know about me first because the tips I share may not be suitable for you. I've worked in IT for the past 7 years. I've worked as a Linux Sysadmin, Network Engineer, SRE, and DevOps. So, I'm quite familiar with the cloud and DevOps worlds. I interact with cloud services like AWS and GCP every day. I use Docker every day and often use Kubernetes, but I don't mess with it much. That way, it didn't take me too long to prepare for this exam.</p>
<h1 id="heading-what-to-learn">What to learn?</h1>
<p>You can start by studying the KCNA curriculum, which can be read at this <a target="_blank" href="https://github.com/cncf/curriculum/blob/master/KCNA_Curriculum.pdf">link</a>. I prioritized these three chapters as essential to study because they account for the largest percentage of the learning:</p>
<ol>
<li><p>Kubernetes Fundamentals - 46% (The most important and the longest I studied).</p>
</li>
<li><p>Container Orchestration - 22% (Kubernetes complementary components).</p>
</li>
<li><p>Cloud Native Architecture - 16% (Mostly related to standards in Cloud Native and getting to know the CNCF organization).</p>
</li>
</ol>
<h1 id="heading-where-to-learn">Where to learn?</h1>
<p>I generally use these study materials:</p>
<ol>
<li><p><a target="_blank" href="https://www.udemy.com/share/10anKe3@WE3KqeCUBQYrYQ79m4G0tU2SNPM_m6vNoE9q86bf11sW5e_3AidRWZxKmYZFGlvt/">KCNA certification</a> + Hands-on Lab + Practice Exam, <strong>Udemy course by James Spurin</strong>. I found the explanations very easy to understand, clear, and concise. There is a quiz after each chapter, so what you have learned doesn’t evaporate quickly. There's a hands-on section to get a better feel for using Kubernetes. Two practice exams and many more quizzes, which I found to be very similar to the actual exam environment. See for yourself.</p>
</li>
<li><p>Becoming KCNA Certified <strong>book by Dmitry Galkin.</strong> If you prefer reading books to watching videos.</p>
</li>
<li><p><a target="_blank" href="https://github.com/edithturn/KCNA-training">KCNA-training repository</a> by edithturn.</p>
</li>
<li><p>Of course, the <a target="_blank" href="https://kubernetes.io/docs/home/">Kubernetes official documentation</a>.</p>
</li>
<li><p>The <a target="_blank" href="https://www.cncf.io/">CNCF website</a> and <a target="_blank" href="https://landscape.cncf.io/">CNCF Landscape</a>.</p>
</li>
</ol>
<h1 id="heading-scheduling-the-exam">Scheduling the exam</h1>
<p>Scheduling an exam is quite straightforward. Be sure to read the <a target="_blank" href="https://helpdesk.psionline.com/hc/en-gb/articles/4409608794260-PSI-secure-browser-and-Chrome-Extension-System-Requirements">System Requirements</a> and <a target="_blank" href="https://docs.linuxfoundation.org/tc-docs/certification/lf-handbook2/candidate-requirements#testing-environment-requirements">Testing Environment Requirements</a>. Failure to meet either of these requirements could prevent you from starting the exam or even result in disqualification.</p>
<p>The links above or the exam rules may become outdated in the future. Make sure to follow the instructions on the registration page. Here are some requirements I pay special attention to:</p>
<ol>
<li><p>Prepare a quiet place where no other people are coming in and out.</p>
</li>
<li><p>The table and its surroundings must be clean of paper, stationery, and other electronic devices such as smartphones.</p>
</li>
<li><p>There must be an active webcam and mic during the exam process.</p>
</li>
<li><p>I recommend using Windows 11 for an easier setup. Use a laptop and have a data plan ready in case of a power outage.</p>
</li>
<li><p>The PSI browser download link will be available after scheduling the exam.</p>
</li>
<li><p>Take the exam simulation at least once to familiarize yourself with the exam conditions.</p>
</li>
<li><p>Be sure to check the exam rescheduling policy.</p>
</li>
</ol>
<h1 id="heading-taking-the-exam">Taking the exam</h1>
<p>I took the exam at midnight (11.30 PM UTC+7). The exam link becomes available 30 minutes before the scheduled time. Be sure to start the check-in as soon as possible, because there is a data verification process that may take a long time. <strong>Make sure your ID card (such as the Indonesian KTP) is within reach.</strong> I joined 15 minutes before the exam started (11.17 PM UTC+7), and I was only able to actually begin the exam about 30 minutes after the scheduled time (00.01 AM UTC+7). This was my setup during the exam:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754547936310/58e90391-42d0-4d91-8881-547be59c9689.jpeg" alt="My setup during the exam" class="image--center mx-auto" /></p>
<p>Pay special attention to these rules:</p>
<ol>
<li><p>I’m allowed to bring a bottle of water to drink. The bottle must be clear of labels or writings.</p>
</li>
<li><p>You communicate with the proctor or support via chat in English.</p>
</li>
<li><p>You can ask for a short break, but you must stay in your seat.</p>
</li>
<li><p>I'm not allowed to wear headphones or other accessories like hats.</p>
</li>
<li><p>Avoid covering your mouth, mumbling, or reading the question aloud when working on questions.</p>
</li>
<li><p>Avoid looking away from the screen.</p>
</li>
</ol>
<h1 id="heading-general-tips">General tips</h1>
<p>These are tips that work for me. I believe these tips also work for other certification exams:</p>
<ol>
<li><p>Ask for prayers from your mother or wife (if married). Treat them well.</p>
</li>
<li><p>Study the 50% - 60% of the material that will help you understand the rest.</p>
</li>
<li><p>Do hands-on exercises to make it easier to understand Kubernetes. Create a small project like deploying WordPress on Kubernetes.</p>
</li>
<li><p>The closer to the exam date, the more I focus on practicing questions.</p>
</li>
<li><p>Don't spend too much time on a single question. Limit it to a maximum of 5 minutes. If you're unsure, you can mark the question and review it later.</p>
</li>
<li><p>Use a mnemonic (“jembatan keledai” in Indonesian) to help remember information.</p>
</li>
<li><p>Don't just read and listen to the material. You can explain (verbally or in writing) what you've learned so you don't forget it quickly.</p>
</li>
<li><p>Use multiple resources. Avoid believing solely in AI. Always verify information.</p>
</li>
<li><p>Don't fear mistakes. Learn from them. There is one question that often bamboozled me. I chose the right answer multiple times when practicing the questions, and I still second-guessed myself during the exam and chose the wrong answer. Yes, I’m still mad about it. But making mistakes is the most effective way to actually learn because it annoys the hell out of you.</p>
</li>
<li><p>The score you get won’t matter and won’t appear on your certificate. What matters is that you passed the exam. I'm saying this so you don't focus too much on getting a perfect score.</p>
</li>
</ol>
<h1 id="heading-final-thoughts">Final Thoughts</h1>
<p>KCNA is a great way to get started with the cloud, Cloud Native, and the Kubernetes world, especially if you want to work in the DevOps field. I’m sure it’s not the hardest exam, but it’s a step toward becoming a more confident and capable engineer who understands Kubernetes fundamentals. This exam isn’t about becoming a Kubernetes guru; it’s about proving you understand the ecosystem, the tools, the culture, and the direction the cloud-native movement is heading.</p>
<p>And don’t think too much about a perfect score. ChatGPT told me this after I made a trivial mistake:</p>
<blockquote>
<p>Don't chase a perfect score too much. A perfect score is sweet, but the real flex is knowing your stuff so well that one tiny misstep doesn’t shake your confidence.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[[EN] How I Stopped Copy-Pasting AWS EC2 IPs and Started SSHing Smarter]]></title><description><![CDATA[Remembering IP in a massive, dynamic environment is not easy. You may have an instance with IP 10.0.0.1 today, but there is no guarantee that the same server will be there tomorrow. If you are still SSH-ing to your server using a traditional method l...]]></description><link>https://blog.sya.my.id/en-how-i-stopped-copy-pasting-aws-ec2-ips-and-started-sshing-smarter</link><guid isPermaLink="true">https://blog.sya.my.id/en-how-i-stopped-copy-pasting-aws-ec2-ips-and-started-sshing-smarter</guid><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Wed, 21 May 2025 16:53:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/DXRP2PKlsFQ/upload/8767e91ce57fa7ff90bca9149c142626.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747819707681/9b61369a-d022-4719-8d38-6a5dfbe7e365.png" alt class="image--center mx-auto" /></p>
<p>Remembering IPs in a massive, dynamic environment is not easy. You may have an instance with IP 10.0.0.1 today, but there is no guarantee that the same server will be there tomorrow. If you are still SSH-ing into your servers the traditional way, like <code>ssh user@ip_address</code>, you’ll have a hard time remembering the IP address of each server. Wouldn't it be easier to just run <code>server_a</code> or <code>server_b</code> to get into that particular server? You just have to know the name of the server, which is easier to remember. Also, the command list is dynamically updated when a server is created or deleted.</p>
<h1 id="heading-tldr">tl;dr</h1>
<ul>
<li><p>Allow the bastion host to read EC2 metadata</p>
</li>
<li><p>Create a Python script to get all the instance names and private IPv4 addresses</p>
</li>
<li><p>Add the command alias to your <code>.bashrc</code> or <code>.zshrc</code> file.</p>
</li>
<li><p>Add cron to run the script automatically and update your alias.</p>
</li>
</ul>
<h1 id="heading-allow-bastion-host-to-get-ec2-instances">Allow Bastion host to get EC2 instances</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Please review the IAM policy to comply with your security standards. Don’t blindly copy and paste.</div>
</div>

<ol>
<li><p>The script will get all the EC2 data using the <code>describe-instances</code> command. Create an AWS IAM policy to allow read access to the instances:</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
   <span class="hljs-attr">"Statement"</span>: [
     {
       <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
       <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"ec2:DescribeInstances"</span>,
       <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
     }
   ]
 }
</code></pre>
</li>
<li><p>Create or modify your existing Bastion IAM role to attach this IAM policy.</p>
</li>
<li><p>Verify the access by running <code>aws ec2 describe-instances</code> inside the Bastion host.</p>
</li>
</ol>
<h1 id="heading-create-a-script-to-generate-the-command-alias">Create a script to generate the command alias</h1>
<p>The idea is to generate this output that can be fed to the <code>.bashrc</code>:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">alias</span> server_a=<span class="hljs-string">"ssh ubuntu@private_ipv4_address"</span>
...
</code></pre>
<p>Here’s the Python code to do that:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="c0078aea83866fda9488ed55d464fb1d"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/roboticpuppies/c0078aea83866fda9488ed55d464fb1d" class="embed-card">https://gist.github.com/roboticpuppies/c0078aea83866fda9488ed55d464fb1d</a></div><p> </p>
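<p>In case the gist doesn’t render, here is a rough sketch of such a generator. This is my own approximation, not the author’s exact script; it assumes boto3 is installed, instances carry a <code>Name</code> tag, and servers use the <code>ubuntu</code> login user:</p>

```python
# aliasgen.py -- a sketch only; the real script lives in the gist above.
# Assumptions (mine, not the author's): boto3, a "Name" tag, the ubuntu user.

def build_aliases(instances, user="ubuntu"):
    """Turn (name, private_ip) pairs into bash alias lines."""
    lines = []
    for name, ip in instances:
        # bash alias names can't contain dashes or spaces, so normalize them
        safe_name = name.replace("-", "_").replace(" ", "_")
        lines.append(f'alias {safe_name}="ssh {user}@{ip}"')
    return "\n".join(lines)


def fetch_instances():
    """Collect (Name tag, private IPv4) pairs for all running instances."""
    import boto3  # imported here so build_aliases stays dependency-free

    ec2 = boto3.client("ec2")
    pairs = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                if "Name" in tags and inst.get("PrivateIpAddress"):
                    pairs.append((tags["Name"], inst["PrivateIpAddress"]))
    return pairs


if __name__ == "__main__":
    print(build_aliases(fetch_instances()))
```

Note that the alias-name normalization matters: a tag like <code>server-a</code> becomes the alias <code>server_a</code>, since dashes are awkward in shell alias names.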
<p>Generate the alias and write it to <code>~/.awsvmaliases</code>:</p>
<pre><code class="lang-bash">python3 aliasgen.py &gt; ~/.awsvmaliases
</code></pre>
<h1 id="heading-automatically-load-the-aliases">Automatically load the aliases</h1>
<p>To automatically load those aliases when you open the terminal, open your <code>.bashrc</code> or <code>.zshrc</code> file and append this line:</p>
<pre><code class="lang-bash">...
<span class="hljs-built_in">source</span> ~/.awsvmaliases
</code></pre>
<h1 id="heading-automatically-update-the-list">Automatically update the list</h1>
<p>In my case, updating the aliases once an hour is sufficient, so I just use cron:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Inside the crontab</span>
@hourly /usr/bin/python3 path/to/aliasgen.py &gt; ~/.awsvmaliases
</code></pre>
<p>If you need to manually update the list without waiting for cron to run, use these commands:</p>
<pre><code class="lang-bash">python3 path/to/aliasgen.py &gt; ~/.awsvmaliases
<span class="hljs-comment"># If you use bash shell, tell bash to reload the .bashrc file and read the changes</span>
<span class="hljs-built_in">source</span> ~/.bashrc
</code></pre>
]]></content:encoded></item><item><title><![CDATA[[EN] Be careful before applying immediate modifications in AWS RDS]]></title><description><![CDATA[tl;dr

"Apply Immediately" applies everything in the pending modifications queue, not just your change.

Always check pending modifications and maintenance tabs first.

Don't accidentally trigger a db-upgrade unless you're ready for downtime.


What ...]]></description><link>https://blog.sya.my.id/en-be-careful-before-applying-immediate-modifications-in-aws-rds</link><guid isPermaLink="true">https://blog.sya.my.id/en-be-careful-before-applying-immediate-modifications-in-aws-rds</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS RDS]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Tue, 20 May 2025 18:44:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Q3WVbAfdOoY/upload/9911c1ad83b198fe03c8430111d00f3d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">tl;dr</h1>
<ul>
<li><p>"Apply Immediately" applies <strong>everything in the pending modifications queue</strong>, not just your change.</p>
</li>
<li><p>Always check pending modifications and maintenance tabs first.</p>
</li>
<li><p>Don't accidentally trigger a <code>db-upgrade</code> unless you're ready for downtime.</p>
</li>
</ul>
<h1 id="heading-what-just-happened">What just happened?</h1>
<p>Today I was preparing for a planned maintenance, just an hour before the scheduled time. We planned to upgrade one of our RDS instances to the latest minor version. Then came the time to change the Parameter Group. It looked like a simple change that shouldn’t cause any downtime, except when you reboot the instance to apply the changes. I had done this several times before, and it had always worked that way.</p>
<p>I checked my modifications again just to make sure I didn’t misclick anything. Auto minor version upgrade was disabled, and the maintenance window wasn’t scheduled for today. I chose “Apply Immediately”, assuming it would apply only my intended modification. Instead, I got unexpected downtime. AWS started showing an “Upgrading” status instead of “Available”, and the database went offline for a few minutes. I tried to keep calm while investigating what had just happened. It turned out my modification had triggered the pending mandatory engine upgrade to run immediately.</p>
<h1 id="heading-so-how-did-that-happen">So, how did that happen?</h1>
<p>When the instance was still in the Upgrading state, I realized there was a required <code>db-upgrade</code> in the Maintenance &amp; Backup tab. It wasn’t scheduled for today, so why did AWS apply it now? It turns out that when you choose to apply a modification immediately, AWS RDS thinks, “Oh, the user is changing something. This looks like a good time to apply any other pending modifications as well,” instead of applying just the one you wanted.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747762624264/eab12de3-9b6c-4742-bb3e-e78d0007fbb7.png" alt class="image--center mx-auto" /></p>
<p>According to the <a target="_blank" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ModifyInstance.ApplyImmediately.html">official AWS documentation</a>:</p>
<blockquote>
<p>If you don't choose to apply changes immediately, RDS puts the changes into the pending modifications queue. During the next maintenance window, RDS applies any pending changes in the queue. If you choose to apply changes immediately, your new changes and any changes in the pending modifications queue are applied.</p>
</blockquote>
<p>So if you have any required maintenance actions queued up, like a minor version upgrade, they will be applied instantly along with your changes, even if you didn’t intend to touch the engine version. And yes, a <code>db-upgrade</code> restarts your instance, which means downtime.</p>
<h1 id="heading-check-before-you-fall-into-the-trap">Check before you fall into the trap</h1>
<p>tl;dr: Don’t Apply Immediately Blindly.</p>
<p>Before you make any changes that need to be applied immediately, <strong>always check the pending queue for that RDS instance!</strong> There may be a pending modification or a pending maintenance action. Here’s how to do it:</p>
<h2 id="heading-from-console">From Console</h2>
<ol>
<li><p>Go to RDS console and find your instance.</p>
</li>
<li><p>Look at the <code>Maintenance</code> column.</p>
</li>
<li><p>Click the “Maintenance &amp; Backups” tab.</p>
</li>
<li><p>Look for any changes listed under “Pending maintenance” and “Pending modifications”.</p>
</li>
</ol>
<h2 id="heading-from-aws-cli">From AWS CLI</h2>
<ol>
<li><p>Look for pending modifications:</p>
<pre><code class="lang-bash"> aws rds describe-db-instances \
   --db-instance-identifier your-db-name \
   --query <span class="hljs-string">"DBInstances[*].PendingModifiedValues"</span>
</code></pre>
</li>
<li><p>Look for your instances in this list of pending maintenance actions:</p>
<pre><code class="lang-bash"> aws rds describe-pending-maintenance-actions --region &lt;your region&gt;
</code></pre>
</li>
</ol>
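<p>The two CLI checks above can be combined into a small guard script that you run before clicking “Apply Immediately”. This is a sketch of my own (the script and function names are hypothetical, not an official AWS tool); it assumes boto3 and working AWS credentials:</p>

```python
# check_pending.py -- hypothetical pre-flight check, not an official AWS tool.

def is_safe_to_apply_now(pending_modified_values, pending_maintenance_actions):
    """True only when both the modification and maintenance queues are empty."""
    return not pending_modified_values and not pending_maintenance_actions


def fetch_queues(db_instance_identifier):
    """Fetch both pending queues for one RDS instance."""
    import boto3  # imported here so the helper above stays dependency-free

    rds = boto3.client("rds")
    db = rds.describe_db_instances(
        DBInstanceIdentifier=db_instance_identifier
    )["DBInstances"][0]
    # describe_pending_maintenance_actions identifies resources by ARN,
    # so match on the ARN suffix
    actions = [
        detail
        for pending in rds.describe_pending_maintenance_actions()[
            "PendingMaintenanceActions"
        ]
        if pending["ResourceIdentifier"].endswith(db_instance_identifier)
        for detail in pending["PendingMaintenanceActionDetails"]
    ]
    return db.get("PendingModifiedValues", {}), actions


if __name__ == "__main__":
    import sys

    pending, actions = fetch_queues(sys.argv[1])
    if is_safe_to_apply_now(pending, actions):
        print("Queues are empty; applying immediately should run only your change.")
    else:
        print("Queued changes found -- applying immediately would run these too:")
        print(pending, actions)
        sys.exit(1)
```

Running it with your instance identifier before an immediate modification would have caught the queued <code>db-upgrade</code> in my case.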
<h1 id="heading-undo-or-cancel-changes">Undo or cancel changes</h1>
<p>As far as I know, there is no way to cancel or defer a required update, or to apply an immediate change without also triggering the changes in the queue. But you can cancel some of the non-required pending modifications by <a target="_blank" href="https://stackoverflow.com/a/59608712/7493146">following this answer</a> from Stack Overflow.</p>
<h1 id="heading-final-thoughts">Final thoughts</h1>
<p>AWS managed services, such as RDS, take a lot of the burden off an engineer’s shoulders. But they still require you to understand how they work; otherwise, you might run into a problem similar to what I experienced. In my opinion, AWS should give the option to choose which modifications will be applied instead of applying all pending modifications at once. Or at least show a list of all the modifications (including those in the queue) that will be made before I click that final “Modify”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747766248582/205c80ca-7926-459a-806f-b4b40062b6cc.png" alt="Show the list here" class="image--center mx-auto" /></p>
<p>Luckily, in my case, it was nighttime and only a few people were using the database, so the incident didn’t cause significant harm.</p>
]]></content:encoded></item><item><title><![CDATA[[ID] GCP Associate Cloud Engineer - Planning and configuring a cloud solution]]></title><description><![CDATA[💡
Read the official exam guide here. These notes are a summary of books and the official documentation from Google. If you have any suggestions or corrections, just leave a comment.


GCP Cloud Services
Compute

💡
Objective: Understand the differences between Compute Engine, Kubern...]]></description><link>https://blog.sya.my.id/id-gcp-associate-cloud-engineer-planning-and-configuring-a-cloud-solution</link><guid isPermaLink="true">https://blog.sya.my.id/id-gcp-associate-cloud-engineer-planning-and-configuring-a-cloud-solution</guid><category><![CDATA[google cloud]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Mon, 19 May 2025 18:01:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9AqIdzEc9pY/upload/023ea0f360d70c86bc0c09ca6db181a3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Read the official exam guide <a target="_self" href="https://services.google.com/fh/files/misc/associate_cloud_engineer_exam_guide_english.pdf">here</a>. These notes are a summary of books and the official documentation from Google. If you have any suggestions or corrections, just leave a comment.</div>
</div>

<h1 id="heading-gcp-cloud-services">GCP Cloud Services</h1>
<h2 id="heading-compute">Compute</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Objective:</strong> Understand the differences between Compute Engine, Kubernetes Engine, App Engine, and Cloud Functions. Know which service fits best in a given scenario.</div>
</div>

<ul>
<li><p><strong>Compute Engine:</strong> <mark>This is GCP’s VM service, which falls under the IaaS category. A good fit when we want full control over the resource.</mark> For example, how big the resources are, which CPU to use, which OS to pick, what gets installed, when to back up, when to upgrade, and other such activities. <em><mark>The downside</mark></em><mark> is that because the service is fairly low-level, we need to understand server administration</mark>.</p>
<p>  Another advantage is the <mark>Spot VM option, which gives a 60-91% discount</mark> off the regular VM price. The consequence is that GCP can stop or delete that VM at any time. <mark>A good fit for </mark> <em><mark>fault-tolerant workloads </mark></em> <mark>and</mark> <em><mark>batch processing</mark></em> that may only need to run for a short time.</p>
</li>
<li><p><strong>Kubernetes Engine:</strong> <mark>Used to run many containers on top of a Kubernetes cluster that spans many VMs.</mark> GCP handles resource provisioning, node autoscaling, health checks, and replacing unhealthy nodes automatically. In short, maintaining a Kubernetes cluster is easier with GKE than managing it directly via Compute Engine. Behind the scenes, GKE actually uses Compute Engine for its worker nodes.</p>
</li>
<li><p><strong>App Engine:</strong> <mark>This is GCP’s PaaS offering. App Engine makes it easy for developers to run their applications without the hassle of managing servers.</mark> It still uses Compute Engine behind the scenes, but server administration, configuration, and network hardening are handled by GCP, so the pricing is not far from Compute Engine’s. <em>AFAIK</em>, the <strong>rough</strong> concept is like Heroku or Vercel, which are also PaaS, but a more powerful version. Or similar to AWS Elastic Beanstalk. There are 2 types of App Engine:</p>
<ul>
<li><p><a target="_blank" href="https://cloud.google.com/appengine/docs/standard/">Standard Environment</a>: <mark>The app we build runs inside a </mark> <em><mark>language-specific container</mark></em> <mark>supported by GCP, such as Java, Go, Node.js, PHP, Python, and Ruby.</mark> If we need something beyond those, we can choose the Flexible environment, which supports custom runtimes. The Standard Environment is a good fit when our app doesn’t need extra OS packages or additional software to run.</p>
</li>
<li><p><a target="_blank" href="https://cloud.google.com/appengine/docs/flexible">Flexible Environment</a>: Unlike the standard environment, which has a <em>language constraint</em>, the Flexible environment can run Python, Java, Node.js, Go, Ruby, PHP, .NET, and <em><mark>any software that can service HTTP requests (custom runtime)</mark></em>. You can specify a custom Dockerfile too.</p>
</li>
</ul>
</li>
<li><p><strong>Cloud Function:</strong> <mark>A serverless service for running a </mark> <em><mark>function</mark></em> <mark> that does one thing. A good fit for </mark> <em><mark>event-driven processing, lightweight computing, </mark></em> <mark>or </mark> <em><mark>executing short-running code</mark></em><mark>.</mark> <em>In my opinion</em>, it’s similar to AWS Lambda. One Cloud Function use case: when an object is uploaded to Cloud Storage, a Cloud Function is invoked to do something with it, for example generating a thumbnail, processing that object, and so on. Functions support Node.js, Python, Go, Java, Ruby, PHP, and .NET.</p>
</li>
</ul>
<h2 id="heading-storage">Storage</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Objective: </strong>Understand the various storage services, the difference between file and object storage, and when each service is the right one to use.</div>
</div>

<p><strong><mark>Block Storage</mark></strong> <mark> is raw storage, like a blank </mark> <em><mark>hard drive</mark></em> <mark> in a computer that doesn’t have a </mark><em><mark>filesystem</mark></em><mark> yet. It still needs a little preparation before it can be used.</mark> Because it’s raw, we’re free to choose the <em>filesystem</em> (e.g., ext4, xfs, etc.), how it’s set up (e.g., partitions, size, etc.), and what it’s used for (e.g., installing an OS, backup storage, etc.).</p>
<p><strong><mark>Object Storage</mark></strong> <mark> is a file storage service that’s ready to use.</mark> It’s most often used to store files uploaded from an application, whether mobile, desktop, or web. Here, a file is called an <em>Object</em>. Every <em>Object</em> is collected in a <em>Bucket</em> and <mark>can be accessed through a unique URL.</mark> The service is serverless, so <mark>there’s no server or storage software to manage.</mark> Object Storage can be accessed from many VMs.</p>
<p><strong><mark>File Storage</mark></strong> <mark> is a </mark> <em><mark>network shared file system</mark></em> service (sorry, it’s a bit hard to explain simply). It uses the NFS protocol to access the files. File Storage is also ready to use; just make sure your system <em>supports NFS</em>. This storage type can be accessed by many VMs at the same time.</p>
<ul>
<li><p><strong>Cloud Storage (GCS):</strong> <mark>GCP’s Object Storage service, like the S3 service on AWS.</mark> Access to files and buckets can be managed through IAM. <mark>Cloud Storage is a good fit for storing all kinds of files, such as images, videos, documents, and so on.</mark> An example use case: a company can use it to store the photos and videos its users upload through the company’s app.</p>
</li>
<li><p><strong>Persistent Disk (PD):</strong> <mark>GCP’s Block Storage service.</mark> There are SSD and HDD options. Interestingly, a PD can be <a target="_blank" href="https://cloud.google.com/compute/docs/disks/sharing-disks-between-vms">attached to many VMs</a> at once. The maximum PD capacity is 64TB. It can be resized without downtime, as shown in <a target="_blank" href="https://blog.servercare.id/on-line-storage-resizing-in-cloud-instances">this article</a>.</p>
</li>
<li><p><strong>Cloud Storage for Firebase:</strong> Object storage <mark>specifically for applications integrated with Firebase. A good fit for uploads and downloads in mobile apps on unreliable networks</mark> because it has a special <em>recovery</em> mechanism. It actually uses GCS behind the scenes, and it’s already integrated with Firebase Auth.</p>
</li>
<li><p><strong>Cloud Filestore:</strong> <mark>A File storage service that can be accessed (read-write) by many VMs at once over the NFS protocol</mark>. The maximum Filestore capacity is 100 TB, with 25 GB/s throughput and 920K IOPS.</p>
</li>
</ul>
<h2 id="heading-database">Database</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Databases are generally divided into Relational and Non-Relational (called NoSQL). GCP provides many serverless database services, such as Datastore, Bigtable, Firestore, and Memorystore.</div>
</div>

<ul>
<li><p><strong>Cloud SQL:</strong> <mark>GCP’s RDBMS service. Supports MySQL, PostgreSQL, and SQL Server</mark>. It has many features for managing an RDBMS, such as backups, patching, network configuration, monitoring, replication, failover, and more. A good fit for making RDBMS management easier compared to running it on a plain VM.</p>
</li>
<li><p><strong>Cloud Spanner:</strong> An RDBMS service, but <mark>globally scalable and supporting the ANSI 2011 SQL standard</mark>.</p>
</li>
<li><p><strong>Cloud Bigtable:</strong> <mark>A NoSQL </mark> <em><mark>wide-column database</mark></em> service designed for managing large-scale databases. In GCP ACE practice questions, it’s typically used to ingest and store data from hundreds of IoT sensors with lots of small write activity. It can also be used for real-time analytics.</p>
</li>
<li><p><strong>Cloud Datastore:</strong> <mark>A NoSQL </mark> <em><mark>document database</mark></em><mark> service.</mark> It automatically handles <em>sharding</em> and <em>replication</em>, and supports <em>transactions, indexes</em>, and SQL-like queries. Some use cases are a product catalog that displays product details in real time, and user profiles in an app that are <em>personally customized</em> based on the user’s activity and preferences. <mark>Datastore differs from Bigtable in that Bigtable is a better fit for lots of concurrent read/write activity with low latency</mark>.</p>
</li>
<li><p><strong>Cloud Firestore:</strong> <mark>The next generation of Datastore</mark>, and Google recommends using it instead (don’t confuse it with Filestore!). Firestore is more modern and scalable than Datastore, and it’s compatible with MongoDB.</p>
</li>
<li><p><strong>Cloud Memorystore:</strong> <mark>A serverless, fully managed in-memory database service for Redis, Valkey, and Memcached</mark>. The data is kept in memory (like RAM), so it can be accessed much faster than from a hard drive. A good fit for caching frequently accessed data so it doesn’t burden the database server. Fully managed means that scalability, high availability, patching, and failover are handled by GCP itself.</p>
</li>
</ul>
<h2 id="heading-networking">Networking</h2>
<ul>
<li><p><strong>Virtual Private Cloud (VPC):</strong> A networking service that <mark>lets us design the network</mark> of our own data center, such as IP allocation, subnetting, firewalls, and so on.</p>
</li>
<li><p><strong>Cloud Load Balancing:</strong> <mark>Distributes workloads within a region or globally across regions</mark>. It also provides autoscaling and auto-repair when a server has problems. The LB supports the HTTP, HTTPS, TCP, and UDP protocols.</p>
</li>
<li><p><strong>Cloud Armor:</strong> A network security service providing <mark>DDoS protection and a WAF</mark>.</p>
</li>
<li><p><strong>Cloud CDN:</strong> <mark>A Content Delivery Network</mark> (CDN) service that speeds up access to content from anywhere.</p>
</li>
<li><p><strong>Cloud Interconnect:</strong> <mark>A service for connecting our network directly to GCP’s data centers.</mark> It’s used when we have our own data center and want a physical (cabled) connection to a GCP data center; this is called Dedicated Interconnect. If a physical connection isn’t possible, choose Partner Interconnect, where an intermediary service provider connects us to GCP. <mark>Interconnect is a good fit when we need a </mark> <em><mark>low-latency &amp; high-bandwidth </mark></em> <mark>connection to resources in GCP</mark>.</p>
</li>
<li><p><strong>Cloud DNS:</strong> It’s a DNS service, like Route 53 on AWS. It lets us manage the DNS records of the domains we own.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[[EN] Building a secure container image]]></title><description><![CDATA[To ensure the security of an application, we not only have to keep the code safe but also keep it safe when storing and distributing it. I learned some of the best practices to do that and I’ll share them here.
This Dockerfile Works... But Should Y...]]></description><link>https://blog.sya.my.id/en-building-a-secure-container-image</link><guid isPermaLink="true">https://blog.sya.my.id/en-building-a-secure-container-image</guid><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Fri, 09 May 2025 15:32:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/yx20mpDyr2I/upload/40e6866f72de27ec15cc7bbcee694443.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To ensure the security of an application, we not only have to keep the code safe but also keep it safe when storing and distributing it. I learned some of the best practices to do that and I’ll share them here.</p>
<h1 id="heading-this-dockerfile-works-but-should-you-use-it">This Dockerfile Works... But Should You Use It?</h1>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Mistake #1: Using a single stage (no build separation)</span>
<span class="hljs-comment"># This makes the image larger than necessary</span>
<span class="hljs-comment"># and enlarges the attack surface, as the final image includes the build tools</span>
<span class="hljs-comment"># and source code, which are not needed for the final image</span>
<span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>-alpine
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Mistake #2: Using ENV instead of ARG and Hardcoded sensitive information</span>
<span class="hljs-comment"># This is a security risk, as ENV variables are stored in the image</span>
<span class="hljs-comment"># and can be accessed by anyone with access to the image</span>
<span class="hljs-comment"># Fortunately, Docker can detect hardcoded sensitive information</span>
<span class="hljs-comment"># and will warn you about it.</span>
<span class="hljs-keyword">ENV</span> DBPASSWORD=acompletelyinsecurepassword
<span class="hljs-keyword">ENV</span> DBNAME=postgres
<span class="hljs-keyword">ENV</span> DBUSER=postgres

<span class="hljs-comment"># Mistake #3: Not cleaning up unnecessary files</span>
<span class="hljs-comment"># Copies everything, including .git, .env, and dev files</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-comment"># There is a good discussion about CGO_ENABLED=0</span>
<span class="hljs-comment"># https://www.reddit.com/r/golang/comments/pi97sp/what_is_the_consequence_of_using_cgo_enabled0/</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go mod tidy &amp;&amp; CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server .</span>

<span class="hljs-comment"># Mistake #4: Running the server as root or not specifying a user</span>
<span class="hljs-comment"># This is a security risk, as the server has full access to the system</span>
<span class="hljs-comment"># and can be used to escalate privileges</span>
<span class="hljs-comment"># In this example the default user for golang:1.21-alpine is root, which is a bad practice and should be avoided</span>
<span class="hljs-comment"># You can check by running the following command:</span>
<span class="hljs-comment"># docker run -it --rm golang:1.21-alpine whoami</span>
<span class="hljs-keyword">USER</span> root

<span class="hljs-comment"># Mistake #5: Exposing unnecessary ports</span>
<span class="hljs-comment"># Exposes SSH and an additional port, which are not needed for the server</span>
<span class="hljs-comment"># and can be used to attack the system</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">22</span> <span class="hljs-number">8080</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"./server"</span>]</span>
</code></pre>
<h1 id="heading-lets-fix-this-dockerfile">Let’s Fix This Dockerfile</h1>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Correction #1: Use multi-stage builds to reduce the image size</span>
<span class="hljs-comment"># and reduce the attack surface</span>
<span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>-alpine AS build
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Correction #2: Use .dockerignore to exclude unnecessary files</span>
<span class="hljs-comment"># The copy command here is okay to do because when using multi-stage builds</span>
<span class="hljs-comment"># the final image will only contain the binary file and not the source code</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go mod tidy &amp;&amp; CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server .</span>

<span class="hljs-comment"># Correction #3: Use smaller base image as the final image</span>
<span class="hljs-comment"># You can use Alpine, Scratch, Ubuntu, or any other small base image depending on your needs</span>
<span class="hljs-comment"># You can also use Distroless images for more security</span>
<span class="hljs-comment"># Scratch and Distroless images do not include a shell, a package manager, or other programs found in typical Linux distributions.</span>
<span class="hljs-comment"># So if a hacker gains access to the container, they won't be able to do much</span>
<span class="hljs-comment"># However, those images are not recommended for development because they are harder to debug</span>
<span class="hljs-comment"># since you can't exec into the container and run commands.</span>
<span class="hljs-comment"># They also require more technical knowledge to use.</span>
<span class="hljs-comment"># So to balance between security and ease of use, I'll use Alpine here</span>
<span class="hljs-comment"># https://medium.com/google-cloud/alpine-distroless-or-scratch-caac35250e0b</span>
<span class="hljs-keyword">FROM</span> alpine:<span class="hljs-number">3.21</span>.<span class="hljs-number">3</span> AS final

<span class="hljs-comment"># Correction #4: Use ARG instead of ENV to pass build-time variables</span>
<span class="hljs-comment"># ARG values are only available during the build stage and are not stored</span>
<span class="hljs-comment"># as environment variables in the final image. Note that values used in RUN</span>
<span class="hljs-comment"># instructions can still leak into the image history, so for truly sensitive</span>
<span class="hljs-comment"># data prefer build secrets (docker build --secret / RUN --mount=type=secret)</span>
<span class="hljs-keyword">ARG</span> DBPASSWORD
<span class="hljs-keyword">ARG</span> DBNAME
<span class="hljs-keyword">ARG</span> DBUSER

<span class="hljs-comment"># Correction #5: Use non-root user</span>
<span class="hljs-comment"># However, you need to make sure that the user has the necessary permissions to run the application</span>
<span class="hljs-keyword">USER</span> nobody:nogroup

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-comment"># The COPY command here is to copy the binary file from the build stage</span>
<span class="hljs-comment"># to the final image</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=build --chown=nobody:nogroup /app/server .</span>

<span class="hljs-comment"># Correction #6: Only expose the necessary port</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>

<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"/app/server"</span>]</span>
</code></pre>
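<p>Correction #2 mentions a <code>.dockerignore</code> file without showing one. A minimal example (the entries below are common choices of mine, not from the original post — adjust them to your project):</p>

```
# .dockerignore -- keep VCS history, secrets, and dev files out of the build context
.git
.env
*.md
Dockerfile
.dockerignore
```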
<p>By making the image secure, you usually end up with a smaller image as well. From what I’ve tried, here is the result:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Image</td><td>Size</td></tr>
</thead>
<tbody>
<tr>
<td>Insecure</td><td>292MB</td></tr>
<tr>
<td>Secure + Distroless Non-root</td><td>27MB</td></tr>
<tr>
<td>Secure + Alpine</td><td>14.6MB</td></tr>
<tr>
<td>Secure + Scratch</td><td>6.72MB</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[[EN] On-line storage resizing in cloud instances]]></title><description><![CDATA[Let’s say you’re using VMs in AWS, GCP, or any other cloud provider and running out of disk storage. Normally you can stop the instance, increase the storage, and then start it again to resize the disk. But that would cause a downtime. In my experien...]]></description><link>https://blog.sya.my.id/on-line-storage-resizing-in-cloud-instances</link><guid isPermaLink="true">https://blog.sya.my.id/on-line-storage-resizing-in-cloud-instances</guid><category><![CDATA[Linux]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Fri, 09 May 2025 09:52:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1qL31aacAPA/upload/c76fd3afa4463ff31288b16339d16751.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s say you’re using VMs in AWS, GCP, or any other cloud provider and running out of disk storage. Normally you can stop the instance, increase the storage, and then start it again to resize the disk. But that would cause a downtime. In my experience, the easiest way to increase the storage without stopping the VM is using the <code>growpart</code> to resize the partition and <code>resize2fs</code> to resize the filesystem.</p>
<h1 id="heading-increase-the-block-storage">Increase the block storage</h1>
<p>In this case, my Ubuntu VM is running on AWS with 8GB of storage, and I need to increase it to 12GB. To increase the volume size:</p>
<ol>
<li><p>Go to AWS EC2 Dashboard.</p>
</li>
<li><p>Click <strong>Volumes</strong> and find your volume that you want to increase.</p>
</li>
<li><p>Right click the volume name and choose <strong>Modify Volume</strong>.</p>
</li>
<li><p>Increase the volume size in the <strong>Size (GB)</strong> field.</p>
</li>
</ol>
<h1 id="heading-resize-the-partition">Resize the partition</h1>
<p>SSH into the server and run the following commands.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Check the commands before copying and running anything.</div>
</div>

<pre><code class="lang-bash"><span class="hljs-comment"># Let's inspect the block devices first</span>
<span class="hljs-comment"># Here I have nvme0n1 disk.</span>
<span class="hljs-comment"># The disk size was 8GB and increased to 12GB.</span>
<span class="hljs-comment"># Then I want the nvme0n1p1 (first partition) to use the remaining free space (4GB)</span>
ubuntu@i-00cfd0679ab3871a3:~$ lsblk 
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
nvme0n1      259:0    0   12G  0 disk 
├─nvme0n1p1  259:1    0    7G  0 part /
├─nvme0n1p15 259:2    0   99M  0 part /boot/efi
└─nvme0n1p16 259:3    0  923M  0 part /boot

<span class="hljs-comment"># Let's increase it using growpart</span>
<span class="hljs-comment"># https://access.redhat.com/solutions/5540131</span>
ubuntu@i-00cfd0679ab3871a3:~$ sudo growpart /dev/nvme0n1 1
CHANGED: partition=1 start=2099200 old: size=14677983 end=16777182 new: size=23066591 end=25165790

<span class="hljs-comment"># Check the partition again</span>
<span class="hljs-comment"># Make sure the nvme0n1p1 size is increased</span>
ubuntu@i-00cfd0679ab3871a3:~$ lsblk 
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
nvme0n1      259:0    0   12G  0 disk 
├─nvme0n1p1  259:1    0   11G  0 part /
├─nvme0n1p15 259:2    0   99M  0 part /boot/efi
└─nvme0n1p16 259:3    0  923M  0 part /boot
</code></pre>
<p><code>growpart</code> is a utility for extending partitions, designed for cloud environments. It usually comes preinstalled in cloud OS images as part of the <code>cloud-guest-utils</code> package.</p>
<h1 id="heading-resize-the-filesystem">Resize the filesystem</h1>
<p>After the partition has been resized, let’s resize the filesystem so we can actually use the new space. Note that <code>resize2fs</code> works on ext2/ext3/ext4 filesystems; for XFS, you would use <code>xfs_growfs</code> instead.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Resize the FS</span>
ubuntu@i-00cfd0679ab3871a3:~$ sudo resize2fs /dev/nvme0n1p1
...
Filesystem at /dev/nvme0n1p1 is mounted on /; on-line resizing required
The filesystem on /dev/nvme0n1p1 is now 2883323 (4k) blocks long.

<span class="hljs-comment"># Verify the filesystem size again</span>
<span class="hljs-comment"># Now the FS size is matched with the partition size</span>
ubuntu@i-00cfd0679ab3871a3:~$ df -Th
Filesystem      Type      Size  Used Avail Use% Mounted on
/dev/root       ext4       11G  1.7G  8.9G  17% /
</code></pre>
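<p>Putting both steps together, here’s a small sketch of the whole flow as a script. The device names are the ones from this post (adjust them for your VM; nvme partitions use the <code>p</code> separator, while <code>/dev/sda</code> would just be <code>/dev/sda1</code>), and it only prints the commands unless you set <code>RUN=1</code>:</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# Grow a partition and its ext4 filesystem on-line.
# Dry run by default: set RUN=1 to actually execute the commands.
set -euo pipefail

DISK="${DISK:-/dev/nvme0n1}"        # whole disk
PART_NUM="${PART_NUM:-1}"           # partition number on that disk
PART="${PART:-${DISK}p${PART_NUM}}" # the partition device itself

run() {
  if [ "${RUN:-0}" = "1" ]; then
    sudo "$@"
  else
    echo "DRY RUN: sudo $*"
  fi
}

run growpart "$DISK" "$PART_NUM"    # grow the partition table entry
run resize2fs "$PART"               # grow the ext4 filesystem to match
</code></pre>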
]]></content:encoded></item><item><title><![CDATA[[EN] Create, mount, and make persistent a new ext4 filesystem on a new disk]]></title><description><![CDATA[When managing a Linux server, sometimes you need to add new partitions, e.g, directory to store your backup in a separate physical disk, or new place to put your files. In this article, I’ll create a new ext4 partition with the mkfs.ext4 command.
Pre...]]></description><link>https://blog.sya.my.id/create-mount-and-make-persistent-a-new-ext4-filesystem-on-a-new-disk</link><guid isPermaLink="true">https://blog.sya.my.id/create-mount-and-make-persistent-a-new-ext4-filesystem-on-a-new-disk</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mochammad Syaifuddin]]></dc:creator><pubDate>Thu, 08 May 2025 18:05:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/HhTfeSKz4xQ/upload/811a33114e2ca04eaf91b2af685a77e6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When managing a Linux server, sometimes you need to add new partitions, e.g., a directory to store your backups on a separate physical disk, or a new place to put your files. In this article, I’ll create a new ext4 filesystem with the <code>mkfs.ext4</code> command.</p>
<h1 id="heading-preparing-the-partition">Preparing the partition</h1>
<p>Here I have a new block device <code>/dev/sdb</code> of 5GB and will create one partition on it.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># List all block devices</span>
vagrant@vagrant:~$ lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   64G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0   62G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   31G  0 lvm  /
sdb                         8:16   0    5G  0 disk 

<span class="hljs-comment"># Create a new partition</span>
<span class="hljs-comment"># (interactive: press n for a new partition, accept the defaults</span>
<span class="hljs-comment"># to use the whole disk, then w to write the table and exit)</span>
vagrant@vagrant:~$ sudo fdisk /dev/sdb

<span class="hljs-comment"># Let's see all the block devices again</span>
<span class="hljs-comment"># There should be /dev/sdb1 if that's the only partition</span>
vagrant@vagrant:~$ lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
...
sdb                         8:16   0    5G  0 disk 
└─sdb1                      8:17   0    5G  0 part
</code></pre>
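<p>The <code>fdisk</code> session above is interactive. If you’d rather script this step, <code>sfdisk</code> can do the same non-interactively. As a safe way to experiment, the sketch below partitions a throwaway image file instead of a real disk; point it at <code>/dev/sdb</code> (with <code>sudo</code>) only when you actually mean to repartition it:</p>
<pre><code class="lang-bash"># Create a small image file as a stand-in for the 5GB /dev/sdb
truncate -s 100M disk.img

# One Linux partition (MBR type 83) spanning the whole "disk"
echo 'type=83' | sfdisk disk.img

# Dump the resulting partition table to verify
sfdisk -d disk.img
</code></pre>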
<h1 id="heading-creating-the-filesystem">Creating the filesystem</h1>
<p>Then let’s create an ext4 filesystem on top of that new partition.</p>
<pre><code class="lang-bash">vagrant@vagrant:~$ sudo mkfs.ext4 /dev/sdb1 
...
Writing superblocks and filesystem accounting information: <span class="hljs-keyword">done</span> 

<span class="hljs-comment"># Let's create a /backup directory and try to mount that FS onto that</span>
vagrant@vagrant:~$ sudo mkdir /backup
vagrant@vagrant:~$ sudo mount /dev/sdb1 /backup/

<span class="hljs-comment"># Check what's inside the new filesystem. It only contains the lost+found directory created by mkfs</span>
vagrant@vagrant:~$ ls /backup/
lost+found

<span class="hljs-comment"># Check the FS type, size, and the mount</span>
vagrant@vagrant:~$ df -Th
Filesystem                        Type    Size  Used Avail Use% Mounted on
...
/dev/sdb1                         ext4    4.9G   24K  4.6G   1% /backup
</code></pre>
<h1 id="heading-making-it-persistent-between-reboots">Making it persistent between reboots</h1>
<p>At this point, if you reboot the server, the partition won’t be mounted on <code>/backup</code> automatically. You need to make the mount persistent by adding a new entry to the <code>/etc/fstab</code> file.</p>
<pre><code class="lang-bash">vagrant@vagrant:~$ sudo nano /etc/fstab
...
<span class="hljs-comment"># Let's mount /dev/sdb1 to /backup using the ext4 filesystem with standard options</span>
<span class="hljs-comment"># and don’t worry about backing it up or checking it during boot (it's the 0 0 at the end).</span>
<span class="hljs-comment"># Read more: https://www.redhat.com/en/blog/etc-fstab</span>
/dev/sdb1 /backup ext4 defaults 0 0
</code></pre>
<p>Now you can safely reboot without worrying that the mount will disappear.</p>
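<p>One caveat: device names like <code>/dev/sdb1</code> are not guaranteed to stay stable across reboots (for example, when disks are added or detected in a different order). A more robust variant is to reference the filesystem by UUID in <code>/etc/fstab</code>. The UUID below is a made-up example; use whatever <code>blkid</code> prints for your partition:</p>
<pre><code class="lang-bash"># Find the UUID of the new filesystem (example UUID shown)
vagrant@vagrant:~$ sudo blkid /dev/sdb1
/dev/sdb1: UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" TYPE="ext4" ...

# Then reference it in /etc/fstab instead of the device name
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /backup ext4 defaults 0 0
</code></pre>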
]]></content:encoded></item></channel></rss>