Author: Berkhan Berkdemir

  • Moving from Docker to Podman for local development

    Previously, I wrote a short article about how I work without docker-compose. While it’s somewhat crucial software for my workload, I realized I could achieve the basic functionality with less code in my environment.

    The old article only used a PostgreSQL container in the system, which was fairly boring. Now that I’ve been using more Jupyter notebooks, I wanted to share this snippet with you. This code provides the bare minimum functionality of docker-compose, but it’s simple to run and work with.

    VAGRANTFILE_API_VERSION = "2" if not defined? VAGRANTFILE_API_VERSION
    
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      config.vm.box = "debian/testing64"
    
      config.vm.network :private_network, ip: "192.168.33.12"
    
      config.vm.provider :libvirt do |guest|
        guest.memory = 2048
      end
    
      config.vm.provision :shell, inline: <<-SHELL
        apt-get update
        apt-get install -y podman zstd
    
        podman network create potato_network
      SHELL
    
      config.vm.provision :podman do |container|
        container.run "postgres", image: "docker.io/library/postgres:16", args: %w[
          --env POSTGRES_USER=potato
          --env POSTGRES_PASSWORD=potato
          --env POSTGRES_DB=potato_development
          --network potato_network
        ].join(" ")
    
        container.run "jupyter", image: "quay.io/jupyter/scipy-notebook:python-3.11", args: %w[
          --volume /vagrant:/home/jovyan/work
          --publish 10000:8888
          --network potato_network
        ].join(" ")
      end
    end

    I should add a warning: this script will not run properly the second time you provision. This is mainly because of how Podman handles network adapter creation. You could write an additional if statement with podman network exists, but I’ve found that simply running vagrant provision --provision-with podman is much easier for my workload.
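    For reference, the check mentioned above fits in one line. A minimal sketch, assuming a Podman version that provides `podman network exists` (recent releases do); the `ensure_network` helper name is mine:

```shell
# Create the network only if it is missing: `podman network exists`
# returns 0 when the named network is present, so `create` only runs
# on the first provision and re-provisioning becomes idempotent.
ensure_network() {
  podman network exists "$1" || podman network create "$1"
}
```

    Replacing the bare `podman network create potato_network` in the shell provisioner with this pattern would make repeated provisioning safe.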

    I also set a private IP address for this virtual machine that libvirt manages on the Debian host.

  • Reducing the amount of HN intake this year


    It’s been three months since I stopped using Hacker News daily, and in this post you will find my reasoning as well as my experience during that process. I should warn you that this experiment is not about getting rid of one of the most influential social media/news outlets for a tech person.

    I am not a fan of the post-2010 social media concept, mainly because 2010 was when all these platforms started to evolve into something else, which affected the quality of the content on each platform. As a heads-up: you will find this post to be my personal experience and nothing more.

    The communities on Hacker News and Lobste.rs are interesting and diverse enough that I can see different opinions in the comment sections. I skip comments that are politically charged or carry strong opinions, but many of these people share their ideas, and I love reading them.

    Not long ago, I noticed I had started opening each link without thinking about the content. Picture scrolling through your social media feed without any further thought involved. We might debate how a social media feed should be consumed, but that style doesn’t match my own knowledge-consumption ideology. That was the moment I decided to reduce my usage of these sources.

    As you can see, this is more of a habit issue than a problem with what the HN community is made of. So, since the first day of the year, I have limited my HN intake.

    You might ask how I am doing, or what alternative methods I use as a replacement. I have been collecting RSS links for the past 8 years, and I have enough of them that I had to build an RSS instance 4 years ago. Currently, I manage text content with Miniflux and podcasts with AntennaPod.

    I should add that the amount of content in my Miniflux is not enough, which is why I also have podcast software on my phone; but I found that I don’t actually have time to listen to podcasts, even at 1.75x speed. This is why I also use Hacker News’s Best list.

    This list is a pretty good summary of the last 3 to 4 days, and if I really want to go further back, I can see another 3 to 4 days. With this method I still come across interesting bloggers I want to follow, and I add them to my Miniflux instance.

    To sum up this experiment: the amount of tech news I get from these platforms and resources is approaching my former 45 minutes of daily Hacker News consumption; moreover, I now hear about these news items and posts from the source. I should note that I couldn’t find any alternative to the HN discussions for my RSS feeds.

  • ThinkStation S30 Upgrades in 2022

    It’s been a while since I last used my ThinkStation S30. You might already know that I mentioned it in an [article]. It was sitting in storage for about 2 years, and now I am here to upgrade some of its parts.

    I particularly like this computer because its components are cheap and easy to find. Maybe this is a known fact for the whole Think series.

    Here is the list of things I planned to upgrade on the workstation:

    1. An Intel E5-2670, or another CPU with a TDP below 115 watts
    2. An RX 470-equivalent GPU
    3. A Wi-Fi card; the technology doesn’t matter
    4. Around 200 to 500 GB of SSD storage

    From the first build, I removed the AMD Radeon RX 470 graphics card from the system; in its place I put the Quadro 400, which had been installed before the RX 470.

    There is, or was, a problem. The workstation is quiet, the way I like it, and on hot California days that means the hardware heats up easily. So I had to come up with an idea: leave the side panel open. That should work.

    The workstation comes with a 610-watt 80+ Gold certified power supply that has been in it since 2013, and the stock heatsink, which is rated for CPUs up to 135 watts. A side note about the previous CPU, the E5-2609, may be needed here: it is fairly low-end (but still expensive) and draws only 80 watts. That means I will definitely need additional cooling options, or a hack, for long-term usage.

    CPU Upgrade

    Two years ago I said I would upgrade the CPU for $40. OK, I will admit it: I did not expect to buy the CPU for $6.75 plus tax. This is the cheapest part I have ever bought for something that used to cost around $1,500… I am talking about the Intel E5-2670.

    Going from 4C/4T to 8C/16T, I would expect at least a 1.5 to 2 times improvement on my regular tasks.

    Because my workload does not require high CPU clocks most of the time, I turned off Intel Turbo Boost Technology and similar options. This change alone was enough to make the machine 10-15 °C cooler.

    GPU Upgrade

    Because I sent my previous GPU, the RX 470, back home, I was using the Quadro 400. The card is good enough for daily work like writing code and browsing the web. I could even do light work in both Photoshop and Lightroom without any problems.

    However, because I am interested in running X-Plane 11 on this machine, I need to change the graphics card. I also need to run some TensorFlow-based software, which requires a CUDA-enabled graphics card.

    Disclosure: I do not run make-art-without-moving-a-brush models in my computer.

    Network Card Upgrade

    Previously, I used an Ethernet cable to connect to the internet, but the college housing I am staying in doesn’t provide Ethernet, so Wi-Fi is the only option. I found some interesting Wi-Fi cards, including Wi-Fi 6 ones (which the routers here support); however, I really didn’t want to spend extra, so I picked up the cheapest TP-Link Wi-Fi card instead.

    Storage Upgrade

    People say you can’t buy time with money. Sure you can: put the cheapest SSD in your machine and there you go, you will save a great deal of time. I ordered a Kingston A400 480 GB because that is the brand and model I used before, and I am satisfied with the speed I get from it.

    I installed fresh Windows 10 on the SSD and deleted the old Windows directories from the HDD. I install all programs on the SSD and use the HDD as a data drive. In the near future, the HDD will need an upgrade as well.

    OS-Level Changes

    Because I use Windows 10, I also need to configure it for my needs. I disabled automatic device driver updates with Display Driver Uninstaller, installed security updates, and then disabled Windows Update in the services console (services.msc). Because I closely follow Windows updates, I don’t want my computer telling me when to install them. I also changed the default directories such as Documents, Downloads, Pictures, and Videos to D:\, so I don’t fill up C:\ with data.

    I use GlassWire to track network activity. I used to use an old version of McAfee Endpoint, which had an ask-to-block-if-you-don’t-know-the-application firewall. With recent versions that behavior changed, so I changed the software I use.

  • Downgrade PostgreSQL 12 to 11


    I have been using Heroku for about 2 years for the Miniflux RSS reader. It worked without issue, and I was a happy Heroku user. Recently, we were shocked by the moves of Salesforce around its product, Heroku: a series of security problems and the way they managed the situation outside of their own playground. It was almost the same problem Atlassian had a few months ago: little or no communication with end users.

    Well, I moved my services off of Heroku, and one of them was Miniflux. It uses only the Heroku PostgreSQL add-on. I clear the history when I reach the 10,000-row limit; if you exceed it, INSERT rights are removed, so you can no longer read new posts in Miniflux. I don’t want to lose the history of what I have read, but I also want a setup as simple as Heroku. How could I do that? Well, I will write about that in another post; here I will focus only on the PostgreSQL part of the migration.
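    For the curious, the cleanup I mention looks roughly like this. A hedged sketch only: the `entries` table and its `status` column are assumptions based on Miniflux’s schema, and `trim_read_entries` is my own naming; verify against your database before running it.

```shell
# Delete already-read entries so the table stays under the row limit
# and INSERTs keep working. Takes the PostgreSQL URL as its argument.
trim_read_entries() {
  psql "$1" -c "DELETE FROM entries WHERE status = 'read';"
}
```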

  • A week with unsupported Android device

    After losing my Android device during a bike ride, I needed to get my SIM card back into a running cell phone so that people could reach me. I didn’t have time to wait 3 days for a new smartphone to be delivered, so I found a phone in my used-devices box: a Samsung Galaxy Ace from 2011 with Android 2.3.6.

    Most of the software that was preinstalled on the device is unusable because either the company/developer stopped supporting the software (Google News), or it requires higher security ciphers and fails (Maps, Play Store).

    Software such as Calendar, Contacts, Clock, Memo, and Messages work as expected.

    Beyond the pre-installed software problems, installing new applications proved equally challenging. I tried installing Firefox, WhatsApp, and Signal. Of course, I found APKs on archive websites, but I had no luck running any of them.

    For instance, WhatsApp: The software that makes life easier. Without it, people started to call me directly. No one considered sending a simple SMS instead of expecting me to download WhatsApp or other messaging platforms. This dependency on modern apps affects not just WhatsApp, but also Signal, Tor, and virtually any contemporary tool.

    I need to emphasize that we are living in a modern internet age. We are connected to each other through our devices, and we perform tasks as important as informing our loved ones that we are safe and healthy. This is impossible to do with security-flawed cipher suites. Also, as the device grows older, it cannot compute in a timely manner, which results in a bad user experience on video streaming platforms. Worse yet, the device and its operating system are riddled with security flaws that make them vulnerable to attacks.

    Looking toward the future of the web, we should remember that openness was key to the success of the early-2000s internet, and that it was lost with the smartphone era. To avoid repeating this, we should try to use open technologies and protocols. However, security is not easy to achieve with open-only technology.

  • Migrating to Hugo

    This isn’t the first blog post you’ll see about static site generators on the web. I was a huge fan of Jekyll, but after migrating away from GitHub Pages, I found no advantages other than it being a database-less writing platform.

    I still recommend Jekyll to people with less experience writing online, since it has a gentler learning curve. However, I’ve noticed people recommending JavaScript bundlers like Webpack when they hear Static Site Generator. SSGs aren’t the same as bundlers—this website needs redirect pages, sitemaps, generated robots.txt files, and RSS feeds. While generating each of these with bundlers is possible, I’ll state again: use the tool built for your intended purpose.

    I chose Hugo because Debian includes a package in the Buster distribution. I don’t care which version they’re distributing since I can easily switch the Debian image to Bullseye or Sid to generate content. I do this by taking advantage of Vagrant on KVM.

    So far, I’m happy with Hugo 0.54.0. I created my own default theme because I wanted full control over it. I could have gone to GitHub and used a theme as a submodule, but this approach works better for me.

    I had a previous attempt to migrate to Hugo, but the content wasn’t ready. Actually, the content still isn’t ready, so I’m just pushing a few posts at a time.

    Currently, I’m using GitHub Actions to publish content to Netlify. Since I host several WordPress sites, I have a reseller account where I can create a database-free package for this website. Yes, I’ll use FTP to upload the content. The only reason I keep everything on GitHub is for seamless Git LFS support.

    As I mentioned, I’m migrating posts one by one. I’ve found mistakes in my writing and I’m updating the lastmod field instead of creating brand new content, which you can see on each post page.

  • Tragedy of HP ProLiant DL380 G5

    We bought a second-hand DL380 G5 to run our master Citrix XenServer, and it works without any problems. Except, that is, for the latest firmware updates…

    Tragedy (from the Greek: τραγῳδία, tragōidia) is a form of drama based on human suffering that invokes an accompanying catharsis or pleasure in audiences.

    Tragedy

    I am 20 years old and not a senior system administrator, which we can attribute to a lack of experience. This is only my third time working with a bare-metal system. Moreover, my father and I had not worked with HP before; however, we decided to buy an HP server. It came with an Intel Xeon E5405, 32 GB of DDR2 RAM, and 6x 146 GB 10K SAS drives. That is awesome, since the hardware works with no errors, and we started working with it.

    The day after the purchase, I wanted to update the firmware. I started with the easy one: iLO. Upload the image to iLO and you are ready to go; it will restart itself a few times, and after that you can work with it.

    The second move was safely updating the BIOS. The latest version of the BIOS was published on September 30, 2015. But we cannot access it, because Hewlett Packard Enterprise changed their downloadable firmware policy.

    This went into effect today (February 1, 2014). Neither HP nor our HP reseller notified us of this change. So if you have some out-of-warranty server-related equipment that you want to keep up to date, you’re out of luck unless you pay up.

    CannonBall7

    Well, I bought this server in December 2018, thinking the vendor would not block firmware from their customers. But they do.

    I can access any firmware which was published before February 1, 2014.

  • Automated CentOS Installation with Anaconda

    Today I am going to write about automated CentOS installation. Almost every amateur sysadmin tries one of these approaches:

    Use virtual machines, because they support cloning. Install the system once, then just clone it. That’s it.

    This is not a good idea. Why?

    When you clone a virtualized OS, you basically have to change the hostname, the static IP, and so on. These are simple but crucial things that must be changed, the machine-id for instance.
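    To make that concrete, here is a sketch of re-identifying a clone on a systemd-based distribution; `reset_machine_identity` and its ROOT parameter are my own naming, with ROOT letting you operate on a mounted clone image instead of the live system:

```shell
# Empty /etc/machine-id so systemd regenerates it on the next boot, and
# keep the D-Bus copy as a symlink to it. With no argument this touches
# the live system (and needs root); with one, a mounted image tree.
reset_machine_identity() {
  ROOT=${1:-}
  : > "$ROOT/etc/machine-id"
  rm -f "$ROOT/var/lib/dbus/machine-id"
  ln -s /etc/machine-id "$ROOT/var/lib/dbus/machine-id"
}
```

    The hostname and any static IPs still need their own changes, for example with `hostnamectl set-hostname` on the running clone.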

    Use a container system. You can create an OS really quickly that way.

    We are talking about operating systems that power the hardware, not containers. In fact, you can use this guide to prepare the hardware on which Docker or rkt containers will run.

    Configuring Anaconda with Kickstart

    Kickstart is a special tool with which you can easily configure anywhere from 1 to 1,000 servers at the same time. Maybe you have heard of Anaconda before; you may have seen the word while installing CentOS, Fedora, or Red Hat distributions. I want to show you how to take advantage of the Anaconda installer by using Kickstart scripts.

    In this tutorial I will be using the CentOS 7.4.1708 minimal version, but you can find this feature in Fedora and Red Hat Enterprise Linux as well. However, some settings may vary from distribution to distribution.

    Below is a Kickstart configuration file; some parts need to be replaced to match your needs.

    # ks.cfg
    
    #version=DEVEL
    
    # System authorization information
    auth --enableshadow --passalgo=sha512
    
    # Use CDROM installation media
    cdrom
    
    # Use text install
    text
    
    # Do not run the Setup Agent on first boot.
    # Uncomment the line below only if you want the Setup Agent; I don't
    # recommend it, since it waits for your input before finishing setup.
    #firstboot --enable
    ignoredisk --only-use=sda
    
    # Keyboard layouts
    keyboard --vckeymap=us --xlayouts='us'
    
    # System language
    lang en_US.UTF-8
    
    # Network information
    network --bootproto=dhcp --device=enp0s3 --ipv6=auto --activate
    network --hostname=<HOSTNAME>.example.com
    
    # Root password
    rootpw <ROOT_PASSWORD>
    
    # System services
    services --enabled="chronyd"
    
    # System timezone
    timezone America/Los_Angeles --isUtc
    
    # System bootloader configuration
    bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
    
    # Partition clearing information
    clearpart --none --initlabel
    
    # Disk partitioning information
    # This partitioning is for an 8 GB hard disk
    part /boot --fstype="xfs" --ondisk=sda --size=476
    part pv.198 --fstype="lvmpv" --ondisk=sda --size=7715
    volgroup centos --pesize=4096 pv.198
    logvol swap --fstype="swap" --size=953 --name=swap --vgname=centos
    logvol / --fstype="xfs" --size=6271 --name=root --vgname=centos
    logvol /home --fstype="xfs" --size=476 --name=home --vgname=centos
    
    %packages
    @^minimal
    @core
    chrony
    kexec-tools
    
    %end
    
    %addon com_redhat_kdump --enable --reserve-mb='auto'
    
    %end
    
    %anaconda
    pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
    pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
    pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
    %end

    This file (ks.cfg) needs to be present on an FTP, HTTP, or NFS server. I will use PHP’s built-in server, run from the directory that contains ks.cfg, to distribute the configuration over HTTP.

    php -S 0.0.0.0:4444
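    Before booting the installer, it is worth confirming the file is actually reachable over HTTP. A hedged sketch that serves the directory with Python’s standard-library server instead, with curl playing the installer’s role; `serve_and_check` is my own helper name:

```shell
# Serve the directory that contains ks.cfg on the given port, fetch the
# file the same way Anaconda will, then stop the server. Returns non-zero
# if ks.cfg is unreachable. Use --bind 0.0.0.0 when the installer runs
# on another machine; 127.0.0.1 here keeps the check local.
serve_and_check() {
  dir=$1 port=$2
  (cd "$dir" && exec python3 -m http.server "$port" --bind 127.0.0.1) \
    >/dev/null 2>&1 &
  pid=$!
  sleep 1
  if curl -fsS "http://127.0.0.1:$port/ks.cfg" >/dev/null; then
    status=0
  else
    status=1
  fi
  kill "$pid" 2>/dev/null
  return "$status"
}
```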

    Insert the CentOS disk into your bare-metal or virtual server; don’t continue on Install CentOS 7, but instead press Tab.

    kickstart-1

    like this. After that, delete the string there and type the following:

    vmlinuz initrd=initrd.img inst.ks=http://<IPADDR>:<PORT>/path/to/ks/file
    kickstart-2

    You will see similar output, which means the Kickstart configuration was executed successfully and the installation has started.

    kickstart-3

    When you are done with the installation wizard, just press Enter. Don’t forget to remove the CentOS installation CD from the machine, then reboot it. You will then see the following welcome screen.

    kickstart-4

    Conclusion

    These days sysadmins use various automation tools, but with this one you can start saving time. It is also easy to use and has an easy-to-read configuration file.