26 September 2019

The fragmentation

Since a lot of stupid people love saying this all the time, let me go through your fallacious arguments one by one.

Too many package managers

No, there aren't "too many package managers"; this is called choice. You can CHOOSE which one to use. Like portage? Use Gentoo. Like apt? Use a Debian-based distribution. And the list goes on. Don't like rpm? Then use something that doesn't use it and be happy. No one is obligated to use something just because you like it. Having choices is good for everyone, even if maybe not for you.

Too many desktop managers/window managers

Again, you can choose one that suits you better. You can even use one whose configuration is done directly in the code. It's good to have choices, and there are plenty out there.

Too many init systems

Any decent distribution lets you choose which init you want, and you use whatever is easier/better for you. Even for most distributions that use, let's say, systemd, there's an alternative without it.

Too many tools

For what job? Some tools have alternatives, some with a lot of features, and not everyone wants lots of features. For example, I like syslog-ng better as a syslog daemon, but there are simpler ones out there.

Final thoughts

I understand, people born limited are unable to understand what "having choices" means. Maybe you don't want to choose, maybe you want the world to revolve around you, maybe you want everyone to make the exact same choices as you. Maybe you should stick to Windows and stop spreading this bullshit everywhere. Or even better, move somewhere everyone is forced to do the same thing.

11 September 2019

Recovering a FreeBSD install after whatever problem

If you had any trouble with your install (ndis module, anyone?), there's an easy way to fix it; the whole sequence is also sketched right after the steps below.

  1. Boot your pendrive/CD-ROM/DVD/whatever with the FreeBSD installer
  2. Enter the shell or start the LiveCD option (the LiveCD login is root with no password)
  3. Create a directory where you can import your zfs pool: mkdir /tmp/zroot
  4. Import your zpool there: zpool import -fR /tmp/zroot zroot 
  5. Need access to /? No problem: mkdir /tmp/root && mount -t zfs zroot/ROOT/default /tmp/root
  6. When you're done, unmount everything and reboot: zpool export zroot
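
Put together, the whole session looks roughly like this (a sketch; it assumes your pool is named zroot and the default boot environment dataset is zroot/ROOT/default, as on a stock install):

  # from the installer shell / LiveCD (root, no password)
  mkdir /tmp/zroot
  zpool import -fR /tmp/zroot zroot            # import the pool under an alternate root
  mkdir /tmp/root
  mount -t zfs zroot/ROOT/default /tmp/root    # mount / if you need it
  # ...fix whatever is broken...
  umount /tmp/root                             # undo the mount before leaving
  zpool export zroot
  reboot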

05 September 2019

Organizing your clusterfuck collection of wallpapers

I have a directory full of wallpapers that syncs with my devices, so I can use a random wallpaper (usually changed at boot or every 24h, whichever comes first). For the sake of organization, let's get this straight:

First, convert everything to png, because why not?


find . -name "*.jpg" -exec mogrify -format png {} \;

Double check your files and then delete the remaining jpgs (or whatever format you're converting):

rm *.jpg

Now, let's organize by number:

num=0; for i in *; do mv "$i" "$(printf '%04d' $num).${i#*.}"; ((num++)); done
 
If you need to add more wallpapers to this directory, remember to change num= to the number of the last wallpaper already there plus 1 (or use the sketch below to do it automatically).
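
If you'd rather not track that number by hand, here's a small sketch that picks up after the highest existing one automatically (it assumes the already-renamed files follow the %04d pattern from above):

#!/bin/bash
# Find the highest NNNN.* already present, then rename only the new files.
num=0
for f in [0-9][0-9][0-9][0-9].*; do
    [[ -e $f ]] || continue              # glob matched nothing
    n=$((10#${f%%.*}))                   # strip the extension, force base 10
    (( n > num )) && num=$n
done
num=$((num + 1))
for i in *; do
    [[ $i == [0-9][0-9][0-9][0-9].* ]] && continue   # already numbered, skip it
    mv "$i" "$(printf '%04d' "$num").${i#*.}"
    num=$((num + 1))
done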

21 August 2019

Can't unlock KDE Session

If for some reason you're unable to unlock your desktop, the problem is probably the permissions of kcheckpass. You have a few options:

1) Reinstall kde-plasma/kscreenlocker
2) Check the permissions of /usr/lib64/libexec/kcheckpass; it should be 4755 and owned by root:root (a quick way to reset this is shown after the script below)
3) A more radical solution:
#!/bin/bash

# Screen locker broken in KDE with ConsoleKit
# See https://forums.gentoo.org/viewtopic-t-1046566.html
# and https://forums.gentoo.org/viewtopic-t-1054134.html

# Find which session is locked
session=Session$(ck-list-sessions | grep -B10 "x11-display = ':0" | grep -o -P '(?<=Session).*(?=:)')

# Create Bash script to unlock session
echo "#!/bin/bash" > $HOME/unlock.sh
echo "su -c 'dbus-send --system --print-reply --dest=\"org.freedesktop.ConsoleKit\" /org/freedesktop/ConsoleKit/$session org.freedesktop.ConsoleKit.Session.Unlock'" >> $HOME/unlock.sh
chmod +x $HOME/unlock.sh

# Run Bash script in another TTY
openvt -s -w $HOME/unlock.sh
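
If you go with option 2 and the permissions are wrong, resetting them by hand is just (Gentoo path from the list above; adjust the libexec path for your distro):

chown root:root /usr/lib64/libexec/kcheckpass
chmod 4755 /usr/lib64/libexec/kcheckpass
ls -l /usr/lib64/libexec/kcheckpass    # should show -rwsr-xr-x root root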
 

30 July 2019

Filesystem benchmarks

Ok, let's get this straight.
When I chose to use JFS, it was because some years ago I saw with my own eyes how reliable JFS is in different scenarios, and it still is. Yes, EXT4 is reliable too, but its performance isn't on par with JFS; still, both offer a reliable and secure solution. XFS, on the other hand, isn't that secure and reliable (only up to a point), but offers quite good performance. When kernel 5.0 came out, there was a lot of talk about how BTRFS is good now and, unlike most people who rely on "everyone uses it, so I'll use it too", I wanted to test it myself, because I'm not the guy that relies on this "everyone" guy's opinion.
The host used for this test is my main desktop running Gentoo Linux with the latest available kernel (5.2.4-gentoo) and the latest available tools to date (i5-3470 on a Gigabyte motherboard, 16GB of RAM, SATA3 1TB HDD). The disk is entirely formatted with the filesystem being tested. All filesystems are mounted with noatime and use the bfq scheduler (bfq offers better performance for rotational disks than mq-deadline). The IOZone tests were executed with a reboot before creating each new filesystem under test, to exclude any possible bias.
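
For reference, pinning bfq on rotational disks is usually done with a udev rule along these lines (a sketch; the rule file name is arbitrary and this may not be exactly how it was set on the test box):

# /etc/udev/rules.d/60-ioscheduler.rules (example name)
# rotational SATA disks get bfq, everything else keeps its default
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"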

Test 1: Creating a 1GB file with dd if=/dev/urandom of=test bs=1024 count=1M (in seconds):
           JFS    EXT4   XFS    BTRFS   ZFS
Time (s)   7.3    13.2   9      14.2    14.5

Test 2: Cold reboot during the creation of the 1GB file, repeated for each filesystem (5 times)

                                             JFS    EXT4   XFS    BTRFS   ZFS
Auto fixed?                                  5/5    5/5    3/5    4/5     4/5
Mounted rw without fixing                    0      1      2      4       1
Wasn't able to fix                           0      0      1      2       1
Kept working with problems, not reporting    0      0      1      2       1

Test 3: Copying a 1GB file from one filesystem to another
           JFS    EXT4   XFS    BTRFS   ZFS
Time       9s     15s    11s    19s     17s

Test 4: Cold reboot during a heavy MariaDB workload (5 times)

                                             JFS    EXT4   XFS    BTRFS   ZFS
Auto fixed?                                  5/5    5/5    4/5    4/5     4/5
Corrupted databases?                         N      Y      Y      Y       N
Mounted rw with problems                     N      N      Y      Y       N
Kept working with problems, not reporting    N      N      Y      Y       N
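
The post doesn't say which load generator was used for the heavy MariaDB workload; as an assumption, something like sysbench's oltp_read_write against a scratch database is one typical way to produce it:

# hypothetical example only -- not necessarily what produced the numbers above
# assumes a scratch 'sbtest' database and user already exist
sysbench oltp_read_write --db-driver=mysql --mysql-user=sbtest --mysql-password=sbtest \
  --mysql-db=sbtest --tables=8 --table-size=1000000 --threads=8 prepare
sysbench oltp_read_write --db-driver=mysql --mysql-user=sbtest --mysql-password=sbtest \
  --mysql-db=sbtest --tables=8 --table-size=1000000 --threads=8 --time=300 run
# cold-reboot the box in the middle of the run, then check whether the filesystem
# mounts cleanly and whether the databases survived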

Test 5: Boot from EFI to lightdm, sata3 hdd, mounting / and /home (5 times)

                      JFS    EXT4   XFS    BTRFS   ZFS
Normal                0:28   0:49   1:01   1:30    1:27
After a cold reboot   0:42   1:22   1:40   2:11    2:10

Test 6: Shutdown from lightdm to total shutdown (openrc) 
           JFS    EXT4   XFS    BTRFS   ZFS
Time       0:15   0:21   0:41   0:58    0:42

Test 7: IOZone
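
For reference, an automatic IOZone sweep is typically run along these lines (an assumption; the exact invocation used here isn't stated):

# full automatic mode, record/file sizes up to 4g, results dumped to a spreadsheet
iozone -a -g 4g -f /mnt/test/iozone.tmp -b iozone.xls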

My overall opinion:
  • JFS and EXT4 are good performance-wise and secure enough for daily usage.
  • XFS has been polished over the years and its most awkward problems were removed (data corruption on power loss and others); it's a good option, but it has some performance issues depending on the size and amount of data.
  • ZFS is good, but you need RAM to get good overall performance; I would suggest starting with 16GB for a desktop (this isn't a problem for a server, of course). Also, keep in mind that the mainline kernel doesn't support ZFS, and it takes some time for ZFS to catch up with the latest kernel (right now, ZoL supports up to 5.1.x). And even though ZFS and BTRFS seem to perform similarly, ZFS is far more stable and trustworthy.
  • In terms of CPU/memory footprint during a copy, JFS is the lightest and ZFS the heaviest. Of course, it will also depend on how much data you are copying, from where to where, and on the kind of storage, but overall let's put it this way: copying lots of tiny files, JFS > EXT4 > BTRFS > ZFS > XFS; copying lots of big files, JFS/EXT4 > XFS > ZFS > BTRFS. Mileage will vary depending on the use case (it can be irrelevant on a full non-virtualized server with lots of RAM and a good storage system).
  • Only use BTRFS if you like restoring backups often. To date, I can't trust BTRFS, not after seeing a filesystem get corrupted by simple tests. -> http://lkml.iu.edu/hypermail/linux/kernel/1907.1/05873.html

Notes added on 2019-08-10:
  • XFS resolved the issues caused by power loss and metadata corruption.
  • For XFS, performance may benefit more from mq-deadline than from bfq, afaik. Otherwise, set the scheduler through udev (like the sketch above) and don't forget noatime if you don't need access times.
  • BTRFS is still not ready; it can offer some benefits and performance, but its stability is still far from acceptable, the auto-healing doesn't work as expected (unlike ZFS, for instance) and scrub doesn't fix things as expected in some scenarios. They're getting there, though; a lot of this seems to be getting fixed in 5.2 and beyond.
  • The fact that a filesystem receives more kernel or userland updates doesn't mean it's more stable, or better, or whatever.
  • You SHOULD do your own tests instead of talking bullshit like "everyone uses X" or "that's an uncommon use case". If the kernel supports it, it's supported. Period.

29 July 2019

How to choose an OS

Here's how to choose an OS that fills your needs.

1) Try on a VM first
You can test everything on this list inside a virtual machine. VirtualBox is free, but the choice is yours.

2) Device drivers
The first thing you need to analyze and pay attention to is how well the system supports your hardware and how you can fix things if needed. If you don't have enough knowledge to manually install a required driver, you should rely on a distribution with an in-house driver installer (like Ubuntu, Linux Mint, Manjaro, etc.). Even so, pay attention to whether the drivers were installed correctly and there are no issues, especially during an update.

3) Community Support
If you want to rely on community support (I don't suggest it, since some communities cause more trouble than they help), you have to pay close attention and not use it as your only source. Search the internet and compare the results; usually the distribution has its own documentation, and most of the time you'll see that irc/matrix/whatever support USUALLY doesn't rely on its own wiki or its own standards, especially in elitist communities. Let me give an example:

1) You've tested a couple of filesystems yourself and decided to choose, let's say, F2FS or JFS.
2) You have a problem with systemd, ask for help, and paste your dmesg.
3) If they say the problem is F2FS or JFS without dmesg showing anything explicit about it, you're dealing with stupid people.

4) Custom configurations
This one is really easy. If the distribution has drivers loaded at boot for a specific filesystem (well supported by the kernel developers) but you can't use it as root (even though the kernel developers say you can), you're dealing with naive packagers, and problems will often occur in other places too. For example, some Linux distributions don't support LVM properly; they only boot correctly if you have /boot and / outside of LVM (?????). This is the result of naive packagers; stay away from that.

5) Elitist community
Over-complicating things to claim they're right based on their own defaults without proof; bringing up the age of a processor even when the system performs well compared to a more recent one (saying an Intel i7 Haswell is slow because it's old); using "because everyone uses X" as an answer; saying "no one uses this kind of hardware anymore" about hardware that's roughly 3 years old. And the list goes on; this kind of crap should be redirected into a black hole.

6) OS implementation
Lack of multilib without any reason; answering questions about poor implementation with "because it's professional" instead of the real reason; trying to hide problems with "because it's better this way". These are signs of naive developers.

7) Systemd

8) Documentation
Usually, any wiki should work with any distribution to some extent; it'll depend on the implementation. So it's good to choose something that is as KISS as possible, so you'll have more documentation available if a problem arises. You don't need to use the specific wiki of your chosen OS if the OS follows the standards set by the application's developer, so you can even rely on the application's own docs to fix something. Unless you're using a trollercoaster OS.

9) Learning curve
You don't need to understand what's happening in the background within a matter of days, but it's probably best if you learn how it works at your own pace. The less you rely on other people, the lower the chance of someone giving you bad advice, and trust me, that happens more often than you expect.

10) FHS and POSIX
It seems no one cares about them, but the FHS and POSIX standards are important. No, it isn't a matter of "getting used to it", it's a matter of having standards. I accept changes when they bring benefits; changes born out of stupidity should be discarded ASAP (for example, linking /bin to /usr/bin). I suggest avoiding an OS made by brainless people who take decisions without a sane reason.


28 June 2019

The stupid chit-chat around and how to fix it (for people that prefer reality over some herp-derp) - Volume 2

Yeah, it never stops; here's the first part.

1. IBM will fuck up Red Hat because IBM destroys everything it touches; IBM is not good for open source

Instead of farting with your mouth (or fingers), try researching this topic a little more. IBM is one of the major contributors to the open source world, not only to the kernel; there are a lot, I mean, A LOT of things IBM has contributed to open source. Research the Linux kernel just for a start; IBM also sponsors a number of open source projects, iSCSI being just one example.

2. Why do you use "feature X" if no one uses it?

Did you try it yourself, or do you just "use it because everyone uses it"? Instead of being stupid and answering a question with another question, did you try to understand the problem first? Is it a well-reported problem? Is it something like "I want to disable the Spectre mitigations to gain performance"? Oh... no? Then why are you doing that? Because you only use what everyone uses, without testing it yourself to see if it fits your scenario?
Let's take an example: a decent Linux distribution will not care what filesystem you use, as long as it's well supported by the Linux kernel, and will let you boot from it. If reiserfs is supported (to date, I'm not sure about that), a DECENT Linux distribution should boot it without any problem. If I want to use a mix of LVM+JFS+REISERFS+WHATEVERFS, it should boot if the kernel supports it, unless you use a fucking messed-up distro where the packagers don't know what they're doing and/or tell you "you should use what everyone uses" (I still don't know who this "everyone" dude is). Oh, the distro doesn't support installing on this filesystem? Cool, what a piece of shit distro are you using? Sooo the developers of your big-ass distro know better than the kernel developers themselves what can be stable and what cannot.

3. Rolling releases are unstable, point releases are way better.

Let's get one point straight: yes, there's a chance of having a problem. But who the fuck knows whether a piece of software is stable or not? The developer of that software, right? So with what authority or knowledge does the packager of your distro know exactly when a package is stable enough, and then backport patches onto an ancient version? Oh, he just decided, and tested the hell out of it? Are we talking about some sort of LTS package, and why? Is there a major bump of everything, like... QT4 to QT5? No? Then your fucking point release isn't better; it's just a decision the packager made for you. And to be honest, 90% of the problems I've had with data loss and stuff crashing beyond repair were, guess what, on the "so stable" point-release distributions.
Now, in fact, there IS a reason for some distributions like RedHat/CentOS and SLES being that way: some 3rd-party software wants specific versions of other software to be well supported, and those vendors are the ones to blame; they could make their binaries static or use any of a thousand other ways to make everything work better, but since that doesn't happen, redhat/centos/sles have to stick with it.

4. This feature isn't implemented in the distribution because it's unstable.

It isn't implemented because the packagers didn't put it into the distribution; stop being a jackass. Tell the truth.

5. Systemd

I know, it's bad, does bad things, creates its own issues; it's like a virus that does all that shit, has shitty QA, a god-level shitty issue tracker, binary logs that can get corrupted in the blink of an eye, and an infinite list of awful bugs that will never be fixed because some developers have a crown up their ass. But if you want to argue about why you don't like systemd, at least get your shit together. Don't go to reddit saying "I don't like binary logs, because they're binary" or people will laugh at you, and it's your fault.