Author: gsb

  • Renew LetsEncrypt in Traefik

I’d set up Traefik to use LetsEncrypt, but because of the scary warnings around rate limits and the like, it felt sensible to use the staging LetsEncrypt servers rather than the production ones – basically I wasn’t sure how many changes I’d have to make, as it had been a while since I’d played with any of this in anger. I assumed that Traefik would simply renew the certificates with LetsEncrypt when I switched over to wanting production ones.

To my surprise, I got this working quite straightforwardly – I had more issues getting Apache and WordPress (mainly Apache) playing nicely with everything.

However, the staging certificates produce a browser warning, which I didn’t want to live with any longer than necessary. Since everything was now working, I altered my certificate resolver in Traefik to point at the production LetsEncrypt server. I then bounced Traefik to pick up the change, expecting to see a new certificate and no more errors.

    Anxious pause…

I waited a bit (pressing F5) and was disappointed to see that nothing had changed. The same certificate was in use, and I could see in the log that no attempt had been made to fetch a new one.
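(If you want a quicker check than hammering F5, openssl will show you who issued the certificate currently being served – a sketch, with example.com standing in for your own hostname; a staging certificate has an obviously fake-looking issuer, whereas a production one is issued by Let’s Encrypt.)

# Show the issuer and expiry of the certificate currently being served.
# example.com is a placeholder - use your own hostname.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -enddate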

Frantic prodding ensued as I attempted to work out what I had done wrong. Nothing, was the answer. The problem, such as it is, is that Traefik already has a certificate it can use, so it doesn’t feel it needs to change it.

    Solution?

    A quick google later found me a couple of links:

Whilst on the face of it the last link initially seemed less relevant (it related to fixing a security flaw), on reading it, it gave the simplest answer for my situation.

In my situation, it looked like I could simply delete the acme.json storage file. No certificates needed to be kept – I could just start again from scratch.

I stopped Traefik, moved acme.json out of the way rather than deleting it (belt and braces, right?), and then restarted it.
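For the record, the whole dance was roughly this (a sketch – the service name and the acme.json location are assumptions, so adjust for how and where you run Traefik):

# Stop Traefik, park the old staging certificate store, restart.
# Assumes Traefik runs as a systemd service and keeps its ACME data in /etc/traefik/acme.json.
sudo systemctl stop traefik
sudo mv /etc/traefik/acme.json /etc/traefik/acme.json.staging   # belt and braces: keep a copy
sudo systemctl start traefik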

Traefik Renew from LetsEncrypt!

Hey presto! I could see in the log the missing certificate being identified, and Traefik calling out to the production LetsEncrypt server to obtain new certificates.

Padlock looks right, everything is gravy! Let’s move on to the next thing.

  • Traefik Log Rotation

    An explanation of how to set up Traefik Log Rotation. This covers both the access log AND the traefik log.

I initially set up simple log rotation using the standard logrotate package I have installed for everything else. I implemented this by following this page (there are others about), but it covered my scenario the best (I have Traefik running natively, not in a container).

The main changes I made were the directory (mine is /var/log/traefik/) and the files (I have an access.log and a traefik.log, so I altered the pattern to be *.log). I also added size – more on that in a minute.

This resulted in the file /etc/logrotate.d/traefik below:

    <directory location>/*.log {
      compress
      create 0640 <user> <group>
      daily
      delaycompress
      missingok
      notifempty
      rotate 5
      size 10M
    
      postrotate
        kill -USR1 `pgrep traefik`
      endscript
    }
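Rather than waiting for the next scheduled logrotate run, you can dry-run the configuration to confirm it parses and would do what you expect (a sketch – -d is debug mode, so nothing is actually rotated):

sudo logrotate -d /etc/logrotate.d/traefik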

However, I began to spot that it was not working as anticipated.

• access.log wouldn’t rotate
• traefik.log rotated, but Traefik kept writing to the rotated-out file

It turns out access.log not rotating was because I misunderstood what size would do. It’s not an either/or with the daily setting – size supersedes daily if it is present. Simply removing size made access.log rotate as expected.

However, traefik.log still wouldn’t rotate on a USR1 – which seems contrary to their own documentation (link).

    I decided that something weird must be going on, so I dug about a bit. If you check the code in git (link) you’ll see that the code specifically mentions rotation of the access log, and their tests also only check the access log.

This means that the USR1 signal only rotates the access log. That was annoying.

I wondered if I was missing something, so I went in search of rotate in git (link) and found something that, whilst documented at source level, seems to be missing from the Traefik log documentation (link).

    It turns out that Traefik can do some quite clever stuff with rotating its own log if you configure it right.

    log:
      # Log level
      #
      # Optional
      # Default: "ERROR"
      #
      level: <whatever level you want>
    
      # Sets the filepath for the traefik log. If not specified, stdout will be used.
      # Intermediate directories are created if necessary.
      #
      # Optional
      # Default: os.Stdout
      #
      filePath: <directory location>/<file>.log
    
      # Format is either "json" or "common".
      #
      # Optional
      # Default: "common"
      #
      #format: json
      maxSize:    5
      maxBackups: 50
      maxAge:     10
      compress:   true

This will rotate at 5 MB, compress the old copies, keep them for at most 10 days, and keep a maximum of 50 files. There are other options in the documentation – I have not used them.

    This then means I can make the logrotate configuration focus just on the access log:

    <directory location>/access.log {
      compress
      create 0640 <user> <group>
      daily
      delaycompress
      missingok
      notifempty
      rotate 5
    
      postrotate
        kill -USR1 `pgrep traefik`
      endscript
    }
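To prove the postrotate signal behaves before waiting a day, you can force a rotation and check that a fresh access.log appears (a sketch – -f forces rotation even when the criteria aren’t met):

sudo logrotate -f /etc/logrotate.d/traefik   # forces the rotation and runs the postrotate USR1
ls -l /var/log/traefik/                      # a new, small access.log should now be present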

Combining these two pieces (access log rotation via logrotate and traefik log rotation by Traefik itself) gives me a functional solution.

    -rw-r----- 1 traefik traefik   706383 Sep 15 13:46 access.log
    -rw-r----- 1 traefik traefik  2071725 Sep 14 23:59 access.log.1
    -rw-r----- 1 traefik traefik   101616 Sep 13 23:59 access.log.2.gz
    -rw-r----- 1 traefik traefik    43272 Sep 12 23:59 access.log.3.gz
    -rw-rw-r-- 1 traefik traefik   574683 Sep 12 12:29 access.log.4.gz
    -rw-r----- 1 traefik traefik       23 Sep 10 02:11 traefik-2023-09-10T01-11-42.862.log.gz
    -rw-r----- 1 traefik traefik    62635 Sep 12 20:48 traefik-2023-09-12T19-48-36.125.log.gz
    -rw-r----- 1 traefik traefik    45843 Sep 12 23:48 traefik-2023-09-12T22-48-51.210.log.gz
    -rw-r----- 1 traefik traefik    47196 Sep 13 02:50 traefik-2023-09-13T01-50-47.804.log.gz
    -rw-r----- 1 traefik traefik    46280 Sep 13 05:51 traefik-2023-09-13T04-51-02.691.log.gz
    -rw-r----- 1 traefik traefik    59591 Sep 13 08:44 traefik-2023-09-13T07-44-32.782.log.gz
    -rw-r----- 1 traefik traefik    51714 Sep 13 11:35 traefik-2023-09-13T10-35-02.705.log.gz
    -rw-r----- 1 traefik traefik    62231 Sep 13 14:31 traefik-2023-09-13T13-31-29.208.log.gz
    -rw-r----- 1 traefik traefik    47698 Sep 13 17:30 traefik-2023-09-13T16-30-44.233.log.gz
    -rw-r----- 1 traefik traefik   114460 Sep 13 20:00 traefik-2023-09-13T19-00-49.560.log.gz
    -rw-r----- 1 traefik traefik    88101 Sep 13 22:45 traefik-2023-09-13T21-45-27.953.log.gz
    -rw-r----- 1 traefik traefik    45687 Sep 14 01:48 traefik-2023-09-14T00-48-21.418.log.gz
    -rw-r----- 1 traefik traefik    48091 Sep 14 04:48 traefik-2023-09-14T03-48-36.486.log.gz
    -rw-r----- 1 traefik traefik    45339 Sep 14 07:49 traefik-2023-09-14T06-49-32.199.log.gz
    -rw-r----- 1 traefik traefik    58615 Sep 14 10:29 traefik-2023-09-14T09-29-32.183.log.gz
    -rw-r----- 1 traefik traefik    50523 Sep 14 13:24 traefik-2023-09-14T12-24-47.288.log.gz
    -rw-r----- 1 traefik traefik   111852 Sep 14 15:48 traefik-2023-09-14T14-48-17.251.log.gz
    -rw-r----- 1 traefik traefik   102566 Sep 14 18:19 traefik-2023-09-14T17-19-54.417.log.gz
    -rw-r----- 1 traefik traefik    49430 Sep 14 21:08 traefik-2023-09-14T20-08-39.426.log.gz
    -rw-r----- 1 traefik traefik    52971 Sep 14 23:53 traefik-2023-09-14T22-53-09.416.log.gz
    -rw-r----- 1 traefik traefik    48757 Sep 15 02:45 traefik-2023-09-15T01-45-41.245.log.gz
    -rw-r----- 1 traefik traefik    47160 Sep 15 05:37 traefik-2023-09-15T04-37-56.255.log.gz
    -rw-r----- 1 traefik traefik    48261 Sep 15 08:29 traefik-2023-09-15T07-29-56.249.log.gz
    -rw-r----- 1 traefik traefik    48879 Sep 15 11:18 traefik-2023-09-15T10-18-41.241.log.gz
    -rw-r----- 1 traefik traefik  4548970 Sep 15 13:46 traefik.log
  • Pi with FAN-SHIM problem diagnosis

    I purchased a FAN-SHIM and Raspberry Pi 4 from Pimoroni last year but occasionally saw a problem, and struggled with a diagnosis.

The FAN-SHIM generally worked great, but I noticed that it didn’t always seem to work when I didn’t have a lead coming off the header to a breadboard. The fan would just run, the LEDs wouldn’t work – it was basically unhappy. Given it was directly related to the cable, I took the easy route to fixing things – leave it attached! However, I always wanted to properly diagnose what was going on.

Stock Pi 4 with a FAN-SHIM plugged in, friction-fit header pushed over the GPIO pins.

    Exploring Problems with the FAN-SHIM on a Pi

I recently had a chance to look at this again, and this post collects my exploration and thoughts. In short, I did a lot of hunting about until I found something that really helped me diagnose things. I knew the installation was working – as I said, with the cable attached all was good. However, I didn’t really want it plugged in all the time. It looked like it was pushed on OK – but I had no way to check.

    How could I work out if it was the SHIM with an issue, my Pi or something else?

    Eventually, I found this page. A comment by gadgetoid suggested a really good diagnostic approach.

    Diagnosing the FAN-SHIM problem on the Pi

    Basically, turn the fanshim service off:

    systemctl stop pimoroni-fanshim.service

    Then, start up python and run the following:

import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)    # use Broadcom (BCM) pin numbering
GPIO.setup(18, GPIO.OUT)  # GPIO 18 is the FAN-SHIM fan control pin
GPIO.output(18, 0) # Should turn fan off
GPIO.output(18, 1) # Should turn fan on

Basically, run whichever command makes the FAN-SHIM do the thing it currently isn’t doing. Then adjust the FAN-SHIM’s seating until the action actually happens – in my case, the fan finally turned off!
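If you’d rather not type into the interactive prompt, the same test can be driven straight from the shell (a sketch – it assumes python3 and the RPi.GPIO library are available; these one-liners are mine, not from the FAN-SHIM docs):

python3 -c "import RPi.GPIO as GPIO; GPIO.setmode(GPIO.BCM); GPIO.setup(18, GPIO.OUT); GPIO.output(18, 0)"   # fan off
python3 -c "import RPi.GPIO as GPIO; GPIO.setmode(GPIO.BCM); GPIO.setup(18, GPIO.OUT); GPIO.output(18, 1)"   # fan on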

Having done this, I could now run the other test scripts in the fanshim example source code.

Finally the LED would change and the fan would turn on and off, all as expected.

    I now re-enabled the fan-shim service:

    systemctl start pimoroni-fanshim.service

    Now, I know how to check things if it goes wrong again!

  • Configure Microsoft Live Writer to work with Drupal

    Hey! I’ve managed to configure Microsoft Live Writer to work with my Drupal install (see this URL).

I had a minor issue configuring Microsoft Live Writer, in that I couldn’t log in as my non-administration user. Initially, I worked out that returning the edit privilege to Drupal’s “Authenticated Users” role made it work. Since then I have had a more detailed look at the issue. It seems that rights to the Blog API needed to be granted. After that, I managed to restrict the rest of the rights back to more or less where they were before.

So, now I might actually start using Live Writer… it’s much easier to blog this way (yes, I like to do things the easy way wherever possible).

  • Persisting Bering changes locally

So I decided to add a user wolagent and put the script into his home directory. The question is how to back up this user. Simply using the ‘s’ command from the Bering 3.x menu doesn’t back up the user and his home directory. In short, there seemed to be no way to persist this Bering change locally.

    So how can I do that?
    Add your stuff to /var/lib/lrpkg/local.local
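As a sketch, that amounts to appending the paths you want preserved to that list – I believe it is one path per line, but the exact format (with or without a leading slash) is an assumption here, so check it against any entries already in the file:

# Hypothetical entry: include the wolagent home directory in the local package backup
echo "home/wolagent" >> /var/lib/lrpkg/local.local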

  • eBusiness Autoconfig Customisation

    Why customise?

    Autoconfig Customisation has the potential to make life much easier for an Oracle Applications DBA.

    In short, it enables the creation of controlled templates that can make all of the things that have to be done after running autoconfig go away.

For example, generally after you run adautocfg.sh you then have to remember to comment out the mobile setting in the jserv files, otherwise the java server doesn’t start up. Wouldn’t it be nice to change the template so that it knows these should be commented out already? Well, with autoconfig customisation you can!

There are a couple of different approaches to this: the simplest involves just changing the template; the second involves adding a customisation value to the context file and then modifying the template to use that value.

    Before we start

    Before anything else is done, we must validate that the current configuration is good.
    To do this run the following command (because of the profile we use, it is in our path):

    adchkcfg.sh contextfile=$CONTEXT_FILE appspass=

    The output of this is as follows:

The log file for this session is located at: /home/aicprj04/ICPRJ04/appl/admin/ICPRJ04_ictap41/log/07211026/adconfig.log

    AutoConfig is running in test mode and building diffs...

    AutoConfig will consider the custom templates if present.
            Using APPL_TOP location     : /home/aicprj04/ICPRJ04/appl
        Classpath                   : /home/aicprj04/ICPRJ04/comn/java/jdk1.6.0_02/jre/lib/rt.jar:/home/aicprj04/ICPRJ04/comn/java/jdk1.6.0_02/lib/dt.jar:/home/aicprj04/ICPRJ04/comn/java/jdk1.6.0_02/lib/tools.jar:/home/aicprj04/ICPRJ04/comn/java/appsborg2.zip:/home/aicprj04/ICPRJ04/comn/java

            Using Context file          : /home/aicprj04/ICPRJ04/appl/admin/ICPRJ04_ictap41/out/07211026/ICPRJ04_ictap41.xml

    Context Value Management will now update the test Context file

            Updating test Context file...COMPLETED

            [ Test mode ]
            No uploading of Context File and its templates to database.

    Testing templates from all of the product tops...
            Testing AD_TOP........COMPLETED
            Testing FND_TOP.......COMPLETED
            ...
            Testing CSD_TOP.......COMPLETED
            Testing IGC_TOP.......COMPLETED

    Differences text report is located at: /home/aicprj04/ICPRJ04/appl/admin/ICPRJ04_ictap41/out/07211026/cfgcheck.txt
     
            Generating Profile Option differences report...COMPLETED
            Generating File System differences report......COMPLETED
    Differences html report is located at: /home/aicprj04/ICPRJ04/appl/admin/ICPRJ04_ictap41/out/07211026/cfgcheck.html

    Differences Zip report is located at: /home/aicprj04/ICPRJ04/appl/admin/ICPRJ04_ictap41/out/07211026/ADXcfgcheck.zip

    AutoConfig completed successfully.

    There are a couple of outputs, but I find the most useful output is the zip file. I generally copy this over and extract it out, then use a web browser to view the set of html pages. These explain what differences there are between what you have and what should be there. Ideally there should be no differences.

    ADX Config Report is here.

The real benefit of this is that you can review the differences very easily and decide whether they are important or not. Once the differences have been reviewed, and you are satisfied that the autoconfig baseline is correct, we can move on to performing the customisation.

    Customise a template

This is the simplest type of customisation, as there are no additional variables.
You might follow this approach if there was a need to add an entry permanently to a file (for example, to add entries to url_fw.conf).

To do this, find the file that needs to be changed and run the following:

    adtmpreport.sh contextfile=$CONTEXT_FILE target=

    The output will be like the following:

    #########################################################################
              Generating Report .....                                       
    #########################################################################
    For details check log file: /home/aicprj01/ICPRJ01/appl/admin/ICPRJ01_ictap37/log/07211256.log

    The important detail is in the log file:

    =================================================================
    Starting Utility to Report on Templates and their  Targets  at Mon Jul 21 12:58:36 BST 2008
    Using ATTemplateReport.java version 115.7
      

    [ INFO_REPORT ]

    [FND_TOP]
    TEMPLATE FILE   : /home/aicprj01/ICPRJ01/appl/fnd/11.5.0/admin/template/url_fw.conf
    TARGET FILE     : /home/aicprj01/ICPRJ01/comn/conf/ICPRJ01_ictap37/iAS/Apache/Apache/conf/url_fw.conf

This indicates that if an addition needs to be made to the file url_fw.conf, then the template file is in $FND_TOP/admin/template and is called url_fw.conf.

Apparently, not all templates can be customised. If the template is marked with the word LOCK in the application’s driver file, then it is not customisable.
So, in the above case, we should look for LOCK in $FND_TOP/admin/driver/fndtmpl.drv

    If the file is customisable, then the following steps need to be done:

    1. Move to the directory that contains the source template, and make a directory called custom.
      For example:
      cd $FND_TOP/admin/template
      mkdir custom
    2. Copy the template into here, and then make the change to it.
3. Then verify the customisation using the adchkcfg.sh command.
4. Finally, run autoconfig so the target file is regenerated from the customised template (the whole flow is sketched below).
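Put together, the whole flow looks roughly like this (a sketch built from the steps above – the url_fw.conf example and the empty appspass= are carried over from earlier, so substitute your own template and credentials):

# Check the template is not locked (no output means it is customisable)
grep LOCK $FND_TOP/admin/driver/fndtmpl.drv | grep url_fw.conf

# Create the custom copy and edit it
cd $FND_TOP/admin/template
mkdir -p custom
cp url_fw.conf custom/
vi custom/url_fw.conf

# Verify the customisation, then run autoconfig to regenerate the target file
adchkcfg.sh contextfile=$CONTEXT_FILE appspass=
adautocfg.sh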

    Adding a context variable

Again, determine which template is going to be changed, as above. Then, using Oracle Applications Manager, add a new custom entry to the context file.

Then create a custom template exactly as above. When making your changes, refer to the new context value. For example, to allow the mobile entries to be controlled using context values, I added the following to jserv_ux_ias1022.conf:

    %c_DisableMobile%ApJServGroupMount /mobile              balance://OACoreGroup/mobile

    The %c_DisableMobile% is the custom value.

Again, validate that the file is going to come out right, and then run autoconfig to make the change.

    Final Note

    This is all based on Metalink Note 270519.1

  • Update Tripwire policy

    It’s pretty simple really.

Just run this to dump the current policy into an editable text file:

    sudo twadmin -m p > twpol.txt
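Once twpol.txt has been edited, it goes back in with the policy-update mode covered in the ‘Update Tripwire’ post (it will prompt for the local and site passphrases):

sudo tripwire -m p twpol.txt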

  • Update Tripwire

    OK, only waited a few months before adding this!

I’d recommend not doing this as root, because if you do, the root directory’s modification time will change as you modify twpol.txt, and twpol.txt itself will change as you edit it.

All of this means you’ll have to run steps 1–3 before you can run step 4. And Tripwire takes ages to run. Besides, you should be using sudo anyway (you are, right?!).

    1. Validate current policy

    sudo tripwire -m c

    2. Find the latest tripwire log

    sudo ls -lt /var/lib/tripwire/report/*.twr | head -1

    3. Use that to update the database

    sudo tripwire -m u -r <above file>

    4. Then update policy

    sudo tripwire -m p twpol.txt

    You should see this:

    Parsing policy file: twpol.txt
    Please enter your local passphrase:
    Please enter your site passphrase:
    ========
    Policy Update: Processing section Unix File System.
    ========
    Step 1: Gathering information for the new policy.
The object: "/lib/init/rw" is on a different file system...ignoring.
The object: "/dev/.static/dev" is on a different file system...ignoring.
The object: "/dev/pts" is on a different file system...ignoring.
The object: "/dev/shm" is on a different file system...ignoring.
The object: "/proc/bus/usb" is on a different file system...ignoring.
    ========
    Step 2: Updating the database with new objects.
    ========
    Step 3: Pruning unneeded objects from the database.
    Wrote policy file: /etc/tripwire/tw.pol
    Wrote database file: /var/lib/tripwire/web-proxy.twd

5. After the policy is accepted you need to run steps 1–3 again.
This is because if you don’t, and then want to make further changes, you’ll see stuff like this:

    ========
    Policy Update: Processing section Unix File System.
    ========
    Step 1: Gathering information for the new policy.
The object: "/lib/init/rw" is on a different file system...ignoring.
The object: "/dev/.static/dev" is on a different file system...ignoring.
The object: "/dev/pts" is on a different file system...ignoring.
The object: "/dev/shm" is on a different file system...ignoring.
The object: "/proc/bus/usb" is on a different file system...ignoring.
    ### Error: Policy Update Added Object.
    ### An object has been added since the database was last updated.
    ### Object name: /etc/tripwire/tw.pol.bak
    ### Error: Policy Update Changed Object.
    ### An object has been changed since the database was last updated.
    ### Object name: Conflicting properties for object /etc/tripwire
    ### > Size
    ### > Modify Time
    ### Error: Policy Update Changed Object.
    ### An object has been changed since the database was last updated.
    ### Object name: Conflicting properties for object /etc/tripwire/tw.pol
### > Modify Time
### > CRC32
    ### > MD5
    ========
    Step 2: Updating the database with new objects.
    ========
Step 3: Pruning unneeded objects from the database.
Policy update failed; policy and database files were not altered.

This is because Tripwire hasn’t yet captured the changes caused by the policy update itself.

This might also be useful (I log in as a normal user to do administration, so I want to do all of this sudo’d). The following script runs a report, and then uses that generated report to update the database.

    I call the script update_tripwire.bash

#!/bin/bash
# Step 1: run an integrity check, which writes a new report into /var/lib/tripwire/report/
sudo tripwire -m c
# Steps 2-3: pick the newest report and use it to update the database
sudo tripwire -m u -r $(/bin/ls -t /var/lib/tripwire/report/*.twr | head -1)
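Usage is just a case of making it executable and running it after any change (same paths as in the script):

chmod +x update_tripwire.bash
./update_tripwire.bash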

  • Configure Tripwire on Debian

I have finally gotten around to configuring the Tripwire setup on my Debian installation, after having it bleat at me for the last 3 years! I found details at http://articles.techrepublic.com.com/5100-10877_11-6034353.html which pointed me in the correct direction. My installation is Debian based, so it fitted the “no twinstall.sh” case most closely.

I had to tweak what the linked article says slightly to make it work. I have also included the output that I saw, so you should know you are in the correct place when you run each command (my principle is that sample output gives you the warm feeling that things are going well).

    First we should generate the site key:
    twadmin --generate-keys -S site.key
    (When selecting a passphrase, keep in mind that good passphrases typically have upper and lower case letters, digits and punctuation marks, and are at least 8 characters in length.)
    Enter the site keyfile passphrase:
    Verify the site keyfile passphrase:
    Generating key (this may take several minutes)...
    Key generation complete.

Then generate the local key:
    twadmin --generate-keys -L ${HOSTNAME}-local.key
    (When selecting a passphrase, keep in mind that good passphrases typically have upper and lower case letters, digits and punctuation marks, and are at least 8 characters in length.)
    Enter the local keyfile passphrase:
    Verify the local keyfile passphrase:
    Generating key (this may take several minutes)...
    Key generation complete.

Then edit the config template, before generating the configuration file:
    twadmin --create-cfgfile --cfgfile tw.cfg --site-keyfile site.key twcfg.txt
    Please enter your site passphrase:
    Wrote configuration file: /etc/tripwire/tw.cfg

Then generate the policy file:
    twadmin --create-polfile --cfgfile tw.cfg --site-keyfile site.key twpol.txt
    Please enter your site passphrase:
    Wrote policy file: /etc/tripwire/tw.pol

    Set file permissions:
    chown root:root site.key $HOSTNAME-local.key tw.cfg tw.pol
    chmod 600 site.key $HOSTNAME-local.key tw.cfg tw.pol

Finally, initialize the database:
    tripwire --init
    Please enter your local passphrase:
    Parsing policy file: /etc/tripwire/tw.pol
    Generating the database...
    *** Processing Unix File System ***
    ### Warning: File system error.
    ### Filename: /var/lib/tripwire/.twd
    ### No such file or directory
### Continuing...
Wrote database file: /var/lib/tripwire/.twd
    The database was successfully generated.

Then delete the source files: rm twcfg.txt twpol.txt
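With the database in place, a first integrity check confirms everything is wired up – it is the same check command used in the ‘Update Tripwire’ post (prefix with sudo if you are not doing this as root):

tripwire -m c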

    Haven’t run it for very long, so might update this if I have problems.

  • Compile an SRPM

Download the SRPM into a directory of choice. Generally the files ought to go into:

    /usr/src/redhat/SRPMS

    Then run:

    rpm --rebuild <file>-<version>.src.rpm

This will build the new RPM. Everything happens automatically, and the newly built RPMs will go here:

/usr/src/redhat/RPMS/<cpu-type>

    If the configuration file needs to be changed then run:

    rpm -i <file>-<version>.src.rpm

    The spec file will be extracted to:

/usr/src/redhat/SPECS

    Generally it would be called:

    <file>-<version>.spec

Changes can now be made to the spec file. Once the changes are complete, the RPM needs to be built using it:

    rpm -bb --clean --rmsource <file>-<version>.spec

    If the build goes ok, the file will end up in the normal location:

/usr/src/redhat/RPMS/<cpu-type>

    Building a Kernel RPM (build UP and SMP Kernels)

    rpmbuild --target i386 --with up --with smp --without BOOT --without debug <spec file>