I’d set up Traefik to use LetsEncrypt, but given the scary warnings around rate limits etc., it seemed sensible to use the staging LetsEncrypt servers rather than the production ones – basically I wasn’t sure how many changes I’d have to make, as it has been a while since I played with any of this in anger. I assumed that Traefik would renew the certificates with LetsEncrypt when I needed production ones.
Surprisingly, I got this working quite straightforwardly – I had more issues getting Apache and WordPress (mainly Apache) playing nicely with everything.
However, the staging certificates produce a browser warning, which I didn’t want for any longer than necessary. Since everything was now working, I altered my certificate resolver in Traefik to point at the production LetsEncrypt server. I then bounced Traefik to pick up the changes, expecting to see a new certificate and no more errors.
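For illustration, the change amounts to very little in the static configuration. This is a minimal sketch, assuming a resolver named "letsencrypt" with an HTTP challenge – the resolver name, email, storage path and challenge type here are placeholders, not my actual values:

```yaml
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: acme.json
      # Staging CA – comment out (or delete) this line to use production,
      # which is Traefik's default:
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      httpChallenge:
        entryPoint: web
```

Since production is the default, switching over is just removing (or commenting out) the staging caServer line and restarting Traefik.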
Anxious pause…
I waited a bit (pressing F5) and was disappointed to see that nothing had changed. The same certificate was in use, and I could see in the log that no attempt had been made to fetch a new one.
Frantic prodding ensued as I attempted to work out what I had done wrong. Nothing, as it turned out. The problem, such as it is, is that Traefik already has a certificate it can use, so it doesn’t feel it needs to change it.
Whilst the last link seemed less relevant on the face of it (it related to fixing a security flaw), on reading it turned out to give the simplest answer for my situation.
In my situation, it looked like I could simply delete the acme.json storage file. No certificates needed to be kept – I could just start again from scratch.
I stopped Traefik, moved acme.json out of the way (belt and braces, right?), and then restarted it.
Traefik Renew from LetsEncrypt!
Hey presto! I could see in the log file the missing certificate being identified, and Traefik calling out to the production LetsEncrypt server to issue new certificates.
Padlock looks right, everything is gravy! Let’s move on to the next thing.
An explanation of how to set up Traefik Log Rotation. This covers both the access log AND the traefik log.
I initially set up simple log rotation using the standard logrotate package I have installed for everything else. I implemented this by following this page (there are others about), but this covered my scenario the best (as I have Traefik running natively, not in a container).
The main changes I made were the directory (mine is /var/log/traefik/) and the files (I have an access.log and a traefik.log, so I altered the pattern to be *.log). I also added size – more on that in a minute.
This resulted in the file /etc/logrotate.d/traefik below:
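The original file didn’t survive my notes, so this is a sketch of what it looked like at this point. The directory and the *.log pattern are as described above; the retention count, the 10M size value and the postrotate command are my assumptions (the USR1 signal is the conventional way to ask Traefik to reopen its logs):

```
/var/log/traefik/*.log {
    daily
    rotate 7
    size 10M
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        kill -USR1 $(pidof traefik) 2>/dev/null || true
    endscript
}
```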
However, I began to spot that it was not working as anticipated.
access.log wouldn’t rotate
traefik.log rotated, but Traefik kept writing to the rotated file
It turns out access.log not rotating was because I misunderstood what size does. It’s not an either/or with the daily setting – size supersedes daily if it is in the file. Simply removing size made access.log rotate as expected.
However, traefik.log still wouldn’t rotate on a USR1 – which seemed contrary to their own documentation (link).
I decided that something weird must be going on, so I dug about a bit. If you check the code in git (link) you’ll see that it specifically mentions rotation of the access log, and the tests also only cover the access log.
This means that the USR1 signal only rotates the access log. That was annoying.
I wondered if I was missing something, so I went in search of “rotate” in git (link) and found something that, whilst documented at source level, seems to be missing from the Traefik log documentation (link).
It turns out that Traefik can do some quite clever stuff with rotating its own log if you configure it right.
log:
  # Log level
  #
  # Optional
  # Default: "ERROR"
  #
  level: <whatever level you want>

  # Sets the filepath for the traefik log. If not specified, stdout will be used.
  # Intermediate directories are created if necessary.
  #
  # Optional
  # Default: os.Stdout
  #
  filePath: <directory location>/<file>.log

  # Format is either "json" or "common".
  #
  # Optional
  # Default: "common"
  #
  #format: json

  maxSize: 5
  maxBackups: 50
  maxAge: 10
  compress: true
This will rotate at 5MB, compress the old copies, keep them for 10 days, and keep a maximum of 50 files. There are other options in the documentation – I have not used them.
This then means I can make the logrotate configuration focus just on the access log:
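Something along these lines – the retention values and the postrotate command are my assumptions, but the key points are that the pattern now names only access.log (Traefik rotates traefik.log itself) and that USR1 is exactly the signal that does work for the access log:

```
/var/log/traefik/access.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        kill -USR1 $(pidof traefik) 2>/dev/null || true
    endscript
}
```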
Hey! I’ve managed to configure Microsoft Live Writer to work with my Drupal install (see this URL).
I had a minor issue configuring Microsoft Live Writer, where I couldn’t log in as my non-administration user. Initially, I worked out that restoring the edit privilege to Drupal’s “Authenticated Users” made it work. Since then I have had a more detailed look at this issue. It seems that rights to the Blog API needed to be granted. After that, I managed to restrict the rest of the rights to more or less where they were before.
So, now I might actually start using Live Writer… it’s much easier to blog this way (yes, I like to do things the easy way wherever possible).
So I decided to add a user wolagent and put the script into his home directory. The question was how to back up this user. Simply using the ‘s’ command from the Bering 3.x menu doesn’t back up the user and his home directory. In short, there seemed to be no way to persist this Bering change locally.
So how can I do that? Add your stuff to /var/lib/lrpkg/local.local
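As a sketch of the kind of entries involved – one path per line, covering the new user’s account details and home directory. The exact paths below are illustrative for my wolagent setup (and the script name is a placeholder); check the format of the other lrpkg list files on your image before copying:

```
etc/passwd
etc/shadow
etc/group
home/wolagent
home/wolagent/wolagent.sh
```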
I’d recommend not doing this as root: if you do, the root directory’s modification time will change as you edit twpol.txt, on top of the changes to twpol.txt itself.
All of this means you’ll have to run steps 1–3 before you can run step 4, and Tripwire takes ages to run. Besides, you should be using sudo anyway (you are, right?!).
1. Validate current policy
sudo tripwire -m c
2. Find the latest tripwire log
sudo ls -lt /var/lib/tripwire/report/*.twr | head -1
3. Use that to update the database
sudo tripwire -m u -r <above file>
4. Then update policy
sudo tripwire -m p twpol.txt
You should see this:
Parsing policy file: twpol.txt
Please enter your local passphrase:
Please enter your site passphrase:
======== Policy Update: Processing section Unix File System.
======== Step 1: Gathering information for the new policy.
The object: "/lib/init/rw" is on a different file system...ignoring.
The object: "/dev/.static/dev" is on a different file system...ignoring.
The object: "/dev/pts" is on a different file system...ignoring.
The object: "/dev/shm" is on a different file system...ignoring.
The object: "/proc/bus/usb" is on a different file system...ignoring.
======== Step 2: Updating the database with new objects.
======== Step 3: Pruning unneeded objects from the database.
Wrote policy file: /etc/tripwire/tw.pol
Wrote database file: /var/lib/tripwire/web-proxy.twd
5. After the policy is accepted you need to run steps 1–3 again. This is because if you don’t, and then want to make further changes, you’ll see stuff like this:
======== Policy Update: Processing section Unix File System.
======== Step 1: Gathering information for the new policy.
The object: "/lib/init/rw" is on a different file system...ignoring.
The object: "/dev/.static/dev" is on a different file system...ignoring.
The object: "/dev/pts" is on a different file system...ignoring.
The object: "/dev/shm" is on a different file system...ignoring.
The object: "/proc/bus/usb" is on a different file system...ignoring.
### Error: Policy Update Added Object.
### An object has been added since the database was last updated.
### Object name: /etc/tripwire/tw.pol.bak
### Error: Policy Update Changed Object.
### An object has been changed since the database was last updated.
### Object name: Conflicting properties for object /etc/tripwire
### > Size
### > Modify Time
### Error: Policy Update Changed Object.
### An object has been changed since the database was last updated.
### Object name: Conflicting properties for object /etc/tripwire/tw.pol
### > Modify Time
### > CRC32
### > MD5
======== Step 2: Updating the database with new objects.
======== Step 3: Pruning unneeded objects from the database.
Policy update failed; policy and database files were not altered.
This is because Tripwire hasn’t captured the changes caused by the policy update.
This might also be useful (I log in as a normal user to do administration, so I want to run all of these sudo’d). This script allows me to run a report, and then use that generated report to update the database.
I call the script update_tripwire.bash
#!/bin/bash
sudo tripwire -m c
sudo tripwire -m u -r $(/bin/ls -t /var/lib/tripwire/report/*.twr | head -1)
I have finally gotten around to configuring the Tripwire setup on my Debian installation, after having it bleat at me for the last 3 years! I found details on http://articles.techrepublic.com.com/5100-10877_11-6034353.html which pointed me in the correct direction. My installation is Debian based, so it fitted the “no twinstall.sh” case shown most closely.
I have had to tweak what the linked article says slightly to make it work. I have also included the output that I saw, so you will know that you are in the correct place when you run each command (my principle is that sample output gives you the warm feeling that things are going well).
First, generate the site key:
twadmin --generate-keys -S site.key
(When selecting a passphrase, keep in mind that good passphrases typically have upper and lower case letters, digits and punctuation marks, and are at least 8 characters in length.)
Enter the site keyfile passphrase:
Verify the site keyfile passphrase:
Generating key (this may take several minutes)...
Key generation complete.
Then generate the local key:
twadmin --generate-keys -L ${HOSTNAME}-local.key
(When selecting a passphrase, keep in mind that good passphrases typically have upper and lower case letters, digits and punctuation marks, and are at least 8 characters in length.)
Enter the local keyfile passphrase:
Verify the local keyfile passphrase:
Generating key (this may take several minutes)...
Key generation complete.
Then edit the config template before generating the configuration file:
twadmin --create-cfgfile --cfgfile tw.cfg --site-keyfile site.key twcfg.txt
Please enter your site passphrase:
Wrote configuration file: /etc/tripwire/tw.cfg
Then generate the policy file:
twadmin --create-polfile --cfgfile tw.cfg --site-keyfile site.key twpol.txt
Please enter your site passphrase:
Wrote policy file: /etc/tripwire/tw.pol
Finally, initialise the database:
tripwire --init
Please enter your local passphrase:
Parsing policy file: /etc/tripwire/tw.pol
Generating the database...
*** Processing Unix File System ***
### Warning: File system error.
### Filename: /var/lib/tripwire/.twd
### No such file or directory
### Continuing...
Wrote database file: /var/lib/tripwire/.twd
The database was successfully generated.
Then delete the source files:
rm twcfg.txt twpol.txt
Haven’t run it for very long, so might update this if I have problems.
I downloaded it as instructed on their site, and it almost worked straight away. It was annoying to keep having to tell drush which site to use with the -l flag though, so I have configured it for my needs.
I did this by making a .drush directory, and copying the example aliases.drush.php and drushrc.php in. I then amended drushrc.php to refer to my site and install directory.
Having done that, I then began to use it.
I think being able to update the core using:
drush up drupal
is very cool!
Similarly I can check the module status using:
drush ups
Individual modules can be done using:
drush up <module>
And then I can download modules using (make sure you are in the right place!):
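The command itself didn’t make it into my notes; it is drush’s pm-download shortcut, following the same <module> placeholder pattern as the update commands above:

```
drush dl <module>
```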
The one thing that I didn’t realise initially is that you can’t use the gitweb URL to do clones etc. I spent ages trying to do this until I found Seth’s page. It explains things in a very structured manner that I suspect can be applied to most situations.
The only other thing I should point out relates to rewrites. If you use them in an Apache configuration section that is higher up than the site everything will be accessed from, you need to remember to set the following, otherwise they will be ignored:
RewriteEngine On
RewriteOptions Inherit
So, in my case, I am accessing git via a VirtualHost. The virtual host needed these lines adding to it, otherwise the rewrite configuration in conf.d/gitweb didn’t get picked up.
To enable LDAP, I also had to do this:
sudo a2enmod authnz_ldap
sudo a2enmod cgi
sudo service apache2 restart
In the end, to have a Git repository authenticating against LDAP (including group membership), with GitWeb, some aliases, source IP restrictions and some rewrites, I ended up with a gitweb file that looks like this:
Alias /<gitweb alias> /usr/share/gitweb
Alias /<shortened gitweb alias> /usr/share/gitweb
RewriteEngine On
RewriteRule ^/<shortened gitweb alias>/([^/]+)$ /<shortened gitweb alias>/?p=$1 [R,NE]
RewriteRule ^/<shortened gitweb alias>//([^/]+)/([0-9a-f]+)$ /<shortened gitweb alias>/?p=$1/.git;a=commitdiff;h=$2 [R,NE]
RewriteRule ^/<shortened gitweb alias>/([^/]+)/([0-9a-f]+)$ /<shortened gitweb alias>/?p=$1;a=commitdiff;h=$2 [R,NE]
<Directory /usr/share/gitweb>
Options +FollowSymLinks +ExecCGI
AllowOverride all
AddHandler cgi-script .cgi
Order deny,allow
Deny from all
Allow from <restricting IP addresses>
SSLRequireSSL
AuthType basic
AuthName "Private git repository"
AuthBasicProvider ldap
AuthLDAPURL "ldap://<ldap server>:<port>/<LDAP User DN>?<LDAP User ID>?sub?(objectClass=*)"
Require valid-user
AuthLDAPGroupAttribute memberUid
AuthLDAPGroupAttributeIsDn off
Require ldap-group <LDAP Group DN>
</Directory>
ScriptAlias /<shortened git alias>/ /usr/lib/git-core/git-http-backend/
<Directory "/usr/lib/git-core/">
Options +ExecCGI
SetEnv GIT_PROJECT_ROOT <path to projects>
SetEnv GIT_HTTP_EXPORT_ALL
Order deny,allow
Deny from all
Allow from <restricting IP addresses>
SSLRequireSSL
AuthType basic
AuthName "Private git repository"
AuthBasicProvider ldap
AuthLDAPURL "ldap://<ldap server>:<port>/<LDAP User DN>?<LDAP User ID>?sub?(objectClass=*)"
Require valid-user
AuthLDAPGroupAttribute memberUid
AuthLDAPGroupAttributeIsDn off
Require ldap-group <LDAP Group DN>
</Directory>
And we are done (well other than making the virtual host allow the rewrites).
Just to prove it, here is a sample checkout:
~/temp$ git clone https://<server>/<GIT Alias>/test.git
Cloning into 'test'...
Username for 'https://<server>': <good user>
Password for 'https://<good user>@<server>':
remote: Counting objects: 10, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 10 (delta 0), reused 4 (delta 0)
Unpacking objects: 100% (10/10), done.
~/temp$ rm -rf test
~/temp$ git clone https://<server>/<GIT Alias>/test.git
Cloning into 'test'...
Username for 'https://<server>': <bad user>
Password for 'https://<bad user>@<server>':
fatal: Authentication failed