Author Archives: Paolo Brocco

DSC: A configuration is pending: quick solution

Sometimes a DSC script doesn’t work: it fails silently and stays in the system as “pending”. Then, when I try to run a new DSC script, I get this error:

A configuration is pending. If you are in Pull mode, please run Update-DscConfiguration to pull a new configuration and apply it. If you are in Push mode, please run Start-DscConfiguration command with -Force parameter to apply a new configuration or run Start-DscConfiguration command with -UseExisting parameter to finish the existing configuration.

Normally I don’t have many deployments running at the same time, so I can afford to run this PowerShell command:

Remove-DscConfigurationDocument -Stage Pending

This removes all pending DSC configurations.
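If you want to double-check that nothing is still pending before pushing a new configuration, a minimal check like this should do (assuming WMF 5 or later):

# The Local Configuration Manager state should be "Idle";
# "PendingConfiguration" means a configuration document is still staged.
(Get-DscLocalConfigurationManager).LCMState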

How to blacklist nvidia in grub

This guide is for you if you passed your GPU through to a VM and now want to use it in your host, while keeping an easy way to reboot into a grub entry that passes it through again.

Introduction

If you have a setup similar to mine, e.g. you pass your nvidia graphics card through to your VM for gaming, but you also want to take advantage of your powerful GPU for daily use in your host:

  1. first of all configure your system so that the nvidia driver is working correctly;
  2. then you can create a new entry in grub, so that when you boot you can choose it for gaming.

Assumptions

I assume that you have your system set already correctly to run a VM with GPU passthrough e.g. as explained in this very good guide: Play games in Windows on Linux! PCI passthrough quick guide.

If this is the case, vfio is loaded before any other modules so that it can claim your GPU (if it’s blacklisted). Basically you should have these entries in your /etc/modules (this may vary a bit if you use AMD or need other modules than I do):

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
kvm
kvm_intel
apm power_off=1

1. Configure NVIDIA

If your system is set as in the assumptions mentioned above, you now need to “de-configure” your setup, so that you can use your GPU in your host. Don’t worry, later you will re-add all the options you deactivate, but in a new grub entry (see step 2).

In my case, the nouveau and nvidia drivers were blacklisted in /etc/modprobe.d/blacklist.conf, so I deactivated the blacklist by commenting out those lines:

#blacklist nouveau
#blacklist nvidia

I also had to comment the options I added in my /etc/modprobe.d/vfio.conf :

#options vfio-pci ids=10de:13c0,10de:0fbb,8086:a12f

To make sure that the nvidia installation would detect my graphics card, I rebooted. Maybe this was not strictly needed, but I preferred to be safe:

sudo reboot

Installing the nvidia driver for my GTX 980 GPU was as easy as running this command:

sudo apt install nvidia-384

Now reboot and check if your nvidia graphics card works as expected.
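If you prefer checking from a terminal, something like this should confirm that the proprietary driver has claimed the card (nvidia-smi ships with the driver package):

# show which kernel driver is bound to the GPU ("Kernel driver in use" should say nvidia)
lspci -nnk | grep -A 3 -i vga
# query the driver itself: prints the GPU model, driver version and current utilization
nvidia-smi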

For testing you may want to install glmark2, a tool to benchmark OpenGL:

sudo apt install glmark2
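Running it is as simple as this; it cycles through a series of OpenGL test scenes and prints a score at the end:

glmark2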

You can install Steam and play some games in Linux. Why play games in a VM if you can play them directly in your host?! Try it.

2. Create an entry in grub to boot your system with GPU passthrough

I was looking for this method for a while; I even had two Linux distros installed: one for VMs, one for daily use. With this option, that is no longer needed. I can have just one Linux distro, with a separate grub entry that boots it with the options needed for my VMs.

Important: never edit /boot/grub/grub.cfg directly, as it gets overwritten when you run the update-grub command. Instead, add custom entries as follows:

sudo pluma /etc/grub.d/40_custom

In my case, I copied the entry from /boot/grub/grub.cfg and edited it as follows, to include the vfio options and to blacklist nvidia. Make sure to replace the vfio PCI IDs with those of the devices you want to pass through:

menuentry "Bionic VMs" {
	set root='hd0,gpt1'
	linux	/vmlinuz-4.15.0-20-generic root=/dev/mapper/vg0-bionic ro acpi=force apm=power_off intel_iommu=on vfio-pci.ids=10de:13c0,10de:0fbb,8086:a12f modprobe.blacklist=nouveau,nvidia,nvidia_uvm,nvidia_drm,nvidia_modeset
	initrd	/initrd.img-4.15.0-20-generic
}

Save, update grub, reboot and enjoy!

sudo update-grub
sudo reboot

Troubleshooting

If you can’t see your grub menu, or it disappears too fast when you boot, you may customize it a bit. Here is how my grub is configured in /etc/default/grub:

#GRUB_DEFAULT=3
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=false
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX_DEFAULT="acpi=force apm=power_off intel_iommu=on"
GRUB_CMDLINE_LINUX=""
GRUB_DISABLE_OS_PROBER=true

I kept some lines commented, as sometimes I need to play with these options. I kept my timeout very low, just one second, because I want my system to boot fast; increase it if that’s too fast for you, and keep the GRUB_HIDDEN_TIMEOUT line commented, as I did. When I need to boot differently, I just keep pressing the up/down arrow keys at boot time until I see the grub menu.

Conclusions

I hope this guide was helpful. If so, consider buying a gadget at Banggood using my referral link. This way (compared to a donation) we both benefit: you get a gadget that may be useful to you, and I get a small commission, while the price for you stays the same.

Ubuntu guide: Dropbear SSH server to unlock LUKS encrypted PC

This guide explains how to unlock a LUKS-encrypted Ubuntu system via SSH. This is convenient if, for example, you want to turn on a server that has no keyboard and screen attached, or you don’t have physical access to it at all. I assume that you already know how to set up an OpenSSH server and how to activate/deactivate public key login; otherwise, read Ubuntu Help: OpenSSH Server and check other online resources.

This guide was tested with Ubuntu 18.04 and Ubuntu 17.10.

To connect from Windows, I used ssh from bash (if you install Git for Windows you get bash).

Open a terminal and install dropbear and busybox:

sudo apt install dropbear busybox

As the installation completes you will get this warning: dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won’t work! Just ignore it for now.

Activate BUSYBOX and DROPBEAR in initramfs

sudo nano /etc/initramfs-tools/initramfs.conf

Change the BUSYBOX=auto option to BUSYBOX=y and add (below it or at the end of the file) this line:

DROPBEAR=y
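After the edit, a quick grep should show both options enabled:

grep -E "^(BUSYBOX|DROPBEAR)=" /etc/initramfs-tools/initramfs.conf
# expected output:
# BUSYBOX=y
# DROPBEAR=y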

Browse to the /etc/dropbear-initramfs/ directory, which contains all the dropbear configuration that needs to be included in the initramfs:

cd /etc/dropbear-initramfs/

Note: host keys are already present, as they were automatically generated during the installation of the dropbear package, so there is no need to create new ones as other guides tell you to do. Just convert the RSA one, as follows:

sudo /usr/lib/dropbear/dropbearconvert dropbear openssh dropbear_rsa_host_key id_rsa
sudo dropbearkey -y -f dropbear_rsa_host_key |grep "^ssh-rsa " > id_rsa.pub

Add your client’s public key to authorized_keys. If you are logged in to your machine via SSH and your public key is already in your ~/.ssh/authorized_keys file, you can copy the existing file (the target directory is root-owned, hence the sudo):

sudo cp ~/.ssh/authorized_keys .

Otherwise you can append a public key manually. Note that sudo echo your_public_key >> authorized_keys would not work, because the redirection runs as your user, not as root; use tee instead: echo "your public key" | sudo tee -a authorized_keys

Set dropbear to start:

sudo nano /etc/default/dropbear

Change NO_START=1  to NO_START=0 

In dropbear, use a different port from the one your host’s OpenSSH server uses, so you won’t get the annoying “man-in-the-middle attack” warning from your SSH client when it notices that the host presents different keys. Different ports are treated as different hosts, so you won’t get any warning at all. I’ve seen other, more complicated solutions to avoid the warning, but I think that using a different port is the easiest and most elegant one.

sudo nano /etc/dropbear-initramfs/config

Uncomment the DROPBEAR_OPTIONS line and add the option to specify the port. In this example I use port 21; use whichever port you prefer.

DROPBEAR_OPTIONS="-p 21"

Now add the script that will be needed to actually unlock your LUKS partition:

sudo nano /etc/initramfs-tools/hooks/crypt_unlock.sh

Copy and paste the contents of gusennan’s script into the file (or copy the text from the raw file), then make it executable:

sudo chmod +x /etc/initramfs-tools/hooks/crypt_unlock.sh

Update initramfs:

sudo update-initramfs -u

Disable the dropbear service on boot, so it won’t interfere with your openssh server:

sudo systemctl disable dropbear

Important: I had to update grub and disable the splash screen, because with splash active, after connecting to dropbear and typing unlock, the screen was blocked and I could not enter the LUKS password.

sudo nano /etc/default/grub

In the GRUB_CMDLINE_LINUX_DEFAULT line, replace "quiet splash" with "quiet", as follows:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

Save and update grub:

sudo update-grub

Reboot your server:

sudo reboot

Try to connect to your machine. You must use the root user, and specify the port you configured in the previous step:

ssh root@YOURSERVER -p 21

Once connected you will see something like this:

Warning: Permanently added '[YOURSERVER]:22,[YOURIP]:22' (ECDSA) to the list of known hosts.
To unlock root partition, and maybe others like swap, run `cryptroot-unlock`
To unlock root-partition run unlock

BusyBox v1.22.1 (Ubuntu 1:1.22.0-19ubuntu2) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Type unlock, insert your LUKS password, and if everything worked correctly your partition will be decrypted and your machine will boot. You will see this:

...a bunch of other info...
Connection to 192.168.0.xx closed.

Give it time to boot, then you can finally ssh into your linux box, as usual.

Encrypted HOME directory

If not only your partition but also your home directory is encrypted, you won’t be able to log in with your public key, as the key is saved in ~/.ssh/authorized_keys, which is encrypted.

To solve this, follow Stephen’s Encrypted Home directories + SSH Key Authentication guide.

Troubleshooting

If you get this error when you try to connect to your server, it’s because you didn’t follow my advice to change the port in dropbear:

ssh root|youruser@YOURSERVER
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@	WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! 	@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:m/************/****.
Please contact your system administrator.
Add correct host key in /home/youruser/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/youruser/.ssh/known_hosts:12
  remove with:
  ssh-keygen -f "/home/youruser/.ssh/known_hosts" -R "YOURSERVER"
ECDSA host key for YOURSERVER has changed and you have requested strict checking.
Host key verification failed.

I still prefer my solution, but if you insist on using the same port, here are a few nerdy solutions:

Solution 1 works like a charm on Linux, but not really in bash on Windows.

Solution 2 provides a command-line hack to avoid the warning:

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no youruser|root@YOURHOST -p YOURPORT

Credits

This guide was inspired by: https://oliverviebrooks.com/2017/12/05/unlocking-luks-volumes-without-local-access/

Thanks also to Stephen (link above) for his encrypted home directories solution.

I hope this guide was helpful. If so, consider buying a gadget at Banggood using my referral link. This way (compared to a donation) we both benefit: you get a gadget that may be useful to you, and I get a small commission, while the price for you stays the same.

Change Docker images location in Windows

One of the things I wish I had known before using Docker for Windows is that configuring where images and containers are stored is not really straightforward. Here is how to change the default location from C:\ProgramData\Docker to whatever drive and folder you like. In my case, I like to keep the same structure, but on D:, like this: D:\ProgramData\Docker

To change the location via UI, from your system tray, right click on the docker (the whale) icon:

Note: this assumes that you are using Docker from the “stable” channel. With “edge” you may not have the “Daemon” option shown in the screenshot; in that case, keep reading to change the file manually.

Then from the menu, select “Settings…” > “Daemon”, click on the “Advanced” switch, and add the following (notice that every backslash is escaped with another backslash). Feel free to change the location to your desired one:

"graph": "D:\\ProgramData\\Docker"

The result should look like this:

Click “Apply”, docker will restart and you are set.

Alternatively you can edit the C:\ProgramData\Docker\config\daemon.json file and add the “graph” property with your favorite text editor, then save and restart the docker service.
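For reference, a minimal daemon.json containing only this setting would look roughly like this (if your file already has other properties, just add the "graph" line to them):

{
  "graph": "D:\\ProgramData\\Docker"
}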

To restart docker: right click on the docker icon > at the bottom click on “Restart”.

Unfortunately, the old images stay in the old location and it’s up to you to manually delete them after restarting Docker. They are inside C:\ProgramData\Docker\windowsfilter

I’m not sure if you can simply move them to the new location; I didn’t bother. I tried to copy the files from windowsfilter but got some errors and saw that some links were not copied correctly, so I gave up and simply rebuilt my Dockerfiles to generate new, clean images and containers from scratch.

Ubuntu: how to prevent grub installation

I have several Linux installations on my system, and I like to control the single boot partition from my main Linux distro.

Sometimes I need to update the kernel image in my secondary Linux distros, but apt then tries to install grub. I don’t want that, and I often had to let it install and then remove it via apt remove.

A better, permanent solution is to add an apt preferences file.
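I don’t reproduce the linked file verbatim here, but an apt pinning file along these lines (the file name and the grub* glob are my own choice) prevents apt from ever installing the grub packages. Save it, for example, as /etc/apt/preferences.d/no-grub:

Explanation: never install any grub package on this secondary distro
Package: grub*
Pin: release *
Pin-Priority: -1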

Thanks to jdthood’s answer on Stack Overflow.

Uninstall DaVinci Resolve 14 on Ubuntu

If you, like me, were tempted to download DaVinci Resolve “free”* for Linux, be aware that when you add folders to your media library, you won’t see your videos encoded in H.264 (which is the most widely used format).

Install instructions

Unzip the contents, then from the terminal run:

sudo sh DaVinci_Resolve_14.0.1_Linux.sh

Try to launch it from the command line too; you will find the “resolve” program in /opt/resolve/bin

If you see some errors about missing .so files, you may need to install the packages containing the libs:

sudo apt install libssl-dev

You may also need: libgstreamer-plugins-base1.0-0

Then create symbolic links as follows:

sudo ln -s /lib/x86_64-linux-gnu/libcrypto.so.1.0.0  /usr/lib/libcrypto.so.10
sudo ln -s /lib/x86_64-linux-gnu/libssl.so.1.0.0  /usr/lib/libssl.so.10
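To check whether anything is still missing after creating the links, running ldd against the main binary should tell you:

# list the shared library dependencies of the resolve binary and show only unresolved ones
ldd /opt/resolve/bin/resolve | grep "not found"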

I wanted to quickly try this video editor on Ubuntu 17.04 and I was highly disappointed not to be able to import any of my DJI drone videos. What’s the point of a powerful professional editor that doesn’t even natively open any of my videos, while other open source editors like Kdenlive and OpenShot can open these files? And no, I don’t want to convert my videos to a different format just to open them in DaVinci.

Uninstall instructions

I also struggled to find a way to uninstall this software, so here are the instructions if you want to clean your hard drive after a disappointment similar to mine:

sudo rm -f -r /opt/resolve
sudo rm /usr/share/applications/DaVinci\ Resolve.desktop

If this page was helpful, visit Banggood to find a cheap gadget or something useful for a few bucks (free delivery worldwide!). If you buy from my link, I get a small commission. Thank you.

Jenkins: checkout Gerrit patchset (Gerrit Trigger plugin)

Today I had to setup automatic pipeline triggering for each new patchset pushed to Gerrit for review. The Gerrit Trigger plugin makes it a piece of cake to achieve the goal.

In reply to How to Checkout a Gerrit Change in a Jenkins Sandbox Pipeline: such a snippet can easily be generated directly in Jenkins: browse to your pipeline > click “Configure” > click “Pipeline Syntax” > under Sample Step select checkout: General SCM > fill in what you need, click on “Advanced”, add a refspec, and generate the snippet. Here is a snippet using the GERRIT variables exposed by the plugin.

node {
    stage('checkout gerrit patchset') {
        echo "gerrit branch: ${GERRIT_BRANCH}, gerrit refspec: ${GERRIT_REFSPEC}"
        checkout([$class: 'GitSCM',
                  branches: [[name: "${GERRIT_BRANCH}"]],
                  doGenerateSubmoduleConfigurations: false,
                  extensions: [[$class: 'CleanBeforeCheckout']],
                  submoduleCfg: [],
                  userRemoteConfigs: [[credentialsId: 'jenkins-rsa',
                                       refspec: "${GERRIT_REFSPEC}",
                                       url: 'ssh://yourgerritserver:29418/yourrepo']]])
    }
}

Note: extensions: [[$class: 'CleanBeforeCheckout']] is a good idea if you need to build from different branches; if your setup is simpler, you can just use extensions: [].

MSBuild command line ignoring publish properties: solution

Does the MSBuild 2017 command line seem to ignore your publish profile? Are you trying to run a command similar to this one, and your projects/artifacts are not copied where they are supposed to be copied?

msbuild /p:Configuration=Release /p:DeployProjects=true /p:PublishProfile=Release

I have msbuild in my PATH; in my case it is located here:

C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\amd64\msbuild.exe

Maybe you are using a build server with Jenkins or a similar tool and you did not install Visual Studio 2017. I have such a setup and I installed just the standalone MSBuild tools package; find it at the bottom of the Visual Studio Downloads page, under Build Tools for Visual Studio 2017.

This seems to be a well-known issue: somebody reported the problem on GitHub, and many people joined the thread with “same here” comments.

Solution

In my case, I noticed I had forgotten to also select Web development build tools > .NET Framework 4.6.2 development tools:

Updating the MSBuild tools and adding this missing component did the job. Now when I run msbuild, I get my artifacts as expected.

If this solution helped you, consider buying something at my favorite affiliate website. I consider this better than asking for a donation, because this way you get a product you may need and use, and I just get a little commission, which doesn’t affect the price: a win-win 😉

Maintain an offline NuGet source

To restore NuGet packages in an offline VM, at work we regularly need to add those packages to a folder. We do it from our local packages folder, running a command like this:

C:\MySolution\packages>..\.nuget\NuGet.exe init . D:\NuGetSource

Then on the build server we configure a task to run this command before building the solution:

.nuget\nuget restore MySolution.sln -Source d:\NuGetSource

I’m sharing this because I thought I could manually create a folder with the package name, then a subfolder with the version, and then copy the .nuspec and .nupkg files there. But the package I needed was never found. It seems that the init command I shared above also adds a .nupkg.sha512 file; this is the only difference, and probably the reason the package was not found.
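For the record, the layout that nuget init produces looks roughly like this (the package name and version are made up for the example); note the extra .nupkg.sha512 file next to each package:

D:\NuGetSource\
    some.package\
        1.2.3\
            some.package.1.2.3.nupkg
            some.package.1.2.3.nupkg.sha512
            some.package.nuspec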

iOS App Store Submission headaches

At work, from time to time we have to update our app. With Android the process (automated in TeamCity) goes smoothly. With iOS, if you are reading this, chances are you already know that it can be quite a pain: expired certificates, macOS updates, Xcode updates (at the time of writing we use Xcode 8.2.1), plist updates, changes to command-line invocations… even if you want to keep using an old setup you can’t; you always need to update something (grr!).

In our TeamCity setup, we have the following two steps:

echo "Step 1: Building xcode project..."
 xcodebuild \
 -configuration Release \
 -project "./myapp.xcodeproj" \
 -scheme "MyApp" \
 -archivePath "./build/myapp.xcarchive" \
 clean build archive

and:

echo "Step 2: Exporting application..."
 xcodebuild \
 -configuration Release \
 -exportArchive \
 -exportFormat ipa \
 -archivePath "./build/myapp.xcarchive" \
 -exportPath "./build/myapp.ipa"

After the second step, we normally get an IPA file and upload it to iTunes Connect via Application Loader. Last week, after an apparently successful upload, I checked as usual in iTunes Connect > My App > Activity > All Builds, but I could not see the build! I waited a few more minutes: nothing. Usually it should show up as “processing”, which can take some time, but I was worried to see nothing at all… so I checked the command-line output and noticed:

Codesign check fails : /var/folders/vk/…/myapp.app: a sealed resource is missing or invalid

and also:

xcodebuild: WARNING: -exportArchive without -exportOptionsPlist is deprecated

So I thought that maybe it would work better using the Xcode UI: “Product” > “Archive”, then “Window” > “Organizer” > “Upload to App Store…”. This somehow gave more feedback:

ERROR ITMS-90035: “Invalid Signature. A sealed resource is missing or invalid.[…]”

It seems to be the same problem, but this time I could see the error on each upload attempt. So, as suggested in this blog post by Ash: iOS App Store Submission Problems, I added an export.plist file containing just the bare minimum:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>teamID</key>
 <string>MYTEAMID123</string>
 <key>method</key>
 <string>app-store</string>
 <key>uploadSymbols</key>
 <true/>
</dict>
</plist>

I had to find out our team id: https://developer.apple.com/account/#/membership

Then I adapted the export step and added the exportOptionsPlist option, like this:

echo "Exporting application..."
 xcodebuild \
 -configuration Release \
 -exportArchive \
 -exportOptionsPlist export.plist \
 -archivePath "./build/myapp.xcarchive" \
 -exportPath "./build/"

I could upload the resulting IPA via Application Loader and could see the build processing in iTunes Connect, but then it disappeared after a few seconds (I was reloading the page to see if there were updates). This time my boss received an e-mail saying that something was wrong. In our case:

*Missing Info.plist key* – This app attempts to access privacy-sensitive data without a usage description. The app’s Info.plist must contain an NSCameraUsageDescription key with a string value explaining to the user how the app uses this data.
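The fix itself is a one-line addition to the app’s Info.plist; the description string below is only a placeholder, write something that actually explains how your app uses the camera:

<key>NSCameraUsageDescription</key>
<string>This app needs camera access to scan documents.</string>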

After adding the missing Info.plist key and value, the app could be uploaded and processed without problems. Written like this, the story seems kinda easy, but I wasted at least half a day trying different commands and figuring out the proper solution. I’m sharing it in the hope of helping a poor soul with similar problems, as Ash did (thx man!). Good luck!