The First Cut is the Deepest

Running LTIB for the First Time

I was recently reminded that getting LTIB running for the first time can be a bit frustrating, so I thought I’d do it on a freshly-installed instance of Xubuntu that I have running on my Asus netbook, and write up a quick tutorial.

I’ve also been touting that I can cross-compile a rootfs for the Pi “in under 10 minutes” with LTIB.  And that’s absolutely true on my 6-core Xeon Linux workstation.  But I realize that, even though that machine is a couple of years old, it’s still considered to be pretty fast.

So we’ll fire up my nearly 4-year-old Aspire One and see how she does.   That should put an upper bound on any time claims.  After all, who in their right mind would use a 1.6GHz Atom machine as a build platform?

[Update: 3 hours and 18 minutes wall-clock time to build the basic RPi configuration from scratch — with the vast majority of that building the kernel + modules — on a 4-year-old Netbook.]

Getting LTIB

This has been discussed before, but, briefly: to get the latest code from the repository (and you want the latest, as RPi support is not yet in any of the static releases), you’ll need perl5 and cvs installed.   Then issue the following commands:

wget http://ltib.org/pages/netinstall.txt
perl netinstall.txt

You should now have LTIB installed in the directory that you specified to netinstall.txt.

Installing packages

Before we run LTIB, let’s make sure the host is configured more-or-less properly.  LTIB will attempt to tell you what packages it needs to have installed on the host system, but the list falls short.   The LTIB mailing list provided an informal list of packages needed for Ubuntu around v12.04.  I’ve encapsulated that knowledge into the following script that you can run:

#!/bin/bash

set -e

sudo apt-get install -y patch
sudo apt-get install -y g++
sudo apt-get install -y rpm
sudo apt-get install -y zlib1g-dev
sudo apt-get install -y m4
sudo apt-get install -y bison
sudo apt-get install -y libncurses5-dev
sudo apt-get install -y libglib2.0-dev
sudo apt-get install -y gettext
sudo apt-get install -y build-essential
sudo apt-get install -y tcl
sudo apt-get install -y intltool
sudo apt-get install -y libxml2-dev
sudo apt-get install -y liborbit2-dev
sudo apt-get install -y libx11-dev
sudo apt-get install -y ccache
sudo apt-get install -y flex
sudo apt-get install -y uuid-dev
sudo apt-get install -y liblzo2-dev

if [ "$(uname -m)" = "x86_64" ] ; then
  sudo apt-get install -y ia32-libs gcc-multilib lib32g++

  if [ ! -e /usr/lib32/libstdc++.so ] ; then
    sudo ln -s /usr/lib32/libstdc++.so.6 /usr/lib32/libstdc++.so
  fi

else
  echo "Not 64-bit; skipping unneeded packages"
fi

Of course, package names can change, and other distros may have other dependencies, but this is the script that worked for my netbook.
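Once the script finishes (or if you’re on a distro where those package names don’t apply), a quick sanity check is to probe the PATH for the tools LTIB leans on.  A sketch; the tool list here is a sampling, not exhaustive:

```shell
# Report any build prerequisites that aren't on the PATH.
# Extend the list to taste -- it is not exhaustive.
for tool in perl cvs patch g++ rpm m4 bison flex gettext ccache; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```

Anything it prints is something you’ll want to install before running ./ltib.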

Configuring a proxy server

Chances are you don’t need to configure a proxy server to use LTIB.   But if you get a timeout message when attempting to download packages from the gpp (Global Package Pool), you might need one.  Here’s how to set it up.

Unfortunately, LTIB doesn’t use the usual proxy server environment variables (http_proxy, HTTP_PROXY, etc.) but instead relies on the .ltibrc file in the directory where you checked out the ltib source.

# The HTTP proxy for internet access
# example: http://USER:PASSWORD@somehost.dom.net:PORT
%http_proxy
http://proxy.midnightyell.net:8080

And then further down, change %gpp_proxy from 0 to 1

%gpp_proxy
1
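Those two edits are easy to script, too.  A sketch, run against a scratch copy so you can see the effect before touching your real .ltibrc (the proxy URL is a placeholder; substitute your own):

```shell
# .ltibrc keeps each %key on its own line with the value on the
# following line, so use sed's `n` to step onto the value line
# before substituting.  Demonstrated on a scratch file:
printf '%%http_proxy\n\n%%gpp_proxy\n0\n' > /tmp/ltibrc.demo

# Set the %http_proxy value line:
sed -i '/^%http_proxy/{n;s|.*|http://proxy.example.com:8080|;}' /tmp/ltibrc.demo

# Flip %gpp_proxy from 0 to 1:
sed -i '/^%gpp_proxy/{n;s/^0$/1/;}' /tmp/ltibrc.demo

cat /tmp/ltibrc.demo
```

Once you’re happy with the result, point the same sed commands at the real .ltibrc in your checkout.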

Running ltib

Change into the directory where you checked out the ltib tree, and type ./ltib.   You’ll see

Installing host support packages

and a warning about how this may take a long time the first time you run it.   What’s happening here is that LTIB is building and installing more packages needed for its own operation in /opt/ltib.   It’s installing them in a separate and distinct RPM database, so there’s no worry about it clobbering the versions of packages you already have installed.   If you ever need to change the packages in /opt/ltib, do the following:

./ltib --hostcf

And that will let you configure the packages installed on the build system.

Pre-Positioning the RPi Toolchains

While we’re waiting for LTIB to complete setup, we might as well look ahead a little bit.

Once we run ltib and choose RPi as our platform, it will complain about not being able to find the RPM that contains the toolchains.   The maintainers of the Global Package Pool are unable to host the RPi toolchains, so we must fetch it and place it in the Private Package Pool or in a local directory (one specified in .ltibrc:%ldirs) so that LTIB can find it.

The easiest thing to do is to download the official RPi toolchains as an RPM file from https://github.com/downloads/midnightyell/RPi-LTIB/raspberrypi-tools-9c3d7b6-1.i386.rpm and place the file in /opt/freescale/pkgs or /opt/ltib/pkgs.   LTIB will then install the RPM when it needs one of the toolchains contained therein.

Finally!

./ltib -c

Choose the Raspberry Pi as the platform, then exit, Save = yes.

The first time compiling for a new platform takes a bit longer than subsequent builds.   This is because LTIB actually caches the source tarballs in /opt/ltib/pkgs, so once the 100M kernel source tarball has been downloaded, you don’t have to do it again.

In fact, LTIB caches binary RPMs in rpm/RPMS/arm so if it needs to install a package that was previously built and hasn’t changed, it won’t waste time recompiling it.

Don’t go too crazy at first, choosing packages willy-nilly.  Not all packages work on the pi yet.  Start with the default configuration.   It will build a kernel from source, and use busybox for most of the utilities in the rootfs.  If all goes well, at the end of the build, you will be prompted with a banner like:

 _   _            _ 
| | | | ___ _   _| |
| |_| |/ _ \ | | | |
|  _  |  __/ |_| |_|
|_| |_|\___|\__, (_)
            |___/

That instructs you to issue 1-2 sudo commands in order to build the RPi SD card image. Unfortunately, the post-build script is executed with regular user privileges, and the LTIB maintainers decided that it would be better to prompt the user to issue the sudo commands than to potentially block an automated build by prompting for a sudo password.

In any case, you can edit config/platform/rpi/post_build.sh so that it executes the commands rather than echoing them to the screen.    If you do, it makes life easier to modify /etc/sudoers so that the commands can be executed with no password prompt.

My /etc/sudoers has the following line (added via visudo, which checks the syntax before saving), and my build user is a member of the admin group, so this works for me:

%admin ALL = NOPASSWD: ALL

Writing the SD card image

The rpi_sdcard.img file that results from all of this is a bootable image that needs to be written to the SD card with dd.

Be.  Extremely. Careful. to ensure that you are writing to the correct device.   Failure to Be.  Extremely. Careful.  will result in you overwriting another disk drive in your system with this RPi image.   This would be most unfortunate if it were the disk you’ve booted from, or one containing your vacation pictures from the Tesla Museum.  Or the birth of your first child.

If you’re not 100% sure of what you’re doing, stop now.

Okay, that said, I determined that my SD card reader shows up as /dev/sdb on my machine.  There are several ways to do this, including watching the log for attach messages when you insert the card reader.  You might also try cat’ing /proc/partitions to see what’s new when you insert the SD card.
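If you’d rather not grovel through logs, a before/after snapshot of /proc/partitions makes the new device obvious.  A sketch:

```shell
# Snapshot the kernel's partition listing, insert the card, snapshot
# again, and diff; whatever shows up as new is your SD card (and its
# partitions).
cat /proc/partitions > /tmp/parts.before
# ... insert the SD card / card reader now, then:
cat /proc/partitions > /tmp/parts.after
diff /tmp/parts.before /tmp/parts.after || true
```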

So, noting that we’re writing to the disk device (/dev/sdb) and not a partition (/dev/sdb1), for me, on my system (Be. Extremely. Careful. of what you’re doing before you cut & paste this command), the following command writes the image to my SD card:

sudo dd if=rpi_sdcard.img of=/dev/sdb bs=1M ; sync
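If you want a belt-and-braces version, here is a hypothetical wrapper that refuses to write if the target device (or anything on it) appears in /proc/mounts.  The function name and arguments are mine, not LTIB’s:

```shell
#!/bin/sh
# Hypothetical guard around the dd step: bail out if the target shows
# up in /proc/mounts, either as a device (/dev/sdb1) or a mount point.
# Usage: safe_write rpi_sdcard.img /dev/sdX
safe_write() {
    img="$1"; dev="$2"
    if grep -Eq "^$dev|[[:space:]]$dev[[:space:]]" /proc/mounts; then
        echo "$dev appears in /proc/mounts -- refusing to write" >&2
        return 1
    fi
    sudo dd if="$img" of="$dev" bs=1M && sync
}
```

It won’t save you from picking the wrong unmounted disk, but it catches the most common class of accident.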

That’s it.

Next up: useful ./ltib tricks, and troubleshooting.


But I don’t want to go on the cart!

Is LTIB really the right choice for cross-compiling for the Raspberry Pi?  I mean, it’s designed to create an entire board support package for an embedded Linux system:  a kernel, a bootloader, a root file system, binutils, system init, etc., etc., etc.

At first glance, that seems a bit like killing a horsefly with a flamethrower if all you want to do is cross-compile.  I was definitely of this opinion before I started making LTIB support the Pi.  I resisted doing the work.  I wanted to do things with my Pi, not futz around with things that were much like my day job.  This was supposed to be fun, right?  So I set out to manually cross-compile MAME for the Pi.

How’d it go?  Well, let’s take a look at my project notebook from a couple of weeks ago.

[Image: a page from my project notebook]

God, my handwriting is atrocious.

Since I was compiling and linking for the ARM, all the libraries needed by MAME also needed to be compiled for ARM and installed on my build system.  And they needed to be installed in places other than /lib and /usr/lib, so I was going to have to pass in those locations to all the config scripts for all the packages.  Some of the software packages required that other packages be installed on the build system in order to work properly, so those needed to be compiled for x86 so that they could make ARM binaries.

And you can see a partial list of libraries needed just for MAME:  gtk+, gconf, pango, cairo, fontconfig, glib, gdk-pixbuf, atk, pkg-config, libiconv, libffi...

I was starting to feel like I was looking for metal so that I could make a shovel so that I could mine for iron that I needed to make the tool necessary to do the job that I wanted to do.

Ugh!

But these frustrations are exactly the problems that LTIB was designed to solve.  The fastest path through the woods, it turned out, was to make a tool that I was familiar with support my favorite new platform.

I knew that once I got basic support for the Pi working in LTIB, getting it to cross-compile AdvanceMAME would be pretty straightforward.  I could then grab the resulting RPM and install it on Raspbian with no trouble at all.

More importantly, because it would cross-compile so quickly compared to a native compilation, I could easily try out different compilers, settings, etc. to see which combination produced the best binaries. [ Or which setup produced binaries at all. I ran into situations where I had to upgrade gcc from 4.6 to 4.7 when I was compiling advanceMAME inside QEMU or natively on the Pi; gcc-4.6 generated bad object files. It was painful to have a compilation fail like this after an hour. I was sold; cross-compiling in 5-10 minutes was for me! ]

Walkthrough for advanceMAME

To build advancemame-0.106, I did the following.

I put the source tarball into a directory mentioned in the lpp section of .ltibrc. /opt/freescale/pkgs, in my case.

In dist/lfs-5.1 I created advancemame/advancemame.spec as follows:

%define pfx /opt/freescale/rootfs/%{_target_cpu}

Summary   : Advance MAME Arcade Machine Emulator
Name      : advancemame
Version   : 0.106.1
Release   : 1
License   : MAME License
Vendor    : http://advancemame.org
Packager  : Midnight Yell
Group     : Applications/Entertainment
Source    : %{name}-%{version}.tar.gz
BuildRoot : %{_tmppath}/%{name}
Prefix    : %{pfx}

%Description
%{summary}

%Prep
%setup

%Build
if [ ! -e obj/mame/linux/blend/cpu/m68000/m68kmake ] ; then
  # m68kmake must be built natively and advancemame doesn't handle
  # native building & spoofed paths well, so un-spoof the paths and
  # make it.
  ORIG_PATH=$PATH
  export PATH=$UNSPOOF_PATH
  ./configure
  mkdir -p obj/mame/linux/blend/cpu/m68000/
  make obj/mame/linux/blend/cpu/m68000/m68kmake
  export PATH=$ORIG_PATH
fi

./configure --prefix=$RPM_BUILD_ROOT/%{pfx}/%{_prefix} --host=$CFGHOST --build=%{_build} --mandir=%{pfx}/%{_mandir}
CFLAGS="-O2 -march=armv6j -mfpu=vfp -mfloat-abi=hard" \
make

%Install
rm -rf $RPM_BUILD_ROOT
make install DESTDIR=$RPM_BUILD_ROOT/%{pfx}

%Clean
rm -rf $RPM_BUILD_ROOT

%Files
%defattr(-,root,root)
%{pfx}/*

And note that AdvanceMAME is one of those packages that builds a tool that needs to be run natively so that it can build other files that run on the ARM — most packages will have simpler .spec files.

To add advmame to the menu system you have to modify 2 files in config/userspace.  pkg_map:

PKG_ADVMAME = advancemame

And extra_packages.lkc:

config PKG_ADVMAME
bool "advmame" 
help
  This package is a MAME emulator. It contains no ROMs.

That was it!  The next ./ltib -c had advMAME as a menu item under Packages, and when finished, LTIB left an RPM in rpm/RPMS/arm/advancemame-0.106.1-1.arm.rpm

Installing the rpm on Raspbian is easy, though slightly non-standard in that you have to specify --relocate, because the paths in the rpm include /path/to/ltib/dir/rootfs/usr/bin and you want to actually install to /usr/bin.
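A sketch of what that invocation looks like; /path/to/ltib/dir is a placeholder for your actual LTIB checkout, and the relocation works because the spec file sets Prefix:

```shell
# Hypothetical invocation on the Pi -- substitute your real checkout
# path.  --relocate maps the build-time prefix back onto /.
sudo rpm -i --relocate /path/to/ltib/dir/rootfs=/ \
    advancemame-0.106.1-1.arm.rpm
```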

After I got AdvanceMAME and AdvanceMESS working, I made similar spec files for xtail, pocketsphinx, and a few others.

Next, I want to get LTIB’s version of X up-to-date.  I have something special in mind for my Pi running a low-resource version of X.   Something special indeed…

Cross-compiling for the Pi made easy *

(*) Or at least a bit easier.

Last weekend at the talk given by Rob Bishop of the Raspberry Pi Foundation at Austin Hackerspace, I got up and spoke a little about some work I’d been doing around cross-compiling for the Pi using LTIB (the Linux Target Image Builder).  As LTIB has now made the Raspberry Pi an officially supported platform, I thought I’d write up something to introduce it to the community at large.

[ Update: See also my posts on Using QEMU to build for the Pi, and Using distcc to make cross-compiling for the Pi even easier! ]

First, some background:

What’s cross-compiling?

When you build software on the same type of system that it’s going to run on, it’s called building natively.  If you’ve been around Linux for a while, this is what you’re used to doing.  You build on x86, for an x86 target.

Cross-compiling is when you build on one platform to run on another.  Building on x86 for an ARM target like the Pi, for instance.

Why would I want to?

Speed!  My desktop machine is something like 50 times faster than my Raspberry Pi.  The Linux kernel takes about 3 hours to build natively on the Pi, and less than 10 minutes to build on my desktop machine.

What about Virtual Box, or QEMU?

Both are fine choices.  I’ve done QEMU.  It’s considerably faster than building natively on the Pi, but it still gets blown away by cross-compiling.

Are there disadvantages to cross-compiling?

Sure.  The main one is that cross-compiling is much more complex than building natively or using an emulator.  You have to get specialized versions of the C compiler, linker, etc. (collectively called the toolchain) for your host/target combination.  You have to ensure that the package build system knows how to use the right toolchain.  When it comes time to link your software, it must be against libraries that are also compiled for your target, which are going to be in directory locations other than the standard ones.

And some packages (I’m talking to you, Python, Apache, and MAME) weren’t written with cross compilation in mind.  These packages might compile program A, and then run A on the build system in order to build programs B, C and D.  So you have to know that A needs to be built for the x86, even though B, C and D are to be run on the Pi.

You have to dig in, root around, and hack things together to make things work.   All in all, it can be a huge pain in the ass.

Yuck!  Nevermind!

Wait!  There’s hope!

There’s a project called LTIB (the Linux Target Image Builder) that is designed to make cross-compiling much easier.  LTIB is designed to build an entire Linux distribution (really, a rootfs; it’s only a distro if you distribute it) for a variety of platforms using cross-compilers. It hides much of the complexity from you, and automates most of the task.

I was aware of LTIB because I used it for work.  It was great, but it didn’t support the Pi.

Until now.

As you can see from the screenshots above, LTIB uses the same menu interface as the Linux kernel.  If you’re comfortable building a custom kernel, you’ll do just fine in LTIB.

LTIB lets you pick the toolchain, kernel, & userspace packages you want to install.  You can even choose Busybox, instead of the full-sized versions of many common UNIX utilities.  And you can add your own packages fairly easily, too!

LTIB downloads the source tarballs from a repository (or looks at places on your local network or disk), expands them, builds everything for the target architecture, and creates a root file system.  In the Raspberry Pi’s case, LTIB will generate a bootable SD card image, ready to be copied to a card via dd.

On my desktop machine, I can build a basic image for the Pi from scratch, including the kernel, and write it to an SD card all in under 10 minutes.  Subsequent builds are even faster since LTIB caches previous build output and rebuilds only what it needs to.

Sounds great, sign me up!

Excellent!

Go to LTIB.org and follow the download instructions.  Bitshrine.org isn’t able to host the official RPi toolchains, so an RPM containing all three of them is available at https://github.com/downloads/midnightyell/RPi-LTIB/raspberrypi-tools-9c3d7b6-1.i386.rpm.  You may either install this yourself via rpm -i, or place the rpm file in /opt/ltib/pkgs, and LTIB will install it for you on the first run.

How exactly do I run this thing?

For now, I refer you to the LTIB documentation.   Once it’s installed, start with ./ltib -c

I’ll write up more step-by-step instructions in a later post.  I’ll also write a walkthrough on how to add your own packages, and tips on getting LTIB to successfully cross-compile them.

Please keep in mind that LTIB for Pi isn’t perfect.  Many of the supported packages are out-of-date.  Some may not build.  Not all of the packages have correct dependency information.

But if you need a low-memory usage Linux for your Pi, this is a hell of a start.

Why did you do this?

Mostly as a learning experience.  I wanted to make my own cross-compiling toolchain and rootfs.  I wanted to have Linux running in as little RAM as possible so that more would be available to my applications.  And I had already been working on related things at work, so I was fairly far along the learning curve when I started.
