EasyTether and NetworkManager

EasyTether is a handy application you can use to tether your Android phone regardless of restrictions put in place by your carrier.  It consists of an app for the phone and a program you run on your computer.  The two talk to each other through the phone’s USB debugging facility (i.e., through ADB, which is integrated).  On the computer side, Linux, Windows, and OS X are supported, and you can even use it with a Raspberry Pi.

I use this to occasionally tether my laptop, which runs Xubuntu.  On GNU/Linux, the PC-side application works by creating a tun (virtual) interface.  You start it from the command line like this ($ indicating the command prompt):

$ sudo easytether-usb

When I first did this (it must have been a few years ago), NetworkManager would immediately detect the interface and manage it, so I didn’t really have to do anything else.  As soon as I started the program, I’d have a network connection.  This no longer works under Xubuntu 16.04, however – the interface comes up, but I have to configure it manually.  No big deal, but now after running the above I have to do this:

$ sudo dhclient easytether-tap

Or this instead of dhclient, to do it manually:

$ sudo ifconfig easytether-tap 192.168.117.2
$ sudo route add default gw 192.168.117.1
$ sudo sh -c 'echo "nameserver 8.8.8.8" >> /etc/resolv.conf'

That works fine, but it’s kind of a pain.  NetworkManager is fairly capable these days, and it would be nice to have it do some of this for us again.  The reason it doesn’t is that, in the current version (1.2.0-0ubuntu0.16.04.2 on this machine), virtual interfaces it didn’t create itself are ignored by default.  Luckily, we can create a new connection and automate things.  Unfortunately, this doesn’t seem to be possible with the panel applet in Xubuntu, but we can do it from the command line using nmcli.

In the lines above, easytether-tap is the default name of the virtual interface that the PC application creates.  First, let’s create a new connection named ‘phone’ that uses this interface.  Then we’ll add a DNS server.  (Note that the commands below can be run as a normal user; no sudo needed.)

$ nmcli connection add ifname easytether-tap con-name phone type tun mode tap ip4 192.168.117.2 gw4 192.168.117.1
$ nmcli con mod phone +ipv4.dns 8.8.8.8

You could of course name it something other than phone.  Note that this configures the interface statically, which I did for simplicity.  Also, 8.8.8.8 is Google’s public DNS; you could substitute a different server’s IP.  Now, assuming you have easytether-usb running as in the first command in this post, you can bring the interface up like this:

$ nmcli connection up phone

And then bring it down like this:

$ nmcli connection down phone

And that’s it!  But we still have to start easytether-usb when we plug the phone in.  It’s possible to automate that, too, using NetworkManager’s dispatcher scripts.  I created the following script and saved it as 90easytether.sh:

#!/bin/bash

IFACE=$1
STATUS=$2

if [ "$IFACE" == "easytether-tap" ]
then
    case "$STATUS" in
        pre-up)
            logger -s "Starting easytether-usb daemon"
            easytether-usb
            ;;
        down)
            logger -s "Stopping easytether-usb daemon"
            killall easytether-usb
            ;;
    esac
fi

Now, move the script into the dispatcher.d directory, change the ownership and permissions, and create a symlink to the script in the pre-up.d directory:

$ sudo mv 90easytether.sh /etc/NetworkManager/dispatcher.d/
$ cd /etc/NetworkManager/dispatcher.d
$ sudo chown root:root 90easytether.sh
$ sudo chmod 755 90easytether.sh
$ cd pre-up.d/
$ sudo ln -s /etc/NetworkManager/dispatcher.d/90easytether.sh ./

Now we need to start the dispatcher service and enable it at boot:

$ sudo systemctl start NetworkManager-dispatcher.service
$ sudo systemctl enable NetworkManager-dispatcher.service

I also found I needed to restart NetworkManager itself:

$ sudo systemctl restart network-manager.service

Basically, that script is triggered by the pre-up and down actions, and starts and kills easytether-usb for us.  The symlink is necessary because NetworkManager only runs pre-up actions from scripts linked in the pre-up.d directory.  Now you plug in your phone, use the nmcli commands from above to bring the interface up and down, and that’s it!

One caveat: you need to be able to use USB debugging on your phone.  Also, you may first have to confirm on the phone that you want to allow the connection from your PC or laptop.  I also find that I have to mount my phone (i.e., click its icon on the desktop and browse to it in the file manager) before easytether-usb will connect.  Make sure you get EasyTether working manually as at the beginning of this post, and you should be fine.  Have fun!

No Icons on Cinnamon Desktop

I’ve posted before about Linux Mint, as well as using Xfce.  While I hadn’t gotten Cinnamon working on the older machine, I did realize that there is in fact an ebuild for this on Gentoo.  I’ve been trying it on a laptop, and have found it to be pretty nice.  However, after logging in a few times, I could no longer see any icons on my desktop, nor could I drag anything onto it.

This is caused by the file manager, nemo, not starting at login and taking over the desktop.  Starting it manually by clicking the taskbar icon brings the icons back, but only until you close it.  I wasn’t sure what was causing this until I remembered that I still had Gnome 3 installed, which uses nautilus to manage the desktop.  Cinnamon is derived from Gnome, and you can actually use gnome-session-properties to manage startup applications in both.  The problem was that both nautilus and nemo were trying to take over the desktop, and neither was winning.

I didn’t have gnome-session-properties installed, but I got it by emerging gnome-media.  The list of startup applications showed ‘Files’ twice.  Find the one that starts nautilus (highlight it, click ‘Edit’, and see what command it uses) and disable it.  After logging out and logging in again, I had my desktop back to normal.

Trying Linux Mint 13

Mint

I had been curious to try out Linux Mint (see also the Wikipedia entry) for a while now.  Basically, it’s a distribution based on Ubuntu which aims to provide a better out-of-the-box experience by including some proprietary software.  I.e., when you install it, you shouldn’t have to install many codecs and the like, as you do with Ubuntu.  It’s also gotten more and more attention as people have grown tired of the UI changes in Gnome 3 and Ubuntu’s Unity, since its feel is closer to that of Gnome 2.  Partly because of this, Mint has grown in popularity, spreading and taking over like its namesake herb (though it still seems to be second to Ubuntu).  So I thought I’d give it a try.

I have an older desktop, actually the first one I built back in the early 2000s (I think around 2003 or 2004).  This machine is old, but not exactly decrepit – it’s a dual AMD MP2800 rig, with 1 GB of RAM and a 120 GB hard drive.  Not my most powerful computer, but it still works pretty well.  Since it wasn’t doing much I figured I’d use it to test Mint 13, the latest release as of this writing.  I’m not going to lie, I don’t really feel like doing a full review.  There are tons of those around, just go to Google.  However, I have some remarks and a random tip or two.

Overall, installation was a pain and took me several attempts.  Now, don’t get the wrong idea here; I don’t really blame Mint for this.  This machine does not have a DVD drive, and Mint 13 no longer distributes CD images – they’re too big.  I tried using a USB stick, which seemed to work but was slow, because while the motherboard can boot from USB, the onboard USB is 1.1, and it won’t boot off the 2.0 PCI card I have in there.  Luckily, there is a guide on how to remaster the DVD image and shrink it by removing some installed packages.  I didn’t have a Mint system already, so I had to mess around with the tool they mention in order to run it on an Ubuntu 12.04 system.  (Basically, you need to either install the mint-core package in Ubuntu, or download the .deb and force an install or extract it.  Sorry to gloss over this, but hopefully most people have either faster USB or a DVD drive.  If someone wants instructions, I’ll try to put something together.)  I remastered the no-codec CD, and ended up removing samba, firefox, java, and a few other things.  This should get you an image small enough to fit on a CD.  When the install’s done, just use the package manager to put back everything you want.

So, I got the install rolling.  However, I then ran into a problem with the installer hanging toward the end of the process.  Everything seemed to finish, but the window would close and leave only the spinning mouse cursor to indicate something was happening.  After leaving it for a while, rebooting revealed a failed install.  A fix for that is here.  Basically, when the live CD (or USB stick or whatever) boots, open a terminal and type this:

sudo apt-get remove ubiquity-slideshow-mint

This should allow the install to finish.

Finally, the system would not detect my nVidia card.  This card is old and requires the legacy drivers, which the proprietary driver manager was unable to find.  This was annoying, but I found a solution here.  That guide has you set up a repository for Ubuntu Oneiric, and install the X server from that distribution along with the legacy drivers.

After going through that, I had a working install!  Like I said, I don’t blame Mint.  Not too much, at least.  It would have been nice if there was some support still there for the older video card, but then again this hardware is ancient.  I can also understand why they don’t distribute CDs.  But there you go, I’m probably not the only one who will have to go through some hoops to get it running on something older.

There is one more problem, however.  Due to what I can only assume is my older video card, the desktop effects are slow – too slow to be usable, in fact, so I turned them off.  This also means no Cinnamon desktop, although the fallback Gnome option is actually quite nice.  I’ll probably use this system as it is for a while.  If you liked Gnome 2, and want a Linux distro that’s newer but has that kind of feel, I would highly recommend checking Mint out.  That said, I’m not sure I’d install it again, but that’s just me.  Overall, it’s very nice.

XFCE/Nautilus Hybrid Desktop

It’s been a while since my last post, but I haven’t forgotten about you :).  I’ve still been mulling over (read: procrastinating on) some things, but a lot of that has been put on hold due to some positive developments in my life which I won’t go into.  However, I do have some other news:

  • Development on my inverter project continues, look for more information sometime in the future (who knows).
  • I recently received my Raspberry Pi, and am looking at making it into the server for this site.
  • I’ve also been considering ways to make the Pi run at least partially on solar power.  As I’ve mentioned before this would be a neat thing to have my server do, although there are some logistical obstacles.

That said, the rest of this post is not about anything mentioned in the above list.  Rather, it deals with my frustrations regarding Gnome.  I upgraded to Gnome 3 on my Gentoo machine, and for the most part was happy with it.  That is, once I got used to the layout.  I understand why people might not like it, but it didn’t bother me too much.  However, eventually it started getting unstable, as in certain things would make it crash.  At first it was not much of a problem, but then it started to get more and more random.  I switched to Fallback Mode, which I actually kind of liked too.  However, this too proved to crash a little too often.  (Note: I was also having some issues with the clutter-gst package and introspection USE flag not compiling, but I think my stability problems are related to the current nVidia drivers.)

The other day, a crash happened while I was playing around with the excellent EDA software KiCad, and that was it: time to migrate to something else.  I’d used Xfce in the past and been happy with it, although I usually stuck with Gnome because it was what I knew best.  Desperate, I emerged it.  It’s a great environment as is, but it just felt lacking.  I wanted to manage my desktop like I did under Gnome, with my wallpapers and the like.  Actually, I wanted Gnome, but it just wasn’t working out.  This post will explain how I made this environment more like Gnome by using Xfce with Gnome’s Nautilus file manager.  It’s not difficult, and hopefully it will help someone out.

Xfce is a lightweight desktop, and I should point out that doing this sort of defeats the purpose of having it.  That said, I had the hardware resources to use Gnome more than comfortably, so this really isn’t an issue.  Also, if you really like it, you might consider a distribution that uses it by default, which will probably integrate it fairly well.

Anyway, assuming you’re switching from something else (Gnome in my case), to start with you’ll need to install Xfce.  To do this on Gentoo, I used the following command (as root):

emerge -avt xfce4-meta

(Note: The Gentoo Xfce Configuration Guide is a great place to check out if using this distro.)

Next, log out and log back in with Xfce selected as your desktop.  You should have a vanilla-looking desktop, and now we want Nautilus to manage the folders and icons.  Open the Settings Manager (Applications Menu->Settings) and click on Session and Startup.  Before going on, I should say that we are about to determine what gets started when you log in to your Xfce desktop, so close everything you don’t want open (leave the Settings Manager open, though).  In the Session tab, you should see a list of running applications.  Select the xfdesktop program and kill it.  Now click the Save Session button.

So now, Xfce isn’t managing the desktop.  Click the Application Autostart tab, and you will see a bunch of different services, some Xfce-related, some Gnome-related.  In my case there was one called Files, which I checked.  This basically runs the command nautilus -n, which has Nautilus manage the desktop.  (If you’re unsure, you can select it, hit Edit, and look at the command.)  I also activated some other things, like the SSH Key Agent.

Logging out and logging in again, you should now have Xfce panels, but with a desktop managed by Gnome that responds like it did before.  If you want, play with the Xfce panels, and you can make it look fairly Gnome-like.  It’s a little weird, but I’m happy with it.  It doesn’t have all the 3D effects of Gnome, but it’s responsive, and so far has been fairly stable.

Nautilus and vsftpd

Just recently I was playing around with setting up an anonymous FTP server using the excellent vsftpd software.  (The ‘vs’ is for ‘very secure’.)  To test this out, I decided to connect from Nautilus, on my Ubuntu laptop.  Unfortunately, I got an error like this:

Sorry, could not display all the contents of "/ on ftp.whatsmykarma.com"

So, I gave this a try with the command-line ftp client on the same machine instead, and it worked fine.  This had me scratching my head, but I figured it out eventually.  Basically, the problem is one of passive vs. active FTP – look at the Wikipedia page for more information.  As it turned out, the command-line FTP client was defaulting to active mode, which basically means that the client has to have a port open and listening, and the server connects back to it to make the transfer.  This is fine if you’re on a local network, or have a loose environment firewall-wise where the server can talk to the client this way.  Because that setup is often more difficult, passive FTP was introduced – the client connects to the server on port 21, learns which other port the server is listening on, and then makes a second connection to that port to actually do the transfer.  This is basically an improvement, but it still means the server has to listen on more ports than just 21.  Nautilus only seems to do passive FTP (which is probably a good thing), and was giving that error because I’d only forwarded port 21 on the server.
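To make the passive handshake a bit more concrete, here’s a quick sketch of the arithmetic a client does with the server’s PASV reply (the reply format is standard FTP; the address and port numbers here are made up for illustration):

```shell
#!/bin/sh
# A PASV reply encodes the data endpoint as six numbers:
# h1,h2,h3,h4 are the IPv4 octets, and p1,p2 give the port as p1*256 + p2.
# This sample reply is made up for illustration.
reply='227 Entering Passive Mode (192,168,1,10,7,228).'

# Pull the six comma-separated numbers out of the parentheses.
nums=$(echo "$reply" | sed 's/.*(\(.*\)).*/\1/')
p1=$(echo "$nums" | cut -d, -f5)
p2=$(echo "$nums" | cut -d, -f6)
port=$((p1 * 256 + p2))

echo "data connection goes to port $port"   # 7*256 + 228 = 2020
```

So after the control connection on port 21, the client opens a second connection to the advertised port – 2020 in this example.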

Now, the fix for this is simple, but before I give much of my vsftpd.conf file I would like to address a concern that will no doubt come up: FTP isn’t that great, at least not from a security standpoint.  Plain FTP sends everything in clear text, and with minimal effort anyone with access to the data stream (e.g., the person in charge of your company’s firewall, or someone on your wireless network) can read it.  Perhaps your FTP login details are used elsewhere, like for logging in to a system account – in which case someone who grabs them can read your Email and delete your crap.  These days there are better options, such as SFTP (basically an FTP-like protocol that tunnels over SSH, and only really needs OpenSSH), that are better for a lot of things.  However, FTP is still nice for certain things, like a server hosting a bunch of big files you want to put up for download.  You can do this with anonymous FTP, with no need for sensitive usernames and passwords.  I’m going to assume that this is roughly why you’re looking into FTP, and that you know what you’re doing.  (Note that it’s also possible to use FTP with SSL, which could be handy when you really do want FTP with login info.)

Anyway, here’s what we have to do.  We want to use passive FTP, and configure vsftpd accordingly.  We need to forward port 21 to it (of course), but we also need it to listen on another port for the transfers.  Traditionally the FTP server would pick a port at random from a range, but you really only need one.  I chose 2020.  It can be anything, as long as it’s above 1024, because the server will try to bind it as a non-root user.

Now, the other thing the server will do when we tell it we want to go into passive mode is send an address for the client to connect to (with the given port).  In my case, I’m on a dynamic IP, so we would like to give it a hostname to use.  Luckily, vsftpd will allow us to do all this.  So, for a simple, anon-only FTP server that works in passive mode, here is my config:

listen=YES
local_enable=NO
anonymous_enable=YES
write_enable=NO
anon_root=/var/ftp
pasv_enable=YES
#
# Optional directives
#
#anon_max_rate=2048000
xferlog_enable=YES
listen_port=21
pasv_min_port=2020
pasv_max_port=2020
pasv_addr_resolve=YES
pasv_address=ftp.example.com

Change the details or add options to suit your setup.  Note that pasv_min_port and pasv_max_port can simply be the same value.  pasv_addr_resolve=YES just lets us specify a hostname.  And that’s that – restart vsftpd and enjoy FTP.

Gentoo Network Interfaces Problem

I recently had an issue on my Gentoo desktop which was sort of frustrating, but which I’ve since gotten past.  I turned my machine on one day, only to find the mouse and keyboard not responding once the login screen came up.  Now, most of the time that’s just what can happen after an update, if you don’t reemerge xorg-server and the xf86-input-whatever packages.  Problem was, I couldn’t log in, and for some reason, eth0 hadn’t come up.  This meant no SSH, either.

First, a little about my setup.  I have two network cards in this machine.  One is the onboard gigabit card, which is my main NIC (eth0).  The other is an extra PCI card (eth1) that I have statically configured for cases when I want to do a direct transfer between machines, or troubleshoot (like in this case).  The problem was that eth0 wasn’t coming up, instead timing out on DHCP (as if it weren’t connected at all).  I checked the cables; all looked good.

I ended up booting into the System Rescue CD (which I recommend having on hand if you do anything with computers) and checking a few things out.  I checked /var/log/messages and found these lines:

Mar 27 15:18:56 fishingcat kernel: [   13.847407] ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 27 15:18:57 fishingcat dhcpcd[2401]: eth0: waiting for carrier

This had me puzzled, but then I found this:

Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7333]: Udev uses a devtmpfs mounted on /dev to manage devices.
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7335]: This means that CONFIG_DEVTMPFS=y is required
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7336]: in the kernel configuration.
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7324]: ERROR: udev-mount failed to start
Mar 27 15:27:12 fishingcat /etc/init.d/udev[7323]: ERROR: cannot start udev as udev-mount would not start

Bingo.  I had remembered from an encounter at work that udev likes to create persistent naming rules for hardware.  This is a new feature, and is generally a good thing: it keeps eth0 as eth0 for the next reboot, same for eth1.  But with udev not starting, no dice.  So, following the error message, I enabled the appropriate kernel option in make menuconfig.  (For me, this was under Device Drivers -> Generic Driver Options -> Maintain a devtmpfs filesystem to mount at /dev.)
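As a side note, here’s a quick way to check a kernel config for that option before rebooting.  The helper below is my own sketch – point it at /boot/config-$(uname -r), or at a dump of /proc/config.gz if your kernel provides one:

```shell
#!/bin/sh
# Sketch of a helper that checks a kernel config file for the devtmpfs
# option udev needs. Typical config locations: /boot/config-$(uname -r),
# or the output of `zcat /proc/config.gz`.
check_devtmpfs() {
    if grep -q '^CONFIG_DEVTMPFS=y' "$1" 2>/dev/null; then
        echo "devtmpfs: enabled"
    else
        echo "devtmpfs: missing - udev-mount will fail"
    fi
}

check_devtmpfs "/boot/config-$(uname -r)"
```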

I’m not really sure what caused that to become disabled in the first place, but it’s fixed now, so there you go.

Raspberry Pi

I should take my second post of the new year to mention that I love raspberries.  I’m not sure if they’re my favorite, but they’re damn close.  I’m not sure what else to say, other than try some in champagne, or other drinks.  Try them with cakes.  Candy bars containing raspberry are good as well.

This post, however, is not about the berry, but the Raspberry Pi, a tiny embedded Linux system that boasts enough power to be a full-featured desktop.  (If you came upon this post, you’re probably already somewhat familiar.)  There are two models, at $25 and $35.  The more expensive one has two USB ports as well as an Ethernet port.  (The lesser, I believe, only has one USB and no Ethernet.  But, you could get a hub.)  Both have a 700 MHz ARM-based CPU, and 256 MB of RAM.  They are surprisingly capable graphics-wise (check out the link for more details), and they are also low power.  This is what interests me.

I posted before about possibly configuring my Web server to run at least partly on solar power.  On that post, someone commented on the possibility of using the Pi for this.  And so, I have a model B on order – this board uses about 3.5 watts, so that’s a start.  I would use an external USB drive for much of the filesystem, which would bring this up some, but it should still be less than my current setup which consists of a Mini-ITX board (about 1 GHz, with 256 MB RAM and a 120 GB hard drive).

The board should come in May, and my plan is to throw Debian on there and test this out.  My current server does a decent amount, but it doesn’t seem to get overtaxed.  I’d be looking to run Web (Apache; yes, I know there are smaller servers that might work, but I’d like to try this), PHP, MySQL, Email (Postfix/Courier), and LDAP (the Email backend).  This should be interesting, and if I make careful use of the onboard SD card I think there’s a shot that this could turn out well.

As for power, part of the inefficiency of my current setup (roughly 30-40 watts at the plug) is due to the power supply.  With a much smaller supply I should be able to bring this down.  As I also mentioned in the other post, I would like to come up with some sort of power-sharing solution, where the primary power source for the Pi is solar, with the mains as a fallback and a battery backup in case of a power failure.  The idea would be to keep a normal system battery charged, rather than taxing it by cycling it each night to keep the server running.  Maybe someday I’ll end up with so much solar that the ~10 watts the whole setup should draw will be a drop in the bucket 24 hours a day (10 * 24 = 240 watt-hours/day), but for now I would take this approach.  (Also, a backup battery is good for other things, and I probably won’t care much about my Web server compared to, say, charging the cell phone or pumping out the basement.)

If the Pi isn’t up to this, I’ll probably try it with a media center-type application, and maybe look into a Sheevaplug for the server.  (It has USB also, so I could literally almost drop it in place.)  My server does get its fair share of hits, between this site, the Fever Dreams mirror, and when I host images on message boards like Fark.  But, I think that a small machine should be fine.

Cyrus-SASL and IMAP and SMTP-Auth

I lost a little sleep over this, so now that it’s resolved, I’ll throw it here.  I have a Debian mail server, on which I have Email for some local user accounts (system ones, in /etc/passwd), and some virtual users in LDAP.  In order to do this I use the Courier authdaemon, which checks both those sources and lets me authenticate my IMAP and POP3 servers to them.  This works fine.  Thing is, I want to do SMTP auth as well, with Postfix.  (Note: Configuring all these things has been documented many times, look it up.)

SMTP-Auth is generally done with Cyrus-SASL (via saslauthd), which itself can look to a lot of places to determine whether the user can log in.  Since I want SMTP logins to be the same as IMAP/POP logins, I have saslauthd try to log into my IMAP server (the rimap option, configured on Debian in /etc/default/saslauthd).  This worked great, until I upgraded to Debian 6.0 (“Squeeze”).
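For reference, the relevant bits of /etc/default/saslauthd look roughly like this on Debian (the MECH_OPTIONS value here is an assumption – set it to wherever your IMAP server answers):

```shell
# /etc/default/saslauthd (excerpt)
START=yes
# rimap: saslauthd validates credentials by attempting an
# actual login against an IMAP server.
MECHANISMS="rimap"
# For rimap, this is the IMAP server to log in against.
MECH_OPTIONS="localhost"
```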

After I ran the upgrade, I was surprised to find that Thunderbird was reporting that Postfix was refusing my SMTP password.  I checked /var/log/auth.log, and found stuff like this:

Aug 21 14:11:48 clamato saslauthd[7289]: auth_rimap: unexpected response to auth request: * OK [ALERT] Filesystem notification initialization error — contact your mail administrator (check for configuration errors with the FAM/Gamin library)

Aug 21 14:11:48 clamato saslauthd[7289]: do_auth         : auth failure: [user=myuser] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]

Now, both of the above entries are relevant, but I spent most of my time focused on the second one.  (D’oh!)  I searched around for it quite a bit, and found a couple of bug reports about it from various points in the past few years.  I even tried one or two patches, but they didn’t work.  Downgrading and backporting didn’t help either.

Turns out the actual cause of all this (i.e., SMTP-Auth not happening) is the first line in that log, the one about FAM/Gamin and filesystem notification.  Courier’s IMAP program likes to have this.  Apparently, it will even go so far as to tell you so when you log into the IMAP server, as I found out when I finally decided to try it via telnet.  I logged in fine, but the login response contained the same alert message as the log…

Basically, what’s happening then is that saslauthd is logging into the IMAP server, getting the successful login response along with the alert message, and is concluding that the login was not in fact successful.  I was able to fix this just by installing the gamin package; see this page for slightly more information.

In my opinion, there is in fact a bug here, aside from my own failure to notice the other log message.  Courier logged me in successfully, even with the alert.  Saslauthd should’ve picked up on that.  Here’s the code that checks this:

if (!strncmp(rbuf, TAG " OK", sizeof(TAG " OK")-1)) {
    if (flags & VERBOSE) {
        syslog(LOG_DEBUG, "auth_rimap: [%s] %s", login, rbuf);
    }
    return strdup("OK remote authentication successful");
}

This is part of the auth_rimap.c file, and causes the auth_rimap function to return if the banner was correct.  However, if the banner is not exactly what it was expecting, it skips this and gives the unexpected response message.  I may put together a patch to change this, but right now I think I will go for a swim.
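To illustrate the idea (in shell rather than C, and purely as my own sketch – this is not saslauthd code): a more tolerant check would accept any tagged response that merely begins with “TAG OK”, regardless of what the server appends after it:

```shell
#!/bin/sh
# My own sketch (not saslauthd code) of a prefix-based success check,
# analogous to the strncmp() above: accept any tagged response that
# begins with "<tag> OK", even if the server appended an alert.
TAG="C01"

rimap_ok() {
    case "$1" in
        "$TAG OK"*) return 0 ;;
        *)          return 1 ;;
    esac
}

rimap_ok "$TAG OK LOGIN Ok."                           && echo "plain login: accepted"
rimap_ok "$TAG OK [ALERT] check the FAM/Gamin library" && echo "login with alert: accepted"
rimap_ok "$TAG NO Login failed."                       || echo "bad login: rejected"
```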

(On another note, I need to get a good code display plugin for this blog, so I’m not using block quotes for that.)

Ubuntu Directory

Right now I am on vacation, as my last post may indicate.  It is beautiful here, the beaches are nice, and the fresh air is a much-welcomed diversion.  However, I still think about random projects and things, and hence I take a few minutes here and there to write them down or even make up a neat little blog post.

The subject of this post is something I have kicked around for a little while, but have recently resolved to try to take on.  Basically, the situation is this: if you haven’t figured it out, I am a GNU/Linux user (though I do use OpenBSD and occasionally FreeBSD for some things as well).  At home I have a few different machines that run Linux, including a small server, a desktop, and a laptop.  There are several others as well, but these are the ones I use mainly.  The thing is, I would love an easy way to manage users and permissions across them.  Basically, I’m thinking of something similar to a domain, like you might see in a Microsoft-based network.

Now, yes, of course you can join *nix machines to a Windows domain (or have one be a domain controller via Samba).  I don’t really have any Windows machines, though, nor do I want to buy/pirate a server version of Windows.  I could also use Kerberos and LDAP, and in fact I do use them.  They work well for me for the most part, but I said in the last paragraph that I wanted an easy way to admin the network.  If you follow one of the various online tutorials, the process of setting the two up isn’t actually that bad, but adding and removing users can be a bit of a pain.  I mean, it’s not really complicated, but you’d need to add the user to Kerberos, then to LDAP, and then I guess use an LDAP browser to manage the rest.  It seemed to me like there should be some sort of GUI tool that would manage both – i.e., let you create a user and add them to some networked groups and whatnot.  I came up with the name while imagining a networked version of Ubuntu’s users and groups tool.  (In other words, it wouldn’t have to be Ubuntu-specific by any means.)
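To give a flavor of the LDAP half of that manual process, here’s roughly what generating the entry for one new user looks like.  The DN layout, uid numbers, and attributes below are hypothetical examples for a typical posixAccount; adjust them for your directory:

```shell
#!/bin/sh
# Generate a minimal LDIF for one new posixAccount user.
# The ou/dc layout and attribute choices are hypothetical; adjust
# them to match your directory.
make_user_ldif() {
    user="$1"
    uidnum="$2"
    cat <<EOF
dn: uid=$user,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: $user
cn: $user
sn: $user
uidNumber: $uidnum
gidNumber: $uidnum
homeDirectory: /home/$user
loginShell: /bin/bash
EOF
}

# You would feed this to ldapadd, and add the principal to Kerberos
# separately (e.g., with kadmin) - two steps for every user.
make_user_ldif alice 10001
```

A GUI frontend would basically just be wrapping those two steps.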

Now, yes, there are other projects that aim to accomplish this kind of thing, like FreeIPA.  And I won’t lie, that one looks pretty neat.  But it just seemed to me that just having a frontend to take care of some basic user/group stuff would help out a lot.  Especially if you already have a Kerberos/LDAP setup.  So, at some point in the future I am going to see what I can do with my idea, and maybe whip something up in Python and GTK.  I can’t make any guarantees right now, nor say when I will get to working on it, but we shall see.

As a final disclaimer, I am not what you would call an “IT professional”.  I program, and I’ve done quite a bit of work with *nix and networking, but this isn’t my normal gig.  I haven’t extensively used Active Directory, and so I’m not trying to clone it.  I just want an easier way to manage some users across my home networks (for a couple friends, the cats, etc.), but to have the option of still getting at the guts if I want.  Hell, maybe the best way to go about this is just a nice shell script anyway.

Windows Shares in /etc/fstab

A project I am doing at work involves having a Debian Linux machine mount a Windows share, and then letting a script go in and pull some data out of files for processing.  A lot of this is easily handled by Perl, but the first task is to mount the share.  And this had me stumped for a second.

The syntax for mounting a share in /etc/fstab looks kind of like this (on one line):

//servername/sharename    /mount/point    cifs    username=user,password=pass    0 0

Now, I had something like that set up, but when I would try to mount it by typing mount /mount/point, I would get an error like this:

[mntent]: line 21 in /etc/fstab is bad
mount: can't find /mount/point/ in /etc/fstab or /etc/mtab

Now, I couldn’t figure out what was causing this at first.  I mean, it worked fine when I just used the mount command directly.  But it turned out that the problem was a space in the Windows share name (something like //servername/share name).  I had thought that escaping the space with a simple \ would work, but on a hunch I searched to see if there was a proper way to escape it specifically in /etc/fstab.  This post explained it, and now I have the following in /etc/fstab:

//servername/share\040name    /mount/point    cifs    username=user,password=pass    0 0

Basically, I just replaced the space with \040, the octal escape code for a space (ASCII 32).  Now I can mount it with no problems.
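If a share name has several spaces, you can generate the escaped form instead of typing it by hand – sketched below as a tiny helper (escape_fstab is just a made-up name):

```shell
#!/bin/sh
# Replace each space in a share name with its octal escape, as
# /etc/fstab expects. (escape_fstab is just a made-up helper name.)
escape_fstab() {
    printf '%s\n' "$1" | sed 's/ /\\040/g'
}

escape_fstab "share name"      # -> share\040name
escape_fstab "my share name"   # -> my\040share\040name
```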