EasyTether and NetworkManager

EasyTether is a handy application you can use to tether your Android phone regardless of restrictions put in place by your carrier.  It consists of an app for the phone and a program you run on your computer.  The two then talk to each other using the phone's USB debugging abilities (i.e., through ADB, which is integrated).  On the computer side, Linux, Windows, and OS X are supported, and you can even use it with a Raspberry Pi.

I use this to occasionally tether my laptop, which runs Xubuntu.  On GNU/Linux, the PC-side application works by creating a tun (virtual) interface.  You start it from the command line like this ($ indicating the command prompt):

$ sudo easytether-usb

When I first did this (it must have been a few years ago), NetworkManager would immediately detect the interface and manage it, so I didn't really have to do anything else.  As soon as I started the program, I'd have a network connection.  This does not work with Xubuntu 16.04, however – the interface comes up, but I have to configure it manually.  No big deal, but now after running the above I have to do this:

$ sudo dhclient easytether-tap

Or this instead of dhclient, to do it manually:

$ sudo ifconfig easytether-tap 192.168.117.2
$ sudo route add default gw 192.168.117.1
$ sudo sh -c 'echo "nameserver 8.8.8.8" >> /etc/resolv.conf'

That works fine, but it's kind of a pain.  NetworkManager is fairly capable these days, and it would be nice to make it do some of this for us again.  The reason it doesn't is that, with the current version (1.2.0-0ubuntu0.16.04.2 on this machine), virtual interfaces that it didn't create itself are ignored by default.  Luckily, there is a way we can make a new connection and automate things.  Unfortunately, this doesn't seem to be possible with the panel applet in Xubuntu, but we can do it from the command line using nmcli.

In the above lines, easytether-tap is the default name of the virtual interface that the PC application creates.  First, let’s create a new connection named ‘phone’ that uses this interface.  Then, we’ll add a DNS server.  (Note that the below lines can be done as a normal user, no sudo needed.)

$ nmcli connection add ifname easytether-tap con-name phone type tun mode tap ip4 192.168.117.2 gw4 192.168.117.1
$ nmcli con mod phone +ipv4.dns 8.8.8.8

You could of course name it something other than phone.  Note that this configures the interface statically, which I just did for simplicity.  Also, 8.8.8.8 is Google's public DNS; you could substitute a different server's IP.  Now, assuming you have easytether-usb started as in the first command in this post, you can bring the interface up like this:

$ nmcli connection up phone

And then bring it down like this:

$ nmcli connection down phone

And that's it!  But we still have to start easytether-usb when we plug the phone in.  It's possible to automate that, too, using NetworkManager's dispatcher scripts.  I created the following script, and saved it as 90easytether.sh:

#!/bin/bash

IFACE=$1
STATUS=$2

if [ "$IFACE" == "easytether-tap" ]
then
    case "$STATUS" in
        pre-up)
            logger -s "Starting easytether-usb daemon"
            easytether-usb
            ;;
        down)
            logger -s "Stopping easytether-usb daemon"
            killall easytether-usb
            ;;
    esac
fi

Now, move the script into the dispatcher.d directory, change the ownership and permissions, and create a symlink to the script in the pre-up.d directory:

$ sudo mv 90easytether.sh /etc/NetworkManager/dispatcher.d/
$ cd /etc/NetworkManager/dispatcher.d
$ sudo chown root:root 90easytether.sh
$ sudo chmod 755 90easytether.sh
$ cd pre-up.d/
$ sudo ln -s /etc/NetworkManager/dispatcher.d/90easytether.sh ./

Now, we need to start the dispatcher service, and enable it at boot:

$ sudo systemctl start NetworkManager-dispatcher.service
$ sudo systemctl enable NetworkManager-dispatcher.service

I also found I needed to restart NetworkManager itself:

$ sudo systemctl restart network-manager.service

Basically, that script is triggered by the pre-up and down actions, and starts and kills easytether-usb for us.  The symlink was necessary because NetworkManager only runs pre-up actions from scripts in the pre-up.d directory.  Now, you plug in your phone, use the nmcli commands from above to bring the interface up and down, and that's it!
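To save a little typing, the whole routine can be wrapped in a pair of shell functions.  This is just a sketch – the function names are my own invention, and it assumes the connection is named phone, as above.  Put them in your ~/.bashrc if you like:

```shell
# Hypothetical convenience wrappers for the tethering routine described above.
# Assumes the NetworkManager connection is named "phone".
tether_up() {
    sudo easytether-usb           # start the PC-side daemon
    nmcli connection up phone     # bring the tap interface up
}

tether_down() {
    nmcli connection down phone   # bring the interface down
    sudo killall easytether-usb   # and stop the daemon
}
```

With the dispatcher script in place the daemon is started and stopped for you anyway, so the nmcli half of each function would then be enough on its own.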

One caveat: you need USB debugging enabled on your phone.  Also, you may have to confirm on your phone that you want to allow the connection from your PC or laptop first.  I also find that I have to mount my phone (i.e., click on the icon on the desktop and browse to it in the file manager) before easytether-usb will connect.  Make sure you get EasyTether working manually, as at the beginning of this post, and you should be fine.  Have fun!

OwnCloud Client Won’t Remember Account…

I use ownCloud on a few of my machines for keeping some folders synced via the desktop syncing client.  I'm fairly happy with it, at least for smaller files like documents and pictures.  (Part of this is because I'm running the server on a Raspberry Pi, which is a little slow, but for my purposes works fine.)  Recently, however, I ran into a problem with version 1.8.1 of the desktop client, running on a laptop with Xubuntu 15.10.  Basically, every time I started the client it would ask me for my username, password, and folder list, as if I were configuring it for the first time.  This happened when just logging in (I have it set to start automatically), or if I killed it and restarted it.

The fix turned out to be relatively simple.  First, the desktop client stores its configuration files (on Linux) here:

$HOME/.local/share/data/ownCloud/

In that directory I found the following:

cookies.db folders owncloud.cfg socket

But when I tried to display the file owncloud.cfg, I got an Input/Output error.  I could list the folders directory; this just contained a file with conf info for each of the folders I sync.  On a hunch I deleted owncloud.cfg and restarted the client.  It asked for my login information again, and then I told it to skip the folder configuration.  It worked, and even picked up the folders I had set up to sync (as those configuration files were readable).  I'm not sure what caused this, but after doing it I can view the new owncloud.cfg file that was created.
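If you hit the same thing, something like the following moves the suspect file aside instead of deleting it outright (a sketch – the path is the one the 1.8-era Linux client uses, as above):

```shell
# Move the unreadable config aside so the client regenerates it on next start.
cfg="$HOME/.local/share/data/ownCloud/owncloud.cfg"
if [ -e "$cfg" ]; then
    mv "$cfg" "$cfg.bad"    # keep it around in case you want to inspect it later
    echo "moved config aside; restart the client"
else
    echo "no config found at $cfg"
fi
```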

Hopefully this saves someone some time.  It's a little aggravating that I had to poke around in the terminal, but not too bad – at least it's possible.  I'm not sure if this could be a bug in the client, but if this happens again I'll reevaluate.

No Icons on Cinnamon Desktop

I’ve posted before about Linux Mint, as well as using Xfce.  While I hadn’t gotten Cinnamon working on the older machine, I did realize that there is in fact an ebuild for this on Gentoo.  I’ve been trying it on a laptop, and have found it to be pretty nice.  However, after logging in a few times, I could no longer see any icons on my desktop, nor could I drag anything onto it.

This is caused by the file manager, nemo, failing to start at login and take over drawing the desktop.  Starting the file manager manually by clicking the taskbar icon brings the icons back, but this only lasts until you close it.  I wasn't sure what was causing this, until I remembered that I still had Gnome 3 installed, which uses nautilus to manage the desktop.  Cinnamon is derived from Gnome, and you can actually use gnome-session-properties to manage startup applications in both.  The problem was that both nautilus and nemo were trying to take over the desktop, and neither was winning.

I didn't have gnome-session-properties installed, but I got it by emerging gnome-media.  The list of startup applications showed 'Files' twice.  Find the one that starts nautilus (highlight it and click 'Edit', then see what command it uses) and disable it.  After logging out and logging in again I had my desktop back to normal.
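If you'd rather skip gnome-session-properties, the same thing can be done by overriding the autostart entry with a disabled per-user copy.  This is only a sketch: the file name and Exec line here are guesses, so check /etc/xdg/autostart on your system for the entry that actually launches nautilus and copy its contents:

```shell
# Override a system-wide autostart entry with a disabled per-user copy.
# The entry name and Exec line are hypothetical; check /etc/xdg/autostart.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/nautilus-autostart.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Files
Exec=nautilus -n
X-GNOME-Autostart-enabled=false
EOF
```

A per-user file with the same name shadows the system-wide one, so this disables the entry without touching anything under /etc.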

Trying Linux Mint 13

Mint

I had been curious to try out Linux Mint (see also the Wikipedia entry) for a while now.  Basically, it's a distribution based on Ubuntu which aims to provide a better out-of-the-box experience by including some proprietary software – i.e., when you install it, you shouldn't have to install many codecs or whatever, as you do with Ubuntu.  It's also gotten more and more attention as people have grown tired of the UI changes in Gnome 3 and Ubuntu's Unity, since its feel is closer to that of Gnome 2.  Partly because of this, Mint has grown in popularity, spreading and taking over like its namesake herb (though it still seems to be second to Ubuntu).  So I thought I'd give it a try.

I have an older desktop, actually the first one I built back in the early 2000s (I think around 2003 or 2004).  This machine is old, but not exactly decrepit – it’s a dual AMD MP2800 rig, with 1 GB of RAM and a 120 GB hard drive.  Not my most powerful computer, but it still works pretty well.  Since it wasn’t doing much I figured I’d use it to test Mint 13, the latest release as of this writing.  I’m not going to lie, I don’t really feel like doing a full review.  There are tons of those around, just go to Google.  However, I have some remarks and a random tip or two.

Overall, installation was a pain and took me several attempts.  Now, don't get the wrong idea here; I don't really blame Mint for this.  This machine does not have a DVD drive, and Mint 13 no longer distributes CD images – they're too big.  I tried using a USB stick, which seemed to work, but was also slow, because while the motherboard can boot from USB, the onboard USB is 1.1, and it won't boot off the 2.0 PCI card I have in there.  Luckily, there is a guide on how to remaster the DVD image and shrink it.  Basically, you just remove some installed packages.  I didn't have a Mint system already, so I had to mess around with the tool they mention in order to run it on an Ubuntu 12.04 system.  (Basically, you need to either install the mint-core package in Ubuntu, or download the .deb and force an install or extract it.  Sorry to gloss over this, but hopefully most people have either faster USB or a DVD drive.  If someone wants instructions, I'll try to put something together.)  I remastered the no-codec CD, and ended up removing samba, firefox, java, and a few other things.  This should get you an image small enough to fit on a CD.  When the install's done, just use the package manager to put everything back (that you want).

So, I got the install rolling.  However, I then ran into a problem with the installer hanging toward the end of the process.  Everything seemed to finish, but the window would close and leave only the spinning mouse cursor to indicate something was happening.  After leaving it for a while, rebooting revealed a failed install.  A fix for that is here.  Basically, when the live CD (or USB stick or whatever) boots, open a terminal and type this:

sudo apt-get remove ubiquity-slideshow-mint

This should allow the install to finish.

Finally, the system would not detect my nVidia card.  This card is old, and requires the legacy drivers, which the proprietary driver manager was unable to find.  This was annoying, but I found a solution here.  That guide has you set up a repository for Ubuntu Oneiric, and install the Xserver from that distribution along with the legacy drivers.

After going through that, I had a working install!  Like I said, I don’t blame Mint.  Not too much, at least.  It would have been nice if there was some support still there for the older video card, but then again this hardware is ancient.  I can also understand why they don’t distribute CDs.  But there you go, I’m probably not the only one who will have to go through some hoops to get it running on something older.

There is one more problem, however.  Due to what I can only assume is my older video card, the desktop effects are slow.  Too slow to be usable, in fact, so I turned them off.  This also means no Cinnamon desktop, although the fallback Gnome option is actually quite nice.  I'll probably use this system as it is for a while.  If you liked Gnome 2, and want a Linux distro that's newer but has that kind of feel, I would highly recommend checking Mint out.  That said, I'm not sure I'd install this again, but that's just me.  Overall, it's very nice.

Nautilus and vsftpd

Just recently I was playing around with setting up an anonymous FTP server using the excellent vsftpd software.  (The ‘vs’ is for ‘very secure’.)  To test this out, I decided to connect from Nautilus, on my Ubuntu laptop.  Unfortunately, I got an error like this:

Sorry, could not display all the contents of "/ on ftp.whatsmykarma.com"

So, I gave this a try instead with the command-line ftp client on this machine, and it worked fine.  This had me scratching my head, but I figured it out eventually.  Basically, the problem is one of passive vs. active FTP – look at the Wikipedia page for more information.  As it turned out, the command-line FTP client was defaulting to active mode, which basically means that the server listens on one port, and connects back to a port on the client (that is, the client has to have one open and listening) to make the transfer.  This is fine if you're on a local network, or have a loose environment firewall-wise where the server can talk to the client this way.  Because this setup is a bit more difficult, passive FTP was introduced – the client connects to the server on port 21, and is then told which other port on the server to connect to in order to actually do the transfer.  The client doesn't have to listen at all, which is basically an improvement, but it still means the server listens on more ports than just 21.  Nautilus only seems to speak passive FTP (which is probably a good thing), and it was giving that error because I'd only forwarded port 21 on the server.

Now, the fix for this is simple, but before I give much of my vsftpd.conf file I would like to address a concern that will no doubt be brought up: FTP isn't that great.  Well, not from a security standpoint, at least.  Normal FTP sends everything in clear text, and with minimal effort someone who has access to the data stream (e.g., the guy in charge of your company's firewall, or someone on your wireless network) can easily get this information.  Perhaps your FTP login details are used elsewhere, like for logging in to a system account.  (Say you sit at your workstation and log in with those credentials; someone else getting them means they can read your Email and delete your crap.)  These days there are better options, such as SFTP (basically an FTP-like protocol that tunnels over SSH, and only really needs OpenSSH), that are better for a lot of things.  However, FTP is nice for certain things, like a server that hosts a bunch of big files you want to put up for download.  You can do this with anonymous FTP, with no need for sensitive usernames and passwords.  I'm going to assume that this is roughly why you're looking into FTP, and that you know what you're doing.  (Note that it's also possible to use FTP with SSL, which could be handy in some cases when you really want to use FTP with login info.)

Anyway, here's what we have to do.  We want to use passive FTP, and configure vsftpd thusly.  To do this we need to forward port 21 to it (of course), but we also need it to listen on another port for the transfers.  Traditionally the FTP server would randomly pick a port from a range, but you really only need one.  I chose 2020.  It can be anything, so long as it's above 1024, since the server will bind it as a non-root user.

Now, the other thing the server will do when we tell it we want to go into passive mode is send an address for the client to connect to (with the given port).  In my case, I’m on a dynamic IP, so we would like to give it a hostname to use.  Luckily, vsftpd will allow us to do all this.  So, for a simple, anon-only FTP server that works in passive mode, here is my config:

listen=YES
local_enable=NO
anonymous_enable=YES
write_enable=NO
anon_root=/var/ftp
pasv_enable=YES
#
# Optional directives
#
#anon_max_rate=2048000
xferlog_enable=YES
listen_port=21
pasv_min_port=2020
pasv_max_port=2020
pasv_addr_resolve=YES
pasv_address=ftp.example.com

Change the details or add stuff to suit your setup.  Note that we have pasv_min_port and pasv_max_port; these can just be the same thing.  pasv_addr_resolve=YES just lets us specify a hostname.  And that's that – restart vsftpd and enjoy FTP.
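As a quick sanity check before restarting, here's a small sketch of a shell function (my own, not part of vsftpd) that greps a vsftpd-style config and confirms the single-port, unprivileged passive setup.  Pointed at the config above, it should report port 2020:

```shell
# Report whether a vsftpd config uses a single, unprivileged passive port.
check_pasv() {
    min=$(sed -n 's/^pasv_min_port=//p' "$1")
    max=$(sed -n 's/^pasv_max_port=//p' "$1")
    if [ -n "$min" ] && [ "$min" = "$max" ] && [ "$min" -gt 1024 ]; then
        echo "OK: single passive port $min"
    else
        echo "check pasv_min_port/pasv_max_port (got '$min'/'$max')"
    fi
}
```

Usage would be something like check_pasv /etc/vsftpd.conf.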

Gentoo Network Interfaces Problem

I recently had an issue on my Gentoo desktop which was sort of frustrating, but which I’ve since gotten past.  I turned my machine on one day, only to find the mouse and keyboard not responding once the login screen came up.  Now, most of the time that’s just what can happen after an update, if you don’t reemerge xorg-server and the xf86-input-whatever packages.  Problem was, I couldn’t log in, and for some reason, eth0 hadn’t come up.  This meant no SSH, either.

First, a little about my setup.  I have two network cards in this machine.  One is the onboard gigabit one, which is my main nic (eth0).  The other is an extra PCI one (eth1) that I have statically configured for cases when I want to do a direct transfer between machines, or troubleshoot (like in this case).  The problem was, eth0 wasn’t coming up, and was instead doing a DHCP timeout (as if it weren’t connected at all).  I checked the cables, all looked good.

I ended up booting into the System Rescue CD (which I recommend having on hand if you do anything with computers) and checking a few things out.  I checked /var/log/messages and found these lines:

Mar 27 15:18:56 fishingcat kernel: [   13.847407] ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 27 15:18:57 fishingcat dhcpcd[2401]: eth0: waiting for carrier

This had me puzzled, but then I found this:

Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7333]: Udev uses a devtmpfs mounted on /dev to manage devices.
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7335]: This means that CONFIG_DEVTMPFS=y is required
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7336]: in the kernel configuration.
Mar 27 15:27:12 fishingcat /etc/init.d/udev-mount[7324]: ERROR: udev-mount failed to start
Mar 27 15:27:12 fishingcat /etc/init.d/udev[7323]: ERROR: cannot start udev as udev-mount would not start

Bingo.  I had remembered from an encounter at work that udev likes to create persistent naming rules for hardware.  This is a new feature, and is generally a good thing: it keeps eth0 as eth0 for the next reboot, same for eth1.  But with udev not starting, no dice.  So, following the error message, I enabled the appropriate kernel option in make menuconfig.  (For me, this was under Device Drivers -> Generic Driver Options -> Maintain a devtmpfs filesystem to mount at /dev.)
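For the record, here's a quick way to check whether a kernel already has the option – either from /proc/config.gz (which needs CONFIG_IKCONFIG_PROC) or, failing that, from the config in the source tree (the /usr/src/linux path is the usual Gentoo symlink; adjust as needed):

```shell
# Look for the devtmpfs options in the running kernel's config,
# falling back to the source tree's .config.
zgrep -E 'CONFIG_DEVTMPFS(_MOUNT)?=' /proc/config.gz 2>/dev/null \
    || grep -E 'CONFIG_DEVTMPFS(_MOUNT)?=' /usr/src/linux/.config 2>/dev/null \
    || echo "devtmpfs options not found -- enable CONFIG_DEVTMPFS and rebuild"
```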

I’m not really sure what caused that to become disabled in the first place, but it’s fixed now, so there you go.

G530 Flicker Saga: The End?

Well, I hope everyone has had a great Solstice, and will continue having a wonderful holiday season with Christmas in a couple days.  Up here in NY it’s been pretty rainy and a little on the warm side, with little snow.  Hopefully we’ll have some for the 25th, as that would be fairly appropriate.  And hopefully when the inevitable lake effect comes we’ll all be safe.  If you’re reading this from somewhere out in the western US where they seem to be getting our winter weather, I wish you the best.

Anyway, as you may have seen me post here in the past, I've had some interesting issues with my Lenovo G530 laptop.  First, the screen became wobbly and shaky.  Then, it started to flicker.  The first of these was easy to fix, the second very annoying, and almost as easy to fix.  Well, I write now because the dreaded LCD flickering has returned.  It's been a while since I posted about it last, but in truth the fix lasted for maybe three weeks.  Figuring the cable had come loose, I repeated it, giving me another couple weeks.  Finally, a couple weeks ago I reseated the video cable only to get maybe a few hours of reliable operation before the flashing came back.  The workaround seemed to involve slapping the display repeatedly in certain locations along the sides, which served to jostle the wiring back into place, as well as relieve some of my frustration.  Then last week, after doing this for a while, I got fed up.  Here was the result:

Now, you might think that was a little bit harsh.  But I disassembled part of the screen and put it back together again.  I figured that somewhere in there something was in a bad position, and just needed to be tweaked a little.  And it worked!  Since doing this I've had no flickering.  You might be wondering what, specifically, was the problem.  Well, so am I – all I did was take it apart and put it back together again, and it seems fine.

Now, given the popularity of previous posts a nice guide is in order.  However, there are a few things to keep in mind before going through this and attempting it yourself:

  • After taking pictures, I realized that I probably could have gotten some better ones for illustrative purposes.  So, I recommend that you take a look at Lenovo’s page with take-apart instructions for this unit.  Also, look over the entire graphic carefully before attempting anything, just to get a general idea.
  • You need to first follow the instructions for getting at the screen hinges, as well as reseating the video cables.  The first of these is more important.  Use it to get at the hinges; don’t tighten them as we’ll be unscrewing them.  The second is less necessary, but I recommend it to have easier access to the cables, and because you may as well reseat those while you’re tearing this thing apart.
  • There are a ton of small screws and such in this.  You probably already know this if you've taken it apart before, but it bears restating.  Find a clear, hard surface like a kitchen table to work on, and keep track of your screws.  It should go without saying that an appropriate screwdriver set is a must (though you're probably set if you've done this before).
  • Be careful when removing the screen bezel (after unscrewing the screws under the little rubber feet).  Use a small flathead screwdriver, and beware of power and data cables, as well as the camera up top.
  • This isn't a bad time to clean the laptop screen while you're at it.  I used a dampened paper towel with a drop of dish detergent added.  Try not to get any soap or water into the sides of the display.  Take another damp paper towel to rinse it.
  • Finally, this procedure is a bit more involved than ones before.  So, BE CAREFUL.  If you aren’t comfortable doing this, seek assistance.  And of course, do this at your own risk; I am not responsible for any damage to your laptop, yourself, or any other possession.

So, here it is:

Good luck.  This may help you, or it may not, but if the flickering has really been getting to you it’s worth a shot.  Overall, if you’re thinking of buying a G530, I’d recommend against it.  It’s pretty nice for a crappy machine, but it is a crappy machine.  But if you’re stuck with one, at least it’s not impossible to take apart.

Cyrus-SASL and IMAP and SMTP-Auth

I lost a little sleep over this, so now that it’s resolved, I’ll throw it here.  I have a Debian mail server, on which I have Email for some local user accounts (system ones, in /etc/passwd), and some virtual users in LDAP.  In order to do this I use the Courier authdaemon, which checks both those sources and lets me authenticate my IMAP and POP3 servers to them.  This works fine.  Thing is, I want to do SMTP auth as well, with Postfix.  (Note: Configuring all these things has been documented many times, look it up.)

SMTP-Auth is generally done via Cyrus-SASL (via saslauthd), which itself can look to a lot of places to determine if the user can log in.  Since I want to allow logins for SMTP to be the same for IMAP/POP, I have saslauthd try to log into my IMAP server (the rimap option, on Debian in /etc/default/saslauthd).  This worked great, until I upgraded to Debian 6.0 (“Squeeze”).

After I ran the upgrade, I was surprised to find that Thunderbird was reporting that Postfix was refusing my SMTP password.  I checked /var/log/auth.log, and found stuff like this:

Aug 21 14:11:48 clamato saslauthd[7289]: auth_rimap: unexpected response to auth request: * OK [ALERT] Filesystem notification initialization error -- contact your mail administrator (check for configuration errors with the FAM/Gamin library)

Aug 21 14:11:48 clamato saslauthd[7289]: do_auth         : auth failure: [user=myuser] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]

Now, both of the above entries are relevant, but I spent most of my time focused on the second one.  (D’oh!)  I searched around for it quite a bit, and found a couple bug reports about it from various points in the past few years.  I even tried one or two patches, but these did not work.  I tried downgrading and backporting, but these did not help.

Turns out the actual cause of this all (i.e., SMTP-Auth not happening) has to do with the first line in that log, the one about FAM/Gamin and filesystem notification.  Courier's IMAP program likes to have this.  Apparently, it will even go so far as to tell you this when you log into the IMAP server, as I found out when I finally decided to try this via telnet.  I logged in fine, but the login banner included the same alert message as the log…

Basically, what’s happening then is that saslauthd is logging into the IMAP server, getting the successful login response along with the alert message, and is concluding that the login was not in fact successful.  I was able to fix this just by installing the gamin package; see this page for slightly more information.

In my opinion, there is in fact a bug here, aside from my own inability to notice the other log message.  Courier logged me in successfully, even with the alert; saslauthd should've picked up on that.  Here's the code that checks this:

if (!strncmp(rbuf, TAG " OK", sizeof(TAG " OK")-1)) {
    if (flags & VERBOSE) {
        syslog(LOG_DEBUG, "auth_rimap: [%s] %s", login, rbuf);
    }
    return strdup("OK remote authentication successful");
}

This is part of the auth_rimap.c file, and causes the auth_rimap function to return success if the response starts with the expected tag and "OK".  However, if the response is not exactly what it was expecting, it falls through and gives the unexpected-response message.  I may put together a patch to change this, but right now I think I will go for a swim.
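To see why the alert trips it up, here's a rough shell analogue of that check (the tag C01 is made up for illustration; saslauthd uses its own).  The tagged OK response passes, but the untagged alert line, if it's what gets read as the response, does not:

```shell
# Rough analogue of the C check: accept only a response starting with "<tag> OK".
rimap_ok() {
    case "$1" in
        "C01 OK"*) echo "authentication successful" ;;
        *)         echo "unexpected response" ;;
    esac
}

rimap_ok "C01 OK LOGIN Ok."                                           # -> authentication successful
rimap_ok "* OK [ALERT] Filesystem notification initialization error"  # -> unexpected response
```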

(On another note, I need to get a good code display plugin for this blog, so I’m not using block quotes for that.)

Windows Shares in /etc/fstab

A project I am doing at work involves having a Debian Linux machine mount a Windows share, and then letting a script go in and pull some data out of files for processing.  A lot of this is easily handled by Perl, but the first task is to mount the share.  And this had me stumped for a second.

The syntax for mounting a share in /etc/fstab looks kind of like this (on one line):

//servername/sharename    /mount/point    cifs    username=user,password=pass    0 0

Now, I had something like that set up, but when I would try to mount it by typing mount /mount/point, I would get an error like this:

[mntent]: line 21 in /etc/fstab is bad
mount: can't find /mount/point/ in /etc/fstab or /etc/mtab

Now, I couldn't figure out what was causing this at first.  I mean, it worked fine when I just used the mount command directly.  But it turned out that the problem was a space in the Windows share name (something like //servername/share name).  I had thought that using a simple \ to escape the space would work, but on a hunch I searched to see if there was a proper way to escape it, specifically for /etc/fstab.  This post explained it, and now I have the following in /etc/fstab:

//servername/share\040name    /mount/point    cifs    username=user,password=pass    0 0

Basically I just replaced the space with \040, the octal escape code for a space character.  Now I can mount it with no problems.
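If you ever need to convert a path with several spaces by hand, a one-liner like this does the substitution (just a sketch using a made-up share name):

```shell
# Replace each space with the \040 escape that fstab understands.
printf '%s\n' '//servername/share name' | sed 's/ /\\040/g'
# -> //servername/share\040name
```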

Lenovo G530 Screen Flicker

Well, I have no doubt that my trusty Lenovo G530 will still be functioning after the rapture.  It’s getting a bit flakier in its old age, but it’s still chugging along quite well.  However I do, of course, have to tend to it from time to time.

If you have one of these you might have gotten the telltale flicker of the LCD screen.  Sometimes this seems to accompany the loose screen hinges.  Well, it is slightly related; this problem is actually caused by a loose video cable leading to the monitor.  For me, it wasn't actually that hard to fix, basically amounting to a connector that needed to be reseated.  (Another cause could be the cable itself, which would be bad, but it's probably just the connector.)  To fix it, just remove the keyboard, disconnect the video connector, and reconnect it.  Once again I have prepared a handy graphic to guide you (though be sure to check out my post on fixing the screen hinges and the graphic there first; you'll need to take those steps to get to the screws here):

Howto: Reseat the Lenovo G530 video cable

Be very careful, as the parts in here are kind of delicate.  In particular, don't yank the keyboard too much.  Pry the video connectors out with a screwdriver (carefully), and then just stick them back in.  The problem could be as simple as crud on the contacts, and just doing this can work wonders.

This fixed the problem for me; your mileage may vary.  Of course, do this at your own risk (i.e., I am not responsible for damage to your laptop), but if you're careful there's not a lot to mess up.

UPDATE: This may not fix the problem permanently.  If the problem comes back and maybe even gets worse, I have a new post with a solution that may be a little more effective.