Monday, May 22, 2006

Finally! Reliable Atheros wireless in openSuSE 10.1

Earlier I had mentioned that I was a little concerned that the madwifi drivers were not being shipped with any SuSE products. I quickly found out why madwifi-ng wouldn't be supported: it was very unreliable for me, especially when using it with NetworkManager, which enforces the use of wpa_supplicant. (This seems to be a known problem.)

I failed to get madwifi-old working and quickly gave up on that (my wlan device could never associate with my access point).

I then started scouring the web to find a chipset that had well-supported drivers that were included in a stock install of SuSE, worked with NetworkManager/wpa_supplicant, and supported WPA. That was a couple of frustrating hours :) I was seriously ready to go buy another card, but it seemed the best chipset choices were in cards that were no longer in production. Plus, I had two perfectly working Atheros cards… this was getting ridiculous.

Then it dawned on me: why not try using ndiswrapper with my cards? I hadn't had much luck with ndiswrapper a couple of years ago, but now I was starting to get desperate.

Turned out that my pcmcia Atheros card worked pretty well with my Dell C600. Not real convenient that I had to go find the exact driver from my vendor (as opposed to just using one Linux driver), but hey, at least it works and I don't have to go buy another card. I later found out that there's an opensuse wiki page suggesting this same solution: Atheros_ndiswrapper.

Then I tried the ndiswrapper driver on my IBM T42p laptop, which has a built-in Atheros chipset. This sorta worked, but I would get disconnected every couple of minutes. This wasn't much better than my situation with madwifi-ng.

I realized I needed to find out why the madwifi-old driver wasn’t working since this driver worked flawlessly on Ubuntu for months and months and months.

Turns out the solution was this: I got the madwifi-old driver working, but it required that wpa_supplicant be compiled against the headers from madwifi-old, not madwifi-ng. I've finally been enjoying rock-solid WPA wireless with NetworkManager on SuSE 10.1 without any problems.
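For reference, the wpa_supplicant docs describe building against an out-of-tree madwifi checkout; a minimal sketch of what my build amounted to, assuming the madwifi-old source sits in /usr/src/madwifi-old (the paths and version number are assumptions):

```shell
# Sketch: compile wpa_supplicant against the madwifi-old headers.
# The source paths and version number below are hypothetical.
cd wpa_supplicant-0.4.8

# Enable the madwifi driver backend and point the build at the
# madwifi-old tree (not madwifi-ng) in the build configuration.
cat >> .config <<'EOF'
CONFIG_DRIVER_MADWIFI=y
CFLAGS += -I/usr/src/madwifi-old
EOF

make
make install
```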

Update: Packman packages the madwifi kmp kernel package. This package (along with the stock wpa_supplicant) works wonderfully! I'm not sure what I did wrong when trying madwifi-ng, but I'm glad this solution works really well.


  1. Simon Geard Says:

    I've been putting in a bit of effort in this area myself lately, and you might be glad to know that the madwifi-ng drivers have improved somewhat in the last month or so, to the extent that wpa_supplicant can be used with its generic wireless-extensions support instead of madwifi-specific code. As such, I find it now works pretty well with NetworkManager.

    The only remaining problem for me is that they still report signal strength differently from everyone else, so that NM reports a much weaker signal than it actually has. For that reason, I keep the Gnome netstatus applet running as well, since that appears to have workaround code to display a correct signal.

  2. Wade Berrier Says:

    Simon, thanks for the info. Unfortunately, I’m kind of new to this blogging stuff and didn’t find your gem of a comment in the midst of the hundred spam comments I had:) I did manage to try packman’s madwifi-ng package and that’s been working really well.

    Too bad I didn’t notice your comment or else I would have tried it much earlier…

Tuesday, April 25, 2006


Yay for Zenworks on SuSE RC2!

I previously mentioned that rug and Zenworks in SuSE beta 8 was a great addition, but not quite there yet.

It also wasn’t there in Beta9. Then it occurred to me: the SuSE ‘factory’ is a yum repository. So, I successfully upgraded from beta9 to beta10 with yum. Everything went smoothly.
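Since factory is served as a plain yum repository, the jump can be sketched like this (the repo URL and layout are assumptions; the mirror structure has changed over the years):

```shell
# Hypothetical repo definition -- the factory URL/path is an assumption.
cat > /etc/yum.repos.d/factory.repo <<'EOF'
[factory]
name=openSUSE Factory
baseurl=http://download.opensuse.org/factory/repo/oss/
enabled=1
gpgcheck=0
EOF

# Refresh metadata and pull everything up to the factory versions.
yum clean all
yum -y upgrade
```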

Now it was time to try upgrading to factory (which I believe was rc2 at the time). I manually installed libzypp, zmd, and libzypp-zmd-backend from factory. Then I deleted everything from /var/lib/zmd and /var/lib/zypp, as well as /var/cache/zmd.


/etc/init.d/novell-zmd restart

rug sa factory
rug sa factory-e
rug sa --type=zypp packman

rug sub factory
rug sub factory-e
rug sub packman

rug update

Everything went right along. All deps were resolved, all packages were downloaded (1 GB total), and the rpms began installing. zmd quit at about 40% of the installation transaction, but the backend transaction ran to completion. I'm not sure what's going on there.

Anyway, much progress, and I'm counting on rug/zmd being usable. The memory leaks have been fixed, but the process of adding repositories, refreshing, adding packages, and removing packages is still too slow. It taxes my 2 GHz mobile chip way too much. Hopefully some further improvements will be made.

The cool thing about zmd is that it supports several repository formats:

wberrier@wberrier:~> rug st

Alias | Name | Description
yum | YUM | A service type for YUM servers
zypp | ZYPP | A service type for ZYPP installation source
nu | NU | A service type for Novell Update servers
rce | RCE | A service type for RCE servers
zenworks | ZENworks | A service type for Novell ZENworks servers
mount | Mount | Mount a directory of RPMs

Congrats to the people working on Zenworks and zypp and I’m looking forward to further improvements.

Sunday, April 2, 2006

General Conference

There have been some great talks this weekend from the Brethren. Most striking to me were those about the Atonement and those offering comfort to the weary.

I've never seen President Monson be so hilarious! Even President Hinckley said something like, "President Monson is a hard act to follow."

President Hinckley boldly denounced racism and encouraged all to be nicer and more generous to everyone.

As with several previous conferences, my in-laws came to visit and it was nice to spend the weekend with them.

Saturday, March 25, 2006

Novell Brainshare 2006

This is my official first Mono/work related blog entry!

Let me start with the few pictures that I took.

I was able to go to Novell Brainshare in the Salt Palace in Salt Lake City. It was absolutely fantastic! I had a great time and learned a lot about Mono (mostly from listening to some tidbits of information that Paco and 'the Mystery Shopper' always had readily available). It was also nice to be able to get to know some colleagues better as well as meet lots of people who are excited about Mono. And if they weren't excited when they got to our booth, most of them were excited by the time they left.

My first demo was an interesting experience. A visitor approached me on Monday morning, pretty much right when the Technology Lab opened. He started out with, "Hey, uh, what's new in the Mono world?" I proceeded to tell him how this cool new gui plugin had just been checked in a few weeks ago. I then asked, "Are you familiar with Mono or .NET technologies?" "Yeah, a little." I wrapped up the stetic demo with, "Pretty cool, huh?" In the meantime, Frank (Rego) comes back from his session and says, "Hey, Niel! How's it going?" I look down to find a nametag with "Niel Bornstein" on it. Wow, was that embarrassing or what? I let him know how tricky that was, and he said he had fun being the 'Mystery Shopper'. I ended up bringing my copy of 'Mono: A Developer's Notebook' with me the next day and he signed it. So, Niel, this is to you: for the next book you co-author, make sure it's got a picture of you on the back cover. That would really help us poor vulnerable fellows out :) In fact, if I man a booth sometime in the future, I've got a few intro questions up my sleeve now: "Have you heard of Mono?" "Have you used it before?" "Have you written any books on Mono lately?" Niel hung out and helped at the booth for several days. It was great to meet him and to have another person with some serious Mono experience there to help answer questions and work the crowd.

Anyway, like others have been blogging about, we did some Winforms demos as well as showed off the new stetic MonoDevelop plugin. I wasn’t sure what I should show for demos. I hadn’t messed with MD nor stetic much. Lluis integrated things so well, that it only took a couple of minutes to put a simple example together. Of course, Dan deserves much credit.

It was really easy to show Mono off. I would start out with, “Did you see the SLED [NLD] demo? All of those cool apps Nat and Guy showed off were implemented in Mono!” Then we’d do a simple gtk# app in Mono with stetic, and copy the binary to a win32 box and run it.

So, Paco is a Mono evangelizing machine! I’d be doing the above demo, and he’d say, “You know… that’s cool, but why don’t we try that binary on another machine… say my Nokia 770!” We had great fun.

We also showed SWF apps on win32 and Linux. And wow, what a difference! People would ask, "So, how far has Mono come since 1.1.4?" Or, "Will my app work?" "I dunno, let's try it out!"

There were a lot of people interested there. One cool example that I'd never considered was pulling content out of eDirectory. I knew that eDir was fast, scalable, replicable, etc… but never thought about it until the guys from AppGenie showed me their site running on Mono using the Novell.Directory.Ldap connector.

So then I’d try to answer any questions best I could. Can’t wait for Brainshare 2007!

One more tidbit… I switched to Debian (from RH) in 2001. Ubuntu totally rocked Debian's desktop, and I've been using that for a few years. I've always made sure that I was running either NLD or SuSE on my work machines so I would be familiar with it. But I must say, this latest CODE10 release that is coming soon leaves me next to no excuse for not running it at home. The GNOME desktop is beautiful (and certainly much more attractive than Dapper's new orange theme). A big part of my mindset change is the new zmd/rug integration. apt coupled with Ubuntu's vast repositories is an amazing combination, and I've never been a big fan of the yast module for package management. Now, there are some performance problems with zmd in beta8, but one memory leak has already been fixed, and hopefully the CPU taxing will be fixed soon. In time these issues will get ironed out. I am a little concerned that ath/nvidia/fglrx/etc drivers will not be shipped (Ubuntu makes this space very easy for the user), but I understand that it's difficult for Novell to provide support and fixes for software they don't have access to. Hopefully some collaboration can take place between these companies to provide a pleasant experience for the user.


  1. Paco Martinez Says:

    What a fantastic way to debut in monologue!


  2. wberrier Says:

    Thanks! Now… I just need to figure out how to get the full posting listed in monologue instead of the description overview. This also happens when I read my blog via blam. But, I’ve noticed that other people using wordpress don’t have this issue. As far as I can tell, the xml looks the same… Ideas?

  3. wberrier Says:

    Ok… I just modified wp-rss2.php to put the same stuff in as what’s put in . Seems that’s what others’ feeds using wordpress were doing…

Friday, March 24, 2006

DIY Projector

I’ve been enjoying my homemade projector for several weeks now and it’s about time to post some pictures.

Projector Screenshots

Some of the pictures are quite blurry, but that's my digital camera's fault. It typically takes good pictures, but they often come out fuzzy when there is a lack of light in the shot.

I first heard of building a projector from Dan Rhimer while living at Wymount on BYU campus. I later saw this article, which showed how to do it with very little construction and easy-to-get parts.

Cheryl and I enjoy watching movies together, and I've always wanted a projector. Being the cheapskate that I am, I jumped aboard the idea. I told my Dad and some family members about what I wanted to do. My first purchase was a flat-panel display at Novell's surplus store in Orem. I bought it as a gamble at $30 without a power supply. I soon bought a universal power supply for LCDs and laptops for $30. This screen worked pretty well, but I realized that at 18″ it was too big for the surface of an overhead projector.

My Dad loves shopping at places like DI and garage sales and such (surely where I get it from). He found an overhead projector at Another Way for $5. Bingo.

The next piece of the puzzle came when I was visiting Jake Cahoon's office at work. Baha Masoud, a fellow co-worker, had an old monitor that needed new backlight bulbs. Jake's quite a handyman and decided to see what it would take to replace the backlights. After finding out it would be $45, he wasn't sure it was worth it and was going to bag the screen. I happened to show up and told him I was looking for an LCD with a broken backlight. He didn't object to my acquiring the screen. I use it at 1024×768 because that's the max resolution of the driving laptop, but I believe the LCD can do 1280×1024.

The only missing link now was the bulbs for the overhead. I found them on the net for $5 apiece, at 350 watts, lasting 75 hours. (Which, interestingly enough, happens to be about the same cost per hour as commercial projector bulbs.) I was quite impatient one Saturday night and decided to try the setup out with a halogen lamp (the type used for night-time construction). I had worried whether the image would be bright enough, and I figured that if this lamp wasn't bright enough, nothing would be. I mean, this lamp could practically heat a small home.
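As a back-of-the-envelope check on that cost-per-hour claim (the commercial bulb price and lifetime below are assumptions, purely for scale):

```shell
# Rough bulb economics, in whole cents per hour.
# Overhead bulb: $5 (500 cents) lasting 75 hours.
overhead=$(( 500 / 75 ))        # ~6 cents/hour
# Commercial projector bulb: assume $300 lasting 3000 hours.
commercial=$(( 30000 / 3000 ))  # 10 cents/hour
echo "overhead: ~${overhead}c/h, commercial: ~${commercial}c/h"
```

Same ballpark, which is why the $5 bulbs don't feel like a compromise.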

This image ended up being horribly fuzzy and almost indistinguishable. I was quite disappointed, but my ordered bulbs were already in the mail. Oh well, I figured if it didn’t work out I was only out $20.

So, I finally got the bulbs and tried them out. The image looked GREAT!! I was very excited to have my own bigscreen in the comfort of our home.

Cheryl helped me tune the color with methods I learned from this article.

I sometimes notice that the image still isn't bright enough. Luckily, pumping up the brightness does the trick. I'm able to do this with Totem, mplayer, and mythtv. Not bad for $20. Now if I could just figure out where to put it in my house…

Thursday, February 23, 2006

HP Laserjet 2100

A little history… after Cheryl and I graduated, our use of the trusty HP Deskjet 842C dropped off considerably. To put it briefly, using an inkjet after it's been sitting for a while is disappointing. My ink dried up, and to print a couple of things I was out almost 40 bucks for new ink cartridges.

That’s fine and dandy, but I realized that this $40 was probably going to dry out as well. This put me on the lookout for an inexpensive laser printer. In fact, I found a Lexmark at DI (one of my favorite places to shop). I didn’t end up getting it for one reason or another. Sure enough, the next time I went back it was gone.

I told my friend Andrew about my quest and he had a friend who was selling an HP Laserjet 2100 for $10 (without toner). In fact, he even sent me the search link for a $25 toner from Toner Pirate. I was hoping for a USB printer, but this would certainly do.

So, for about the same price, I got a new printer with a toner cartridge that has a yield of 5000 pages. Compare that with the 600-page yield of my $40 inkjet cartridges (not including the cost of the printer, of course).

You may be saying, “Well, you can’t print in color.” True. For my needs, it’s cheaper to head to Kinko’s for a color print :)

The actual reason for writing this entry: it's a note to self. When using this printer with CUPS, use either the ljet4 or hpijs drivers. The recommended pxlmono driver doesn't work. Also, don't turn off the printer while it's printing pages of garbage. Instead, kill cups and let the spewing cease on its own.
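A sketch of what that note-to-self amounts to in CUPS terms; the queue name, device URI, and PPD name here are assumptions (check `lpinfo -m` for the real model string on your system):

```shell
# Create the LaserJet 2100 queue with the hpijs driver.
# Queue name, device URI, and PPD name are hypothetical.
lpadmin -p lj2100 -E \
    -v parallel:/dev/lp0 \
    -m foomatic:HP-LaserJet_2100-hpijs.ppd

# When a job starts spewing garbage, stop cups instead of
# power-cycling the printer, and let the spew finish on its own.
/etc/init.d/cups stop
```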

Also noted bonuses of using hpijs: my margins are more accurate, and the dithering is a little fuzzier (makes the picture look better in greyscale).

Monday, February 20, 2006

Xgl and Compiz fun

I must confess, one of my main reasons for moving my email server to another dedicated box was so that I could try out Xgl and compiz without worrying about hosing my box. I followed this guide. At first when I tried compiz I could only see vague, white, shadowish boxes. I saw on this guide that I needed a newer glitz if I had an older Nvidia card.

I must say this stuff is very impressive! I have a rather old card (GeForce4 MX 440 AGP) but after I used the newer glitz package in Dapper things went right along! I’ve posted my screenshots here.

Video didn’t work quite as well as I hoped. Xv output didn’t work at all (it froze), but gl2 or x11 with software zooming worked as long as I wasn’t doing any cool compiz effects :) And there are some other minor things like middle clicking a title bar doesn’t drop the window behind all others.

I’m not quite ready to run Dapper yet as it’s been locking up my computer quite frequently. I’m getting ready to downgrade my computer back to breezy, and I’m already feeling nostalgic. Guess I’ll have to try it on my Laptop at work with the fglrx driver.

Saturday, February 18, 2006

Allowing authentication with postfix

I'm running my own mail server, and doing this has been fantastic for spam filtering and mail processing. For spam, I use SpamAssassin as well as greylisting, as suggested by Andrew. I get very little spam.

There are some options when setting up postfix with TLS authentication. I could use a sasldb to authenticate users against, but I don't want to maintain two sets of accounts (/etc/passwd accounts as well as sasldb). Also, if I use the sasldb I must authenticate in user@host format, whereas my normal logins are only with user. I need some consistency. So, I want to use saslauthd against pam. The only problem is that the only authentication mechanisms available for this option are PLAIN and LOGIN. I don't feel very good about sending my passwords in the clear, so I'm also going to set up postfix to force TLS when authenticating.

I had some troubles with postfix finding the sasl socket. This page had the answers:

You have to modify the saslauthd setup to run inside the postfix chroot.

So, inside of /etc/postfix/sasl/smtpd.conf:

pwcheck_method: saslauthd
mech_list: plain login

And for /etc/postfix/

# Sasl authentication
# Also added the permit_sasl_authenticated above for this to work
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_security_options = noanonymous
# And now for using tls to authenticate
# Update: don’t do this next line… it forces tls for people trying to send mail to you.
#smtpd_enforce_tls = yes
# This was what I had intended:
smtpd_tls_auth_only = yes
smtp_use_tls = yes
smtpd_use_tls = yes
smtp_tls_note_starttls_offer = yes
#smtpd_tls_key_file = /etc/ssl/smtpd.key
smtpd_tls_key_file = /etc/ssl/private/dovecot.pem
smtpd_tls_cert_file = /etc/ssl/certs/dovecot.pem
#smtpd_tls_CAfile = /etc/ssl/cert/dovecot.pem
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom

As for the modifications to saslauthd, check the above link; it explains it well.
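For completeness, a sketch of what the chroot change amounted to on my box; the config file location and flags are Debian-flavored assumptions, so follow the linked page for the details on your distro:

```shell
# Make saslauthd put its socket inside the postfix chroot.
# On a Debian-style system this means editing /etc/default/saslauthd:
#   OPTIONS="-m /var/spool/postfix/var/run/saslauthd"
# (file location and flag are assumptions -- see the linked page)

mkdir -p /var/spool/postfix/var/run/saslauthd
/etc/init.d/saslauthd restart
/etc/init.d/postfix restart
```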
That’s it! Now, if I could just send email without getting blocked by sorbs…

Note: In order to authenticate with Evolution, you must select “Whenever Possible” under “Use Secure Connection” in the “Sending Email” tab. I’m not sure exactly why this is, but the other options don’t seem to work.


  1. Andrew Jorgensen Says:

    I’m sorry to have to report that I couldn’t send you a mail today because you’re requiring TLS on incoming connections from other mail hosts as well. I would tell you by email but well…

  2. wberrier Says:

    Doh! Thank you. Wow, and I thought I wasn’t getting any email because of my exceptional spam filtering!

Saturday, January 28, 2006

Hello World!

Howdy World! WordPress seems really nice! Like the title says, if I had only started this 10 years ago. Many a computer project has come my way.

The latest has been my website and email migration to a little box I received from a neighbor. One man’s junk is another man’s treasure :) Installing Debian on a Motorola Powerpc MTX+ is another adventure in itself.

This will help with my uptime and I can play with some cool stuff on my desktop without worrying about losing all my email.

On another note, I've upgraded my Qwest DSL service to 7 Mbps, and that's been really nice. Thanks to Comcast for putting some competition in the market. My jump from 1.5 to 7 Mbps was only an additional $10/mo.

I also found out why I had a hard time forcing local web traffic through a filter/proxy. There used to be a netfilter option in the kernel: IP_NF_NAT_LOCAL. This option was removed in 2.6.11, which explains why I've been having a hard time getting this working on my SuSE work laptop (2.6.13), even though it was working on the stock Ubuntu kernels. After I compiled this option into 2.6.8 on Debian sarge, things worked as they should. I'll post more about this some other time, and I'll have to figure out how to get it working on SuSE.
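The rule in question is a transparent redirect of locally generated web traffic into the proxy; a sketch, with the proxy port as an assumption:

```shell
# Redirect locally generated HTTP traffic to a local proxy on
# port 8118 (the port is an assumption -- use your filter's port).
iptables -t nat -A OUTPUT -p tcp --dport 80 \
    -j REDIRECT --to-ports 8118

# On pre-2.6.11 kernels, NAT on the OUTPUT chain like this required
# CONFIG_IP_NF_NAT_LOCAL=y in the kernel configuration.
```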

  1. Francisco "Paco" Martinez Says:

    Looking good. :)