I was checking out the openSUSE 11.0 Gnome LiveCD to gather some information about a Mono bug, and accidentally discovered gvfs.
I guess it's a replacement for gnome-vfs. From a quick glance, nautilus seems pretty much the same to me as when it used gnome-vfs. But, lo and behold, when I opened up an sftp:// URI in nautilus, that 'share' was available via fuse in /home/linux/.gvfs! How cool is that?
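For anyone curious what this looks like from a script, here's a minimal Python sketch that peeks at such a fuse mount. The `.gvfs` location comes from above; the per-mount directory name is an assumption on my part and may differ between gvfs versions.

```python
import os

# gvfs exposes active mounts as plain directories under ~/.gvfs, so any
# program that knows nothing about gnome-vfs/gvfs can read the files.
def gvfs_sftp_path(home, user, host):
    """Guess the fuse path for an sftp mount. The 'sftp for USER on HOST'
    naming is an assumption; check `ls ~/.gvfs` on your own system."""
    return os.path.join(home, ".gvfs", "sftp for %s on %s" % (user, host))

mount = gvfs_sftp_path("/home/linux", "linux", "example.org")
if os.path.isdir(mount):
    for name in sorted(os.listdir(mount)):
        print(name)
else:
    print("no such mount:", mount)
```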
This is probably old news, but I'm pretty excited about this. I guess there'll also be a kio interface. It seems gvfs has some really great potential to bridge the vfs gap.
Great work!
Thursday, April 3, 2008
Wednesday, April 2, 2008
Gmail and IMAP
I've been using google hosted for my personal email for some time now. Cheryl was using their web client and I was fetching all my mail over pop to a local dovecot server.
After I heard they were going to support IMAP I decided that maybe I will finally migrate all my emails (back to 1996) to the google servers.
I noticed that messages copied via imap had incorrect dates when viewed from the web client. That hindered my decision for some time, but Andrew mentioned that they were going to eventually fix that. The dates still appear correctly in imap clients, so I wasn't too worried. I'll mostly use an imap client, but it will be nice to be able to check and send mail from a web client.
(When hosting my own mail with dovecot, I had squirrelmail set up, but my mail was often rejected because it was sent from a dynamic ip. The unreleased squirrelmail beta had the option of configuring one authenticated account for outbound smtp, but using that feature with gmail was a little clunky because it seemed the mails weren't masqueraded properly.)
One of the things I really like about using gmail over imap is the ability to tag spam by moving it to the [Gmail]/spam folder. I had pretty good luck with spamassassin, and although it was fun getting it to work, I had some false positives and decided I didn't really want to think about spam any more.
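As a sketch of the trick with Python's standard `imaplib` (the `[Gmail]/Spam` folder name can vary by account locale, and the credentials below are hypothetical), moving a message to the spam folder over IMAP looks like:

```python
def mark_as_spam(imap, uid, spam_folder="[Gmail]/Spam"):
    """Report a message as spam by 'moving' it the IMAP way: COPY it to the
    spam folder, flag the original \\Deleted, then EXPUNGE. `imap` is a
    logged-in imaplib connection with a folder already selected."""
    imap.uid("COPY", uid, spam_folder)
    imap.uid("STORE", uid, "+FLAGS", r"(\Deleted)")
    imap.expunge()

# Typical wiring:
#   import imaplib
#   conn = imaplib.IMAP4_SSL("imap.gmail.com")
#   conn.login("me@example.org", "secret")
#   conn.select("INBOX")
#   mark_as_spam(conn, "42")
```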
The last of my concerns were answered by this help thread:
http://mail.google.com/support/bin/answer.py?answer=77657
I just hope I don't start deleting messages while using other email servers and expect them to be in my 'All Mail' folder :)
The performance is ok, but not as good as using my own dovecot server serving one account. But since I get the above features and I don't have to worry about backups or my computer going down, that's something I'm willing to live with.
Update:
Some people have asked how I did the actual migration. I configured two imap servers in Evolution and manually copied messages/folders from one account to the other. This took several hours of babysitting the process for roughly 250MB of mail.
It may be worth looking into imapsync.
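For the curious, the manual Evolution copy boils down to something like this `imaplib` sketch (folder and server names hypothetical; imapsync does the same job with far more care around flags and duplicates):

```python
import imaplib  # stdlib IMAP client; both ends are imaplib connections

def copy_folder(src, dst, folder):
    """Copy every message in `folder` from the `src` IMAP connection to
    `dst`. Flags and internal dates are passed as None, so the destination
    server assigns them; fetching and forwarding INTERNALDATE instead
    would preserve the original dates."""
    src.select(folder, readonly=True)
    typ, data = src.search(None, "ALL")
    nums = data[0].split()
    for num in nums:
        typ, parts = src.fetch(num, "(RFC822)")
        raw = parts[0][1]                    # the raw RFC822 message bytes
        dst.append(folder, None, None, raw)  # None, None: server picks flags/date
    return len(nums)

# copy_folder(old_conn, gmail_conn, "INBOX") after logging in to both servers
```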
Update:
Andrew sent me this: google-email-uploader
Monday, February 25, 2008
Accessibility Team looking for packager
Jared Allen asked me to keep an eye out for anyone interested in packaging for Novell's accessibility project. Send me your resume if you're interested.
Friday, February 22, 2008
Novell Hack Week #2
I decided to continue on with my hack week idea from last year.
I spent the better part of a day getting the devel environment set up (compiling and setting up myth from HEAD, setting up the latest compiz-fusion from the build service, and gathering some test HD videos for myth) only to find out that it looks like it's been fixed already! We'll have to wait until the next major release of myth, but it's in there. Moving on.
The next item was a leftover idea that had been kicking around from the Tomboy hack night last December.
For that event I wrote a little python script that rapidly created notes over the Tomboy dbus interface. I gathered some data about how Tomboy performs with a large number of notes. The main findings were:
- Start up time was pretty dismal with a large number of notes (even 1000, which isn't that inconceivable)
- Note creation time steadily increased as the number of notes increased
- The time it took to delete notes was much longer than desired when you had a large number of notes
- Tomboy performed quite well during typical use cases, even with a large number of notes
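The script itself was essentially a loop over Tomboy's RemoteControl D-Bus interface. Here's a reconstruction of the idea (the bus, object, and method names are from Tomboy's D-Bus API of that era; double-check them against your version):

```python
def create_notes(remote, count):
    """Rapidly create `count` notes through a Tomboy RemoteControl proxy
    and return their URIs; timing this loop is the whole benchmark."""
    uris = []
    for i in range(count):
        uri = remote.CreateNamedNote("stress-note-%04d" % i)
        remote.SetNoteContents(uri, "stress-note-%04d\nfiller body" % i)
        uris.append(uri)
    return uris

# Wiring with dbus-python:
#   import dbus
#   bus = dbus.SessionBus()
#   obj = bus.get_object("org.gnome.Tomboy", "/org/gnome/Tomboy/RemoteControl")
#   remote = dbus.Interface(obj, "org.gnome.Tomboy.RemoteControl")
#   create_notes(remote, 1000)
```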
Boyd had mentioned that Everaldo and crew had done an sqlite backend for the maemo Tomboy port. My first objective was to port that code from the 0.7.x codebase to trunk (0.9.x?)
It turns out the maemo port was done mainly to work around a bug in Mono running on the n800. The maemo sqlite port allowed a mechanism for storing multiple notes inside one file in order to work around the aforementioned bug. That alone wouldn't solve the above issues. (In fact, this sqlite backend was significantly slower than the file backend unless delayed writes were enabled for the sqlite db. With delayed writes, they performed roughly the same.)
I spent the rest of hack week getting introduced to git and git-svn (which really rock!), getting my feet wet with C#, reading Tomboy source code, investigating Linq, and writing the C# code to do the db schema creation and schema upgrades. The main conclusive points of interest are:
- To utilize the sql db, queries are needed to pull only the notes into memory that are of interest (otherwise, with all notes in memory, I'm guessing that's a main reason as to why the previous list of shortcomings occur, especially startup time)
- Find out if the current note buffering scheme is needed during note editing. If not, the code could be simplified by persisting changes straight to the db.
- We'll likely need an interface to transparently search and interact with notes in memory or from the db (meaning, I'm guessing the findings from #2 may be futile)
- Provide note migration from xml to db
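I wrote the real thing in C#, but the schema creation/upgrade pattern is easy to show with Python's sqlite3 (the table layout here is a hypothetical sketch, not Tomboy's actual schema): keep a version number in `PRAGMA user_version` and run each migration step the db hasn't seen yet.

```python
import sqlite3

SCHEMA_VERSION = 2

def open_note_db(path=":memory:"):
    """Open the note db, creating or upgrading the schema as needed.
    Each `if version < N` block is one migration step."""
    db = sqlite3.connect(path)
    version = db.execute("PRAGMA user_version").fetchone()[0]
    if version < 1:  # initial schema
        db.execute("CREATE TABLE notes ("
                   " uri TEXT PRIMARY KEY,"
                   " title TEXT, body TEXT, changed REAL)")
    if version < 2:  # later upgrade: index for 'recently changed' queries
        db.execute("CREATE INDEX notes_changed ON notes (changed)")
    db.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
    db.commit()
    return db
```

With a db like this, a query such as `SELECT uri, title FROM notes ORDER BY changed DESC LIMIT 20` pulls only the notes of interest into memory, which is the first point above.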
To alleviate at least the last point, Andrew wrote a sweet command line util: Tomboy Remote. (Because you shouldn't be poking at a program's internal data anyways!) Update: Source download.
In conclusion, there's quite a bit of work remaining. The main benefits of this week were that I got some C# exposure (finally!), experienced a great use case for decentralized scms, and got more familiar with the Tomboy codebase. More for next time!
On another hack week semi-related note, I just upgraded my home system. (I got an intel mb, Core 2 Duo (E6550), 4 GB of ram, and an nvidia 6200le card for $300 after rebates. Thanks Joel and Steve!) Anyway, the onboard sound only has one audio port. Luckily Herbert was kind enough to add my PulseAudio patch from last year's hack week to the Packman rpms. Great timing!
Thursday, November 15, 2007
openSUSE Build Service
Note: I drafted this neglected post in Feb '07, and since I'm talking about the build service at the Mono Summit, I decided to post as is.
I'm trying out the build service with the intent of migrating as much of Mono's packaging as possible.
I first heard about this service at BrainShare 2006 and thought it looked really neat. They did a demo build from the web client.
I just discovered the command line client: osc, and it's amazing! You can do local builds of your projects for multiple distributions! Then you can make changes, tweak your files, do a local test build, and then commit your changes to the server. The server will add your packages to the queue and create a repository for download.
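That cycle scripts nicely too. A small Python wrapper as a sketch (the subcommand spellings come from `osc help`; the project, package, and spec names are hypothetical):

```python
import subprocess

def osc_cmd(*args):
    """Build the argv for one osc invocation."""
    return ["osc"] + list(args)

def osc(*args):
    """Run one osc subcommand and return its stdout."""
    return subprocess.run(osc_cmd(*args), check=True,
                          capture_output=True, text=True).stdout

# The edit/test/commit loop described above, roughly:
#   osc("checkout", "Mono", "mono-core")                  # get a working copy
#   osc("build", "openSUSE_11.0", "x86_64", "mono.spec")  # local test build
#   osc("commit", "-m", "tweak spec")                     # queue server builds
```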
The package build system that Mono uses has cut out a lot of the manual work with building packages. The problem with it is that no one else besides me can use the system to build packages. (Someone could, but it would require creating jails, setting up ssh authentication, etc...). The great thing about the build service is that anyone that is a maintainer on the package can test a local build and submit changes from their local machine.
I've always been impressed with SuSE's autobuild system. It allowed for local builds and submitting build jobs to be done on the build farm. This is fine for SuSE builds, but I was unable to utilize this service for Mono packaging because I needed to build on several non-SUSE distros.
The build service has solved that. There are a few remaining issues that I'll need to sort out before I can move completely over. First of all, only x86 and x86_64 are supported. Plus, I'll need to figure out how to make previous releases available in the build service. (I'm assuming I can create a new namespace for each release, but I haven't looked into this.)
This will also give better testing on the various distros, since for Mono, we only build on a lowest common denominator distro and use it everywhere for that arch.
Good job on SUSE's part, and all I gotta say is, "Wow" :)
Friday, July 13, 2007
Monobuild updates
During the latter part of this week I revamped monobuild to use the .spec files from SuSE's buildservice rather than using Ximian buildbuddy. This was a long overdue move. When our build machine's 700 GB disk crashed, I decided to dive in. There are some nice advantages to this:
- I'm not using the obsolete buildbuddy
- I maintain only .spec files now instead of merging changes back and forth in buildbuddy
- Those spec files can be shared with monobuild, suse build service, and suse autobuild
- When setting up a new distro chroot, I don't have to rebuild buildbuddy with the new distro info
It is interesting to note that there has been some talk of coming up with a cross linux distro xml description to be used in the buildservice. Kinda funny, since buildbuddy had the ability to build rpm and deb. Oh well...
One of the other monobuild features I finished up is the ability to build rpms on your local machine. Previously you could only build on a machine connected through ssh. It's not real user friendly to get this working, but it's possible. I mainly wanted to implement this to work toward the goal of enabling others to easily create the installers.
The easiest way to build local rpms is definitely with the suse buildservice. It rocks. In fact, it has replaced much of the functionality of monobuild. But, since the build service doesn't support all the platforms or distros that we build on, we'll continue to use monobuild for releases on those missing platforms. Monobuild also works great for continuously building from trunk. (There's no reason monobuild couldn't use the buildservice tools to locally build out of trunk, but there hasn't been a need at this point.)
Monday, July 2, 2007
Novell Hack Week
There are two technologies that I really want to use all the time:
PulseAudio
Xgl
The problem is that I run mythtv quite a bit, and myth doesn't work very well with either of the aforementioned pieces of software. As a result, I usually don't have PulseAudio nor Xgl running, because it's a pain to constantly switch them on and off.
So I decided to hack on mythtv for a week to fix this.
PulseAudio
Rationale:
In order to output to PulseAudio from MythTV, you have to use an OSS emulation wrapper (padsp). Patch myth to have real PulseAudio support.
Results:
I took Monday to set up the myth development environment, set up my usb tuner on my laptop, and get the build infrastructure for pulse output set up. By Tuesday morning I had unsynchronized audio/video going to the pulse server using the simple api. I assumed that using this api would leave a/v out of sync, and trying it out confirmed that, as well as letting me get the basic framework implemented.
I read some pulse documentation about the asynchronous api, and before diving in, decided to look at fixing the alsa output support to see what that would take. It ended up being really simple to fix alsa: don't use mmap access to the sound device. In case there were objections to my patch because I didn't use mmap, the final patch tries mmap first, then falls back to non-mmap. I spent the rest of Tuesday and most of Wednesday reading ALSA documentation, finishing the patch, and making some packman-derived mythtv packages that include my patch (hosted here, although I'm hoping this will get into the myth sources, so these packages will eventually disappear).
Patch posted to the myth bug http://svn.mythtv.org/trac/ticket/3598 .
Fixing ALSA was also nice in that no new dependencies were needed for myth. If for some reason there are additional benefits to implementing native pulse support, I might re-address this later.
Xgl
Rationale:
I usually don't run Xgl because mythtv crashes Xgl when you try to display video. This needs fixing.
Results:
I figured that mplayer works under Xgl using XVideo just fine, so Myth should be able to do the same.
MythTV has a branch called mythtv-vid where they are working on an OpenGL output driver. I spent a while installing this branch and getting the latest xgl and compiz-fusion running, just to make sure this problem wasn't fixed already. It wasn't, and I couldn't get the opengl output on myth working.
At this point I wondered if I should just drop this idea and wait for the mythtv-vid project to finish the GL out support. I decided to do some simple benchmarks with mplayer to compare gl out and xvideo out. This can be done by disabling sound and telling mplayer to spit out the frames as fast as possible. The xvideo out ended up being slightly faster. (This was using ATI's fglrx driver. It would be interesting to run the same test with some different video cards and drivers). That was Thursday and a little bit of Friday.
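The benchmark itself is basically one mplayer run per driver. A rough timing harness as a sketch (the `-benchmark`/`-nosound`/`-vo` flags are standard mplayer options; the clip name is hypothetical):

```python
import subprocess
import time

def mplayer_cmd(vo, video):
    """argv for decoding a clip as fast as possible with a given -vo driver:
    -benchmark drops frame-rate pacing, -nosound disables audio."""
    return ["mplayer", "-benchmark", "-nosound", "-vo", vo, video]

def time_vo(vo, video):
    """Wall-clock seconds for mplayer to burn through `video` using `vo`."""
    t0 = time.time()
    subprocess.run(mplayer_cmd(vo, video), check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.time() - t0

# Compare XVideo against OpenGL output on the same clip:
#   for vo in ("xv", "gl"):
#       print(vo, time_vo(vo, "test-hd.mkv"))
```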
The rest of Friday morning I spent debugging the myth sources to find the crash. This went rather slowly because each test run crashed Xgl and I had to constantly re-login. The crash is happening during some xvideo initializations. I've located the X calls that cause the crash, but that's as far as I got. I don't know enough about xvideo to debug this further, so I've got some more digging to do. That took me until about noon on Friday.
The next few hours were spent setting up another machine so that I could demo mythtv running on one computer with synchronized output to 2 computers. The demo was videotaped, but it's kind of difficult to experience synchronized output with a video camera :) I finished the rest of the day debugging Xgl a little more, but ended up not making any progress.
Conclusion
I have MythTV working with PulseAudio (even though it didn't quite happen as expected), and made some good progress towards finding out why MythTV crashes Xgl. The hack week was a blast and it seems like overall a lot of great progress was made. Can't wait for the next one!
Update! Lightning talk: