28th Sep2016

Prevent duplicate Drafts, Junk, Sent, Trash folders in Dovecot

by Gyro

I've set up a new mail server for a client, and then used imapsync to copy all existing mail from the old server to the new server.

Everything seemed to have run smoothly, but shortly after they started using email via the new server, they had multiple folders for drafts, junk, sent, and trash.

After investigating this a bit, I found out that different mail clients expect different folder names on the server, and Dovecot will create them when they are missing.

This really confused me, as this was not happening on the old server, so why was this now happening on the new server?!

Well, I could have prevented this from happening, had I taken a look at the old server's configuration first.

It turns out you have to configure "Namespaces" to control folder names and map possible duplicates.

Below is the configuration that was (and now is) in place. It has to be added to /etc/dovecot/dovecot.conf:


namespace inbox {
  type = private
  separator = .
  prefix = INBOX.
  inbox = yes

  mailbox Drafts {
    special_use = \Drafts
    auto = subscribe
  }

  mailbox Junk {
    special_use = \Junk
    auto = create
  }

  mailbox spam {
    special_use = \Junk
    auto = no
  }

  mailbox Spam {
    special_use = \Junk
    auto = no
  }

  mailbox Trash {
    special_use = \Trash
    auto = subscribe
  }

  mailbox Sent {
    special_use = \Sent
    auto = subscribe
  }

  mailbox "Sent Mail" {
    special_use = \Sent
    auto = no
  }

  mailbox "Sent Messages" {
    special_use = \Sent
    auto = no
  }

  mailbox Archive {
    special_use = \Archive
    auto = create
  }

  mailbox "Archives" {
    special_use = \Archive
    auto = no
  }
}

With this configuration you get this setup:

INBOX
INBOX.Drafts
INBOX.Junk
INBOX.Trash
INBOX.Sent

Meaning, the Drafts, Junk, Trash, and Sent folders will reside inside the Inbox. Mail clients that want to be different will have their special folder names mapped to the "one and only" folder of that type; for example, the iPhone's "Sent Messages" maps to the actual "Sent" folder.
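With the namespace in place, you can sanity-check what an account actually contains using doveadm (assuming doveadm is available on your install; user@example.com is a placeholder address):

```shell
# List every mailbox Dovecot knows about for the account;
# duplicates like "Sent Messages" should no longer appear as separate folders
doveadm mailbox list -u user@example.com
```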

I hope this helps anyone facing the same issue. Please leave a comment, if you found another special folder name that should be added to this list.

Enjoy! :)


26th Sep2016

Load ISPConfig on subdomain using website’s SSL certificate

by Gyro

I've spent the last few days installing and configuring ISPConfig 3.1 on a new server, and one thing I really don't like about ISPConfig is the custom port it is running on.

So, I thought it would be really cool to use a subdomain instead and forget about the port altogether.

It took me quite a while to figure out how to make ISPConfig load on a subdomain and have the subdomain configured for each website automatically. Of course I googled it, but information on how to accomplish this is quite rare (non-existent?), and I had to combine material from a few different sources to come up with the (in my opinion) perfect solution.

The result: ISPConfig loads on an automatically configured subdomain and even works with each website's SSL certificate!

EDIT: This approach currently does not work with Let's Encrypt, because Let's Encrypt does not create an SSL certificate that includes the subdomain used for ISPConfig, so your browser will warn you about an invalid SSL certificate being used. I am working on a solution. If you have a wildcard SSL certificate from a different vendor, this will work, though.

Prerequisite

1. Make sure the following mods are enabled
~$ sudo a2enmod proxy_http
~$ sudo a2enmod proxy

2. You have to activate SSL for each website
A self-signed SSL certificate is sufficient, but I recommend getting a free one from StartSSL or LetsEncrypt.
ISPConfig 3.1+ can automatically setup a valid LetsEncrypt SSL certificate for each website.

Modify Vhost Master Template

~$ sudo nano /usr/local/ispconfig/server/conf/vhost.conf.master

Add the following code directly under </VirtualHost>, near the bottom of the file.

This variant will only work with HTTPS, and it will redirect HTTP to HTTPS:


#--------------------------------------------
# START: Add ISPConfig subdomain to all accounts
#--------------------------------------------
<tmpl_if name='ssl_enabled'>
<VirtualHost {tmpl_var name='ip_address'}:{tmpl_var name='port'}>
ServerName panel.{tmpl_var name='domain'}
SSLProxyEngine On
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
ProxyVia off
ProxyRequests off
ProxyPreserveHost on
ProxyPass / https://localhost:1155/
ProxyPassReverse / https://localhost:1155/
</VirtualHost>
<tmpl_else>
# Redirect unsecure to secure connection
<VirtualHost {tmpl_var name='ip_address'}:{tmpl_var name='port'}>
ServerName panel.{tmpl_var name='domain'}
Redirect 301 / https://panel.{tmpl_var name='domain'}/
</VirtualHost>
</tmpl_if>
#--------------------------------------------
# END: Add ISPConfig subdomain to all accounts
#--------------------------------------------

This variant will work with both HTTP and HTTPS connections:


#--------------------------------------------
# START: Add ISPConfig subdomain to all accounts
#--------------------------------------------
<tmpl_if name='ssl_enabled'>
<VirtualHost {tmpl_var name='ip_address'}:{tmpl_var name='port'}>
ServerName panel.{tmpl_var name='domain'}
SSLProxyEngine On
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
ProxyVia off
ProxyRequests off
ProxyPreserveHost on
ProxyPass / https://localhost:1155/
ProxyPassReverse / https://localhost:1155/
</VirtualHost>
<tmpl_else>
<VirtualHost {tmpl_var name='ip_address'}:{tmpl_var name='port'}>
ServerName panel.{tmpl_var name='domain'}
SSLProxyEngine On
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
ProxyVia off
ProxyRequests off
ProxyPreserveHost on
ProxyPass / https://localhost:1155/
ProxyPassReverse / https://localhost:1155/
</VirtualHost>
</tmpl_if>
#--------------------------------------------
# END: Add ISPConfig subdomain to all accounts
#--------------------------------------------

Notes

1. You have to change the port (1155) to match the port that your ISPConfig installation runs on (the default is 8080).
2. You may want to replace "panel" with a different word for the subdomain.
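Once a website has been re-saved (so its vhost is regenerated from the template), a quick way to test the proxying is with curl. panel.example.com is a placeholder; -k skips certificate verification, which is handy with self-signed certificates:

```shell
# HTTPS should serve the ISPConfig interface (expect 200 or a redirect to the login page)
curl -kI https://panel.example.com/

# With the https-only variant, plain HTTP should answer with a 301 to the https URL
curl -I http://panel.example.com/
```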

Enjoy! :)


29th May2015

Report to New Relic APM on cPanel/CloudLinux with CageFS

by Gyro

I finally found out how to get PHP apps hosted on a cPanel server with CloudLinux and CageFS enabled to report to New Relic APM.

Since I spent days searching and trying, with giving up on several occasions, I figured this definitely deserves a post on my blog.

So, how did I manage to have apps report to New Relic for ALL enabled CageFS users, with each of them able to use their own New Relic license/account?

Well, the problem was the default setting for the socket: sitting in the /tmp folder, it was simply not available to CageFS users.

Log in via SSH and check /tmp as root and as a user, and you will see that .newrelic.sock is missing for the users (~$ ll /tmp).

AFAIK this is due to CageFS creating "fake" /tmp directories for each user; for example, you will see all of a user's sessions only in that user's /tmp, but not in the physical /tmp of the server's filesystem.

To get around this, I did the following:

1. I created the folder /var/run/newrelic-global/ (not sure if it was necessary)

2. I added the following lines in /etc/newrelic/newrelic.cfg:

pidfile = "/var/run/newrelic-global/newrelic-daemon.pid"
port = "/var/run/newrelic-global/.newrelic.sock"

3. I added the following line in /etc/cagefs/cagefs.mp:

/var/run/newrelic-global

After that I ran these commands:
~$ /etc/init.d/newrelic-daemon restart
~$ cagefsctl --remount-all

Now, users can either define the PHP flags via .htaccess (if possible), or configure their own php.ini (if available), depending on how the server has been configured and/or what their preferences are. Below is an example of what to do in the .htaccess file.

<IfModule mod_php5.c>
php_value newrelic.daemon.port "/var/run/newrelic-global/.newrelic.sock"
php_value newrelic.daemon.pidfile "/var/run/newrelic-global/newrelic-daemon.pid"
php_value newrelic.license "MyNewRelicLicense"
php_value newrelic.appname "MyAppName"
</IfModule>
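To confirm the socket really is visible inside a user's cage, you can compare the directory listing as root and as a caged user (someuser is a placeholder):

```shell
# As root: the socket should exist in the shared mount point
ls -la /var/run/newrelic-global/

# As a CageFS user: the same listing should also show .newrelic.sock
su -s /bin/bash someuser -c 'ls -la /var/run/newrelic-global/'
```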

To check if it is working, have a look at the log file (default location: /var/log/newrelic/newrelic-daemon.log). You should see something like this:


…info: ['MyAppName'] 'Reporting to:…

Before all this, you should of course install the php agent:
https://docs.newrelic.com/docs/agents/php-agent/installation/php-agent-installation-redhat-centos
It will ask to update the php.ini during the installation of the agent. Select the one located in /usr/local/lib.
Once the installation is completed, run ~$ cagefsctl --force-update (those are two dashes in front of force-update).

Also, you may want to have a look at this page if you are using the newrelic-sysmond daemon to monitor your server.
https://www.lucasrolff.com/cpanel/new-relic-and-cloudlinux/

Enjoy :)


04th Nov2014

ISPConfig/Dovecot Fix: message exceeds temporary size limit

by Gyro

A simple ISPConfig/Dovecot Fix: message exceeds temporary size limit.

I just set up a dedicated server as a webserver using ISPConfig as the control panel, following pretty much this guide:
The Perfect Server -- Ubuntu 12.04 LTS (nginx, BIND, Dovecot, ISPConfig 3)

Everything went smoothly, until I setup and tested email addresses.

Thunderbird kept giving me this error popup when trying to send an email:

The size of the message you are trying to send exceeds a temporary size limit of the server. The message was not sent; try to reduce the message size or wait some time and try again. The server responded: (IP, Sender) first encounter..

So, I searched Google for that exact phrase as well as parts of it, like "The size of the message you are trying to send exceeds a temporary size limit of the server" or "message exceeds a temporary size limit", and so on, until I finally found a small post that (accidentally?) contained the solution: a parameter that was missing in my config file!

It took quite a while to find the answer, so hopefully this post will help others to get the solution quicker :)

FIX: message exceeds temporary size limit

Edit /etc/postfix/main.cf and add this (missing) line:

virtual_mailbox_limit = 0

Final step: restart Postfix (the parameter belongs to Postfix, so Postfix is what needs to pick it up):
$ service postfix restart
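You can verify that Postfix picked up the new value with postconf:

```shell
# Should print: virtual_mailbox_limit = 0
postconf virtual_mailbox_limit
```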

Done, enjoy! :crazy:

Sources:
http://www.howtoforge.com/forums/showthread.php?t=1325
http://www.postfix.org/postconf.5.html


23rd Jul2014

Ubuntu: Autostart Dropbox during Boot / System Startup

by Gyro

Today, I wanted to autostart Dropbox during boot, so it is loaded during system startup and not only after I log in.

To accomplish this, I needed to create a file and execute a few terminal commands.

1. Get startup script

Go to: dropboxwiki.com and click on the "Debian/Ubuntu" bar, copy the code.

2. Create a new file, paste the code into new file, and adjust it a bit

$ sudo nano /etc/init.d/dropbox

You need to provide the username(s) that use Dropbox.
Edit the following line in the code, replacing user1 user2 with the correct username(s):

DROPBOX_USERS="user1 user2"

Save the file, close editor, and make the file executable.
$ sudo chmod +x /etc/init.d/dropbox

3. Add the file to the autostart with lower priority

$ sudo update-rc.d -f dropbox remove
$ sudo update-rc.d dropbox defaults 98 02
This will load Dropbox near the end of the system startup; I found this tip in the Ubuntu Forums.
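You can also start the service right away and check that the daemon comes up, without waiting for a reboot (user1 being one of the usernames from DROPBOX_USERS):

```shell
# Start Dropbox via the new init script and check that the process is running
sudo service dropbox start
pgrep -u user1 dropbox && echo "dropbox is running"
```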

That should be all, enjoy :crazy:


26th Jun2014

VirtualBox – moving a VDI file and re-linking it to the Guest

by Gyro


Today I decided to move my vdi files to another partition.
So, I moved them and then started the Oracle VM VirtualBox Manager to re-link the vdi files to their respective guests. First, I opened the settings of each virtual machine and deleted the "old" hard disk under "Storage". Then I clicked on "Add Attachment", selected "Add Hard Disk" and "Choose existing disk", and then selected the vdi file.

Next came a very strange error:

The Problem

Cannot register the hard disk 'FULL PATH TO NEW LOCATION OF VDI' with UUID {UUID OF THE VDI} because a hard disk 'FULL PATH TO OLD LOCATION OF VDI' with UUID {UUID OF THE VDI} already exists in the media registry ('/home/user/.VirtualBox/VirtualBox.xml').

This confused me quite a bit, as I knew for a fact that I had moved the vdi, so it no longer existed at the old location. I also had no clue what this media registry was supposed to be, and before opening that xml file and messing around with it, I decided to see if others had run into this issue before.

After a bit of google-ing, I found the solution in the VirtualBox forum:

Even though the thread was started on 2. Mar 2009, 16:43, some kind soul registered on the forum just to make one single post in this thread on 21. Dec 2012, 22:00 (Christmas spirit?) with the simple solution to this problem.

Hi there,

I had the same UUID problem and a very easy fix worked for me. I went File>virtual media manager. I found the vdi file that was giving me issues. Selected it and removed (though it was showing up as inaccessible). Then started the VM again and selected the file from my local hard-disk. That was it!
Hope this works for you too. :)
Om

So, to summarize with a bit more clarity:

The Solution

1. Open the Oracle VM VirtualBox Manager
     Click on File -> Virtual Media Manager (or Ctrl+D)
2. Delete the hard disk entry in question (select and press "Del" on keyboard)
3. Open the "Settings" of the Virtual Machine, go to "Storage", click "Add Attachment", select "Add Hard Disk" and "Choose existing disk", then select the vdi file and you are done.
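If you prefer the command line, the same cleanup can be sketched with VBoxManage (the paths and VM/controller names below are placeholders; check `VBoxManage list hdds` first to find the stale entry):

```shell
# Find the stale entry in the media registry
VBoxManage list hdds

# Remove the stale entry (this does not delete the vdi file itself)
VBoxManage closemedium disk /old/path/to/disk.vdi

# Re-attach the moved file to the VM
VBoxManage storageattach "MyVM" --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium /new/path/to/disk.vdi
```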

Enjoy :)


23rd Jan2014

Update/Install ImageMagick on CentOS 5 and CentOS 6

by Gyro

Today I had a classic example of what happens when you do not RTFM. I had to install ImageMagick on a webserver running CentOS 5 with cPanel. First I simply ran "yum install imagemagick", which did install ImageMagick just fine.

Now, the reason I had to install it was to do a batch resize of about 20,000 images and place the resized images in a new directory. So I did:
# mogrify -resize 250 -quality 70 -path /home/somesite/public_html/image/thumbs/ -format jpg /home/somesite/public_html/images/*

To my surprise I got the error message:

mogrify: unrecognized option `-path'.

After a quick Google search, I realized that yum had installed a super old version that did not support the -path option?!

So, after some more searching, I found this great guide on how to Install ImageMagick 6.6.5 and followed the instructions. But when I got to the final installation command, I got a dependencies error?!

After another google search, I came across a forum post saying "Make sure that RPMForge repository is installed first, and then everything works."… and when I looked back at the guide, I realized that it did in fact tell me to do that. The reason I did not see it was because it was a one liner with a link to another website, and I was expecting to be able to simply copy paste commands, so I skipped reading that line. I did not RTFM properly…

So, to make it easy for the future, I will combine these two pages, in order to have a step by step set of instructions without having to go to another website :)

Before starting, make sure you know whether you are using a 32-bit or 64-bit system.
# uname -i
x86_64 would be 64-bit

OK, here we go!

1. Uninstall ImageMagick

# rpm -e --nodeps ImageMagick.i386 ImageMagick-devel.i386 ImageMagick.x86_64 ImageMagick-devel.x86_64
This actually didn't work for me, so I did:
# yum remove imagemagick

2. Add the EPEL repository to yum:

CentOS 6:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
CentOS 5:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

3. Download and Install RPMforge (Check here for newer version.)

CentOS 6:
Download RPMforge for 32-bit systems: rpmforge-release-0.5.3-1.el6.rf.i686.rpm
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.i686.rpm
Download RPMforge for 64-bit systems: rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Install DAG's GPG key:
# rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

If you get an error message like the following, the key has already been imported:
error: http://apt.sw.be/RPM-GPG-KEY.dag.txt: key 1 import failed.

Verify the package you have downloaded:
# rpm -K rpmforge-release-0.5.3-1.el6.rf.*.rpm

Install RPMforge:
# rpm -i rpmforge-release-0.5.3-1.el6.rf.*.rpm

CentOS 5:
Download RPMforge for 32-bit systems: rpmforge-release-0.5.3-1.el5.rf.i386.rpm
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el5.rf.i386.rpm
Download RPMforge for 64-bit systems: rpmforge-release-0.5.3-1.el5.rf.x86_64.rpm
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el5.rf.x86_64.rpm

Install DAG's GPG key:
# rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

If you get an error message like the following, the key has already been imported:
error: http://apt.sw.be/RPM-GPG-KEY.dag.txt: key 1 import failed.

Verify the package you have downloaded
# rpm -K rpmforge-release-0.5.3-1.el5.rf.*.rpm

Install RPMforge
# rpm -i rpmforge-release-0.5.3-1.el5.rf.*.rpm

4. Install dependencies for ImageMagick 6.6

For 32-bit systems:
# yum install djvulibre OpenEXR jasper ghostscript librsvg2 libwmf libtool-ltdl
For 64-bit systems:
# yum install djvulibre OpenEXR jasper ghostscript librsvg2.x86_64 libwmf.x86_64 libtool-ltdl.x86_64

Systems running CentOS 6 will need the old version of libtool-ltdl installed:
For 32-bit systems:
# rpm -ivh --force ftp://ftp.muug.mb.ca/mirror/centos/5.9/os/i386/CentOS/libtool-ltdl-1.5.22-7.el5_4.i386.rpm
For 64-bit systems:
# rpm -ivh --force ftp://ftp.muug.mb.ca/mirror/centos/5.9/os/x86_64/CentOS/libtool-ltdl-1.5.22-7.el5_4.x86_64.rpm

5. Download and Install ImageMagick 6.6.5

Download for 32-bit systems: ImageMagick-6.6.5-10.i386.rpm
# wget http://www.lassosoft.com/_downloads/public/Lasso_Server/CentOS-Extra/ImageMagick-6.6.5-10.i386.rpm
Download for 64-bit systems: ImageMagick-6.6.5-10.x86_64.rpm
# wget http://www.lassosoft.com/_downloads/public/Lasso_Server/CentOS-Extra/ImageMagick-6.6.5-10.x86_64.rpm

Install ImageMagick
# rpm -ivh ImageMagick-6.6.5*
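Afterwards, verify the version and that the -path option is now recognized (assuming the standard ImageMagick binaries are on your PATH):

```shell
# Should report ImageMagick 6.6.5
convert -version

# -path should now show up in the mogrify help output
mogrify -help | grep -- '-path'
```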

DONE! :crazy:


18th Oct2013

How to install MS Office 2007 in Ubuntu 12.04 using wine

by Gyro

This is a quick and easy guide on how to install MS Office 2007 in Ubuntu 12.04 using wine. This includes winetricks settings to run Office 2007 using wine.

Most of what you can read here comes from an excellent post I found explaining how to do this on Ubuntu. I have slightly modified those steps, in order to make the installation on Ubuntu 12.04 as easy and as fast as possible.

1. Install wine1.4, winetricks, wine-gecko1.4.
If winbind is not installed install it also.
~$ sudo apt-get install wine1.4 winetricks wine-gecko1.4

2. Install Microsoft Core fonts
~$ sudo apt-get install msttcorefonts
OR
~$ sudo apt-get install ttf-mscorefonts-installer

If you have any problems with the installation, download the offline installer and do the setup as follows:
~$ wget http://imaginux.com/repos/pool/renzo/msttcorefonts-offline_1.0-0ubuntu1_all.deb
~$ sudo dpkg -i msttcorefonts-offline_1.0-0ubuntu1_all.deb

3. 32-bit users can go directly to step 4.
64-bit users must run the following commands in a terminal. This will set the Windows version as 32-bit, as Microsoft Office is available for 32-bit only.

Warning: rm -rf ~/.wine will remove all programs and configurations you have setup under wine. If you have something important in that directory please backup BEFORE running the following commands.

~$ rm -rf ~/.wine
~$ export WINEARCH=win32
~$ wineboot --update

4. Make sure you got the files you need for step 5.
Check if the folder "~/.cache/winetricks/msxml3" exists, containing the file "msxml3.msi". You may have to create the folder, download the file, and place the file into it.
You can download "msxml3.msi" from:
http://download.cnet.com/Microsoft-XML-Parser-MSXML-3-0-Service-Pack-7-SP7/3000-7241_4-10731613.html

5. Run winetricks and go through the individual installation windows
~$ winetricks
Select "Install a Windows DLL or Component".

In the next window tick the following packages
a) dotnet20
b) msxml3
c) gdiplus
d) riched20
e) riched30
f) vcrun2005

Press Ok.

This step could also be run via command line:
~$ winetricks dotnet20 msxml3 gdiplus riched20 riched30 vcrun2005

If the dotnet20 Installation fails

It may return the error: dotnet20 requires Microsoft Installer 3.0.

In that case download Microsoft Installer 3 and install it:
~$ wine [ path to installer ]

You can download Microsoft Installer 3 here:
http://www.microsoft.com/en-us/download/confirmation.aspx?id=16821

After you have installed Microsoft Installer 3, reset your wine prefix again.

Warning: rm -rf ~/.wine will remove all programs and configurations you have setup under wine. If you have something important in that directory please backup BEFORE running the following commands.

~$ rm -rf ~/.wine
~$ export WINEARCH=win32
~$ wineboot --update
~$ winecfg

Now restart step 5.

6. Install Office 2007
Navigate to the folder where the Setup.exe is located and run it with wine.
~$ wine ./Setup.exe

You will find all installed office apps in your dash home.

On first run, select "I don't want to use Microsoft Update".

Extra: Download and Install SP2 or SP3

Since Service Pack 2 you have the option to Save As… PDF, so it may be a good idea to install at least that one.

Service Pack 2
~$ wget http://download.microsoft.com/download/A/1/4/A14E308D-529C-48F9-9DAF-7C3BDC88FA57/office2007sp2-kb953195-fullfile-en-us.exe
~$ wine ./office2007sp2-kb953195-fullfile-en-us.exe

Service Pack 3
~$ wget http://download.microsoft.com/download/2/2/A/22AA9422-C45D-46FA-808F-179A1BEBB2A7/office2007sp3-kb2526086-fullfile-en-us.exe
~$ wine ./office2007sp3-kb2526086-fullfile-en-us.exe

Sources:
NITHIN C


11th Sep2013

Ubuntu Flash Player Plugin Update

by Gyro

Today YouTube and Facebook kept telling me that my flash player plugin is out of date and that I need to update.

However, when following the link in the warning, all I was offered was some .rpm file I could download.

As I rarely need to use an .rpm file in Ubuntu, I figured I missed something. So I searched a bit more, and more, and more… and bingo, there is actually a package I can install via apt-get!

This is not really an Ubuntu flash player plugin update, but rather an installation of the flash player plugin. It's a bit weird, as this stuff used to come with the browser (Chromium) and I never had to update the flash player itself, nor install it. Something seems to have changed, and there is a package on Launchpad called "adobe-flashplugin" in Ubuntu: https://launchpad.net/ubuntu/+source/adobe-flashplugin

Since it is there, you don't need to download anything, just open your terminal and type:
# sudo apt-get install adobe-flashplugin
And you are done!

You will have the latest flash player installed, and it works perfectly fine (for me) with Firefox and Chrome. I am guessing it will update just like any other package… time will tell.

Enjoy! ;)


30th Jul2013

Cannot Change Screen Resolution for Ubuntu 12.04 Desktop

by Gyro

Today my Ubuntu decided to "forget" that it had 2 monitors connected. Once I activated the 2nd monitor again, I could only choose a max screen resolution of 1024×768.

After some searching I found the solution on Ask Ubuntu :)

Open Terminal and type:
$ xrandr
It will show you the available screen resolutions for the connected monitors.

$ xrandr
Screen 0: minimum 320 x 200, current 2944 x 1080, maximum 8192 x 8192
HDMI-0 disconnected (normal left inverted right x axis y axis)
DVI-0 connected 1920x1080+1024+0 (normal left inverted right x axis y axis) 509mm x 286mm
1920x1080 60.0*+
1280x1024 60.0
1440x900 59.9
1280x800 59.8
1152x864 75.0
1024x768 70.1 60.0
800x600 60.3 56.2
640x480 66.7 60.0
720x400 70.1
VGA-0 connected 1024x768+0+312 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0*
800x600 60.3 56.2
848x480 60.0
640x480 59.9

VGA-0 is the monitor that is stuck at a max screen resolution of 1024×768.

So, next command in terminal:
$ xrandr --addmode VGA-0 1920x1080

And check:
$ xrandr

$ xrandr
Screen 0: minimum 320 x 200, current 2944 x 1080, maximum 8192 x 8192
HDMI-0 disconnected (normal left inverted right x axis y axis)
DVI-0 connected 1920x1080+1024+0 (normal left inverted right x axis y axis) 509mm x 286mm
1920x1080 60.0*+
1280x1024 60.0
1440x900 59.9
1280x800 59.8
1152x864 75.0
1024x768 70.1 60.0
800x600 60.3 56.2
640x480 66.7 60.0
720x400 70.1
VGA-0 connected 1024x768+0+312 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0*
800x600 60.3 56.2
848x480 60.0
640x480 59.9
1920x1080 60.0
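
If xrandr refuses the --addmode because the mode does not exist yet, you may first have to create it with cvt and --newmode. A sketch; the modeline numbers must come from cvt's own output for your resolution:

```shell
# Generate a modeline for 1920x1080 at 60 Hz
cvt 1920 1080 60

# Create the mode from the modeline cvt printed, then add it to the output
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
xrandr --addmode VGA-0 "1920x1080_60.00"
```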

Last step: Open Display settings and change the resolution for the monitor in question.

Done, enjoy ;)

Reference:

Screen resolution stuck at 1024×768

http://askubuntu.com/questions/37411/screen-resolution-stuck-at-1024x768


23rd Mar2013

VirtualBox USB Add Filter From Device No Device Available Ubuntu

by Gyro

I am running an old XP license on a VirtualBox guest. One thing that has always bugged me was the inability to use USB devices within my VirtualBox guest.

After searching Google a few times for "Virtualbox USB Add Filter From Device No Device Available Ubuntu" and similar search terms, I found A LOT of different "solutions" that just wouldn't do what I wanted -- to be able to access my USB flash drive in my VirtualBox Windows XP guest.

So, on to the solution that worked for me (and will most likely work for you)!

1. Enable USB 2.0 (EHCI) Support

First, check your virtual box version!

I am using 4.1.12, so I had to go to:

http://download.virtualbox.org/virtualbox/4.1.12/
 If you are using a different VirtualBox version, simply replace 4.1.12 with your version in the link.

There I downloaded:

Oracle_VM_VirtualBox_Extension_Pack-4.1.12-77245.vbox-extpack
 The important part here is the "-77245" in the file name (if this is updated, the number may change; just look for the one that has the -7xxxx).

This is a special Extension Pack that allows you to enable USB 2.0 support. To install it just double-click the downloaded file, which should open VirtualBox and start the actual installation. You need to do this regardless of whether you have installed another extension pack already or not. After that enable USB 2.0 (EHCI) in the Settings of your VirtualBox Guest.

2. Add your linux user to the virtual box group

Manually add the linux user to the vboxusers group:
# sudo usermod -a -G vboxusers USERNAME
Where USERNAME is the user you are currently logged in with.

For the new group membership to take effect, I had to log out and log in again.
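After logging back in, you can confirm the membership took effect (USERNAME as above):

```shell
# The output should include vboxusers
groups USERNAME

# Or, for the current session:
id -nG | grep -w vboxusers && echo "membership active"
```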

Now, when I open the Settings for my VirtualBox Guest, I see all my USB devices in the list when clicking on the "Add Filter From Device" icon. However, I did not have to add a Filter to have the USB flash drive available as a Removable Device within Windows XP. Also, when I plug in my external Hard Drive, it gets mounted by Ubuntu and is also available in my VirtualBox Guest at the same time.


21st Feb2013

Restore large mysql databases with nginx 504 Gateway Timeout

by Gyro

I had to restore a 36MB sql database today and kept getting a "504 Gateway Time-out".

After playing around with the config files, adding fastcgi_read_timeout to the settings for php5-fpm, and fiddling with the php.ini, I realized that this can be accomplished with one single line using ssh:

Assuming that you have packed the .sql file into a gzip archive (.sql.gz), and you are currently in that folder:
# gunzip < databasebackup.sql.gz | mysql -DDATABASENAME -uUSERNAME -pPASSWORD

DATABASENAME = name of the database
USERNAME = mysql username
PASSWORD = mysql password
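If the dump is a plain .sql file rather than a gzipped one, the equivalent is a simple input redirect (same placeholders as above; note there is no space between the option letters and their values):

```shell
# Plain .sql dump: feed it to mysql directly, bypassing the web interface entirely
mysql -DDATABASENAME -uUSERNAME -pPASSWORD < databasebackup.sql
```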

I hope this helps when trying to restore large MySQL databases behind an nginx 504 Gateway Timeout.


04th Feb2013

Ubuntu update/upgrade fails with dpkg error message

by Gyro

Today, my Ubuntu was unable to install updates.

I got this error message after trying to update Ubuntu:

dpkg: error: parsing file ‘/var/lib/dpkg/available' near line 15054 package ‘unity-common':
duplicate value for `Architecture' field
E: Sub-process /usr/bin/dpkg returned an error code (2)

After some research and trial'n'error, I came up with this solution:

1. Clear the content of the "available" file
# sudo nano /var/lib/dpkg/available
or
# sudo rm /var/lib/dpkg/available
# sudo touch /var/lib/dpkg/available
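As an aside, dpkg has a built-in way to do the same thing, which may be safer than editing the file by hand:

```shell
# Erase dpkg's "available" database in one step
sudo dpkg --clear-avail
```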

2. "Reset"
# sudo dpkg --configure -a
# sudo apt-get clean

3. Update/Upgrade again
# sudo apt-get update
# sudo apt-get upgrade

That should be all!

Step 2 may be unnecessary, but I have no way of testing it again… maybe someone with the same or a similar problem can try it without step 2 and let me know? ;)


21st Jan2013

Gitlab service doesn’t run on startup/boot/autostart

by Gyro

I have been using Gitlab for a while now to keep backups of all my git repositories on my dedicated servers. Now, I needed to reboot one of them for the first time since I installed Gitlab, and ended up with a '502 bad gateway' warning from nginx when trying to access the Gitlab web interface.

I found this on GitHub, hidden in a small pull request:

gitlab doesn't start on boot if you follow the install instructions on Ubuntu (12.04 LTS Server -- probably others). Turns out gitlab requires redis-server to be running for gitlab to be able to start. Startup script S20redis-server isn't run until after S20gitlab so gitlab fails to start on boot. Webserver will be up, bad gateway 502 error is usually seen/reported. Starting gitlab manually works (because redis-server has started). This change makes gitlab start after redis-server.

The single line for victory:
# sudo update-rc.d gitlab defaults 70 30
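You can verify the resulting start order by listing the runlevel symlinks; redis-server's S number should now be lower than gitlab's (paths assume sysvinit on Ubuntu 12.04):

```shell
# Expect something like S20redis-server and S70gitlab
ls /etc/rc2.d/ | grep -Ei 'gitlab|redis'
```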

Enjoy :enjoy:

source: github


15th Jan2013

Install php-gd on Ubuntu without recompiling php

by Gyro

For some reason, one of my client's websites stopped showing captcha codes for the mad4joomla component. The server is running nginx with php-fpm instead of Apache.

Looking at the error.log, I got

 [error] 21110#0: *100 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Call to undefined function imageantialias() in /var/www/clients/client1/web1/web/components/com_mad4joomla/sec/im4.php on line 39"

Time to check the server; that function is supposed to be part of the GD library.

First, I checked if GD was loaded:
# php5 -m | grep -i gd

gd

OK, so GD is there… now on to the function.

# php -r "var_dump(function_exists('imageantialias'));"

bool(false)

hmm… :wooty:

So, on I went to Google, trying to find out how I could get imageantialias() to work (again)… and to my surprise I actually stumbled upon a post discussing the exact same issue on the same server setup -- Ubuntu with ISPConfig. The solution presented was to recompile PHP, which I was reluctant to do.

After some more searching I came across this amazing little page (http://nossie.addicts.nl/php5-gd.html) which I am going to post (slightly modified) right here, mainly because it looks like that page could be gone tomorrow…

Getting bundled php-gd working on Ubuntu without having to recompile php

Like many other people out there, I ran into the problem that the version of php-gd shipped with Ubuntu (and Debian) is different from the version used by many other distributions.
The version Ubuntu uses misses some functions like imagerotate and imageantialias, which are needed by an increasing number of software projects.

One solution to this problem is recompiling PHP with the bundled version of GD.
It is not particularly hard to do, but there are reasons not to do it, one of them being that it is not necessary.

The following steps solved the problem for me.

Install what is needed to do this:
# sudo apt-get install php5-cli php5-gd rpm mc

Check what version of PHP you are running (php5 -v); in my case it was 5.3.2-1ubuntu4.7.
Go to rpmfind.net and search for php-gd-bundled.
Download the version that matches your PHP version and architecture, php-gd-bundled-5.3.2-1mdv2010.1.x86_64.rpm in my case.

If you have installed mc (Midnight Commander) and rpm, you can use mc to open the downloaded .rpm file (start mc, go to the .rpm file and hit Enter).
Inside the .rpm file you will see CONTENT.cpio; navigate to that and hit Enter. Go to usr/lib64/php/extensions, where you will find gd-bundled.so.
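If you would rather not use mc, the same extraction can be done on the command line with rpm2cpio and cpio (a sketch; the filename is the example from above):

```shell
# Extract the .rpm into the current directory (no rpm database needed)
rpm2cpio php-gd-bundled-5.3.2-1mdv2010.1.x86_64.rpm | cpio -idmv
# gd-bundled.so then sits under ./usr/lib64/php/extensions/
```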

The original php5-gd is installed at /usr/lib/php5/20090626/gd.so (for my installation). Back up the original gd.so, then copy gd-bundled.so to that location and rename it to gd.so (you can copy files with F5 in mc).
The new gd.so expects to find libjpeg.so.8, which was not present on my system, but that can be solved by creating a symlink from the installed libjpeg.so.
On my system I found /usr/lib/libjpeg.so.62.0.0
Create the symlink with:
# ln -s /usr/lib/libjpeg.so.62.0.0 /usr/lib/libjpeg.so.8

This is it; restart Apache (or php-fpm, in a setup like mine) and you should be running with the bundled version of GD.
# php -r "var_dump(function_exists('imageantialias'));"
Should return bool(true) this time.

Now, if there is an update of the php5-gd package, your modified version gets overwritten. To prevent this from happening, you can hold the php5-gd package so it will not be updated.
# aptitude hold php5-gd

I hope this will help you as much as it did me!

Enjoy :enjoy:


14th Jan2013

Using rsync to backup a remote folder onto your computer via SSH

by Gyro

I've been having a hard time remembering the correct way of doing this, so I figured I'd write it down…

Back up the folder:
# rsync -avz -e ssh user@remotemachine:/some/folder/ /backup/some/folder/

When using a different ssh port (xxxx is the ssh port number):
# rsync -avz -e 'ssh -p xxxx' user@remotemachine:/some/folder/ /backup/some/folder/

When there is a space in the folder name, you need to put single or double quotes around the folder name:
# rsync -avz -e ssh user@remotemachine:/some/folder/'with a lot of spaces'/ /backup/some/folder/'with a lot of spaces'/

Some cool commands you can add:

--delete
will remove files from the destination folder that are not present in the origin

--progress
shows a nice % progress on the current file

# rsync -avz --delete --progress -e ssh user@remotemachine:/some/folder/'with a lot of spaces'/ /backup/some/folder/'with a lot of spaces'/
This will download the contents of the remote folder "with a lot of spaces" into a local folder of the same name, delete any file that exists locally but not remotely, and show details for each file being downloaded.
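To see what --delete does without touching a remote machine, here is a minimal local sketch (temporary throwaway folders, purely illustrative):

```shell
# Set up two folders: the source has keep.txt, the destination has stale.txt
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/keep.txt"
echo old > "$dst/stale.txt"

# Sync source to destination; --delete removes files not present in the source
rsync -av --delete "$src"/ "$dst"/

ls "$dst"   # keep.txt only; stale.txt was deleted
rm -rf "$src" "$dst"
```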


13th Nov2012

Burn MP3s with K3B without converting to WAV first (Linux/Ubuntu)

by Gyro

Today I got fed up with having to use Sound Converter to create WAV files from the MP3s I want to burn for my car. Yes, I have a crappy car stereo that will not play MP3 CDs… drop it already.

So, it turns out it is quite easy to get K3B to convert MP3 files on the fly; all you have to do is install the plugin for it:
# sudo apt-get install libk3b6-extracodecs

That's it, now you can burn Music CDs with K3B by adding MP3 files to the list directly.

Enjoy! (and share) ;)


08th Nov2012

Problems uploading files to ownCloud

by Gyro

I installed ownCloud a while back and finally got around to testing it today.

As ownCloud has a "Music" section, I figured I'd start by uploading some of my music. I first tried uploading an MP3 file via the web interface, but the spinning wheel never went away, and a refresh showed no new file. I then tried a few small files and realized that anything over 1MB in size fails to upload; worse, the original gets deleted by the ownCloud sync tool when it is used to upload/sync the file.

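The rest of this post is cut off here, but for what it's worth, upload caps around a fixed size like this usually come from PHP's own limits; a hedged guess at the relevant php.ini settings (values illustrative, not necessarily the fix the full post describes):

```ini
; php.ini (or the php-fpm pool config); both limits apply together
upload_max_filesize = 512M
post_max_size = 512M
```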

