
MySQL Character Encoding

I ran into some issues with a project where I was developing on one machine, pushing changes to a test server, and keeping production data on yet another server, spread across different platforms and different versions of MySQL. Whenever I copied a backup from production down to dev, the character encoding came out wrong.

What I discovered was that redirecting mysqldump output to a file can let the terminal’s character encoding reinterpret the output, and that a dump file from one version/platform of MySQL was not creating the new database with the same character encoding.

My fix was to do the following:

mysqldump -u username -p -c -e --default-character-set=utf8 --single-transaction --skip-set-charset --add-drop-database -B database -r dump.sql

Then run the following:

sed -e 's/DEFAULT CHARACTER SET latin1/DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci/g; s/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' -i.bak dump.sql

After the export and conversion you can run

mysql -u username -p --default-character-set=utf8 database
mysql> SOURCE dump.sql
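
To confirm the import actually landed as UTF-8, a quick check from the mysql client works (substitute your real database name for ‘database’):

mysql> SHOW VARIABLES LIKE 'character_set%';
mysql> SELECT table_name, table_collation FROM information_schema.tables WHERE table_schema = 'database';

The first shows the connection and server defaults; the second lists the collation each imported table actually ended up with.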


Migrating Data to a Synology DS1812+

Migrating data from one storage platform to another can be slow and tedious if you just plug it into the network and hope for the best.

For example, my customer’s network runs at 10/100 Mbps. So, when we plugged the new DS1812+ into the network we achieved an underwhelming 7MB per second on average. At that rate, moving 4TB would have taken close to a week.

To see what is going on, first look at the network speed. A 100Mbps network loses roughly 10% of its capacity to packet headers and other overhead. Converting bits to bytes, 100Mbps / 8 = 12.5MB per second, so after overhead you can achieve about 11MB per second. The files we were transferring vary in size, and every start and stop of a file transfer creates a pause in the data stream, which further reduces the effective transfer rate.

Additionally, the tool you use to transfer files has overhead of its own. In the default case, rsync runs over SSH, which means every file has to be encrypted and decrypted. This also reduces the transfer rate, because the processors on both ends have to do extra work.

To speed things up, we performed a few tricks. First we connected a secondary NIC on the server directly to a secondary NIC on the DS1812+, which allowed both devices to link at 1Gbps, a theoretical max of 100+MB per second. Then we configured both the server and the DS1812+ to use a Maximum Transmission Unit (MTU) of 9000 bytes. Finally, we told rsync to use the rsync daemon (rsyncd) instead of SSH so we could avoid the encryption overhead. Since these devices are directly connected in our datacenter there is no real risk of exposing the data in transit.

Here is how to set the MTU on an OSX server:

sudo networksetup -setMTU en1 9000
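
To verify the setting took, and that jumbo frames actually make it end to end, something like this works (en1 and the address are examples; use your own interface and the NAS IP):

networksetup -getMTU en1
ping -D -s 8972 192.168.1.1

The 8972-byte payload is 9000 minus 28 bytes of IP and ICMP headers; -D sets the Don’t Fragment bit, so if jumbo frames aren’t passing, the ping fails instead of silently fragmenting.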

The resulting transfer rates now peak at 50+MB per second. Again, there is per-file overhead when rsync starts each transfer, and if a file is smaller than 50MB the whole thing transfers in under a second, so you will see a lot of transfers reported at 10MB per second, or 30MB per second, etc., depending on file size.

Here is the command we used:

rsync --recursive --copy-links --times --verbose --progress /Volumes/source rsync://user@192.168.1.1/target
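
On the Synology side the rsync service is enabled through DSM. For reference, on a generic Linux target a minimal /etc/rsyncd.conf exposing a module like the ‘target’ one in the URL above might look like this (the path and module name are just assumptions):

[target]
    path = /volume1/target
    uid = root
    gid = root
    read only = no

Start it with rsync --daemon, and the rsync:// URL above resolves to that module. A real config would also add auth users and a secrets file to match the user@ in the URL.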


Linux dependency hell

One of the old servers I discovered in a forgotten office was running Debian 4. We wanted to do a physical to virtual (P2V) migration so it was no longer running on the old hardware, which was about 8-10 years old. Unfortunately, this old box was not running SSH, and, as seems to happen with “things that have been forgotten”, nothing “just works”.

In order to run VMWare Converter you need to have ssh access. But, sshd was not running on the box, and it appeared the binaries were missing.

I tried to run aptitude install openssh-server and found a dependency problem: libc6-dev had been updated to 2.7-18lenny7, but libc6 was still at 2.7-18lenny4. All attempts to update libc6 were met with errors finding programs like locale, ldconfig, or /etc/init.d/glibc.sh. The /etc/apt/sources.list was so old the mirror no longer existed, so I looked up Debian’s archives, changed it to http://archive.debian.org/debian-archive/debian, and ran aptitude clean and aptitude update.
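
For reference, the resulting sources.list entry looked something like this (the exact suite name depends on what the box was tracking; the package versions above are lenny-era):

deb http://archive.debian.org/debian-archive/debian lenny main contrib non-free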

At this point I could actually download packages again, but upgrading still failed, even after clearing aptitude’s cache and retrying. So, I ran aptitude download libc6, and then ran dpkg-deb -x libc6*.deb libc6-unpacked

I then copied the ldconfig and glibc.sh programs from the extracted folder back to where they belonged on the system. Then I ran dpkg -i libc6_2.7-18lenny8_amd64.deb, which installed successfully and let me run aptitude upgrade to bring the whole box up to date, and finally aptitude install openssh-server.
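
Pulled together, the recovery looked roughly like this (the destination paths are from memory; check where dpkg-deb actually extracted them):

aptitude download libc6
dpkg-deb -x libc6_2.7-18lenny8_amd64.deb libc6-unpacked
cp libc6-unpacked/sbin/ldconfig /sbin/ldconfig
cp libc6-unpacked/etc/init.d/glibc.sh /etc/init.d/glibc.sh
dpkg -i libc6_2.7-18lenny8_amd64.deb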

Great, back to VMWare Converter. Enter the IP, name, and password… and error: Unable to query live Linux source. I tested connecting to the box with an ssh client and was greeted with “Permission denied” as soon as I connected. Looking at sshd_config revealed it had no “PasswordAuthentication yes” line, so I added one and restarted sshd. Now VMWare Converter could connect, and the migration started running.

The next problem was that the import failed. Watching the box start up, it could not find the root partition on /dev/hda1. VMWare 5.0 presents disks on an LSI Logic SCSI controller, so it was clear the old kernel had been compiled without the right drivers. Back to the old box: download the Linux source, extract it, make menuconfig. I went with most of the defaults but added executable emulation for 32-bit binaries on an amd64 core. Then a make, make modules_install install. The old box was using lilo, but someone had tried to install grub, so I finished the config file and had it point to the old kernel, with a boot option for the new kernel. Ran grub-install, rebooted, then ran the converter again.
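
The build itself boils down to something like this (the kernel version here is illustrative; in menuconfig the LSI Logic adapter lives under the Fusion MPT drivers, and the 32-bit support is IA32 Emulation under Executable file formats):

cd /usr/src
tar xjf linux-2.6.x.tar.bz2
cd linux-2.6.x
make menuconfig
make
make modules_install install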

The new kernel didn’t have the right NIC drivers, so I let the old box boot into the old kernel for the conversion. The conversion failed at the same point, but this time I booted the imported VM myself and selected my new kernel: both the LSI Logic disk and the VMXNET3 network card worked, and the services all started up.


VMware 2.0 to 5.0 Migration

The things you find in old closets. Sometimes they might be better left in the closet, hidden from view, but when it is an old server and I’m trying to secure your network, it has to be dragged into the light and exorcised.

One of my favorite discoveries has been an old 2008 server (I was worried it was going to be Windows NT!) that was running VMWare Server 2.0. Now, I’ve been doing IT for 20 years, but I had never actually seen VMWare Server 2.0 before. So this was quite an exciting discovery. I felt like an archaeologist unearthing an ancient Roman artifact.

After the initial laughter and sending screenshots to everyone I know I decided to migrate the one VM (a Debian 4 distro) that was running on the server to the production environment so it could be backed up and decommissioned properly. But, the big question was, would I be able to successfully migrate it from VMWare 2.0 to VMWare 5.0?

Since you can’t convert a VM that is running, and nobody had the password for the old VM, I just powered it off. Then I loaded up VMWare Converter, told it to convert an “other” image type, pointed it at the old server’s administrative share (\\old-vm-server\e$), and browsed to the vmdk file. It took an hour to migrate and convert it to an ESX 5.0 host with hardware level 8. I went ahead and added a VMXNET3 network card in place of the old VMWare 2.0 “Flexible” network card. Then I powered the guest on and reset the root password (edit the startup command to add init=/bin/bash, then run mount -rw -o remount /, change the root password, and reboot). Once I logged in with my new root password I modified /etc/network/interfaces to use the new network card and restarted the server again just to make sure everything worked. And it did!
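
For reference, the password reset looks roughly like this from the VM console (the remount and reboot details vary a bit by distro):

# appended to the kernel line at the boot prompt:
init=/bin/bash
# then, from the resulting root shell:
mount -rw -o remount /
passwd root
sync
reboot -f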

Needless to say, I am very impressed that VMWare has made it so easy to migrate from a 2.0 guest to their latest 5.0 environment. So often big companies will leave no migration paths. This just shows that VMWare is a good company with a great product!



OSX Firewalls – a dismal experience

I’m spoiled by the extreme flexibility of unix firewalls and, paradoxically, by the Windows firewall’s ease of configuration.

There should be a good middle ground in there. Mac does a great job of “being” unix, but with a much easier interface than Windows. Which is a feat. But, let me just put on my rant hat and rant pants. WHAT THE HELL IS WRONG WITH THE OSX FIREWALL!?!?

Why would you move from ipfw to the more featureful PF firewall that the unix world offers, and then provide only a brain-dead interface that lets you select which applications are allowed through the firewall, with ZERO ability to limit the networks or IPs that may reach those applications?

What kind of security is provided by either allowing a) the entire world to access Screen Sharing, or b) nobody…

Yes, you can make an argument that the corporate firewall, or even your home router, should act as a hardware firewall to protect you. But when I go to Starbucks, who is protecting me there? When I’m in the airport, who is protecting me? Nobody. Thanks, Apple.

Microsoft gets it right in this department. And, as far as I am concerned, Apple doesn’t even offer a usable firewall. At least not out of the box.


Here is my solution: PFLists by Hany El Imam


This handy little app allows you to specify which networks or IP addresses are allowed to connect to which ports on your computer.

The only thing missing is Microsoft’s concept of “network location” so I can be more open at home and more secure at Starbucks.
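
Under the hood this is all ordinary PF configuration. Here is a minimal sketch of the kind of rule such a tool manages, allowing Screen Sharing (port 5900) only from a trusted subnet (the addresses are made up):

# fragment of /etc/pf.conf
trusted = "{ 192.168.1.0/24 }"
block in proto tcp from any to any port 5900
pass in proto tcp from $trusted to any port 5900

Load and enable it with sudo pfctl -f /etc/pf.conf and sudo pfctl -e.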


Bulk Password Testing

A client has a ton of unix hosts, all with different passwords, none of them well documented, and we need to secure them. Rather than rooting each one or hand-typing a list of possible accounts and passwords on every box, you can use ncrack to scan a network and test username and password combinations automatically.

Install ncrack

apt-get install build-essential checkinstall libssl-dev libssh-dev
wget http://nmap.org/ncrack/dist/ncrack-0.4ALPHA.tar.gz
tar xvfz ncrack-0.4ALPHA.tar.gz
cd ncrack-0.4ALPHA/

./configure
make
sudo checkinstall
sudo dpkg -i ncrack_0.4ALPHA-1_amd64.deb

Create a password list

For my purposes we had a list of passwords we could try. If you don’t have enough information to create a reasonable password list, you can grab a list of 500 passwords from skullsecurity.org.

wget http://downloads.skullsecurity.org/passwords/500-worst-passwords.txt

Run ncrack

Note that you can specify multiple user accounts to try as a comma-separated list; there is an example after the sample output below.

(Oh, and this is just sample output and not from one of our servers.)

ncrack -p 22 --user root -P 500-worst-passwords.txt 192.168.1.0/24

## sample output ##

Starting Ncrack 0.4ALPHA ( http://ncrack.org ) at 2011-05-05 16:50 EST
Stats: 0:00:18 elapsed; 0 services completed (1 total)
Rate: 0.09; Found: 0; About 6.80% done; ETC: 16:54 (0:04:07 remaining)
Stats: 0:01:46 elapsed; 0 services completed (1 total)
Rate: 3.77; Found: 0; About 78.40% done; ETC: 16:52 (0:00:29 remaining)

Discovered credentials for ssh on 192.168.1.10 22/tcp:
192.168.1.10 22/tcp ssh: 'root' 'toor'

Ncrack done: 1 service scanned in 138.03 seconds.

Ncrack finished.
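
And the comma-separated user list mentioned above looks like this (the account names here are just examples):

ncrack -p 22 --user root,admin,backup -P 500-worst-passwords.txt 192.168.1.0/24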


Ubuntu Desktop Still not Pro Level

Last year I wrote a few posts about trying out Ubuntu Desktop. After many frustrating weeks, I gave up on Ubuntu Desktop. I didn’t post why.

Ubuntu Desktop lets you log in, download things fairly easily from the app store, and browse the internet. It manages to come close to feeling like you are “looking at a Mac”. But that’s it. Once you start actually using it, nothing is smooth and a lot of it doesn’t make sense. Configuration options you might want to change are simply not available in the GUI, so you have to drop to the console to run commands or edit files in a text editor. Apps you might want to use, for photo editing or document writing, just don’t compare with the features of commercial products.

So, yes, you can install an email client, a Word-like program, and something that works kind of like spreadsheet software. But I’ll be damned if any of them opened my existing documents or files without conversion errors, and anything I made could not be shared without errors either. Calendaring was abysmal. And you’d be hard pressed to choose GIMP over Photoshop.

Which means that, for me, Ubuntu Desktop might work for someone’s mom to check Yahoo! mail, or to browse Facebook. But it does not work the way a business professional would need it to. It won’t work in an enterprise environment that is Microsoft heavy.

Maybe some startups, or small groups of people could make it work. But, I suspect those folks are all using a Mac. Which *does* work, with just about everything I’ve ever needed it to do.


There are some Unix tools I like to use that are very hard to run on Windows. But they generally run on a Mac. And for those times when you can’t use a Unix tool on Mac or Windows, I use VirtualBox to keep an Ubuntu Desktop install accessible. It actually works extremely well as a virtual instance full-screened on a second monitor, and I no longer “hate using it”, because it’s there as another tool I can use, not as an obstacle between me and every minor task.



Heartbleed Testing

With all the attention Heartbleed is getting right now, I wanted to test my client’s servers and network devices. One of the easiest ways to check hosts and networks for this vulnerability is with nmap. There is a new script for detecting Heartbleed, but it depends on nmap’s Lua libraries and a recent nmap version.

Here is how to get everything working on an out-of-the-box Ubuntu 12.04 Desktop.

If you don’t have Ubuntu 12.04 Desktop, download it and install it using one of these methods:

  • Dual boot your computer
  • Replace your OS
  • Install to flash drive
  • Install on VirtualBox (my preferred solution; be sure to install the VirtualBox Extension Pack on the host and the Guest Additions in the guest)

If you don’t have a recent nmap, download requirements and install nmap from svn:

sudo apt-get update

sudo apt-get dist-upgrade

sudo reboot

sudo apt-get install build-essential autoconf checkinstall

sudo apt-get install subversion

svn co https://svn.nmap.org/nmap

cd nmap

./configure

make

sudo checkinstall


If you already have a recent nmap, you can try just downloading the latest tls.lua library and the ssl-heartbleed script:

cd [install-path]/nmap/nselib/
sudo wget https://svn.nmap.org/nmap/nselib/tls.lua
cd [install-path]/nmap/scripts/
sudo wget https://svn.nmap.org/nmap/scripts/ssl-heartbleed.nse
sudo nmap --script-updatedb


Run nmap with the Heartbleed script:

nmap --datadir [install-path] -sV -p 443 --script ssl-heartbleed [server/network]
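
Port 443 is just the obvious target; other TLS services are worth sweeping too. For example (the port list and network here are illustrative):

nmap --datadir [install-path] -sV -p 443,465,993,995,8443 --script ssl-heartbleed 192.168.1.0/24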


Example of a vulnerable system:

[snip]
443/tcp open https
| ssl-heartbleed:
| VULNERABLE:
| The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. It allows for stealing information intended to be protected by SSL/TLS encryption.
| State: VULNERABLE
| Risk factor: High
| Description:
| OpenSSL versions 1.0.1 and 1.0.2-beta releases (including 1.0.1f and 1.0.2-beta1) of OpenSSL are affected by the Heartbleed bug. The bug allows for reading memory of systems protected by the vulnerable OpenSSL versions and could allow for disclosure of otherwise encrypted confidential information as well as the encryption keys themselves.
|
| References:
| http://cvedetails.com/cve/2014-0160/
| http://www.openssl.org/news/secadv_20140407.txt
|_ https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160



Linux Transparent Bridge + Firewall

I was called in to help secure a network in a pinch. This called for quick action with very limited resources: no time to purchase a firewall or drastically redesign the network. We needed something now.

The client’s network had their printers, desktops, servers, SANs, and switches all on one subnet, publicly accessible from the internet, with no hardware firewall. Hackers were exploiting NTP bugs, trying default accounts and passwords, and trying to brute-force their way into everything. Without a complete understanding of the infrastructure, or of what renumbering and redesigning the entire network might break, I decided to implement a quick fix while a firewall was ordered and careful redesign steps could be planned.

This quick fix was to build a transparent bridge and move all the vulnerable devices onto a private VLAN, letting the transparent bridge firewall and secure all of those devices.

First, I had to reclaim an old Dell R310 server. Nobody knew the BIOS passwords for any of the servers, so after a quick BIOS password clear and reboot, I installed Ubuntu 12.04 LTS with basic settings and updates. After consulting with my Cisco experts, we configured two switch ports, one for each side of the bridge:

interface gi 1/0/1
switchport mode access
switchport access vlan 24

interface gi 1/0/2
switchport mode access
switchport access vlan 25

On the server I set up bridged networking by installing bridge-utils

apt-get install bridge-utils

and adding these lines to /etc/network/interfaces

auto br-vlan25
iface br-vlan25 inet dhcp
bridge_ports eth0 eth1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
up /sbin/ifconfig $IFACE up || /sbin/true

When I brought up the interfaces the bridge started forwarding Spanning Tree Protocol (STP) packets, and the switch immediately killed one of the interfaces to prevent a loop.

My solution was to install the ebtables package

sudo apt-get install ebtables

And add the following rules. With all three default policies set to DROP, anything not explicitly accepted (including the STP frames that were tripping the switch) is silently discarded, while IPv4 and ARP traffic still passes:

ebtables -P INPUT DROP
ebtables -P FORWARD DROP
ebtables -P OUTPUT DROP
ebtables -A OUTPUT -p IPv4 -j ACCEPT
ebtables -A OUTPUT -p arp -j ACCEPT
ebtables -A INPUT -p IPv4 -j ACCEPT
ebtables -A INPUT -p arp -j ACCEPT
ebtables -A FORWARD -p IPv4 -j ACCEPT
ebtables -A FORWARD -p arp -j ACCEPT

And then modify /etc/default/ebtables so that all the “no” settings are “yes”, so the rules persist across reboots and interface resets.

I now had a functioning bridge but no firewall, so I added these iptables rules to allow only locally sourced traffic through (X.Y.Z.0/24 being the local subnet):

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
iptables -I FORWARD -s X.Y.Z.0/24 -j ACCEPT
iptables -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

And then installed the iptables-persistent package to save iptables rules across reboots and interface resets

apt-get install iptables-persistent
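
The package loads the saved rules at boot; after changing rules, re-save them. On this release the rules file should live at /etc/iptables/rules.v4 (older versions used /etc/iptables/rules):

sudo sh -c 'iptables-save > /etc/iptables/rules.v4'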

The next step was to look at all the switch ports, identify all the devices that needed to be secured, and move them to the new private vlan.

show int status

find all the vulnerable device ports, then move each one:

conf t
int gi 1/0/X
switchport access vlan 25

Then I went into vCenter and looked at all the guests that needed to be secured, including the ESXi hosts themselves, and changed them to the new private VLAN.

Now an NMAP scan from on site has access to all their equipment, while an NMAP scan from offsite shows just a collection of desktops, printers, and public-facing servers. No more free access to ESXi hosts, EqualLogic storage, video cameras, environmental sensors, etc…


Awesome mini wireless keyboard + trackpad

I was looking at putting together a server crash kit and found this little gem…

A miniature keyboard and trackpad over at adafruit.com


“Add a miniature wireless controller to your computer project with this combination keyboard and touchpad. We found the smallest wireless USB keyboard available, a mere 6″ x 2.4″ x 0.5″ (152mm x 59mm x 12.5mm)! It’s small but usable to make a great accompaniment to a computer such as the Beagle Bone or Raspberry Pi. The keyboard itself is battery powered (there’s a rechargeable battery inside that you charge up via the included USB cable). The keyboard communicates back to the computer via 2.4 GHz wireless link (not Bluetooth).

The keyboard can only be used with a USB host such as a computer. It’s not intended to be used with an Arduino or Basic Stamp, etc. We tested it with the Raspberry Pi and it works great: uses only one USB port for both mouse and keyboard.”