Removing Skype from Windows 10

Skype has somehow become annoying bloatware that ships with Win10. It’s also buggy as hell – in my case, it never shows me contact requests, which makes me look like an idiot in front of clients. I’m sticking with the web app version for everything – at least it seems to more or less work.

Unfortunately, you can’t uninstall it through the normal Control Panel route – you have to do it with PowerShell.

To uninstall Skype for Windows 10

1. Click the Start icon on your taskbar
2. Type powershell – Windows will find the shortcut before you get too far into typing
3. Right-click the PowerShell entry in the Start menu and select “Run as administrator”
4. Enter this command: get-appxpackage *skype* | remove-appxpackage
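
To double-check that it’s gone, run the same wildcard query again in the elevated PowerShell window – if the uninstall worked, it should come back empty:

get-appxpackage *skype*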

Using nginx to proxy different domains to different ports

I’ve got a secret devops weapon in the works. At a very simple level, part of it involves running multiple web applications on a single IP, but without having control of the webserver. Full disclosure: the applications include built-in Tomcat instances that automatically install themselves on separate ports, and hacking giant pre-built applications to modify Tomcat was obviously not the quick option.

So I decided to do a proof of concept. What I needed was to be able to route requests at the domain level, all ingressing on port 80 (443 and SSL/HTTPS is the next step, I suppose), and forward them to a local port, all within the webserver (the machine is on Google Cloud, sans load balancer, and AFAIK there is no option in the console to handle this at the TCP/IP level).

A quick google turned up two obvious proxy solutions: Apache with mod_proxy, or nginx.

I’m an nginx fan, and haven’t had the opportunity to use it much recently, so I chose to do the PoC with nginx and proxy_pass directives, on a local Ubuntu VM.

The preconditions I wanted to set up were pretty straightforward – I needed two web apps, running on different, non-standard HTTP ports, both visible in a browser if you added the port number:

– site1 on port 81, accessible in browser via http://proxytest.local:81
– site2 on port 82, accessible in browser via http://proxytest.local:82

The end result I wanted to create was also easily defined – two sites, running on the same IP but different domains, with each domain transparently forwarded to the right application on its non-standard local port:

– http://site1.local forwards to local port 81
– http://site2.local forwards to local port 82

Setting up the preconditions

Install nginx, if needed – if you’re running apache or another process that owns port 80, be sure to stop it first

service apache2 stop
apt-get install nginx

create web files for both site1 and site2

mkdir /var/www/proxy_test;
mkdir /var/www/proxy_test/site1;
mkdir /var/www/proxy_test/site2;
nano /var/www/proxy_test/site1/index.html;
# add whatever text makes site1 identifiable
nano /var/www/proxy_test/site2/index.html;
# add some text to make it apparent this is site2

edit /etc/hosts to make the test domains resolve to localhost

127.0.0.1 proxytest.local site1.local site2.local
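
To confirm the names actually resolve before touching nginx, getent (or a quick ping) should show all three pointing at 127.0.0.1:

getent hosts proxytest.local site1.local site2.local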

create nginx config files for site1 and 2, in /etc/nginx/sites-available

root@vm:/etc/nginx/sites-available# cat ./site1
server {
    listen 81;
    listen [::]:81;

    server_name site1.local;

    root /var/www/proxy_test/site1;
    index index.html;

    # location / {
    #     try_files $uri $uri/ =404;
    # }
}

root@vm:/etc/nginx/sites-available# cat ./site2
server {
    listen 82;
    listen [::]:82;

    server_name site2.local;

    root /var/www/proxy_test/site2;
    index index.html;

    # location / {
    #     try_files $uri $uri/ =404;
    # }
}

… and symlink them to /etc/nginx/sites-enabled to enable the sites

cd /etc/nginx/sites-enabled ; ln -s ../sites-available/site1 ; ln -s ../sites-available/site2

root@vm:/etc/nginx/sites-enabled# ll
total 8
drwxr-xr-x 2 root root 4096 Jul 27 07:56 ./
drwxr-xr-x 8 root root 4096 Jul 27 07:28 ../
lrwxrwxrwx 1 root root 34 Jul 27 07:28 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 24 Jul 27 07:56 site1 -> ../sites-available/site1
lrwxrwxrwx 1 root root 24 Jul 27 07:56 site2 -> ../sites-available/site2

restart nginx to apply changes

service nginx restart
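
If you want to catch typos before bouncing the service, nginx can also test its own configuration first:

nginx -t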

verify you can see all the sites in browser, using their port numbers

[screenshot: default nginx page on port 80]

[screenshot: site1 on port 81]

[screenshot: site2 on port 82]

Setting up the Proxy

So preconditions were set up. Now I needed to figure out how to implement nginx as a proxy to let users access each site by domain, sans port number.

Turns out it was a lot easier than I thought it would be. Just create the proxy config at /etc/nginx/sites-available/proxy_test

root@vm:/etc/nginx/sites-enabled# cat ./proxy_test
server {
    listen 80;
    server_name site1.local;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:81;
    }
}

server {
    listen 80;
    server_name site2.local;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:82;
    }
}

… symlink it into /etc/nginx/sites-enabled like you did with the site1 and site2 config files above, restart nginx, and voilà! Each site is now accessible on external port 80, by domain name.

[screenshot: site 1 served via http://site1.local]

[screenshot: site 2 served via http://site2.local]
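
If you’d rather verify from a shell on the VM than from a browser, hitting port 80 with the right Host header exercises the same proxy path – each request should come back with the matching site’s index.html:

curl -s -H 'Host: site1.local' http://127.0.0.1/
curl -s -H 'Host: site2.local' http://127.0.0.1/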

 

I love you nginx <3

Martinez-bros-concrete.com – completed WordPress project

My work life isn’t all Magento. Sometimes between contracts, I do other stuff.

I just did the logo, site design, and WordPress build for martinez-bros-concrete.com (http://martinez-bros-concrete.com), a concrete and landscaping company based in Denton, TX. Took about a week.

My design skills aren’t 1337, but they’re usually good enough to get something professional put together. And honestly, the North Texas concrete business is not exactly the most competitive sector when it comes to web design.

(Re)Installing Magento 2 from shell

Sometimes, you accidentally bork your Magento 2 development database. Sometimes, you just want to reinstall as a sanity check. Luckily, you don’t have to re-extract the installer – you can simply uninstall and reinstall Magento 2 using the ./bin/magento CLI tool; see:

https://devdocs.magento.com/guides/v2.2/install-gde/install/cli/install-cli-install.html

However, the command needs a lot of arguments. I just dropped the following into a shell script at ./bin/reinstall.sh

[0s][/var/www/m2/bin]$ cat ./reinstall.sh
#!/bin/bash

./magento setup:install --admin-firstname Andy --admin-lastname Boyd \
--admin-email admin@example.com --admin-user admin \
--admin-password Password1! --base-url http://m2.local --backend-frontname m2_admin \
--db-host localhost --db-name m2 --db-user root --db-password Password1!
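
One caveat: running setup:install on top of a database that still contains the old tables may not go cleanly. As far as I recall, bin/magento setup:uninstall wipes the previous install (database plus deployment config), or you can add the --cleanup-database flag to the setup:install command in the script – verify both against your Magento version before relying on them:

# wipe the previous install before re-running reinstall.sh
./magento setup:uninstall

# ...or let setup:install drop and recreate the schema itself by adding
# --cleanup-database to the command in reinstall.sh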

Magento 2 and PHPUnit – ‘no tests executed’ issue

When starting on unit tests for a Magento 2 module, it seemed that phpunit refused to find any of my test methods. After some experimentation, I found a fix (well, maybe a workaround):

– copy (magento root)/dev/tests/unit/phpunit.xml.dist to ./phpunit.xml, and then pass that as the config file (this was pretty obvious)
– you have to call Magento’s phpunit binary, not your system phpunit binary (this was way less obvious)

So where you could normally do something like

cd /path/to/magento2/app/code/YourCompany/YourModule/Tests/Unit ;
phpunit . ;

You instead need to do

cd /path/to/magento2/ ;
./vendor/phpunit/phpunit/phpunit -c /path/to/magento2/dev/tests/unit/phpunit.xml .

The problem with just doing ‘phpunit’ is it calls /usr/bin/phpunit, or whatever your local system phpunit is.
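
A quick way to see which binary you’re actually getting is to ask the shell:

which phpunit ;
phpunit --version ;

If that points at /usr/bin/phpunit (or reports a version that doesn’t match the one under Magento’s vendor directory), you’re running the system copy.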

For convenience, you can replace the long file paths above with a couple symlinks and save yourself a LOT of typing. Become root on your system and do

cd /usr/bin ;
mv ./phpunit ./phpunit-system ;
ln -s /path/to/magento2/vendor/phpunit/phpunit/phpunit ./ ;
cd /path/to/magento2/app/code/YourCompany/YourModule/Tests/Unit ;
ln -s /path/to/magento2/dev/tests/unit/phpunit.xml ./ ;

You’ve replaced the usual phpunit system binary with a symlink to Magento’s, and added a symlink to the phpunit config file in your module’s unit tests directory. So now, to run your module’s unit tests, just do

cd /path/to/magento2/app/code/YourCompany/YourModule/Tests/Unit ;
phpunit -c ./phpunit.xml .

😀

Speed up a VMware virtual machine by changing the location of its virtual memory file

I’ve got multiple VMs on an HDD RAID array. I/O is OK, but they’re platter-style drives in RAID-1 – speed is constrained by the limitations of the drive design.

I also have a small SSD RAID array of only 80G that houses the base Windows O/S, the idea being that the base O/S should run as quickly as possible. Obviously, with a Win10 install, I’ve only got a few GB to spare on that partition, so there’s no way I could move the VMs over.

Lightbulb: I could still increase VM performance many times over if I could have VMware use the SSD array for *just* the virtual memory file.

Luckily, the VM’s .vmx config file has just such an option:

workingdir = "c:\VmWareWorkingDir"

The VM disk images still live on the HDD array, but the virtual memory file now lives on the SSD array – with much faster I/O 😀

Prevent Magento 1 admins from sharing credentials with Admin Single Session

Just released a free Magento extension designed to prevent admin users from sharing credentials (which wrecks accountability when things go wrong). It prevents an admin account from having more than one active session at a time. If a second user logs in using the same username and password, the first user gets redirected to the login screen with a nice message explaining they’ve been kicked because another user logged in with the same credentials.

It’s just a beta release, not thoroughly tested across different Mage versions (developed on CE 1.9.2.2), but the extension is simple enough it should work fine in most cases. Obviously, as with everything, test before pushing to production.

https://github.com/siliconrockstar/magento-admin-single-session

Thanks go to Jared (http://molotovbliss.com/) for adding modman support 🙂

UPDATE: awesomely enough, this feature is baked into Magento 2 🙂

Using the stress command to generate CPU load

I needed a reliable way to test a script I wrote to monitor server load. Luckily, I found it in the stress command.

On most versions of Linux, you can install it with

yum install stress

or

apt-get install stress

depending on your distro.

I was on CentOS 7.2, which of course doesn’t have the package in a repo (including EPEL), so I downloaded it from here

ftp://fr2.rpmfind.net/linux/dag/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm

and did

yum localinstall stress-1.0.2-1.el7.rf.x86_64.rpm

Usage is straightforward. All I needed to do was generate 90 seconds of greater-than-70% CPU usage on a one-CPU cloud virt, so I did

stress --cpu 2 --timeout 90
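
While that’s running, you can watch the load average climb from a second terminal – /proc/loadavg (the same file the load-monitoring script below reads) is enough:

watch -n 5 cat /proc/loadavg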

Shell script to monitor server load

Don’t feel like shelling out $$$ for NewRelic or Blackfire.io? Got a micro instance hosting a blog that nobody reads that doesn’t have the memory to support enterprise monitoring anyway (*cough* this blog *cough*)?

Here’s a shell script that will email you when server load gets above whatever threshold you specify. It would be pretty easy to adapt to monitor memory using the ‘free’ command as well (there’s a rough sketch of that at the end of this post). Just schedule it with ‘crontab -e’ – something like ‘*/5 * * * * /path/to/script.sh’ for every 5 minutes – and you’re set.

Server load monitoring on a budget!

#!/bin/bash
# requires the bc calculator - yum install bc (or apt-get install bc)

# config
# alert threshold, as decimal
ALERT=.7; # 70% CPU utilization
# admin email
EMAIL='admin@example.com';

# add /usr/bin to path so cron works
export PATH=$PATH:/usr/bin;

# get number of processors
NPROC=`nproc`;
# get first utilization metric
UTIL=`cat /proc/loadavg | cut -d ' ' -f 1`;

# divide util by number of processors, accounting for 0.00 util
RESULT=`bc <<< "scale = 2; $UTIL / $NPROC"`; 

# email alert if util is greater than alert threshold 
if [[ `bc <<< "$RESULT > $ALERT"` -eq 1 ]]
then
  # calculate a percentage
  PERC=`bc <<< "scale = 2; $RESULT * 100"`;
  echo "CPU utilization is above threshold at $PERC %";
  # add top output to email
  TOPOUTPUT=`top -n 1 -b`;
  `/usr/bin/mailx -s "Utilization high on $HOSTNAME" -r "$EMAIL" "$EMAIL" <<< "CPU on $HOSTNAME is at $PERC % 

$TOPOUTPUT"`;
# else
  # echo "Utilization is only $RESULT";
fi

You can test it by generating CPU load with the stress utility if you like.
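
And here’s the rough, untested sketch of the memory variant mentioned above – same threshold/bc/cron approach, just reading ‘free’ instead of /proc/loadavg (wire in the mailx alert the same way as the CPU version):

#!/bin/bash
# rough sketch - memory version of the CPU check above (untested)
# requires bc, same as the CPU script

# alert threshold, as decimal
ALERT=.9; # 90% memory utilization

# 'free -m' prints a "Mem:" line; column 2 is total MB, column 3 is used MB
read TOTAL USED <<< `free -m | awk '/^Mem:/ {print $2, $3}'`;

RESULT=`bc <<< "scale = 2; $USED / $TOTAL"`;

if [[ `bc <<< "$RESULT > $ALERT"` -eq 1 ]]
then
  PERC=`bc <<< "scale = 2; $RESULT * 100"`;
  echo "Memory utilization is above threshold at $PERC %";
  # send the mailx alert here, same as the CPU version
fi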

Google Analytics – take daily with a grain of salt

Haven’t messed with Google Analytics in literally years, but since I’m getting a new site up I logged in. I noticed these spikes on all of my accounts for different sites, even sites that do not exist anymore:

[screenshot: unexplained traffic spikes in Google Analytics]
Most of the traffic was from Great Britain. I’m sure there’s an explanation… that I don’t have the time or motivation to uncover.

My point is – approach your analytics data with a touch of healthy skepticism.