SMTP Auth – SMTP Relay

If you are getting the following error

SMTP error from remote mail server after RCPT TO:<admin@domain.com>: 550 smtp auth requried

from a script running on your server, while domain.com isn't hosted on the same physical machine, you are most likely doing an SMTP relay and exim isn't happy that no authentication credentials are being provided.

In this case, you can add the following to your exim.conf file (assuming 123.123.123.123 is the server your script runs on):

domainlist local_domains = dsearch;/etc/exim4/domains/
domainlist relay_to_domains = dsearch;/etc/exim4/domains/
hostlist relay_from_hosts = 127.0.0.1 : 123.123.123.123
hostlist whitelist = net-iplsearch;/etc/exim4/white-blocks.conf
hostlist spammers = net-iplsearch;/etc/exim4/spam-blocks.conf

You can also place 123.123.123.123 into the file /etc/exim4/white-blocks.conf to whitelist the host on your server, for example as shown below.
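
The file read by the net-iplsearch lookup is just a list of IP addresses (or CIDR ranges), one per line; an illustrative /etc/exim4/white-blocks.conf could look like this (the second entry is only an example):

123.123.123.123
10.0.0.0/8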

You might also need to add auth_advertise_hosts (either = * or restricted to your script server's IP), as shown below,

host_lookup = *
auth_advertise_hosts = 123.123.123.123
rfc1413_hosts = *

auth_advertise_hosts controls which hosts exim will offer SMTP AUTH to: * advertises it to every host, while listing an IP (as above) restricts it to just your script server.

This should allow your script to send email using your SMTP server as a relay without needing authentication.
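
Remember to restart exim after editing the configuration so the changes take effect; on a Debian-style setup that is usually:

systemctl restart exim4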

WordPress libgomp: Thread creation failed: Resource temporarily unavailable

Another fresh issue, pretty straightforward: if you are facing this issue with Apache logging the error

libgomp: Thread creation failed: Resource temporarily unavailable, referer: https://xxxxx.org/wp-admin/media-new.php

this is most likely because a process limit has been reached, either at the user level or the root level. The quickest way to resolve this temporarily is to increase the soft limit on the number of user processes, as shown below,

ulimit -u 999999

Once you've done that, try uploading a file in WordPress and you shouldn't see the HTTP Error message any more. To make the change permanent across reboots:

Open the file /etc/security/limits.d/90-nproc.conf (with vi or your editor of choice) and set:

*          soft    nproc     999999
root       soft    nproc     unlimited
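
To confirm the running web server actually picked up the new limit, you can check its /proc entry (this assumes the process is called httpd as on CentOS; on Debian it would be apache2):

cat /proc/$(pgrep -o httpd)/limits | grep processes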

Update the values shown above to suit your environment and it should do the trick. If it doesn't, you might want to try adding the following to your .htaccess:

SetEnv MAGICK_THREAD_LIMIT 1

This can also happen when a full installation of ImageMagick isn't available, which is what causes the HTTP Error to show.
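
If you want to see which resource limits ImageMagick is actually running with (including the thread limit), you can list them with:

identify -list resource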

Fix Getting $_FILES ERROR 3

Alright, it's been a year of resting and facing the usual issues that I could easily google and get an answer for (as usual), but recently I faced an issue where my customer's uploads fail 'sometimes', and the error they get from $_FILES is 3, which means "The uploaded file was only partially uploaded."
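
For reference, error 3 is PHP's UPLOAD_ERR_PARTIAL constant. As a rough sketch, a check on the receiving script (the 'upload' field name is just an example) could look like this:

        if (isset($_FILES['upload']) && $_FILES['upload']['error'] === UPLOAD_ERR_PARTIAL) {
                // the browser started sending the file but the connection was cut before it finished
                error_log('Partial upload: ' . $_FILES['upload']['name']);
        }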

Now, this all sounds like it should be easy to resolve, but you can end up stuck on it for a long, long time. Some of the things you can try are the following.

Ensure your PHP settings are correct, such as the config below:

upload_max_filesize=80M
post_max_size=80M
file_uploads = On
upload_tmp_dir = /tmp
max_file_uploads = 20
memory_limit=-1
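
Keep in mind that the CLI and the web server (mod_php or PHP-FPM) can load different php.ini files, so double-check the values that are actually in effect, e.g.:

php -i | grep -E 'upload_max_filesize|post_max_size'   # CLI values; confirm the web SAPI with phpinfo()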

If these are correctly configured and you are still seeing the issue, and you suspect the AWS load balancer (or whatever load balancer you have) might be the cause, you can mostly forget that idea: a 'partial upload' error means the upload started but was terminated suddenly. So the next candidate to look at is the web server.

Since the customer was running an apache2 server, check the following directives to ensure they are not set to '0':

Timeout 3000
KeepAlive On
MaxKeepAliveRequests 1000

My issue was that someone had set Timeout to 0, which caused this problem consistently. The other trap you may hit is that Timeout or MaxKeepAliveRequests is too short, which also produces error 3 because the web server cuts users off mid-upload; but since not every user gets cut off, you end up unable to reproduce it consistently.
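
A quick way to spot a bad value is to grep the Apache configuration directory (the path below assumes a Debian-style layout; on CentOS it would be /etc/httpd):

grep -Ri '^\s*timeout' /etc/apache2/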

Hope this helps!

Getting ï»¿ in front of JSON API calls

After I migrated a server from Apache to nginx running in Docker, I noticed that every response coming from this nginx setup has a ï»¿ in front of it. It is not visible in the browser, but when you call the API from a script, the script complains about invalid JSON format.

What is ï»¿?

Well, first of all, this character that we (usually) can't see is the UTF-8 BOM, or byte order mark: the byte sequence 0xEF,0xBB,0xBF at the front of the file.
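
You can confirm whether a file starts with the BOM by dumping its first three bytes (the file name here is just an example):

head -c 3 index.php | hexdump -C   # a file with a BOM starts with: ef bb bf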

What to do

Luckily, there are two solutions for you. The first is to convert all the files to plain UTF-8, and I mean all of them in the folder, since any included file saved as UTF-8 with BOM will cause this problem; in other words, strip the byte sequence 0xEF,0xBB,0xBF from every file.

You can run the following command recursively on your root folder to convert every file from UTF-8 with BOM to plain UTF-8:

find . -type f -exec sed -i.bak -e '1s/^\xEF\xBB\xBF//' {} \; -exec rm '{}.bak' \;
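
If you'd rather see which files actually contain the BOM before rewriting anything, you can list them first (GNU grep with bash $'...' quoting):

grep -rl $'\xEF\xBB\xBF' .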

Personally, the find/sed approach works best, but it can take a while to complete depending on how big your folder is.

The other solution is to remove the BOM on the receiving end: once you've grabbed your API content, strip the BOM with the following function,

        function remove_utf8_bom($text)
        {
                $bom = pack('H*','EFBBBF');
                $text = preg_replace("/^$bom/", '', $text);
                return $text;
        }

Or you can just do this:

                $raw_body = str_replace("\xEF\xBB\xBF",'',$raw_body);

Both work pretty much the same. Then you should be able to json_decode the response normally, for example:
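
A minimal usage sketch (the endpoint URL here is made up for illustration, and file_get_contents over HTTP assumes allow_url_fopen is enabled):

        $raw_body = file_get_contents('https://api.example.com/endpoint');
        $data = json_decode(remove_utf8_bom($raw_body), true);
        if ($data === null && json_last_error() !== JSON_ERROR_NONE) {
                error_log('JSON error: ' . json_last_error_msg());
        }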

For more details on the file replacement you can visit muzso's blog. Hope it helps!

Easy resize KVM without LVM

Basically, I had a chance to resize a KVM guest without LVM/LVM2. Adding size to a KVM disk image is pretty straightforward; all you need to do is the following (with the guest shut down):

qemu-img resize vmdisk.img +40G
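
You can confirm the new virtual size before booting the guest with:

qemu-img info vmdisk.img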

If you then boot up your machine, you'll see the new, larger disk size (100GB in this example) when you run the following command, although the partition itself is still the old size:

fdisk -l

Now we need to grow this partition so that the existing partition increases from 60GB to 100GB. With fdisk that means deleting the partition (command 'd') and recreating it (command 'n') with the same start sector; the data stays intact as long as the start sector doesn't change. The transcript below is illustrative (it comes from a small test disk, so the device name and sizes differ), but the steps are the same on your disk.

# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-2097151, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 
Using default value 2097151

Command (m for help): p

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2dbb9f13

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux

Command (m for help): w
The partition table has been altered!
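
If the disk you just edited is in use (for example it holds the root filesystem), the kernel may keep the old partition table in memory until you reboot; alternatively, you can ask it to re-read the table with:

partprobe /dev/sda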

Once you've done that, you should have a bigger partition of 100GB, although the filesystem on it still reports the old size until you resize it.

Resize your filesystem with resize2fs

Now just run the following and your filesystem should grow to 100GB (resize2fs can grow ext3/ext4 online):

# resize2fs /dev/sda1
resize2fs 1.43.5 (04-Aug-2017)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 8, new_desc_blocks = 13
The filesystem on /dev/sda1 is now 26214144 (4k) blocks long.
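
You can confirm the extra space is now visible to the filesystem with:

df -h /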

Pretty straightforward, I must say!