Bitbucket Laravel initial commit

I found this guide, but since I have an older PhpStorm version there is an easier way to do it:

https://stackoverflow.com/questions/47613209/how-to-set-up-bitbucket-in-phpstorm-2019

Step 1: Install Bitbucket Linky using this guide. Note: Bitbucket Connector is no longer being developed.
Step 2: Set up a new private repository on Bitbucket. If you’re using WordPress, I’d recommend creating a repository for each plugin you are developing, rather than for the entire install. Leave it empty for now.
Step 3: Create a new project in PhpStorm or use an existing one. Click VCS > Create Git Repository and select the path of your project.
Step 4: Now that the repository is properly set up, add all of your files – using Finder – into the local repository folder.
Step 5: Select all of your project files – which should be in red text – and click VCS > Git > Add. This adds the files to the project, but does not yet push them to the remote server.
Step 6: Now select the root directory of your project and click VCS > Git > Commit Directory.
Step 7: Go to VCS > Git > Remotes and add a new remote. You get the URL from Bitbucket by clicking ‘I will be starting from scratch’, then copying it (https://youraccount@bitbucket.org/youraccount/yourrepo.git) and pasting it into PhpStorm.
Step 8: Click VCS > Git > Push. This actually moves the files to the server.
Hope that helps someone – it took me ages to figure out. You’re now in the capable hands of many guides on how to use Git.

After step 6 I couldn’t find Remotes, so I just right-clicked the project folder, chose Git > Repository > Push, clicked the top remote, added the Bitbucket link (HTTPS), and all the files were pushed there.
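The same steps can also be done with plain git from the command line. A minimal sketch with a throwaway repo – the Bitbucket URL and file name are placeholders, and the push is shown but commented out since it needs the remote repo to exist:

```shell
#!/bin/bash
# Sketch of steps 3-8 with plain git, in a temporary directory.
set -e
cd "$(mktemp -d)"
git init -q .                                    # VCS > Create Git Repository
echo "hello" > index.php                         # add your project files here
git add .                                        # VCS > Git > Add
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "initial commit"                # VCS > Git > Commit Directory
git remote add origin https://youraccount@bitbucket.org/youraccount/yourrepo.git
git log --oneline
# git push -u origin master   # VCS > Git > Push - run once the remote exists
```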

That’s it.

MySQL import/export dump timestamp problem

I have just lost 4 hours today on a simple export/import problem. Why does it always have to be that way?!

I had to move a domain to a new IP, with its data and of course the database. As always everything went smoothly, but after testing the domain (after the DNS change) there were no pictures online?!

I tried everything. The problem was simple (after the battle, everyone is a general) – it was MySQL:

Dump TIMESTAMP columns in UTC (enables TIMESTAMP columns to be dumped and reloaded between servers in different time zones)

The MySQL columns used TIMESTAMP because of the start and end dates (23:59:59) in local time, BUT that export option was checked, which produced strange dates on the importing MySQL server.

After unchecking this box the times were still strange, but at least the dates no longer shifted by -1 day.
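The same option exists on the mysqldump command line: --tz-utc is on by default, and --skip-tz-utc disables the UTC conversion so TIMESTAMP values are reloaded with the same local-time representation they had on the source. A sketch that just builds the command (the database name "mydb" is a placeholder; run the printed command against the source server yourself):

```shell
#!/bin/bash
# Build a dump command that keeps TIMESTAMP columns in local time.
# --skip-tz-utc turns off mysqldump's default dump-in-UTC behaviour.
DB="mydb"
CMD="mysqldump --skip-tz-utc -u root -p $DB"
echo "$CMD"   # run this on the source server, then import the dump file
```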

Backup script – simple

If you want to back up all of your server files (home dir or whatever), you should first set up passwordless connections (using SSH keys).

First choose the username you will be using, e.g. user981129, and create it on server A (the one with the backups). Then run:

ssh-keygen -t rsa

You now have a .ssh folder with a public key (id_rsa.pub). Now go to the second server (server B), where the backups will be transferred. Go to /home/someuser and type:

mkdir .ssh; vi .ssh/authorized_keys

and then copy the contents of .ssh/id_rsa.pub from server A into authorized_keys.

Then on server B: chmod 700 .ssh; chmod 600 .ssh/authorized_keys
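The manual steps on server B (mkdir, paste the key, chmod) are exactly what the standard ssh-copy-id tool does for you. A sketch – the key name and host are placeholders, and the copy itself is commented out since it needs a live server B:

```shell
#!/bin/bash
# Generate a key non-interactively, then install it on the remote host.
set -e
cd "$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -f ./demo_key -N ""   # demo key, empty passphrase
ls demo_key demo_key.pub
# ssh-copy-id -i ./demo_key.pub someuser@serverB   # appends the public key to
#                                                  # ~/.ssh/authorized_keys and
#                                                  # fixes the permissions
```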

Then on server A create an executable file (chmod a+x), e.g. in /home/user981129:

vi backup.sh

and then paste this:

#!/bin/bash
NOW=$(date +"%m-%d-%Y")
SER="ip-or-name-of-this-server"   # used as a directory name on the backup server
FILE="backup.$NOW.tar.gz"
echo "Backing up data, please wait..."
# make sure the dated target directory exists first, then transfer
ssh someuser@ipofserverforbackup "mkdir -p /home/someuser/backups/$SER/$NOW"
rsync -avz /home/user981129/admin_backups/admin.root.admin.tar.gz someuser@ipofserverforbackup:/home/someuser/backups/$SER/$NOW/

Then open crontab -e and paste this:

0 3 * * * /home/user981129/backup.sh >/dev/null 2>&1

YOU ARE READY TO GO

edit: this post is for me, to speed things up when needed

Copy from folder to folder preserving everything

Either use

sudo cp -rp /home/folder /media/backup/folder

sudo cp -a /home/folder /media/backup/folder (UPDATE: use this one!!! It preserves everything)

Or use:

rsync -avz /home/folder /media/backup/folder

(rsync’s -a is archive mode: recursive, and it preserves permissions, ownership, timestamps and symlinks.)

For reference, from the cp man page:

-p     same as --preserve=mode,ownership,timestamps

--preserve[=ATTR_LIST]
       preserve the specified attributes (default: mode,ownership,timestamps),
       if possible additional attributes: context, links, xattr, all

Source: https://unix.stackexchange.com/questions/43605/how-do-i-copy-a-folder-keeping-owners-and-permissions-intact
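A quick way to convince yourself that cp -a really keeps the attributes – a throwaway demo (GNU coreutils; the file names are examples):

```shell
#!/bin/bash
# Demo: cp -a preserves modification timestamps.
set -e
cd "$(mktemp -d)"
mkdir src
touch -d "2020-01-01 00:00:00" src/file.txt   # give the file a known mtime
cp -a src dst
# the copy should have exactly the same modification time as the original
[ "$(stat -c %Y src/file.txt)" = "$(stat -c %Y dst/file.txt)" ] \
    && echo "timestamps preserved"
```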

Note to myself – ODS and mysql

When importing data you got from the datacenter (statistical files), first save it in OpenOffice .ods format, then import it into an already-built table (a copy of a table with the full structure). Make sure the OpenOffice sheet has the same name as the DB table.

If you created the table beforehand (copied from the table you were using before), the data will just be imported automatically. You might have some problems with the number of columns – the predefined (copied) table must have the same field count as the .ods file you are importing.

End of note to myself

MySQL rand function

If you need to put a random number into one of your MySQL columns, here it goes…

UPDATE table SET column = FLOOR(1 + RAND() * 100);

The last number (100) means you will get any value from 1 up to 100: 99, 1, 5, 33, 45… anything between 1 and 100.
If you put 10 instead of 100, you will get a value between 1 and 10 (2, 5, 6).

From the MySQL documentation on RAND():

Returns a random floating-point value v in the range 0 <= v < 1.0.
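The general pattern is FLOOR(a + RAND() * (b - a + 1)) for a random integer in [a, b]. A quick sketch of the same idea in bash, with $RANDOM standing in for RAND():

```shell
#!/bin/bash
# Random integers in [a, b]: a + (random value modulo the range size).
a=5; b=10
for i in $(seq 1 20); do
  echo $(( a + RANDOM % (b - a + 1) ))   # always between 5 and 10
done
```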

That’s it.

Laravel paths explained

For reference, because I tend to forget things sometimes.

// project's root folder    
echo base_path();

// 'app' folder    
echo app_path();        

// the project's 'public' folder, not /home/admin/web/domain.com/public_html
echo public_path();

// 'storage' folder    
echo storage_path();

// 'storage/app' folder    
echo storage_path('app');

Dedicated/VPS test speed

Very nice find. You can check various network speeds from your server with this little script.

I have checked it and it’s working just fine. Kudos to the developer.

https://www.lowendtalk.com/discussion/134290/new-year-special-serverreview-benchmark-v3

I have just tried it on a little VPS from Hetzner:

[root@snapshot-5553580-centos-2gb-nbg1-1 ~]# curl -LsO git.io/bench.sh; chmod +x bench.sh && ./bench.sh -a

Benchmark started on 06-Jul-2019 07:16:24

## System Information

OS Name : CentOS Linux release 7.6.1810 (Core) N (64 bit)
Kernel : KVM / 3.10.0-957.21.3.el7.x86_64
Hostname : snapshot-5553580-centos-2gb-nbg1-1
CPU Model : Intel Xeon Processor (Skylake, IBRS)
CPU Cores : 1 core @ 2099.998 MHz
CPU Cache : 16384 KB
Total RAM : 1790 MiB (Free 1533 MiB)
Total SWAP : SWAP not enabled
Total Space : 20GB (5% used)
Running for : 23 minutes 6 seconds

## CDN Speedtest

CacheFly : 136.55 MiB/s | 1092.39 Mbps | ping 3.415ms
CDN.net :ping: 993660212.r.worldcdn.net: Name or service not known
0 B/s | N/A | ping error!
Gdrive : 22.61 MiB/s | 180.86 Mbps | ping 3.335ms

## North America Speedtest

Softlayer, Washington, USA : 531.01 KiB/s | 4.15 Mbps | ping 88.651ms
SoftLayer, San Jose, USA : 771.82 KiB/s | 6.03 Mbps | ping 153.363ms
SoftLayer, Dallas, USA : 240.00 KiB/s | 1.88 Mbps | ping 119.020ms
Vultr, New Jersey, USA : 15.26 MiB/s | 122.10 Mbps | ping 80.797ms
Vultr, Seattle, USA : 7.06 MiB/s | 56.45 Mbps | ping 152.982ms
Vultr, Dallas, USA : 7.61 MiB/s | 60.91 Mbps | ping 123.992ms
Vultr, Los Angeles, USA : 5.95 MiB/s | 47.60 Mbps | ping 148.648ms
Ramnode, New York, USA : 4.22 MiB/s | 33.76 Mbps | ping 93.103ms
Ramnode, Atlanta, USA : 3.18 MiB/s | 25.44 Mbps | ping 109.802ms
OVH, Beauharnois, Canada : 1.23 MiB/s | 9.87 Mbps | ping 89.242ms

## Europe Speedtest

Vultr, London, UK : 70.07 MiB/s | 560.54 Mbps | ping 17.909ms
LeaseWeb, Frankfurt, Germany : 203.57 MiB/s | 1628.57 Mbps | ping 3.728ms
Hetzner, Germany : 121.62 MiB/s | 972.94 Mbps | ping 0.270ms
Ramnode, Alblasserdam, NL : 81.99 MiB/s | 655.92 Mbps | ping 10.764ms
Vultr, Amsterdam, NL : 115.90 MiB/s | 927.20 Mbps | ping 9.734ms
EDIS, Stockholm, Sweden : 2.44 KiB/s | 0.02 Mbps | ping 29.973ms
OVH, Roubaix, France : 101.74 MiB/s | 813.95 Mbps | ping 11.424ms
Online, France : 89.67 MiB/s | 717.36 Mbps | ping 12.747ms
Prometeus, Milan, Italy : 99.90 MiB/s | 799.21 Mbps | ping 15.812ms

## Exotic Speedtest

Sydney, Australia : 755.84 KiB/s | 5.91 Mbps | ping 307.885ms
Lagoon, New Caledonia : 1.13 MiB/s | 9.05 Mbps | ping 354.923ms
Hosteasy, Moldova : 26.20 MiB/s | 209.60 Mbps | ping 37.409ms
Prima, Argentina : 174.20 KiB/s | 1.36 Mbps | ping error!

## Asia Speedtest

SoftLayer, Singapore : 645.17 KiB/s | 5.04 Mbps | ping 179.195ms
Linode, Tokyo, Japan : 2.61 MiB/s | 20.89 Mbps | ping 268.694ms
Linode, Singapore : 5.90 MiB/s | 47.22 Mbps | ping 211.743ms
Vultr, Tokyo, Japan : 2.51 MiB/s | 20.08 Mbps | ping 251.989ms

## IO Test

CPU Speed:
bzip2 512MB – 83.2 MB/s
sha256 512MB – 261 MB/s
md5sum 512MB – 330 MB/s

Disk Speed (512MB):
I/O Speed – 495 MB/s
I/O Direct – 63.4 MB/s

RAM Speed (895MB):
Avg. write – 1654.5 MB/s
Avg. read – 4949.3 MB/s

Benchmark finished in 129 seconds
results saved on /root/bench.log

[root@snapshot-5553580-centos-2gb-nbg1-1 ~]#

MariaDB/MySql server – load data infile

When using PHP 7.2 and onward, make sure you have:
mysqli.allow_local_infile = On
set (or uncommented) in php.ini.

Otherwise this will not work. I spent 2 days trying to fix it in the MariaDB config file, because a few years ago I had the same problem and most of it was on the DB end (optimization), but this time it totally surprised me: it was on the PHP end.

This is really nice stuff. Check it:

https://stackoverflow.com/questions/13016797/load-data-local-infile-fails-from-php-to-mysql-on-amazon-rds?rq=1

Please note: ini_set() does not always work for this, so you might need to change allow_local_infile directly in the server's php.ini for the specific PHP version. I'm using DirectAdmin almost every day, have 3 PHP versions installed, and test everything there.
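Flipping the setting directly in php.ini can be scripted too. A sketch – the php.ini path is a placeholder (on DirectAdmin it is per PHP version, e.g. somewhere under /usr/local/phpXX/), and the demo works on a throwaway file:

```shell
#!/bin/bash
# Enable mysqli.allow_local_infile in a php.ini, whether it is
# commented out (leading ';') or set to Off.
set -e
cd "$(mktemp -d)"
echo ";mysqli.allow_local_infile = Off" > php.ini   # demo file
sed -i 's/^;\?mysqli\.allow_local_infile *=.*/mysqli.allow_local_infile = On/' php.ini
grep "mysqli.allow_local_infile" php.ini   # should now read "= On"
```

Remember to restart the PHP-FPM service (or the web server) afterwards so the change takes effect.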