Linux network-related commands

Counting httpd processes on Linux

ps -ef | grep httpd | wc -l
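
Note that this count includes the grep process itself. Two common ways around that (the bracket trick, or pgrep from the procps package):

ps -ef | grep '[h]ttpd' | wc -l   # the bracket pattern never matches grep's own command line
pgrep -c httpd                    # count processes whose name matches httpd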

Checking Apache's concurrent requests and their TCP connection states

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

Sample output:

LAST_ACK 5
SYN_RECV 30
ESTABLISHED 1597
FIN_WAIT1 51
FIN_WAIT2 504
TIME_WAIT 1057

Here SYN_RECV is the number of requests waiting to be processed, ESTABLISHED is the normal data-transfer state, and TIME_WAIT is the number of requests that have been handled and are waiting for the timeout to expire.
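
On newer systems where netstat is deprecated, ss produces the same breakdown; a minimal equivalent (the state is the first column of ss output, and NR>1 skips the header):

ss -ant | awk 'NR>1 {++S[$1]} END {for (a in S) print a, S[a]}'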

Source: http://blog.s135.com/post/269/

Checking concurrent connections on Linux

1. Check a web server's (Nginx, Apache) concurrent requests and their TCP connection states:

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
netstat -n | grep ^tcp | awk '{print $NF}' | sort | uniq -c

Or:

netstat -n | awk '/^tcp/ {++state[$NF]} END {for(key in state) print key,"\t",state[key]}'

Typical output:

 
LAST_ACK 5 (requests waiting to be processed)
SYN_RECV 30
ESTABLISHED 1597 (normal data transfer)
FIN_WAIT1 51
FIN_WAIT2 504
TIME_WAIT 1057 (finished, waiting for the timeout to expire)

Other TCP state descriptions:

 
CLOSED: no connection is active or pending
LISTEN: the server is waiting for an incoming connection
SYN_RECV: a connection request has arrived; waiting for acknowledgment
SYN_SENT: the application has started opening a connection
ESTABLISHED: normal data transfer state
FIN_WAIT1: the application has said it is finished
FIN_WAIT2: the other side has agreed to release the connection
TIME_WAIT: waiting for all packets to die off (the 2MSL wait)
CLOSING: both sides tried to close simultaneously
CLOSE_WAIT: the other side has initiated a release
LAST_ACK: waiting for all packets to die off

2. Count running Nginx processes

ps -ef | grep nginx | wc -l

The number returned is the count of running Nginx processes; for Apache, run:

ps -ef | grep httpd | wc -l

3. Count the web server's established connections:

netstat -antp | grep ':80' | grep -c ESTABLISHED

4. Count MySQL processes:

ps -ef | grep -c mysqld
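
The command above really counts mysqld processes. To count client connections instead, one option is to count established TCP connections on MySQL's port (assuming the default port 3306):

netstat -ant | grep ':3306' | grep -c ESTABLISHED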

Source: http://itnihao.blog.51cto.com/1741976/830365
See also: https://www.jianshu.com/p/e72ed5504b0c

Running shell jobs in the background and in parallel

How to run a shell command in the background

1. nohup
We usually log in to Linux through a remote terminal, and when we exit the terminal, programs started from it are terminated. Sometimes you want to log out and have the program keep running; that is where nohup comes in. nohup runs a program so that it ignores the hangup (SIGHUP) signal, and the program's output no longer goes to the terminal.
nohup command > myout.file 2>&1 &   # stdout to myout.file, stderr merged into stdout, run in the background

2. Appending &
Adding & after a command runs it in the background:
command &

3. Ctrl + Z
While a program is running and occupying the foreground, pressing Ctrl + Z suspends it and moves it to the background as a stopped job.
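
A suspended job is stopped, not running; use bg to let it continue in the background and fg to bring it back. A quick illustration:

sleep 100    # press Ctrl + Z while this runs; the shell reports the job as Stopped
jobs         # list jobs, e.g. "[1]+  Stopped  sleep 100"
bg %1        # resume job 1, now running in the background
fg %1        # or bring it back to the foreground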

Parallel execution

1. Plain sequential execution

#!/bin/bash
Njob=10    # total number of jobs
for ((i=0; i<$Njob; i++)); do
{
    echo "progress $i is sleeping for 1 seconds zzz…"
    sleep 1
}
done
echo "time-consuming: $SECONDS seconds"    # print the script's elapsed time

Output:

progress 0 is sleeping for 1 seconds zzz…
progress 1 is sleeping for 1 seconds zzz…
progress 2 is sleeping for 1 seconds zzz…
progress 3 is sleeping for 1 seconds zzz…
progress 4 is sleeping for 1 seconds zzz…
progress 5 is sleeping for 1 seconds zzz…
progress 6 is sleeping for 1 seconds zzz…
progress 7 is sleeping for 1 seconds zzz…
progress 8 is sleeping for 1 seconds zzz…
progress 9 is sleeping for 1 seconds zzz…
time-consuming: 10 seconds

2. Concurrent background execution

#!/bin/bash
Njob=10
for ((i=0; i<$Njob; i++)); do
    echo "progress $i is sleeping for 3 seconds zzz…"
    sleep 3 &       # run the loop body in the background
done
wait      # wait for all background jobs to finish before continuing
echo "time-consuming: $SECONDS seconds"    # print the script's elapsed time

Output:

progress 0 is sleeping for 3 seconds zzz…
progress 1 is sleeping for 3 seconds zzz…
progress 2 is sleeping for 3 seconds zzz…
progress 3 is sleeping for 3 seconds zzz…
progress 4 is sleeping for 3 seconds zzz…
progress 5 is sleeping for 3 seconds zzz…
progress 6 is sleeping for 3 seconds zzz…
progress 7 is sleeping for 3 seconds zzz…
progress 8 is sleeping for 3 seconds zzz…
progress 9 is sleeping for 3 seconds zzz…
time-consuming: 3 seconds

This approach runs the loop iterations in parallel, but it lacks any control mechanism.

The for loop fires off all Njob processes at once. If the loop body ran scp, for example, then without pam_limits or cgroup restrictions, that many simultaneous scp jobs could exhaust the system's disk I/O, connection, and bandwidth resources and hurt normal service.

One remedy is to nest a second loop inside the for loop, so that at any moment the system runs at most the inner loop's batch size of processes. There is still a problem, though: the wait after the inner loop returns only when the slowest process in the batch finishes (the bucket effect). If one process in a batch is slow, the whole batch takes as long as that slow process, so the script's overall efficiency still suffers.

Running in parallel in batches

#!/bin/bash
NQ=3      # number of batches
num=5     # jobs per batch
for ((i=0; i<$NQ; i++)); do
    for ((j=0; j<$num; j++)); do
        echo "progress $i is sleeping for 3 seconds zzz…"
        sleep 3 &
    done
    wait    # wait for the current batch to finish before starting the next
done
echo "time-consuming: $SECONDS seconds"    # print the script's elapsed time

Output:

 progress 0 is sleeping for 3 seconds zzz…
 progress 0 is sleeping for 3 seconds zzz…
 progress 0 is sleeping for 3 seconds zzz…
 progress 0 is sleeping for 3 seconds zzz…
 progress 0 is sleeping for 3 seconds zzz…
 progress 1 is sleeping for 3 seconds zzz…
 progress 1 is sleeping for 3 seconds zzz…
 progress 1 is sleeping for 3 seconds zzz…
 progress 1 is sleeping for 3 seconds zzz…
 progress 1 is sleeping for 3 seconds zzz…
 progress 2 is sleeping for 3 seconds zzz…
 progress 2 is sleeping for 3 seconds zzz…
 progress 2 is sleeping for 3 seconds zzz…
 progress 2 is sleeping for 3 seconds zzz…
 progress 2 is sleeping for 3 seconds zzz…
time-consuming: 9 seconds

3. Using a simulated queue to limit the number of processes

To cap the number of background processes running at any moment, the plain loop needs a management mechanism on top.

One approach is to use the PIDs of the loop's child processes as queue elements, simulating a queue with a fixed maximum length (really just a fixed-length array, not a true queue). The queue starts empty; each process the loop creates adds one to the queue length. Once the length reaches the concurrency limit, the script polls the queue periodically: while the length still equals the limit it does nothing and keeps polling; when it detects that some child has finished, the length drops by one, and once the length falls below the limit the next waiting process is started, until all pending processes have run.

#!/bin/bash
Njob=15  # total number of jobs
Nproc=5  # maximum number of concurrent processes

function PushQue {    # append a PID to the queue
    Que="$Que $1"
    Nrun=$(($Nrun+1))
}

function GenQue {     # rebuild the queue: clear it, then re-add the PIDs still alive
    OldQue=$Que
    Que=""; Nrun=0
    for PID in $OldQue; do
        if [[ -d /proc/$PID ]]; then
            PushQue $PID
        fi
    done
}

function ChkQue {     # check the queue: if any PID has exited, rebuild the queue
    OldQue=$Que
    for PID in $OldQue; do
        if [[ ! -d /proc/$PID ]]; then
            GenQue; break
        fi
    done
}

for ((i=1; i<=$Njob; i++)); do
    echo "progress $i is sleeping for 3 seconds zzz…"
    sleep 3 &
    PID=$!
    PushQue $PID
    while [[ $Nrun -ge $Nproc ]]; do    # while the queue is full, keep polling
        ChkQue
        sleep 0.1
    done
done
wait
echo "time-consuming: $SECONDS seconds"    # print the script's elapsed time

Output:

progress 1 is sleeping for 3 seconds zzz…
progress 2 is sleeping for 3 seconds zzz…
progress 3 is sleeping for 3 seconds zzz…
progress 4 is sleeping for 3 seconds zzz…
progress 5 is sleeping for 3 seconds zzz…
progress 6 is sleeping for 3 seconds zzz…
progress 7 is sleeping for 3 seconds zzz…
progress 8 is sleeping for 3 seconds zzz…
progress 9 is sleeping for 3 seconds zzz…
progress 10 is sleeping for 3 seconds zzz…
progress 11 is sleeping for 3 seconds zzz…
progress 12 is sleeping for 3 seconds zzz…
progress 13 is sleeping for 3 seconds zzz…
progress 14 is sleeping for 3 seconds zzz…
progress 15 is sleeping for 3 seconds zzz…
time-consuming: 9 seconds

This queue-based management caps the number of background processes while also avoiding the problem of one slow process dragging out an entire batch.

4. Using a FIFO (named pipe) to limit the number of processes

A pipe is a one-way data channel in the kernel, and at the same time a data queue; it has a read end and a write end, each with its own file descriptor.
A named pipe is a FIFO file; it lets unrelated processes exchange data. A FIFO has a pathname associated with it and exists in the file system as a special device file.

FIFOs have two uses:

• shells use FIFOs to move data from one pipeline to another without creating temporary files; the everyday cat file | grep keyword is this kind of usage;
• FIFOs are used in client-server programs to pass data between client and server processes, which is how the example below uses one.

By the FIFO read rules (see http://www.cnblogs.com/yxmx/articles/1599187.html), if some process has the FIFO open for writing and the FIFO is currently empty, a read with the blocking flag set will block until data arrives.

This property can be used to build a token mechanism: create a FIFO pre-filled with as many lines as the concurrency limit Nproc; in the for loop, read one line from the FIFO before starting each process, and write one line back when the process finishes. Once the number of running children reaches Nproc the FIFO is empty, so further iterations block on the read and no new process starts, until some running process finishes and writes a line back (returning its token to the pool).

Note that at higher concurrency, even processes that all sleep for the same number of seconds are subject to scheduler ordering, so they do not necessarily finish in start order; a later process may well finish first.

#!/bin/bash
Njob=15   # total number of jobs
Nproc=5   # maximum number of concurrent processes

mkfifo ./fifo.$$ && exec 9<> ./fifo.$$   # open the fifo via file descriptor 9

for ((i=0; i<$Nproc; i++)); do   # pre-fill the fifo with Nproc lines (the tokens)
  echo "init time add $i" >&9
done
for ((i=0; i<$Njob; i++)); do
{
  read -u 9               # take a token: read one line from the fifo
  echo "progress $i is sleeping for 3 seconds zzz…"
  sleep 3
  echo "real time add $(($i+$Nproc))" 1>&9   # return the token: write a line back when done
} &
done
wait
echo "time-consuming: $SECONDS seconds"
exec 9>&-                 # close the descriptor
rm -f ./fifo.$$

Output:

progress 0 is sleeping for 3 seconds zzz…
progress 1 is sleeping for 3 seconds zzz…
progress 2 is sleeping for 3 seconds zzz…
progress 3 is sleeping for 3 seconds zzz…
progress 4 is sleeping for 3 seconds zzz…
progress 5 is sleeping for 3 seconds zzz…
progress 6 is sleeping for 3 seconds zzz…
progress 8 is sleeping for 3 seconds zzz…
progress 12 is sleeping for 3 seconds zzz…
progress 13 is sleeping for 3 seconds zzz…
progress 9 is sleeping for 3 seconds zzz…
progress 11 is sleeping for 3 seconds zzz…
progress 14 is sleeping for 3 seconds zzz…
progress 10 is sleeping for 3 seconds zzz…
progress 7 is sleeping for 3 seconds zzz…
time-consuming: 10 seconds
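
For comparison (this is not from the original articles), the same throttling can be achieved with GNU xargs, whose -P option caps the number of parallel invocations:

seq 1 15 | xargs -P 5 -I{} sh -c 'echo "progress {} is sleeping for 3 seconds zzz…"; sleep 3'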

Sources:
《Shell脚本实现并发多进程》
《Shell脚本并发执行》

Linux and Curl: How to use Bash to Read a File Line by Line and Execute Curl command to get HTTP Response Code

for URL in `cat crunchify.txt`; do echo "$URL"; curl -m 10 -s -I "$URL" | grep 'HTTP/1.1' | awk '{print $2}'; done

-m: Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to any network issue

-s: silent mode; don't show the progress meter or error messages

-I: fetch the document headers only (a HEAD request)

awk '{print $2}': prints the second field of the matched status line, i.e. the HTTP response code
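
A sketch of a slightly more defensive variant, reading the file line by line and letting curl print the status code itself via its -w format option (crunchify.txt is the same URL list as above):

while IFS= read -r URL; do
  code=$(curl -m 10 -s -I -o /dev/null -w '%{http_code}' "$URL")
  echo "$URL $code"
done < crunchify.txt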

A few small issues configuring Supervisor on CentOS 7.3

First, install it:

yum install -y supervisor

Then create a drop-in configuration file:

cd /etc/supervisord.d
vi laravel-worker.ini

and write the following into it:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/project/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
user=www
numprocs=8
redirect_stderr=true
stdout_logfile=/www/wwwroot/app/worker.log

The options, explained:

command ; the command that starts the program
autostart=true ; start automatically when supervisord starts
autorestart=true ; restart the program when it exits; one of [unexpected,true,false], default unexpected, i.e. restart only after the process is killed unexpectedly
user=www ; the user to start the process as, default root
numprocs=8 ; have Supervisor run and supervise 8 queue:work processes
redirect_stderr=true ; redirect stderr to stdout, default false
stdout_logfile=/www/wwwroot/app/worker.log ; stdout log file; note that startup fails if the directory doesn't exist, so create the directory by hand (supervisord creates the log file itself)

Now start Supervisor.

Run:

sudo supervisorctl reread

but it fails with:

error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib64/python2.7/socket.py line: 224

The error means the supervisord daemon isn't running yet, so supervisorctl cannot find its socket. Start the daemon first:

supervisord
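
On CentOS 7 it is arguably cleaner to manage the daemon through systemd, assuming the yum package installed its unit file:

sudo systemctl enable supervisord
sudo systemctl start supervisord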

then run:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*

and everything works.
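
To confirm the worker processes are actually up:

sudo supervisorctl status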

How do I find all files containing specific text on Linux?

Do the following:

grep -rnw '/path/to/somewhere/' -e 'pattern'
  • -r or -R is recursive,
  • -n is line number, and
  • -w stands for match the whole word.
  • -l (lower-case L) can be added to just give the file name of matching files.

Along with these, the --exclude, --include, and --exclude-dir flags can be used for efficient searching:

  • This will only search through those files which have .c or .h extensions:
    grep --include=\*.{c,h} -rnw '/path/to/somewhere/' -e "pattern"
    
  • This will skip all files ending with the .o extension:
    grep --exclude=*.o -rnw '/path/to/somewhere/' -e "pattern"
    
  • Directories can be excluded via the --exclude-dir parameter. For example, this will exclude the dirs dir1/, dir2/ and all of those matching *.dst/:
    grep --exclude-dir={dir1,dir2,*.dst} -rnw '/path/to/somewhere/' -e "pattern"
    

This works very well for me, achieving almost the same purpose as yours.

For more options check man grep.

A Few cURL Tips for Daily Use

Though I knew cURL was a powerful tool, I had never made an attempt to get familiar with it. Most of the time, I would just wade through its man pages to find a way to get my stuff done. Recently I found myself using it for many of my daily tasks, and through that heavy usage a couple of recurring patterns emerged.

If you are already familiar with cURL, you may not find anything interesting or new here (but feel free to point out any improvements or other useful tips in comments).

Resume failed downloads

cURL has a handy option (-C or --continue-at) to set a transfer offset, which helps resume failed downloads. In most cases, setting the offset to a single dash lets cURL decide how to resume the download.

  curl -C - -L -O http://download.microsoft.com/download/B/7/2/B72085AE-0F04-4C6F-9182-BF1EE90F5273/Windows_7_IE9.part03.rar

It's a shame that I came to know about this only very recently. I'll now be cursing a lot less at my ISP.

Fetch request body from a file

Nowadays, most web service APIs expect request bodies formatted as JSON. Manually entering a JSON-formatted string on the command line is not very convenient. A better way is to prepare the request body in a file and hand it to cURL.

Here’s an example of creating a gist, providing the payload from a JSON file.

  curl -d @input.json https://api.github.com/gists

Start the data parameter with @ to tell cURL it should read the payload from the file at the given path.
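
Many APIs also want the content type declared explicitly; for a JSON payload that means adding a header, for example:

  curl -H "Content-Type: application/json" -d @input.json https://api.github.com/gists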

Mimic AJAX requests

Sometimes I need to create endpoints in web apps that produce alternate responses when accessed via AJAX (e.g. not rendering the layout). Testing them directly in the browser is not very practical, as it requires bootstrapping code. Instead, we can mimic an AJAX request from cURL by providing the X-Requested-With header.

  curl -H "X-Requested-With: XMLHttpRequest" https://example.com/path

Store and Use Cookies

Another similar need is testing the behavior of cookies, especially when you want to alter a response depending on a cookie value.

You can use cURL to download the response cookies to a file and then use them on the subsequent requests. You can inspect the cookie file and even alter it to test the desired behavior.

  curl -L -c cookies.txt http://example.com   # -c saves the response cookies to a file
  curl -L -b cookies.txt http://example.com   # -b sends the saved cookies with the request

View a web page as GoogleBot

When I was running this blog on WordPress, Google marked it as a site infected with malware. Panicked, I visited the site and checked the source, but couldn't see anything suspicious. Later I discovered the malware was injected only when the site was accessed by GoogleBot. So how do you see a site's output as GoogleBot?

cURL's option (-A or --user-agent) to change the user agent of a request comes in handy in such instances. Here's how you can impersonate GoogleBot:

  curl -L -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com

Peep into others’ infrastructure

This is not exactly a cURL feature, but it comes in handy when I want to find out what others use to power their apps/sites.

  curl -s -L -I http://laktek.com | grep Server

Inspecting machine information on Linux

1. Number of physical CPUs

#cat /proc/cpuinfo |grep "physical id"|sort |uniq|wc -l
 1

2. Number of logical CPUs

#cat /proc/cpuinfo |grep "processor"|wc -l
 8

3. Number of cores per CPU

#cat /proc/cpuinfo |grep "cores"|uniq
 cpu cores : 4

4. CPU clock speed

#cat /proc/cpuinfo |grep MHz|uniq
 cpu MHz  : 1600.000

Other information

# uname -a
Linux 54acm 2.6.32-25-generic-pae #44-Ubuntu SMP Fri Sep 17 21:57:48 UTC 2010 i686 GNU/Linux
(kernel information for the running OS)
# cat /etc/issue 
Ubuntu 10.04 LTS
(distribution information for the running OS)
 
# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
       8  Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz
(shows 8 logical CPUs and the CPU model)
 
# cat /proc/cpuinfo | grep physical | uniq -c
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
      1 physical id : 1
      1 address sizes : 40 bits physical, 48 bits virtual
 
(all entries report physical id 1, i.e. a single quad-core CPU with hyper-threading, giving 8 logical CPUs)
 
# getconf LONG_BIT
32
(the system is currently running in 32-bit mode, which does not mean the CPU lacks 64-bit support)
 
# cat /proc/cpuinfo | grep flags | grep ' lm ' | wc -l
8
(a result greater than 0 means 64-bit computation is supported; lm stands for long mode, and a CPU with the lm flag is 64-bit capable)

How to get detailed CPU information:
cat /proc/cpuinfo
Commands to determine the number of physical CPUs, cores, etc.:
Number of logical CPUs:
# cat /proc/cpuinfo | grep "processor" | wc -l
Number of physical CPUs:
# cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
Number of cores per physical CPU:
# cat /proc/cpuinfo | grep "cpu cores" | uniq
Hyper-threading?
If two logical CPUs share the same "core id", hyper-threading is enabled.
Number of logical CPUs (cores, threads, or both) per physical CPU:

# cat /proc/cpuinfo | grep "siblings"
siblings : 8
1. CPU information:
cat /proc/cpuinfo
2. Memory information:
cat /proc/meminfo
3. Disk information:
fdisk -l
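
On modern distributions, lscpu (part of util-linux) summarizes sockets, cores, and threads in one place:

lscpu | egrep 'Model name|Socket|Core|Thread'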

Linux compression and decompression commands

.tar
  Unpack: tar xvf FileName.tar
  Pack: tar cvf FileName.tar DirName
  ———————————————
  .gz
  Decompress (1): gunzip FileName.gz
  Decompress (2): gzip -d FileName.gz
  Compress: gzip FileName
  .tar.gz and .tgz
  Decompress: tar zxvf FileName.tar.gz
  Compress: tar zcvf FileName.tar.gz DirName
  ———————————————
  .bz2
  Decompress (1): bzip2 -d FileName.bz2
  Decompress (2): bunzip2 FileName.bz2
  Compress: bzip2 -z FileName
  .tar.bz2
  Decompress: tar jxvf FileName.tar.bz2
  Compress: tar jcvf FileName.tar.bz2 DirName
  ———————————————
  .bz
  Decompress (1): bzip2 -d FileName.bz
  Decompress (2): bunzip2 FileName.bz
  Compress: unknown
  .tar.bz
  Decompress: tar jxvf FileName.tar.bz
  Compress: unknown
  ———————————————
  .Z
  Decompress: uncompress FileName.Z
  Compress: compress FileName
  .tar.Z
  Decompress: tar Zxvf FileName.tar.Z
  Compress: tar Zcvf FileName.tar.Z DirName
  ———————————————
  .zip
  Decompress: unzip FileName.zip
  Compress: zip -r FileName.zip DirName
  ———————————————
  .rar
  Decompress: rar x FileName.rar
  Compress: rar a FileName.rar DirName

  Get rar from http://www.rarsoft.com/download.htm
  After unpacking, copy rar_static to /usr/bin (or any other directory on $PATH):
  [root@www2 tmp]# cp rar_static /usr/bin/rar
  ———————————————
  .lha
  Decompress: lha -e FileName.lha
  Compress: lha -a FileName.lha FileName

  Get lha from http://www.infor.kanazawa-it.ac.jp/~ishii/lhaunix/
  After unpacking, copy lha to /usr/bin (or any other directory on $PATH):
  [root@www2 tmp]# cp lha /usr/bin/
  ———————————————
  .rpm
  Unpack: rpm2cpio FileName.rpm | cpio -idv
  ———————————————
  .deb
  Unpack: ar p FileName.deb data.tar.gz | tar zxf -
  ———————————————
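
As a convenience, the table above can be wrapped in a small shell function that picks the right tool based on the file extension (a sketch; extend the cases as needed):

extract() {
    case "$1" in
        *.tar.gz|*.tgz) tar zxvf "$1" ;;
        *.tar.bz2)      tar jxvf "$1" ;;
        *.tar)          tar xvf "$1" ;;
        *.gz)           gunzip "$1" ;;
        *.bz2)          bunzip2 "$1" ;;
        *.zip)          unzip "$1" ;;
        *.rar)          rar x "$1" ;;
        *.Z)            uncompress "$1" ;;
        *)              echo "don't know how to extract '$1'" ;;
    esac
}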

Fixing “WARNING: UNPROTECTED PRIVATE KEY FILE!” on Linux

If you are getting this error then you probably reset the permissions on the hidden .ssh directory in your user folder, and your keys aren't going to work anymore. It's very important that these files not be writable by just anybody with a login to the box, so OpenSSH will give you an error if you try to use them.

The full error message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0744 for '/home/geek/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/geek/.ssh/id_rsa

To fix this, you’ll need to reset the permissions back to default:

sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub

If you are getting another error:

Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/geek/.ssh/known_hosts).

This means that the permissions on that file are also set incorrectly, and can be adjusted with this:

sudo chmod 644 ~/.ssh/known_hosts

Finally, you may need to adjust the directory permissions as well:

sudo chmod 755 ~/.ssh

This should get you back up and running.