
Zabbix automatic screenshot archiving, Python version

2019-11-04 | Database & Networking

 

Fetching a single server's graph from the Zabbix monitoring platform and emailing it, implemented in Python

Requirement: send a daily email containing that day's monitoring status for a particular server. Logging in to Zabbix every day to take screenshots is tedious, and there is no guarantee the job gets done on time, so I wrote a short script that fetches the graph automatically, assembles it into HTML, and sends it by scheduled email, automating the daily report.

1. Result:

[Figure 1: sample of the emailed daily report]

2. Code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import MySQLdb
import datetime
import cookielib, urllib2,urllib
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
# Database connection settings
dbhost = "Zabbix DB server IP"
dbport = 3306
dbuser = "Zabbix DB user"
dbpasswd = "Zabbix DB password"
dbname = "zabbix"
# Mail settings
receiver = 'recipient email address'
Subject = 'Zabbix monitoring platform data'
smtpserver = 'smtp.exmail.qq.com'
mail_username = 'sender email address'
mail_password = 'sender email password'
# Hostname to look up in Zabbix
HostName = "Zabbix server"
# Graph name to look up
GraphsName = "CPU utilization"
# URL that returns the graph image; note that pie charts use a different URL, check carefully!
gr_url=""
# Frontend login URL
indexURL=""
username="sunday"
password="Aa(2016)"
# Directory where the images are stored
image_dir="/tmp/zabbix"

class ReportForm:
    def __init__(self):
        # Open the database connection
        self.conn = MySQLdb.connect(host=dbhost,user=dbuser,passwd=dbpasswd,db=dbname,port=dbport,charset='utf8')
        self.cursor = self.conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)

    def getGraphID(self,HostName,GraphsName):
        # Fetch the graphid
        sql = 'select distinct graphs_items.graphid from items join graphs_items on graphs_items.itemid=items.itemid join graphs on graphs_items.graphid=graphs.graphid  where items.hostid=(select hostid from hosts where host="%s") and graphs.name="%s"' % (HostName,GraphsName)
        if self.cursor.execute(sql):
            graphid = self.cursor.fetchone()['graphid']
        else:
            graphid = None
        return graphid

    def __del__(self):
        # Close the database connection
        self.cursor.close()
        self.conn.close()

class ZabbixGraph(object):
    def __init__(self,url,name,password):
        self.url=url
        self.name=name
        self.password=password
        # Create the cookie jar and log in when the object is initialized
        cookiejar = cookielib.CookieJar()
        urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
        values = {"name":self.name,'password':self.password,'autologin':1,"enter":'Sign in'}
        data = urllib.urlencode(values)
        request = urllib2.Request(url, data)
        try:
            urlOpener.open(request,timeout=10)
            self.urlOpener=urlOpener
        except urllib2.HTTPError, e:
            print e
    def GetGraph(self,url,values,image_dir):
        data=urllib.urlencode(values)
        request = urllib2.Request(url,data)
        url = self.urlOpener.open(request)
        image = url.read()
        imagename="%s/%s_%s.png" % (image_dir, values["graphid"], values["stime"])
        f=open(imagename,'wb')
        f.write(image)
        f.close()

    def SendMail(self,receiver,Subject,smtpserver,mail_username,mail_password,values,image_dir,HostName,GraphsName):
        msgRoot = MIMEMultipart('related')
        msgRoot['Subject'] = Subject
        msgRoot['From'] = mail_username
        sendText = '<b>Server: <i>"%s"</i></b>  extracted graph: <b>"%s"</b><br><img src="cid:image1"><br>Thanks!' % (HostName,GraphsName)
        msgText = MIMEText(sendText,'html','utf-8')
        msgRoot.attach(msgText)
        sendpng="%s/%s_%s.png" % (image_dir, values["graphid"], values["stime"])
        fp = open(sendpng, 'rb')
        msgImage = MIMEImage(fp.read())
        fp.close()
        msgImage.add_header('Content-ID', '<image1>')
        msgRoot.attach(msgImage)
        smtp = smtplib.SMTP()
        smtp.connect(smtpserver)
        smtp.login(mail_username, mail_password)
        smtp.sendmail(mail_username, receiver, msgRoot.as_string())
        smtp.quit()

if __name__ == "__main__":
    Report = ReportForm()
    get_graphid=Report.getGraphID(HostName,GraphsName)
    # Graph parameters; the dict must contain at least the graphid.
    stime=datetime.datetime.now().strftime('%Y%m%d%H%M%S')
    values={"graphid":get_graphid,"stime":stime,"period":86400,"width":800,"height":200}
    ZabbixG=ZabbixGraph(indexURL,username,password)
    ZabbixG.GetGraph(gr_url,values,image_dir)
    ZabbixG.SendMail(receiver,Subject,smtpserver,mail_username,mail_password,values,image_dir,HostName,GraphsName)

With the image obtained above, assemble it into HTML and use a system scheduled task to send the daily report automatically.
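For the scheduled task, a single cron entry is enough. A minimal sketch, assuming the script above is saved as /usr/local/bin/zabbix_daily_report.py and should run every morning at 08:00 (the path, interpreter and time are examples, not taken from the original setup):

0 8 * * * /usr/bin/python /usr/local/bin/zabbix_daily_report.py >> /var/log/zabbix_daily_report.log 2>&1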


How to fetch performance data with the Python zabbix_api

Recently management asked for a performance comparison report covering OpenStack, VMware and physical machines. Writing the report needs data to back it up; we monitor everything with Zabbix and want about a week of historical data for the comparison. How to get that data is covered in the sections below.

Part 1: install the zabbix_api package and adapt a script that calls the Zabbix API to pull the data

*****************************************************************************************
easy_install zabbix_api
*****************************************************************************************
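Once installed, a quick connectivity check is useful before running the full script. A minimal sketch, assuming a placeholder frontend URL and the default Admin/zabbix credentials:

from zabbix_api import ZabbixAPI

zapi = ZabbixAPI(server="http://192.168.1.1/zabbix", log_level=0)  # placeholder frontend URL
zapi.login("Admin", "zabbix")                                      # placeholder credentials
hosts = zapi.host.get({"output": "extend"})
print("API reachable, %d hosts visible" % len(hosts))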
 
   
 
#!/usr/bin/python
 
# The research leading to these results has received funding from the
# European Commission's Seventh Framework Programme (FP7/2007-13)
# under grant agreement no 257386.

# Copyright 2012 Yahya Al-Hazmi, TU Berlin
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#

#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
 
 
# this script fetches resource monitoring information from Zabbix-Server
# through Zabbix-API
#
# To run this script you need to install python-argparse "apt-get install python-argparse"
 
from zabbix_api import ZabbixAPI
import sys
import datetime
import time
import argparse
 
def fetch_to_csv(username, password, server, hostname, key, output, datetime1, datetime2, debuglevel):
    zapi = ZabbixAPI(server=server, log_level=debuglevel)
    try:
        zapi.login(username, password)
    except:
        print "zabbix server is not reachable: %s" % (server)
        sys.exit()
    host = zapi.host.get({"filter": {"host": hostname}, "output": "extend"})
    if(len(host) == 0):
        print "hostname: %s not found in zabbix server: %s, exit" % (hostname, server)
        sys.exit()
    else:
        hostid = host[0]["hostid"]
    print '*' * 100
    print key
    print '*' * 100
    if(key == ""):
        print '*' * 100
        items = zapi.item.get({"filter": {"hostid": hostid}, "output": "extend"})
        if(len(items) == 0):
            print "there is no item in hostname: %s, exit" % (hostname)
            sys.exit()
        dict = {}
        for item in items:
            dict[str(item['itemid'])] = item['key_']
        if (output == ''):
            output = hostname + ".csv"
        f = open(output, 'w')
        str1 = "#key;timestamp;value\n"

        if (datetime1 == '' and datetime2 == ''):
            for itemid in items:
                itemidNr = itemid["itemid"]
                str1 = str1 + itemid["key_"] + ";" + itemid["lastclock"] + ";" + itemid["lastvalue"] + "\n"
            f.write(str1)
            print "Only the last value from each key has been fetched, specify t1 or t1 and t2 to fetch more data"
            f.close()
        elif (datetime1 != '' and datetime2 == ''):
            try:
                d1 = datetime.datetime.strptime(datetime1, '%Y-%m-%d %H:%M:%S')
            except:
                print "time data %s does not match format Y-m-d H:M:S, exit" % (datetime1)
                sys.exit()
            timestamp1 = time.mktime(d1.timetuple())
            timestamp2 = int(round(time.time()))
            inc = 0
            history = zapi.history.get({"hostids": [hostid, ], "time_from": timestamp1, "time_till": timestamp2, "output": "extend"})
            for h in history:
                str1 = str1 + dict[h["itemid"]] + ";" + h["clock"] + ";" + h["value"] + "\n"
                inc = inc + 1
            f.write(str1)
            f.close()
            print str(inc) + " records has been fetched and saved into: " + output
        elif (datetime1 == '' and datetime2 != ''):
            for itemid in items:
                itemidNr = itemid["itemid"]
                str1 = str1 + itemid["key_"] + ";" + itemid["lastclock"] + ";" + itemid["lastvalue"] + "\n"
            f.write(str1)
            print "Only the last value from each key has been fetched, specify t1 or t1 and t2 to fetch more data"
            f.close()
        else:
            try:
                d1 = datetime.datetime.strptime(datetime1, '%Y-%m-%d %H:%M:%S')
            except:
                print "time data %s does not match format Y-m-d H:M:S, exit" % (datetime1)
                sys.exit()
            try:
                d2 = datetime.datetime.strptime(datetime2, '%Y-%m-%d %H:%M:%S')
            except:
                print "time data %s does not match format Y-m-d H:M:S, exit" % (datetime2)
                sys.exit()
            timestamp1 = time.mktime(d1.timetuple())
            timestamp2 = time.mktime(d2.timetuple())
            inc = 0
            history = zapi.history.get({"hostids": [hostid, ], "time_from": timestamp1, "time_till": timestamp2, "output": "extend"})
            for h in history:
                str1 = str1 + dict[h["itemid"]] + ";" + h["clock"] + ";" + h["value"] + "\n"
                inc = inc + 1
            f.write(str1)
            f.close()
            print str(inc) + " records has been fetched and saved into: " + output
    else:
        # print "key is: %s" % (key)
        itemid = zapi.item.get({"filter": {"key_": key, "hostid": hostid}, "output": "extend"})
        if(len(itemid) == 0):
            print "item key: %s not found in hostname: %s" % (key, hostname)
            sys.exit()
        itemidNr = itemid[0]["itemid"]
        if (output == ''):
            output = hostname + ".csv"
        f = open(output, 'w')
        str1 = "#key;timestamp;value\n"

        if (datetime1 == '' and datetime2 == ''):
            str1 = str1 + key + ";" + itemid[0]["lastclock"] + ";" + itemid[0]["lastvalue"] + "\n"
            f.write(str1)
            f.close()
            print "Only the last value has been fetched, specify t1 or t1 and t2 to fetch more data"
        elif (datetime1 != '' and datetime2 == ''):
            d1 = datetime.datetime.strptime(datetime1, '%Y-%m-%d %H:%M:%S')
            timestamp1 = time.mktime(d1.timetuple())
            timestamp2 = int(round(time.time()))
            history = zapi.history.get({"history": itemid[0]["value_type"], "time_from": timestamp1, "time_till": timestamp2, "itemids": [itemidNr, ], "output": "extend"})
            inc = 0
            for h in history:
                str1 = str1 + key + ";" + h["clock"] + ";" + h["value"] + "\n"
                inc = inc + 1
            f.write(str1)
            f.close()
            print str(inc) + " records has been fetched and saved into: " + output
        elif (datetime1 == '' and datetime2 != ''):
            str1 = str1 + key + ";" + itemid[0]["lastclock"] + ";" + itemid[0]["lastvalue"] + "\n"
            f.write(str1)
            f.close()
            print "Only the last value has been fetched, specify t1 or t1 and t2 to fetch more data"
        else:
            d1 = datetime.datetime.strptime(datetime1, '%Y-%m-%d %H:%M:%S')
            d2 = datetime.datetime.strptime(datetime2, '%Y-%m-%d %H:%M:%S')
            timestamp1 = time.mktime(d1.timetuple())
            timestamp2 = time.mktime(d2.timetuple())
            history = zapi.history.get({"history": itemid[0]["value_type"], "time_from": timestamp1, "time_till": timestamp2, "itemids": [itemidNr, ], "output": "extend"})
            inc = 0
            for h in history:
                str1 = str1 + key + ";" + h["clock"] + ";" + h["value"] + "\n"
                inc = inc + 1
            print str(inc) + " records has been fetched and saved into: " + output
            f.write(str1)
            f.close()
       
parser = argparse.ArgumentParser(description='Fetch history from aggregator and save it into CSV file')
parser.add_argument('-s', dest='server_IP', required=True,
                  help='aggregator IP address')
parser.add_argument('-n', dest='hostname', required=True,
                  help='name of the monitored host')
parser.add_argument('-k', dest='key',default='',
                  help='zabbix item key, if not specified the script will fetch all keys for the specified hostname')
parser.add_argument('-u', dest='username', default='Admin',
                  help='zabbix username, default Admin')
parser.add_argument('-p', dest='password', default='zabbix',
                  help='zabbix password')
parser.add_argument('-o', dest='output', default='',
                  help='output file name, default hostname.csv')
parser.add_argument('-t1', dest='datetime1', default='',
                  help="begin date-time, use this pattern '2011-11-08 14:49:43'; if only t1 is specified the time period will be t1-now")
parser.add_argument('-t2', dest='datetime2', default='',
                  help="end date-time, use this pattern '2011-11-08 14:49:43'")
parser.add_argument('-v', dest='debuglevel', default=0, type=int,
                  help='log level, default 0')
args = parser.parse_args()
 
# Zabbix frontend URL: assumed here to be built from the -s argument as "http://<server_IP>/zabbix"
fetch_to_csv(args.username, args.password, "http://" + args.server_IP + "/zabbix", args.hostname, args.key, args.output, args.datetime1, args.datetime2, args.debuglevel)
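A typical invocation, once the script is saved as fetch_items_to_csv.py, might look like the following (the server IP, host name and credentials are placeholder values; the result is a semicolon-separated CSV):

python fetch_items_to_csv.py -s 192.168.1.1 -n 'Zabbix server' -k 'system.cpu.util' -u Admin -p zabbix -t1 '2015-06-23 00:00:01' -t2 '2015-06-30 00:00:01' -o cpu_week.csv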

Part 2: fetch a week of data for a list of keys

items: in the Zabbix UI, search for the host, open Latest data, and drill into the items to see all of the machine's keys; load them into the script's items dictionary and the script will loop over them and fetch the data.
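The key list can also be pulled over the API instead of copied from the UI. A small sketch (the frontend URL, credentials and the host name "web01" are placeholders) that prints every key of one host so it can be pasted into the items dictionary below:

from zabbix_api import ZabbixAPI

zapi = ZabbixAPI(server="http://192.168.1.1/zabbix")  # placeholder frontend URL
zapi.login("Admin", "zabbix")                          # placeholder credentials
host = zapi.host.get({"filter": {"host": "web01"}, "output": "extend"})  # example host name
items = zapi.item.get({"filter": {"hostid": host[0]["hostid"]}, "output": "extend"})
for item in items:
    print("%s  ->  %s" % (item["name"], item["key_"]))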

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
    wget
"""
 
import os,sys,time
 
users=u'admin'
pawd = 'zabbix'
 
exc_py = '/data/zabbix/fetch_items_to_csv.py'
os.system('easy_install zabbix_api')
os.system('mkdir -p /data/zabbix/cvs/')
 
if not os.path.exists(exc_py):
    os.system("mkdir -p /data")
    os.system("wget -O /data/zabbix/fetch_items_to_csv.py")
 
def show_items(moniter, dip):
    items = dict()
    items['io_read_win'] = r"perf_counter[\2\16]"
    items['io_write_win'] = r"perf_counter[\2\18]"
 
    items['io_read_lin'] = "iostat[,rkB/s]"
    items['io_write_lin'] = "iostat[,wkB/s]"
    items['io_read_lin_sda'] = "iostat[sda,rkB/s]"
    items['io_write_lin_sda'] = "iostat[sda,wkB/s]"
 
    items['io_read_lin_sdb'] = "iostat[sdb,rkB/s]"
    items['io_write_lin_sdb'] = "iostat[sdb,wkB/s]"
 
    # Add items, iostate vdb,vdb
    items['io_read_lin_vda'] = "iostat[vda,rkB/s]"
    items['io_write_lin_vda'] = "iostat[vda,wkB/s]"
 
    items['io_read_lin_vdb'] = "iostat[vdb,rkB/s]"
    items['io_write_lin_vdb'] = "iostat[vdb,wkB/s]"
   
    items['cpu_util'] = "system.cpu.util"
 
    items['net_in_linu_vm_web'] = "net.if.in[eth0]"
    items['net_out_lin_vm_web'] = "net.if.out[eth0]"
 
    items['net_in_linu_vm_db'] = "net.if.in[eth1]"
    items['net_out_lin_vm_db'] = "net.if.out[eth1]"
 
    items['net_in_win_vm'] = "net.if.in[Red Hat VirtIO Ethernet Adapter]"
    items['net_in_win_vm_2'] = "net.if.in[Red Hat VirtIO Ethernet Adapter #2]"
    items['net_in_win_vm_3'] = "net.if.in[Red Hat VirtIO Ethernet Adapter #3]"
 
    items['net_out_win_vm'] = "net.if.out[Red Hat VirtIO Ethernet Adapter]"
    items['net_out_win_vm_2'] = "net.if.out[Red Hat VirtIO Ethernet Adapter #2]"
    items['net_out_win_vm_3'] = "net.if.out[Red Hat VirtIO Ethernet Adapter #3]"
    items['net_in_phy_web_lin'] = "net.if.in[bond0]"
    items['net_out_phy_web_lin'] = "net.if.out[bond0]"
 
    items['net_in_phy_db_lin'] = "net.if.in[bond1]"
    items['net_out_phy_db_lin'] = "net.if.out[bond1]"
 
    items['net_in_phy_web_win'] = "net.if.in[TEAM : WEB-TEAM]"
    items['net_out_phy_web_win'] = "net.if.out[TEAM : WEB-TEAM]"
 
    items['net_in_phy_db_win'] = "net.if.in[TEAM : DB Team]"
    items['net_out_phy_db_win'] = "net.if.out[TEAM : DB Team]"
 
    items['net_in_phy_web_win_1'] = "net.if.in[TEAM : web]"
    items['net_out_phy_web_win_1'] = "net.if.out[TEAM : web]"
 
    items['net_in_phy_db_win_1'] = "net.if.in[TEAM : DB]"
    items['net_out_phy_db_win_1'] = "net.if.out[TEAM : DB]"
 
    items['net_in_win_pro'] = "net.if.in[Intel(R) PRO/1000 MT Network Connection]"
    items['net_out_win_pro'] = "net.if.out[Intel(R) PRO/1000 MT Network Connection]"
 
    items['net_in_phy_web_hp'] = "net.if.in[HP Network Team #1]"
    items['net_out_phy_web_hp'] = "net.if.out[HP Network Team #1]"
 
    items['iis_conntion'] = "perf_counter[\Web Service(_Total)\Current Connections]"
    items['tcp_conntion'] = "k.tcp.conn[ESTABLISHED]"
 
    for x,y in items.items():
        os.system('mkdir -p /data/zabbix/cvs/%s' % dip)
        cmds = """
        python /data/zabbix/fetch_items_to_csv.py -s '%s' -n '%s' -k '%s' -u 'admin' -p '%s' -t1 '2015-06-23 00:00:01' -t2 '2015-06-30 00:00:01' -o /data/zabbix/cvs/%s/%s_%s.cvs""" %(moniter,dip,y,pawd,dip,dip,x)
 
        os.system(cmds)
 
        print "*"  * 100
        print cmds
        print "*" * 100
 
 
def work():
    moniter='192.168.1.1'
   
    ip_list = ['192.168.1.15','192.168.1.13','192.168.1.66','192.168.1.5','192.168.1.7','192.168.1.16','192.168.1.38','192.168.1.2','192.168.1.13','192.168.1.10']
 
    for ip in ip_list:
        show_items(moniter,ip )
 
 
if __name__ == "__main__":
    sc = work()

Part 3: after the data has been collected, format the output

#!/usr/bin/env python
#-*-coding:utf8-*-
import os,sys,time
workfile = '/home/zabbix/zabbix/sjz/'
def collect_info():
    dict_doc = dict()
    for i in os.listdir(workfile):
        dict_doc[i] = list()
        for v in os.listdir('%s%s' %(workfile,i)):
            dict_doc[i].append(v)
 
    count = 0
    for x,y in dict_doc.items():
        for p in y:
            fp = '%s/%s/%s' %(workfile,x,p)
            op = open(fp,'r').readlines()
            np = '%s.txt' %p
            os.system( """ cat %s|awk -F";" '{print $3}' > %s """ %(fp,np))
            count += 1
    print count
if __name__ == "__main__":
    sc = collect_info()

Part 4: organize the data, turn it into charts, and write the technical report.
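As one possible starting point for the charts, here is a short matplotlib sketch (matplotlib is assumed to be available; the input file name is only an example of the single-column value files produced in Part 3) that plots one metric as a line chart:

import matplotlib
matplotlib.use('Agg')            # render to a file, no display required
import matplotlib.pyplot as plt

values = []
with open('192.168.1.15_cpu_util.cvs.txt') as f:   # example output file from Part 3
    for line in f:
        line = line.strip()
        try:
            values.append(float(line))
        except ValueError:
            continue                                # skip the header and blank lines

plt.figure(figsize=(12, 4))
plt.plot(values)
plt.title('cpu_util over one week')
plt.ylabel('CPU utilization (%)')
plt.xlabel('sample number')
plt.savefig('cpu_util_week.png')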


1 Background

    Every DB server is monitored by Zabbix. Besides alerts for abnormal conditions, the monitoring data is also used in the daily, weekly and monthly checks, and it is processed in two ways: 1) data analysis (period-over-period comparison, maximum, minimum and average); 2) archiving line charts of the key metrics (archiving is necessary because Zabbix monitors a lot of servers and only keeps the monitoring data for six months to a year).

 

    The data analysis part is already built into the daily check emails and the monthly reports, but archiving the Zabbix charts was never automated; the screenshots were taken by hand every month. Recently both the National Day DB report and the monthly database report needed archived screenshots, which prompted me to write a small script that downloads the Zabbix charts automatically.

 



 

    If you repost this article, please credit the source: www.cnblogs.com/xinysu/ . Copyright belongs to 苏家小萝卜 (cnblogs). Thanks for your support!

 



   

2 A small script

2.1 Getting the chart URL

    First open an ordinary Zabbix chart page and press F12, then click the element-picker icon in the developer tools and select the line chart on the page; the corresponding HTML code is highlighted. Finally right-click that HTML element and copy the image link address, which gives you the URL of the Zabbix chart.

     [Figure: developer tools showing the chart's <img> element and its URL]

 

    The URL obtained looks like the following:

   

 

    Several of its parameters deserve an explanation:

stime is the start time of the monitored window, in '%Y%m%d%H%M%S' format

period is the length of the chart window: how many seconds of monitoring data to show, counted from stime

itemid[0] is the itemid of the monitored item in the Zabbix database

  • How do you look that up? First find the hostid of the monitored server in the hosts table, then find the corresponding item id in the items table:
  • select i.hostid,itemid,i.name,key_,i.description from items i join hosts h on i.hostid=h.hostid where h.name = 'hostname';

curtime can be left out, but make sure that stime plus period seconds does not go past the current query time

width is the width of the image

 

    For our purposes only 4 parameters are kept. Again, stime plus period seconds must not exceed the current query time. The simplified URL is shown below (replace company.moniter.com with the domain or IP where your Zabbix frontend is deployed):
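Based on the four parameters described above and the chart.php endpoint used in the script in 2.2, the simplified URL presumably has a shape like this (the itemid and time values are made up for illustration):

http://company.moniter.com/chart.php?itemids[0]=35295&width=1778&stime=20170901000000&period=2592000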

   

2.2 The script and a test run

    What the small script does: given a batch of itemids, download the chart images automatically into a local directory and rename the files.

 

    The code is as follows:

    

# -*- coding: utf-8 -*-
__author__ = 'xinysu'
__date__ = '2017/10/12 14:38'

import sys
import datetime
import http.cookiejar, urllib.request, urllib.parse
from lxml import etree
import requests


class ZabbixChart(object):
    def __init__(self, name, password):
        url = "http://company.monitor.com/index.php"
        self.url = url
        self.name = name
        self.password = password
        cookiejar = http.cookiejar.CookieJar()
        urlOpener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookiejar))
        values = {"name": self.name, 'password': self.password, 'autologin': 1, "enter": 'Sign in'}
        data = urllib.parse.urlencode(values).encode(encoding='UTF8')
        request = urllib.request.Request(url, data)
        try:
            urlOpener.open(request, timeout=10)
            self.urlOpener = urlOpener
        except urllib.request.HTTPError as e:
            print(e)

    def download_chart(self, image_dir, itemids, stime, etime):
        # This URL returns the graph image; note that pie charts use a different URL, check carefully!
        url = "http://company.monitor.com/chart.php"
        # size of the line chart
        url_par = {"width": 1778, "height": 300, "itemids": itemids}
        # convert the start and end dates from str to datetime
        stime = datetime.datetime.strptime(stime, "%Y-%m-%d")
        etime = datetime.datetime.strptime(etime, "%Y-%m-%d")
        # compute period
        diff_sec = etime - stime
        period = diff_sec.days * 24 * 3600 + diff_sec.seconds
        url_par["period"] = period
        # convert stime back to str
        stime = stime.strftime('%Y%m%d%H%M%S')
        url_par["stime"] = stime
        data = urllib.parse.urlencode(url_par).encode(encoding='UTF8')
        request = urllib.request.Request(url, data)
        url = self.urlOpener.open(request)
        image = url.read()
        # read the host and item names from the history page to build the file name
        html = requests.get('http://company.monitor.com/history.php?action=showgraph&itemids[]={}'.format(itemids)).text
        page = etree.HTML(html)
        hostname_itemname = page.xpath('//div[@class="header-title"]/h1/text()')[0].split(':')
        hostname = hostname_itemname[0]
        hostname_itemname.pop(0)
        itemname = '_'.join(hostname_itemname).replace('/', '_')
        imagename = "{}{}_{}_{}_({}).png".format(image_dir, hostname, stime, etime.strftime('%Y%m%d%H%M%S'), itemname)
        f = open(imagename, 'wb')
        f.write(image)
        f.close()

 

    With the class written, pass in the Zabbix login account, the start and end time of the charts, the local directory for the images, and the list of itemids, then run it like this:

# Zabbix login account
username = "xinysu"
password = "passwd"

# Chart time range: start and end dates
stime = "2017-09-01"
etime = "2017-10-01"

# Local directory where the images are stored
image_dir = "E:\\03 WORK\\03 work_sql\\201709\\"

# Run it
b = ZabbixChart(username, password)
item_list = (35295, 35328, 38080, 37992, 38102, 38014, 35059, 35022, 42765, 35024, 35028, 35035, 35036, 35044, 35045, 35046, 35047, 38248, 36369, 36370, 36371, 36372)
for i in item_list:
    itemids = i
    b.download_chart(image_dir, itemids, stime, etime)

 

      I tested the download with arbitrary itemids; in practice, filter the itemids according to what you need to monitor. After downloading, the folder looks like this:

[Figures: the downloaded chart images as they appear in the target folder]

 
