
Multithreading and Multiprocessing in Python: Study Notes

2019-06-15

1. Threads and processes
Process: a program cannot run on its own; only when it is loaded into memory and the system allocates resources to it can it run, and this running instance of a program is called a process. A process itself is not an executable unit; it is just a collection of the program's resources.


Thread: a thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of work in the process. A thread is a single sequential flow of control within a process; a process can run multiple threads concurrently, each performing a different task.

2. Differences between threads and processes

Threads share memory space; each process's memory is independent.

A thread shares the address space of the process that created it; a process has its own address space.

A thread can directly access the data segment of its process; a process gets its own copy of its parent's data segment.

A thread can communicate directly with the other threads of its process; a process must use inter-process communication to talk to sibling processes.

New threads are cheap to create; a new process requires duplicating its parent process.

A thread can exercise considerable control over other threads of the same process; a process can only control its child processes.

Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to a parent process do not affect its child processes.

 

3. A process has at least one thread

4. Thread locks
    When a thread is about to modify shared data, it can take a lock on that data to avoid someone else modifying it before the update is complete. Other threads that want to modify the data must then wait until the lock is released before they can access it.
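The locking pattern described above can be sketched with `threading.Lock` used as a context manager (a minimal illustrative example, not from the original notes; the `with lock:` block acquires the lock and releases it automatically):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:           # acquire; released automatically when the block exits
            counter += 1     # critical section: only one thread at a time

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates are lost
```

Without the lock, the read-modify-write of `counter` could interleave between threads and updates would be lost.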

 

5. Semaphore

    A mutex allows only one thread at a time to modify data, whereas a Semaphore allows a fixed number of threads to modify data at the same time. Think of a restroom with 3 stalls: at most 3 people can use it at once, and the people behind them must wait for someone to come out before they can go in.

 

6. The purpose of join is to wait for a thread to finish executing
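A tiny sketch of that behavior (illustrative, not from the original notes): the main thread blocks at `join()` until the worker finishes, so the worker's result is guaranteed to be ready afterwards.

```python
import threading

results = []

def worker():
    results.append('done')

t = threading.Thread(target=worker)
t.start()
t.join()            # block here until the worker thread has finished

print(results)  # ['done'] is guaranteed, because join() waited
```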

 

7. Exercises

Semaphore

__author__ = "Narwhale"

import threading, time

def run(n):
    semaphore.acquire()
    time.sleep(1)
    print('Thread %s is running!' % n)
    semaphore.release()

if __name__ == '__main__':
    semaphore = threading.BoundedSemaphore(5)      # at most 5 threads run at the same time
    for i in range(20):
        t = threading.Thread(target=run, args=(i,))
        t.start()

    while threading.active_count() != 1:           # busy-wait until only the main thread is left
        pass
    else:
        print('All threads are done!')

Producer-consumer model

__author__ = "Narwhale"
import queue, time, threading
q = queue.Queue(10)

def producer(name):
    count = 0
    while True:
        print('%s produced bun %s' % (name, count))
        q.put('bun %s' % count)
        count += 1
        time.sleep(1)

def consumer(name):
    while True:
        print('%s took %s and ate it.....' % (name, q.get()))
        time.sleep(1)


A1 = threading.Thread(target=producer, args=('A1',))
A1.start()

B1 = threading.Thread(target=consumer, args=('B1',))
B1.start()
B2 = threading.Thread(target=consumer, args=('B2',))
B2.start()

Traffic light

__author__ = "Narwhale"

import threading, time

event = threading.Event()

def light():
    event.set()
    count = 0
    while True:
        if count > 5 and count < 10:
            event.clear()
            print('\033[41;1mThe red light is on\033[0m')
        elif count >= 10:
            event.set()
            count = 0
        else:
            print('\033[42;1mThe green light is on\033[0m')
        time.sleep(1)
        count += 1


def car(n):
    while True:
        if event.is_set():
            print('\033[34;1mCar %s is running!\033[0m' % n)
            time.sleep(1)
        else:
            print('The car has stopped')
            event.wait()

light_thread = threading.Thread(target=light, args=())
light_thread.start()
car1 = threading.Thread(target=car, args=('Tesla',))
car1.start()

 


Thread reference notes

I. Processes and threads

1. What is a thread
A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of work in the process: a single sequential flow of control. A process can run multiple threads concurrently, each performing a different task.
A thread is an execution context: all the state a CPU needs to execute a stream of instructions.
Suppose you are reading a book and want to take a break, but you would like to be able to come back and resume from where you stopped. One way to achieve that is to jot down the page number, line number, and word number. Those three numbers are your execution context for reading the book.
If you have a roommate using the same technique, she can take the book while you are not using it and resume reading from where she stopped. Then you can take it back and resume from where you were.
Threads work the same way. A CPU gives you the illusion of doing multiple computations at the same time by spending a little time on each of them. It can do that because it keeps an execution context for each computation. Just as you can share a book with your friend, many tasks can share a CPU.
At a more technical level, an execution context (and therefore a thread) consists of the values of the CPU's registers.
Finally: threads are different from processes. A thread is a context of execution, while a process is a bunch of resources associated with a computation. A process can have one or many threads.
Clarification: the resources associated with a process include memory pages (all the threads in a process have the same view of memory), file descriptors (e.g. open sockets), and security credentials (e.g. the ID of the user who started the process).

2. What is a process
A running instance of a program is called a process.
Each process provides the resources needed to execute a program: a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but it can create additional threads from any of its threads.

3. Differences between processes and threads

  1. Threads share the address space of the process that created them; processes have their own address space.
  2. Threads have direct access to the data segment of their process; processes have their own copy of the parent's data segment.
  3. Threads can directly communicate with other threads of their process; processes must use inter-process communication to communicate with sibling processes.
  4. New threads are easily created; new processes require duplication of the parent process.
  5. Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
  6. Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.

4. The Python GIL (Global Interpreter Lock)
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecode at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees it enforces.)
The first thing to make clear is that the GIL is not a feature of the Python language; it is a concept introduced by one particular implementation of the Python interpreter, CPython. In the same way, C++ is a language (syntax) standard that can be compiled into executable code by different compilers, such as GCC, Intel C++, or Visual C++. Python is no different: the same code can run under different Python runtimes such as CPython, PyPy, or Psyco, and Jython, for example, has no GIL. But because CPython is the default Python runtime in most environments, many people equate CPython with Python and take it for granted that the GIL is a flaw of the Python language. So, to be clear: the GIL is not a feature of Python, and Python does not have to depend on the GIL.
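A small sketch illustrating the point about the GIL (illustrative, not from the original notes): two threads doing pure-Python CPU-bound work both compute the correct result, but because only one thread executes bytecode at a time under CPython, running them in parallel gives no speedup (timings omitted here, since they vary by machine).

```python
import threading

N = 1_000_000
results = []

def count_up():
    # CPU-bound pure-Python loop: under the GIL, the two threads take turns
    total = 0
    for _ in range(N):
        total += 1
    results.append(total)

threads = [threading.Thread(target=count_up) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [1000000, 1000000]
```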


II. Multithreading

Multithreading is similar to running several different programs at the same time. It has the following advantages:

  1. Threads let you move long-running tasks in a program into the background.

  2. The user interface can be more responsive: for example, after the user clicks a button that triggers some processing, you can pop up a progress bar showing the progress.

  3. The program may run faster.
  4. Threads are useful for tasks that involve waiting, such as user input, file I/O, and sending and receiving network data; in those cases they let you release precious resources such as memory.
  5. During execution, threads still differ from processes. Each independent thread has its own entry point, sequential execution order, and exit point, but a thread cannot run on its own: it must live inside an application, which provides the execution control for its threads.
  6. Each thread has its own set of CPU registers, called the thread's context, which reflects the state of the CPU registers the last time the thread ran.
  7. The instruction pointer and the stack pointer are the two most important registers in a thread's context. A thread always runs in the context of its process, and these addresses refer to memory in the address space of the process that owns the thread.
  8. Threads can be preempted (interrupted).
  9. A thread can be put on hold (also called sleeping) while other threads run; this is the thread yielding.

1. The threading module

Direct invocation:
import threading
import time

def code(num):  # the function each thread will run

    print("running on number:%s" % num)

    time.sleep(3)

if __name__ == '__main__':

    t1 = threading.Thread(target=code, args=(1,))  # create a thread instance
    t2 = threading.Thread(target=code, args=(2,))  # create another thread instance

    t1.start()  # start the thread
    t2.start()  # start the other thread

    print(t1.getName())  # get the thread's name
    print(t2.getName())
Or:
#!/usr/bin/env python
# coding:utf-8
import threading
import time
class A(object):  # the callable each thread will run
    def __init__(self, num):
        self.num = num
        self.run()
    def run(self):
        print('Thread', self.num)
        time.sleep(1)
for i in range(10):
    t = threading.Thread(target=A, args=(i,))  # create a thread instance; target is the callable to execute
    t.start()  # start the thread

Subclass invocation:

import threading
import time
class MyThread(threading.Thread):  # inherit from threading.Thread
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.num = num
    def run(self):  # the function each thread will run

        print("I am thread number %s" % self.num)

        time.sleep(3)  # wait three seconds after finishing

if __name__ == '__main__':
    t1 = MyThread(1)
    t2 = MyThread(2)
    t1.start()
    t2.start()
Or:
import threading
import time
class MyThread(threading.Thread):  # inherit from threading.Thread
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.num = num
    def run(self):  # the function each thread will run

        print("I am thread number %s" % self.num)

        time.sleep(3)  # wait three seconds after finishing

if __name__ == '__main__':
    for i in range(10):
        t = MyThread(i)
        t.start()

The code above creates 10 "foreground" threads and hands control to the CPU, which schedules them according to its algorithm and executes their instructions in time slices.

2. Differences between threads and processes

A process can actually be made up of multiple threads of execution. Each thread runs in the context of the process and shares the same code and global data.

    A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of work in the process: a single sequential flow of control. A process can run multiple threads concurrently, each performing a different task.

Conventions and methods:

import threading
First import the threading module; this is the prerequisite for using multithreading.

  • start: the thread is ready and waits for the CPU to schedule it
  • setName: set the thread's name
  • getName: get the thread's name
  • setDaemon: mark the thread as a background (daemon) thread; threads are foreground by default
    If it is a daemon thread, it runs while the main thread runs, but once the main thread finishes, daemon threads are stopped whether or not they have completed
    If it is a foreground thread, it runs while the main thread runs, and when the main thread finishes, the program waits for the foreground threads to complete before stopping
  • join: wait for a thread to finish before continuing; calling it right after each start() makes multithreading pointless (it serializes the threads)
  • run: the method of the Thread object that executes once the CPU schedules the thread

2. Join & Daemon

Because real network servers need concurrency, threads have become an increasingly important programming model: sharing data between threads is easier than between processes, and threads are generally more lightweight than processes.


  • A thread is the smallest unit of execution the operating system can schedule. It lives inside a process and is the actual unit of work: a single sequential flow of control. A process can run multiple threads concurrently, each performing a different task.
  • Thread: the OS's smallest unit of CPU scheduling; a stream of instructions (a flow of control); the thread is the instruction set responsible for execution
  • all the threads in a process have the same view of the memory: threads in the same process share the same memory space

  • I/O operations (reading and storing data) do not occupy the CPU; computation (1+1, ...) does occupy the CPU
  • Python multithreading is not suited to CPU-bound work; it is suited to I/O-bound work

    Each program's memory is independent; programs cannot directly access one another's memory.

join

1) join blocks the main thread (the statements after join cannot run) so it can focus on the joined threads.
2) With multiple threads and multiple joins, the joins execute one after another: the previous one must finish before the next can start.
3) With no argument, join waits until the thread ends before the next thread's join begins.
4) With a timeout argument, join waits that many seconds for the thread and then stops caring about it (even though the thread may not have ended).
"Stops caring" means the rest of the main thread can then run.

For example, without join:

import time
import threading

def run(n):

    print('Running [%s]\n' % n)
    time.sleep(2)
    print('Run finished--')
def main():
    for i in range(5):
        t = threading.Thread(target=run, args=[i,])
        # time.sleep(1)
        t.start()
        t.join(1)
        print('Current thread name', t.getName())
# runs first
m = threading.Thread(target=main, args=[])
m.start()
print("---main thread done----")
print('Continue on')

The result:

---main thread done----  # runs before the threads finish
Running [0]
Continue on               # runs before the threads finish

Current thread name Thread-2
Running [1]

Run finished--
Current thread name Thread-3
Running [2]

Run finished--
Current thread name Thread-4
Running [3]

Run finished--
Current thread name Thread-5
Running [4]

Run finished--
Current thread name Thread-6
Run finished--

With join:

import time
import threading

def run(n):

    print('Running [%s]\n' % n)
    time.sleep(1)
    print('Run finished--')
def main():
    for i in range(5):
        t = threading.Thread(target=run, args=[i,])
        t.start()
        t.join(1)
        print('Current thread name', t.getName())
# runs first
m = threading.Thread(target=main, args=[])
m.start()
m.join()  # enable join
print("---main thread done----")  # now runs only after the threads have finished
print('Continue on')              # now runs only after the threads have finished

Note: join(time) waits time seconds; if the thread has not finished within that time, it stops waiting and continues on.
For example:

import time
import threading

def run(n):

    print('Running [%s]\n' % n)
    time.sleep(1)
    print('Run finished--')
def main():
    for i in range(5):
        t = threading.Thread(target=run, args=[i,])
        # time.sleep(1)
        t.start()
        t.join(1)
        print('Current thread name', t.getName())
# runs first
m = threading.Thread(target=main, args=[])
m.start()
m.join(timeout=2)  # set a timeout
print("---main thread done----")
print('Continue on')

Result:

Running [0]

Current thread name Thread-2
Run finished--
Running [1]

Run finished--
Current thread name Thread-3
---main thread done----  # ran here
Continue on               # ran here
Running [2]

Run finished--
Current thread name Thread-4
Running [3]

Run finished--
Current thread name Thread-5
Running [4]

Run finished--
Current thread name Thread-6


daemon

Some threads do background tasks, like sending keepalive packets, or performing periodic garbage collection, and so on. These are only useful while the main program is running, and it is fine to kill them off once the other, non-daemon threads have exited.
Without daemon threads, you would have to keep track of them and tell them to exit before your program can quit completely. By setting them as daemon threads, you can let them run and forget about them; when the program exits, any daemon threads are killed automatically.

import time
import threading

def run(n):

    print('Running [%s]\n' % n)
    time.sleep(1)
    print('Run finished--')
def main():
    for i in range(5):
        t = threading.Thread(target=run, args=[i,])
        time.sleep(1)
        t.start()
        t.join(1)
        print('Current thread name', t.getName())
# runs first
m = threading.Thread(target=main, args=[])
m.setDaemon(True)  # make m a daemon thread: when the main thread exits, m and its child threads exit with it, whether or not they finished their tasks
m.start()

print("---main thread done----")
print('Continue on')

Note: daemon threads are abruptly stopped at shutdown. Their resources (open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.


A program cannot run on its own; only when the program is loaded into memory and the system allocates resources to it can it run, and this running program is called a process.

    A process is exposed to the operating system for management as a whole; it contains calls to all kinds of resources, and this collection of resource management (memory and other resources) can be called a process. A process itself cannot execute; it is just a bundle of instructions, and what the operating system actually executes is threads.

Thread locks

A process can start multiple threads, and those threads share the parent process's memory space, which means every thread can access the same data. If two threads modify the same piece of data at the same time, the updates can conflict with each other.
Because threads are scheduled at arbitrary points, and a thread may execute only a few instructions before the CPU switches to another thread, problems like the following can occur:

import time
import threading

def addNum(ip):
    global num  # every thread reads this global variable
    print('--get num:', num, 'thread', ip)
    time.sleep(1)
    num += 1  # increment the shared variable
    num_list.append(num)

num = 0  # a shared variable
thread_list = []
num_list = []
for i in range(10):
    t = threading.Thread(target=addNum, args=(i,))
    t.start()
    thread_list.append(t)

for t in thread_list:  # wait for all threads to finish
    t.join()

print('final num:', num)
print(num_list)

Result:

--get num: 0 thread 0
--get num: 0 thread 1
--get num: 0 thread 2
--get num: 0 thread 3
--get num: 0 thread 4
--get num: 0 thread 5
--get num: 0 thread 6
--get num: 0 thread 7
--get num: 0 thread 8
--get num: 0 thread 9
final num: 10
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Normally the num result should be 0, but run this a few times on Python 2.7 and you will find that the final printed num is not always 0. Why does the result differ between runs? Simple: suppose you have two threads A and B, and both want to decrement num. Because the two threads run concurrently, both are likely to fetch the initial num=100 at the same time and hand it to the CPU; A's result is 99, but B's result is also 99, and after both results are assigned back to num, the value is 99 instead of 98. What to do? Simple: when a thread is about to modify shared data, it takes a lock on the data to avoid someone else modifying it mid-update; other threads that want to modify the data must wait until the lock is released before they can access it.
*Note: don't bother running this on 3.x; for whatever reason the result there always comes out correct, perhaps because a lock is applied automatically.

With the lock added:

import time
import threading

def addNum():
    global num  # every thread reads this global variable
    print('--get num:', num)
    time.sleep(1)
    lock.acquire()  # lock before modifying the data
    num -= 1  # decrement the shared variable
    lock.release()  # release after modifying
num = 100  # a shared variable
thread_list = []
lock = threading.Lock()  # create a global lock
for i in range(100):
    t = threading.Thread(target=addNum)
    t.start()
    thread_list.append(t)

for t in thread_list:  # wait for all threads to finish
    t.join()

print('final num:', num)

RLock (recursive lock)
Put plainly: a big lock that can contain sub-locks inside it.
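A minimal sketch of why the recursive lock matters (illustrative, not from the original notes): with threading.RLock, the same thread can acquire the lock again inside a nested call, whereas a plain threading.Lock would deadlock on the second acquire.

```python
import threading

lock = threading.RLock()   # reentrant lock

def inner():
    with lock:             # second acquire by the same thread: fine with RLock,
        return 1           # would deadlock with a plain threading.Lock

def outer():
    with lock:             # first acquire
        return inner() + 1

print(outer())  # 2
```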

Semaphore

A mutex allows only one thread at a time to modify data, whereas a Semaphore allows a fixed number of threads to modify data at the same time. Think of a restroom with 3 stalls: at most 3 people can use it at once, and the people behind them must wait for someone to come out before they can go in.

event

An event is a simple synchronization object;

the event represents an internal flag, and threads

can wait for the flag to be set, or set or clear the flag themselves.

event = threading.Event()

# a client thread can wait for the flag to be set
event.wait()

# a server thread can set or reset it
event.set()

event.clear()

If the flag is set, the wait method doesn't do anything.

If the flag is cleared, wait will block until it becomes set again.

Any number of threads may wait for the same event.

Python provides the Event object for communication between threads; it is a signal flag set by one thread, and other threads wait until the signal is released.

The Event object implements a simple thread-communication mechanism: it provides setting a signal, clearing a signal, and waiting, for communication between threads.

1. Setting the signal

The Event set() method sets the internal signal flag of the Event object to true. Event provides the isSet() method to check the state of the internal flag; after set() is called, isSet() returns true.

2. Clearing the signal

The Event clear() method clears the internal signal flag, i.e. sets it to false; after clear() is called, isSet() returns false.

3. Waiting

The Event wait() method returns quickly only when the internal signal is true. When the internal flag is false, wait() blocks until the flag becomes true.

The event mechanism: a global "Flag" is defined. If the Flag is False, event.wait() blocks; if the Flag is True, event.wait() no longer blocks.

  • clear: set the "Flag" to False
  • set: set the "Flag" to True

Example:

#!/usr/bin/env python
# coding:utf-8
# __author__ = 'yaoyao'
import threading
def do(event):
    print('runs first')
    event.wait()
    print('runs last')
event_obj = threading.Event()
for i in range(10):
    t = threading.Thread(target=do, args=(event_obj,))
    t.start()
print('start waiting')
event_obj.clear()
inp = input('type true: ')
if inp == 'true':
    event_obj.set()

The queue module:
Queues are especially useful in threaded programming when information must be exchanged safely between multiple threads.

class queue.Queue(maxsize=0) # first in, first out
class queue.LifoQueue(maxsize=0) # last in, first out
class queue.PriorityQueue(maxsize=0) # a queue whose items can be stored with a priority

The constructor builds a queue. maxsize is an integer that sets an upper bound on the number of items that can be placed in the queue. Insertion blocks once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.
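The ordering rules of the three queue classes can be sketched like this (illustrative, not from the original notes):

```python
import queue

q = queue.Queue()              # FIFO
for item in [1, 2, 3]:
    q.put(item)
print(q.get())                 # 1: first in, first out

lifo = queue.LifoQueue()       # LIFO (a stack)
for item in [1, 2, 3]:
    lifo.put(item)
print(lifo.get())              # 3: last in, first out

pq = queue.PriorityQueue()     # the lowest priority value comes out first
pq.put((2, 'write'))
pq.put((1, 'read'))
print(pq.get())                # (1, 'read')
```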

Producer-consumer model

## The difference between a program and a process: a program is a collection of instructions, a static textual description of a process; a process is one execution of a program, a dynamic concept.

    On the surface it is the process that executes, but in fact it is threads that execute; a process contains at least one thread.

III. Multiprocessing

Example:

#!/usr/bin/env python
# coding:utf-8
from multiprocessing import Process
import threading
import time

def foo(i):
    print('start', i)
if __name__ == "__main__":
    for i in range(10):
        p = Process(target=foo, args=(i,))
        p.start()
        print('I am a gorgeous separator')


A process is the operating system's abstraction of a running program: an abstraction of the processor, main memory, and I/O devices.

    Thread: a thread is an executable context, the smallest unit the CPU needs in order to execute. The CPU is only responsible for computation. A single-core CPU can only do one thing at a time; the reason we can switch between programs is that the CPU executes so fast, switching back and forth, that it looks to us as if multiple processes are running at once.

Note: because each process must hold its own copy of its data, creating a process carries a very large overhead.


The operating system can run many processes at the same time, and each process appears to have exclusive use of the hardware.


  • Each program is allocated its own independent space in memory; by default, processes cannot access or operate on each other's data
  • A program (QQ, Excel, etc.) is exposed to the operating system for management as a whole, and contains calls to all kinds of resources (memory management, network interfaces, etc.); this collection of resource management can be called a process
  • The whole of QQ, for example, can be called one process
  • For a process to use the CPU (i.e. to issue instructions), it must first create a thread
  • A process cannot execute by itself; it is only a collection of resources. To execute, it must first create the smallest unit the operating system schedules: a thread. A process must hold at least one thread in order to execute, and when a process is created, one thread is created automatically

    The operating system distinguishes processes by PID, the process identifier. Processes can be given priorities.

Sharing data between processes

Each process holds its own copy of the data, and by default processes cannot share data.
For example:

#!/usr/bin/env python
# coding:utf-8
# __author__ = 'yaoyao'
from multiprocessing import Process
li = []

def foo(i):
    li.append(i)
    print('the list inside the process is', li)
if __name__ == '__main__':
    for i in range(10):
        p = Process(target=foo, args=(i,))
        p.start()
print('opening the list: it is empty', li)

It prints:

opening the list: it is empty []
the list inside the process is [0]
opening the list: it is empty []
the list inside the process is [2]
opening the list: it is empty []
the list inside the process is [3]
opening the list: it is empty []
the list inside the process is [1]
opening the list: it is empty []
the list inside the process is [5]
opening the list: it is empty []
the list inside the process is [4]
opening the list: it is empty []
opening the list: it is empty []
the list inside the process is [6]
opening the list: it is empty []
the list inside the process is [7]
opening the list: it is empty []
the list inside the process is [8]
opening the list: it is empty []
the list inside the process is [9]

Two ways to share data:

  1. Array

    #!/usr/bin/env python
    # coding:utf-8
    # __author__ = 'yaoyao'

    from multiprocessing import Process, Array
    temp = Array('i', [11, 22, 33, 44])   # a shared integer array
    def Foo(i):
        temp[i] = 100 + i
        for item in temp:
            print(i, '----->', item)

    if __name__ == "__main__":
        for i in range(1):
            p = Process(target=Foo, args=(i,))
            p.start()

  2. Manager().dict()

Coroutines

A coroutine, also known as a micro-thread or fiber (English: coroutine). In one sentence: a coroutine is a lightweight, user-space thread.

A coroutine has its own register context and stack. When the coroutine is switched away, it saves its register context and stack elsewhere; when it is switched back, it restores the previously saved register context and stack. Therefore:

A coroutine preserves the state of its last call (a particular combination of all its local state); every time the procedure is re-entered, it resumes the state of the previous call, i.e. it re-enters at the point in the logical flow where it last left off.

Benefits of coroutines:

No overhead from thread context switches
No overhead from locking and synchronizing atomic operations
Easy switching of control flow, simplifying the programming model
High concurrency + high scalability + low cost: one CPU can easily support tens of thousands of coroutines, so they are well suited to high-concurrency workloads.

Drawbacks:

Cannot use multiple cores: a coroutine is essentially single-threaded; it cannot use multiple cores of a single CPU at once, and must be combined with processes to run on multiple CPUs. Of course, the vast majority of applications we write day to day do not need that, except for CPU-bound applications.
A blocking operation (such as I/O) blocks the whole program.

An example of implementing coroutine behavior with yield:

import time

def consumer(name):
    print("--->starting eating baozi...")
    while True:
        new_baozi = yield
        print("[%s] is eating baozi %s" % (name, new_baozi))
        # time.sleep(1)

def producer():

    r = con.__next__()
    r = con2.__next__()
    n = 0
    while n < 5:
        n += 1
        con.send(n)
        con2.send(n)
        print("\033[32;1m[producer]\033[0m is making baozi %s" % n)

if __name__ == '__main__':
    con = consumer("c1")
    con2 = consumer("c2")
    p = producer()
Greenlet


What are the differences between processes and threads?

  • Threads share the address space of the process that created them; each process's memory space is independent
  • Multiple threads directly access their process's data, so the data is shared; the child processes of a parent each work on what is really a clone of the data, independent of one another
  • A thread can communicate directly with the other threads of the process that created it; communication between the child processes of one parent must go through an intermediary
  • New threads are easy to create; creating a new process requires cloning its parent process
  • A thread can control and operate on other threads in the process that created it, and there is no real hierarchy between threads; a process can only control and operate on its child processes
  • Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to a process do not affect its child processes

    Threads are created by the main thread (the primary thread), which can keep creating new threads; a Linux process has one main thread.


 

An example of multithreaded concurrency

import threading, time

def run(n):
    print("task", n)
    time.sleep(2)

t1 = threading.Thread(target=run, args=("t1",))  # target = the code block (function) this thread runs; args = its arguments (a tuple of any length; even a single argument needs a trailing `,`)
t2 = threading.Thread(target=run, args=("t2",))
t1.start()
t2.start()
  • Starting many threads:

import threading, time

def run(n):
    print("task", n)
    time.sleep(2)

start_time = time.time()
for i in range(50):
    t = threading.Thread(target=run, args=("t%s" % i,))
    t.start()

print('cost', time.time() - start_time)

  • The measured time here is much less than 2 seconds, because the main thread and the child threads it starts run in parallel

  • join() waits for a thread to finish before continuing, like a wait:

import threading
import time

def run(n):
    print('task:', n)
    time.sleep(2)

start_time = time.time()
thread_list = []
for i in range(50):
    t = threading.Thread(target=run, args=(i,))
    t.start()
    # calling t.join() here would wait for each thread to finish before starting
    # the next one, turning the multithreaded run into a serial one
    thread_list.append(t)

for t in thread_list:
    t.join()  # after all threads are started, join waits for every created thread to finish before the main thread continues

print('cost:', time.time() - start_time)
print(threading.current_thread(), threading.active_count())

  • Here the result is a little over 2 seconds, and the timing is correct. Used this way, join() must come after all the threads' start() calls; otherwise the threads run serially and multithreading is pointless

    There is no meaningful speed comparison between threads and processes.

3. A process has at least one thread

threading.current_thread() shows the current thread; threading.active_count() gives the number of threads currently alive.

Greenlet example (manual switching between two coroutines; ported to Python 3):

# -*- coding:utf-8 -*-
from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()
    print(34)
    gr2.switch()

def test2():
    print(56)
    gr1.switch()
    print(78)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()

Gevent

Gevent is a third-party library that makes it easy to write concurrent synchronous or asynchronous programs. The main pattern used in gevent is the Greenlet, a lightweight coroutine that plugs into Python as a C extension module. Greenlets all run inside the OS process of the main program, but they are scheduled cooperatively.

import gevent

def foo():
    print('Running in foo')
    gevent.sleep(0)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(0)
    print('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])

Output:

Running in foo
Explicit context to bar
Explicit context switch to foo again
Implicit context switch back to bar


Daemon threads

  • Without join(), the main thread and its child threads run in parallel; with join(), the joined thread must finish before execution continues
  • A thread set as a daemon thread is not waited for: the main thread does not wait for daemonized child threads to finish; the program waits for the main thread, but not for daemon threads

import threading
import time

def run(n):
    print('task:', n)
    time.sleep(2)

start_time = time.time()
thread_list = []
for i in range(50):
    t = threading.Thread(target=run, args=(i,))
    t.setDaemon(True)  # must be set before start(); a daemon ("servant") thread ends as soon as its master (the main thread) exits
    t.start()
    thread_list.append(t)
print('cost:', time.time() - start_time)

The main thread is not a daemon thread (and cannot be set as one); it does not wait the 2 seconds for the daemonized child threads and goes straight to the final print().



Thread locks

  • When a thread is about to modify shared data, it can take a lock on that data to avoid someone else modifying it mid-update; other threads that want to modify the data must wait until the lock is released before they can access it.
  • A thread lock turns the threads into a serial execution

    import threading

    def run(n):
        lock.acquire()   # take the lock
        global num
        num += 1
        lock.release()   # release the lock

    num = 0
    lock = threading.Lock()  # create the lock instance
    for i in range(50):
        t = threading.Thread(target=run, args=(i,))
        t.start()

    print('num:', num)



RLock (recursive lock)

  • Used when locks are nested; put plainly, a big lock that contains sub-locks inside it:

import threading, time

def run1():
    print("grab the first part data")
    lock.acquire()
    global num
    num += 1
    lock.release()
    return num

def run2():
    print("grab the second part data")
    lock.acquire()
    global num2
    num2 += 1
    lock.release()
    return num2

def run3():
    lock.acquire()
    res = run1()
    print('--------between run1 and run2-----')
    res2 = run2()
    lock.release()
    print(res, res2)

if __name__ == '__main__':

    num, num2 = 0, 0
    lock = threading.RLock()
    for i in range(10):
        t = threading.Thread(target=run3)
        t.start()

while threading.active_count() != 1:
    print(threading.active_count())
else:
    print('----all threads done---')
    print(num, num2)


 

Semaphore

  • A mutex (thread lock) allows only one thread at a time to modify data, whereas a Semaphore allows a fixed number of threads to modify data at the same time. Think of a restroom with 3 stalls: at most 3 people can use it at once, and the people behind them must wait for someone to come out before they can go in.
  • Every time a slot is released, another thread gets in immediately (for example, the connection limit in a socket server)

    import threading, time

    def run(n):
        semaphore.acquire()
        time.sleep(1)
        print("run the thread: %s\n" % n)
        semaphore.release()

    if __name__ == '__main__':
        num = 0
        semaphore = threading.BoundedSemaphore(5)  # at most 5 threads may run at once
        for i in range(20):
            t = threading.Thread(target=run, args=(i,))
            t.start()

        while threading.active_count() != 1:
            pass  # print(threading.active_count())
        else:
            print('----all threads done---')
            print(num)



Class-based multithreading

  • Rarely used

Creating threads via a class:

import threading, time

class MyThread(threading.Thread):  # inherit from threading.Thread
    def __init__(self, n):
        super(MyThread, self).__init__()  # do not pass n on to Thread.__init__
        self.n = n

    def run(self):  # the method executed by start() must be named run
        print("running task", self.n)
        time.sleep(2)

t1 = MyThread(1)
t2 = MyThread(2)
t1.start()
t2.start()

Thread source code (an excerpt of CPython's threading.py, for reference):

"""Thread module emulating a subset of Java's threading model."""

import sys as _sys
import _thread

from time import monotonic as _time
from traceback import format_exc as _format_exc
from _weakrefset import WeakSet
from itertools import islice as _islice, count as _count
try:
    from _collections import deque as _deque
except ImportError:
    from collections import deque as _deque

# Note regarding PEP 8 compliant names
#  This threading model was originally inspired by Java, and inherited
# the convention of camelCase function and method names from that
# language. Those original names are not in any imminent danger of
# being deprecated (even for Py3k), so this module provides them as an
# alias for the PEP 8 compliant names
# Note that using the new PEP 8 compliant names facilitates substitution
# with the multiprocessing module, which doesn't provide the old
# Java inspired names.

__all__ = ['active_count', 'Condition', 'current_thread', 'enumerate', 'Event',
           'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Barrier',
           'Timer', 'ThreadError', 'setprofile', 'settrace', 'local', 'stack_size']

# Rename some stuff so "from threading import *" is safe
_start_new_thread = _thread.start_new_thread
_allocate_lock = _thread.allocate_lock
_set_sentinel = _thread._set_sentinel
get_ident = _thread.get_ident
ThreadError = _thread.error
try:
    _CRLock = _thread.RLock
except AttributeError:
    _CRLock = None
TIMEOUT_MAX = _thread.TIMEOUT_MAX
del _thread


# Support for profile and trace hooks

_profile_hook = None
_trace_hook = None

def setprofile(func):
    """Set a profile function for all threads started from the threading module.

    The func will be passed to sys.setprofile() for each thread, before its
    run() method is called.

    """
    global _profile_hook
    _profile_hook = func

def settrace(func):
    """Set a trace function for all threads started from the threading module.

    The func will be passed to sys.settrace() for each thread, before its run()
    method is called.

    """
    global _trace_hook
    _trace_hook = func

# Synchronization classes

Lock = _allocate_lock

def RLock(*args, **kwargs):
    """Factory function that returns a new reentrant lock.

    A reentrant lock must be released by the thread that acquired it. Once a
    thread has acquired a reentrant lock, the same thread may acquire it again
    without blocking; the thread must release it once for each time it has
    acquired it.

    """
    if _CRLock is None:
        return _PyRLock(*args, **kwargs)
    return _CRLock(*args, **kwargs)

class _RLock:
    """This class implements reentrant lock objects.

    A reentrant lock must be released by the thread that acquired it. Once a
    thread has acquired a reentrant lock, the same thread may acquire it
    again without blocking; the thread must release it once for each time it
    has acquired it.

    """

    def __init__(self):
        self._block = _allocate_lock()
        self._owner = None
        self._count = 0

    def __repr__(self):
        owner = self._owner
        try:
            owner = _active[owner].name
        except KeyError:
            pass
        return "<%s %s.%s object owner=%r count=%d at %s>" % (
            "locked" if self._block.locked() else "unlocked",
            self.__class__.__module__,
            self.__class__.__qualname__,
            owner,
            self._count,
            hex(id(self))
        )

    def acquire(self, blocking=True, timeout=-1):
        """Acquire a lock, blocking or non-blocking.

        When invoked without arguments: if this thread already owns the lock,
        increment the recursion level by one, and return immediately. Otherwise,
        if another thread owns the lock, block until the lock is unlocked. Once
        the lock is unlocked (not owned by any thread), then grab ownership, set
        the recursion level to one, and return. If more than one thread is
        blocked waiting until the lock is unlocked, only one at a time will be
        able to grab ownership of the lock. There is no return value in this
        case.

        When invoked with the blocking argument set to true, do the same thing
        as when called without arguments, and return true.

        When invoked with the blocking argument set to false, do not block. If a
        call without an argument would block, return false immediately;
        otherwise, do the same thing as when called without arguments, and
        return true.

        When invoked with the floating-point timeout argument set to a positive
        value, block for at most the number of seconds specified by timeout
        and as long as the lock cannot be acquired.  Return true if the lock has
        been acquired, false if the timeout has elapsed.

        """
        me = get_ident()
        if self._owner == me:
            self._count += 1
            return 1
        rc = self._block.acquire(blocking, timeout)
        if rc:
            self._owner = me
            self._count = 1
        return rc

    __enter__ = acquire

    def release(self):
        """Release a lock, decrementing the recursion level.

        If after the decrement it is zero, reset the lock to unlocked (not owned
        by any thread), and if any other threads are blocked waiting for the
        lock to become unlocked, allow exactly one of them to proceed. If after
        the decrement the recursion level is still nonzero, the lock remains
        locked and owned by the calling thread.

        Only call this method when the calling thread owns the lock. A
        RuntimeError is raised if this method is called when the lock is
        unlocked.

        There is no return value.

        """
        if self._owner != get_ident():
            raise RuntimeError("cannot release un-acquired lock")
        self._count = count = self._count - 1
        if not count:
            self._owner = None
            self._block.release()

    def __exit__(self, t, v, tb):
        self.release()

    # Internal methods used by condition variables

    def _acquire_restore(self, state):
        self._block.acquire()
        self._count, self._owner = state

    def _release_save(self):
        if self._count == 0:
            raise RuntimeError("cannot release un-acquired lock")
        count = self._count
        self._count = 0
        owner = self._owner
        self._owner = None
        self._block.release()
        return (count, owner)

    def _is_owned(self):
        return self._owner == get_ident()

_PyRLock = _RLock


class Condition:
    """Class that implements a condition variable.

    A condition variable allows one or more threads to wait until they are
    notified by another thread.

    If the lock argument is given and not None, it must be a Lock or RLock
    object, and it is used as the underlying lock. Otherwise, a new RLock object
    is created and used as the underlying lock.

    """

    def __init__(self, lock=None):
        if lock is None:
            lock = RLock()
        self._lock = lock
        # Export the lock's acquire() and release() methods
        self.acquire = lock.acquire
        self.release = lock.release
        # If the lock defines _release_save() and/or _acquire_restore(),
        # these override the default implementations (which just call
        # release() and acquire() on the lock).  Ditto for _is_owned().
        try:
            self._release_save = lock._release_save
        except AttributeError:
            pass
        try:
            self._acquire_restore = lock._acquire_restore
        except AttributeError:
            pass
        try:
            self._is_owned = lock._is_owned
        except AttributeError:
            pass
        self._waiters = _deque()

    def __enter__(self):
        return self._lock.__enter__()

    def __exit__(self, *args):
        return self._lock.__exit__(*args)

    def __repr__(self):
        return "<Condition(%s, %d)>" % (self._lock, len(self._waiters))

    def _release_save(self):
        self._lock.release()           # No state to save

    def _acquire_restore(self, x):
        self._lock.acquire()           # Ignore saved state

    def _is_owned(self):
        # Return True if lock is owned by current_thread.
        # This method is called only if _lock doesn't have _is_owned().
        if self._lock.acquire(0):
            self._lock.release()
            return False
        else:
            return True

    def wait(self, timeout=None):
        """Wait until notified or until a timeout occurs.

        If the calling thread has not acquired the lock when this method is
        called, a RuntimeError is raised.

        This method releases the underlying lock, and then blocks until it is
        awakened by a notify() or notify_all() call for the same condition
        variable in another thread, or until the optional timeout occurs. Once
        awakened or timed out, it re-acquires the lock and returns.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof).

        When the underlying lock is an RLock, it is not released using its
        release() method, since this may not actually unlock the lock when it
        was acquired multiple times recursively. Instead, an internal interface
        of the RLock class is used, which really unlocks it even when it has
        been recursively acquired several times. Another internal interface is
        then used to restore the recursion level when the lock is reacquired.

        """
        if not self._is_owned():
            raise RuntimeError("cannot wait on un-acquired lock")
        waiter = _allocate_lock()
        waiter.acquire()
        self._waiters.append(waiter)
        saved_state = self._release_save()
        gotit = False
        try:    # restore state no matter what (e.g., KeyboardInterrupt)
            if timeout is None:
                waiter.acquire()
                gotit = True
            else:
                if timeout > 0:
                    gotit = waiter.acquire(True, timeout)
                else:
                    gotit = waiter.acquire(False)
            return gotit
        finally:
            self._acquire_restore(saved_state)
            if not gotit:
                try:
                    self._waiters.remove(waiter)
                except ValueError:
                    pass

    def wait_for(self, predicate, timeout=None):
        """Wait until a condition evaluates to True.

        predicate should be a callable whose result will be interpreted as a
        boolean value.  A timeout may be provided giving the maximum time to
        wait.

        """
        endtime = None
        waittime = timeout
        result = predicate()
        while not result:
            if waittime is not None:
                if endtime is None:
                    endtime = _time() + waittime
                else:
                    waittime = endtime - _time()
                    if waittime <= 0:
                        break
            self.wait(waittime)
            result = predicate()
        return result

    def notify(self, n=1):
        """Wake up one or more threads waiting on this condition, if any.

        If the calling thread has not acquired the lock when this method is
        called, a RuntimeError is raised.

        This method wakes up at most n of the threads waiting for the condition
        variable; it is a no-op if no threads are waiting.

        """
        if not self._is_owned():
            raise RuntimeError("cannot notify on un-acquired lock")
        all_waiters = self._waiters
        waiters_to_notify = _deque(_islice(all_waiters, n))
        if not waiters_to_notify:
            return
        for waiter in waiters_to_notify:
            waiter.release()
            try:
                all_waiters.remove(waiter)
            except ValueError:
                pass

    def notify_all(self):
        """Wake up all threads waiting on this condition.

        If the calling thread has not acquired the lock when this method
        is called, a RuntimeError is raised.

        """
        self.notify(len(self._waiters))

    notifyAll = notify_all
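As a quick illustration (not part of the stdlib source), a minimal sketch of the wait()/notify() protocol implemented above, using a hypothetical one-item work list:

```python
import threading

items = []
cond = threading.Condition()

def consumer():
    with cond:
        # wait_for() re-checks the predicate each time we are notified,
        # guarding against spurious wakeups
        cond.wait_for(lambda: len(items) > 0)
        items.pop()

t = threading.Thread(target=consumer)
t.start()

with cond:              # the lock must be held to notify
    items.append("job")
    cond.notify()

t.join()
print(items)            # -> []
```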


class Semaphore:
    """This class implements semaphore objects.

    Semaphores manage a counter representing the number of release() calls minus
    the number of acquire() calls, plus an initial value. The acquire() method
    blocks if necessary until it can return without making the counter
    negative. If not given, value defaults to 1.

    """

    # After Tim Peters' semaphore class, but not quite the same (no maximum)

    def __init__(self, value=1):
        if value < 0:
            raise ValueError("semaphore initial value must be >= 0")
        self._cond = Condition(Lock())
        self._value = value

    def acquire(self, blocking=True, timeout=None):
        """Acquire a semaphore, decrementing the internal counter by one.

        When invoked without arguments: if the internal counter is larger than
        zero on entry, decrement it by one and return immediately. If it is zero
        on entry, block, waiting until some other thread has called release() to
        make it larger than zero. This is done with proper interlocking so that
        if multiple acquire() calls are blocked, release() will wake exactly one
        of them up. The implementation may pick one at random, so the order in
        which blocked threads are awakened should not be relied on. There is no
        return value in this case.

        When invoked with blocking set to true, do the same thing as when called
        without arguments, and return true.

        When invoked with blocking set to false, do not block. If a call without
        an argument would block, return false immediately; otherwise, do the
        same thing as when called without arguments, and return true.

        When invoked with a timeout other than None, it will block for at
        most timeout seconds.  If acquire does not complete successfully in
        that interval, return false.  Return true otherwise.

        """
        if not blocking and timeout is not None:
            raise ValueError("can't specify timeout for non-blocking acquire")
        rc = False
        endtime = None
        with self._cond:
            while self._value == 0:
                if not blocking:
                    break
                if timeout is not None:
                    if endtime is None:
                        endtime = _time() + timeout
                    else:
                        timeout = endtime - _time()
                        if timeout <= 0:
                            break
                self._cond.wait(timeout)
            else:
                self._value -= 1
                rc = True
        return rc

    __enter__ = acquire

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When the counter is zero on entry and another thread is waiting for it
        to become larger than zero again, wake up that thread.

        """
        with self._cond:
            self._value += 1
            self._cond.notify()

    def __exit__(self, t, v, tb):
        self.release()
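A small sketch of the counter semantics documented above (illustrative only): with an initial value of 2, no more than two threads can hold the semaphore at once.

```python
import threading

sem = threading.Semaphore(2)    # counter starts at 2
lock = threading.Lock()
active = 0
peak = 0

def worker():
    global active, peak
    with sem:                   # acquire() decrements, release() increments
        with lock:
            active += 1
            peak = max(peak, active)
        with lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore's initial value
```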


class BoundedSemaphore(Semaphore):
    """Implements a bounded semaphore.

    A bounded semaphore checks to make sure its current value doesn't exceed its
    initial value. If it does, ValueError is raised. In most situations
    semaphores are used to guard resources with limited capacity.

    If the semaphore is released too many times it's a sign of a bug. If not
    given, value defaults to 1.

    Like regular semaphores, bounded semaphores manage a counter representing
    the number of release() calls minus the number of acquire() calls, plus an
    initial value. The acquire() method blocks if necessary until it can return
    without making the counter negative. If not given, value defaults to 1.

    """

    def __init__(self, value=1):
        Semaphore.__init__(self, value)
        self._initial_value = value

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When the counter is zero on entry and another thread is waiting for it
        to become larger than zero again, wake up that thread.

        If the number of releases exceeds the number of acquires,
        raise a ValueError.

        """
        with self._cond:
            if self._value >= self._initial_value:
                raise ValueError("Semaphore released too many times")
            self._value += 1
            self._cond.notify()
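The over-release check can be demonstrated in a couple of lines (a sketch, not part of the module source):

```python
import threading

bs = threading.BoundedSemaphore(1)
bs.acquire()
bs.release()            # counter back at its initial value of 1
try:
    bs.release()        # one release too many
    raised = False
except ValueError:
    raised = True
print(raised)           # -> True
```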


class Event:
    """Class implementing event objects.

    Events manage a flag that can be set to true with the set() method and reset
    to false with the clear() method. The wait() method blocks until the flag is
    true.  The flag is initially false.

    """

    # After Tim Peters' event class (without is_posted())

    def __init__(self):
        self._cond = Condition(Lock())
        self._flag = False

    def _reset_internal_locks(self):
        # private!  called by Thread._reset_internal_locks by _after_fork()
        self._cond.__init__(Lock())

    def is_set(self):
        """Return true if and only if the internal flag is true."""
        return self._flag

    isSet = is_set

    def set(self):
        """Set the internal flag to true.

        All threads waiting for it to become true are awakened. Threads
        that call wait() once the flag is true will not block at all.

        """
        with self._cond:
            self._flag = True
            self._cond.notify_all()

    def clear(self):
        """Reset the internal flag to false.

        Subsequently, threads calling wait() will block until set() is called to
        set the internal flag to true again.

        """
        with self._cond:
            self._flag = False

    def wait(self, timeout=None):
        """Block until the internal flag is true.

        If the internal flag is true on entry, return immediately. Otherwise,
        block until another thread calls set() to set the flag to true, or until
        the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof).

        This method returns the internal flag on exit, so it will always return
        True except if a timeout is given and the operation times out.

        """
        with self._cond:
            signaled = self._flag
            if not signaled:
                signaled = self._cond.wait(timeout)
            return signaled
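A minimal sketch of the flag protocol described above (illustrative only):

```python
import threading

ev = threading.Event()
results = []

def waiter():
    # blocks until set() is called, or 5 seconds elapse;
    # returns the flag, so True here unless it timed out
    results.append(ev.wait(timeout=5))

t = threading.Thread(target=waiter)
t.start()
ev.set()                # wake the waiter; later wait() calls return at once
t.join()
print(results)          # -> [True]
```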


# A barrier class.  Inspired in part by the pthread_barrier_* api and
# the CyclicBarrier class from Java.  See
# http://sourceware.org/pthreads-win32/manual/pthread_barrier_init.html and
# http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/
#        CyclicBarrier.html
# for information.
# We maintain two main states, 'filling' and 'draining' enabling the barrier
# to be cyclic.  Threads are not allowed into it until it has fully drained
# since the previous cycle.  In addition, a 'resetting' state exists which is
# similar to 'draining' except that threads leave with a BrokenBarrierError,
# and a 'broken' state in which all threads get the exception.
class Barrier:
    """Implements a Barrier.

    Useful for synchronizing a fixed number of threads at known synchronization
    points.  Threads block on 'wait()' and are simultaneously awoken once they have all
    made that call.

    """

    def __init__(self, parties, action=None, timeout=None):
        """Create a barrier, initialised to 'parties' threads.

        'action' is a callable which, when supplied, will be called by one of
        the threads after they have all entered the barrier and just prior to
        releasing them all. If a 'timeout' is provided, it is used as the
        default for all subsequent 'wait()' calls.

        """
        self._cond = Condition(Lock())
        self._action = action
        self._timeout = timeout
        self._parties = parties
        self._state = 0  # 0 filling, 1 draining, -1 resetting, -2 broken
        self._count = 0

    def wait(self, timeout=None):
        """Wait for the barrier.

        When the specified number of threads have started waiting, they are all
        simultaneously awoken. If an 'action' was provided for the barrier, one
        of the threads will have executed that callback prior to returning.
        Returns an individual index number from 0 to 'parties-1'.

        """
        if timeout is None:
            timeout = self._timeout
        with self._cond:
            self._enter() # Block while the barrier drains.
            index = self._count
            self._count += 1
            try:
                if index + 1 == self._parties:
                    # We release the barrier
                    self._release()
                else:
                    # We wait until someone releases us
                    self._wait(timeout)
                return index
            finally:
                self._count -= 1
                # Wake up any threads waiting for barrier to drain.
                self._exit()

    # Block until the barrier is ready for us, or raise an exception
    # if it is broken.
    def _enter(self):
        while self._state in (-1, 1):
            # It is draining or resetting, wait until done
            self._cond.wait()
        #see if the barrier is in a broken state
        if self._state < 0:
            raise BrokenBarrierError
        assert self._state == 0

    # Optionally run the 'action' and release the threads waiting
    # in the barrier.
    def _release(self):
        try:
            if self._action:
                self._action()
            # enter draining state
            self._state = 1
            self._cond.notify_all()
        except:
            #an exception during the _action handler.  Break and reraise
            self._break()
            raise

    # Wait in the barrier until we are released.  Raise an exception
    # if the barrier is reset or broken.
    def _wait(self, timeout):
        if not self._cond.wait_for(lambda : self._state != 0, timeout):
            #timed out.  Break the barrier
            self._break()
            raise BrokenBarrierError
        if self._state < 0:
            raise BrokenBarrierError
        assert self._state == 1

    # If we are the last thread to exit the barrier, signal any threads
    # waiting for the barrier to drain.
    def _exit(self):
        if self._count == 0:
            if self._state in (-1, 1):
                #resetting or draining
                self._state = 0
                self._cond.notify_all()

    def reset(self):
        """Reset the barrier to the initial state.

        Any threads currently waiting will get the BrokenBarrier exception
        raised.

        """
        with self._cond:
            if self._count > 0:
                if self._state == 0:
                    #reset the barrier, waking up threads
                    self._state = -1
                elif self._state == -2:
                    #was broken, set it to reset state
                    #which clears when the last thread exits
                    self._state = -1
            else:
                self._state = 0
            self._cond.notify_all()

    def abort(self):
        """Place the barrier into a 'broken' state.

        Useful in case of error.  Any currently waiting threads and threads
        attempting to 'wait()' will have BrokenBarrierError raised.

        """
        with self._cond:
            self._break()

    def _break(self):
        # An internal error was detected.  The barrier is set to
        # a broken state, and all parties are awakened.
        self._state = -2
        self._cond.notify_all()

    @property
    def parties(self):
        """Return the number of threads required to trip the barrier."""
        return self._parties

    @property
    def n_waiting(self):
        """Return the number of threads currently waiting at the barrier."""
        # We don't need synchronization here since this is an ephemeral result
        # anyway.  It returns the correct value in the steady state.
        if self._state == 0:
            return self._count
        return 0

    @property
    def broken(self):
        """Return True if the barrier is in a broken state."""
        return self._state == -2
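A short sketch of the Barrier above (illustrative): three parties rendezvous at wait(), the action runs once before any of them is released, and each wait() call returns a distinct index.

```python
import threading

order = []
barrier = threading.Barrier(3, action=lambda: order.append("released"))

def party():
    i = barrier.wait()      # blocks until all 3 parties have arrived
    order.append(i)

threads = [threading.Thread(target=party) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# the action ran first, then each thread recorded its index
print(order[0])             # -> released
print(sorted(order[1:]))    # -> [0, 1, 2]
```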

# exception raised by the Barrier class
class BrokenBarrierError(RuntimeError):
    pass


# Helper to generate new thread names
_counter = _count().__next__
_counter() # Consume 0 so first non-main thread has id 1.
def _newname(template="Thread-%d"):
    return template % _counter()

# Active thread administration
_active_limbo_lock = _allocate_lock()
_active = {}    # maps thread id to Thread object
_limbo = {}
_dangling = WeakSet()

# Main class for threads

class Thread:
    """A class that represents a thread of control.

    This class can be safely subclassed in a limited fashion. There are two ways
    to specify the activity: by passing a callable object to the constructor, or
    by overriding the run() method in a subclass.

    """

    _initialized = False
    # Need to store a reference to sys.exc_info for printing
    # out exceptions when a thread tries to use a global var. during interp.
    # shutdown and thus raises an exception about trying to perform some
    # operation on/with a NoneType
    _exc_info = _sys.exc_info
    # Keep sys.exc_clear too to clear the exception just before
    # allowing .join() to return.
    #XXX __exc_clear = _sys.exc_clear

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None):
        """This constructor should always be called with keyword arguments. Arguments are:

        *group* should be None; reserved for future extension when a ThreadGroup
        class is implemented.

        *target* is the callable object to be invoked by the run()
        method. Defaults to None, meaning nothing is called.

        *name* is the thread name. By default, a unique name is constructed of
        the form "Thread-N" where N is a small decimal number.

        *args* is the argument tuple for the target invocation. Defaults to ().

        *kwargs* is a dictionary of keyword arguments for the target
        invocation. Defaults to {}.

        If a subclass overrides the constructor, it must make sure to invoke
        the base class constructor (Thread.__init__()) before doing anything
        else to the thread.

        """
        assert group is None, "group argument must be None for now"
        if kwargs is None:
            kwargs = {}
        self._target = target
        self._name = str(name or _newname())
        self._args = args
        self._kwargs = kwargs
        if daemon is not None:
            self._daemonic = daemon
        else:
            self._daemonic = current_thread().daemon
        self._ident = None
        self._tstate_lock = None
        self._started = Event()
        self._is_stopped = False
        self._initialized = True
        # sys.stderr is not stored in the class like
        # sys.exc_info since it can be changed between instances
        self._stderr = _sys.stderr
        # For debugging and _after_fork()
        _dangling.add(self)

    def _reset_internal_locks(self, is_alive):
        # private!  Called by _after_fork() to reset our internal locks as
        # they may be in an invalid state leading to a deadlock or crash.
        self._started._reset_internal_locks()
        if is_alive:
            self._set_tstate_lock()
        else:
            # The thread isn't alive after fork: it doesn't have a tstate
            # anymore.
            self._is_stopped = True
            self._tstate_lock = None

    def __repr__(self):
        assert self._initialized, "Thread.__init__() was not called"
        status = "initial"
        if self._started.is_set():
            status = "started"
        self.is_alive() # easy way to get ._is_stopped set when appropriate
        if self._is_stopped:
            status = "stopped"
        if self._daemonic:
            status += " daemon"
        if self._ident is not None:
            status += " %s" % self._ident
        return "<%s(%s, %s)>" % (self.__class__.__name__, self._name, status)

    def start(self):
        """Start the thread's activity.

        It must be called at most once per thread object. It arranges for the
        object's run() method to be invoked in a separate thread of control.

        This method will raise a RuntimeError if called more than once on the
        same thread object.

        """
        if not self._initialized:
            raise RuntimeError("thread.__init__() not called")

        if self._started.is_set():
            raise RuntimeError("threads can only be started once")
        with _active_limbo_lock:
            _limbo[self] = self
        try:
            _start_new_thread(self._bootstrap, ())
        except Exception:
            with _active_limbo_lock:
                del _limbo[self]
            raise
        self._started.wait()

    def run(self):
        """Method representing the thread's activity.

        You may override this method in a subclass. The standard run() method
        invokes the callable object passed to the object's constructor as the
        target argument, if any, with sequential and keyword arguments taken
        from the args and kwargs arguments, respectively.

        """
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs

    def _bootstrap(self):
        # Wrapper around the real bootstrap code that ignores
        # exceptions during interpreter cleanup.  Those typically
        # happen when a daemon thread wakes up at an unfortunate
        # moment, finds the world around it destroyed, and raises some
        # random exception *** while trying to report the exception in
        # _bootstrap_inner() below ***.  Those random exceptions
        # don't help anybody, and they confuse users, so we suppress
        # them.  We suppress them only when it appears that the world
        # indeed has already been destroyed, so that exceptions in
        # _bootstrap_inner() during normal business hours are properly
        # reported.  Also, we only suppress them for daemonic threads;
        # if a non-daemonic encounters this, something else is wrong.
        try:
            self._bootstrap_inner()
        except:
            if self._daemonic and _sys is None:
                return
            raise

    def _set_ident(self):
        self._ident = get_ident()

    def _set_tstate_lock(self):
        """
        Set a lock object which will be released by the interpreter when
        the underlying thread state (see pystate.h) gets deleted.
        """
        self._tstate_lock = _set_sentinel()
        self._tstate_lock.acquire()

    def _bootstrap_inner(self):
        try:
            self._set_ident()
            self._set_tstate_lock()
            self._started.set()
            with _active_limbo_lock:
                _active[self._ident] = self
                del _limbo[self]

            if _trace_hook:
                _sys.settrace(_trace_hook)
            if _profile_hook:
                _sys.setprofile(_profile_hook)

            try:
                self.run()
            except SystemExit:
                pass
            except:
                # If sys.stderr is no more (most likely from interpreter
                # shutdown) use self._stderr.  Otherwise still use sys (as in
                # _sys) in case sys.stderr was redefined since the creation of
                # self.
                if _sys and _sys.stderr is not None:
                    print("Exception in thread %s:\n%s" %
                          (self.name, _format_exc()), file=_sys.stderr)
                elif self._stderr is not None:
                    # Do the best job possible w/o a huge amt. of code to
                    # approximate a traceback (code ideas from
                    # Lib/traceback.py)
                    exc_type, exc_value, exc_tb = self._exc_info()
                    try:
                        print((
                            "Exception in thread " + self.name +
                            " (most likely raised during interpreter shutdown):"), file=self._stderr)
                        print((
                            "Traceback (most recent call last):"), file=self._stderr)
                        while exc_tb:
                            print((
                                '  File "%s", line %s, in %s' %
                                (exc_tb.tb_frame.f_code.co_filename,
                                    exc_tb.tb_lineno,
                                    exc_tb.tb_frame.f_code.co_name)), file=self._stderr)
                            exc_tb = exc_tb.tb_next
                        print(("%s: %s" % (exc_type, exc_value)), file=self._stderr)
                    # Make sure that exc_tb gets deleted since it is a memory
                    # hog; deleting everything else is just for thoroughness
                    finally:
                        del exc_type, exc_value, exc_tb
            finally:
                # Prevent a race in
                # test_threading.test_no_refcycle_through_target when
                # the exception keeps the target alive past when we
                # assert that it's dead.
                #XXX self._exc_clear()
                pass
        finally:
            with _active_limbo_lock:
                try:
                    # We don't call self._delete() because it also
                    # grabs _active_limbo_lock.
                    del _active[get_ident()]
                except:
                    pass

    def _stop(self):
        # After calling ._stop(), .is_alive() returns False and .join() returns
        # immediately.  ._tstate_lock must be released before calling ._stop().
        #
        # Normal case:  C code at the end of the thread's life
        # (release_sentinel in _threadmodule.c) releases ._tstate_lock, and
        # that's detected by our ._wait_for_tstate_lock(), called by .join()
        # and .is_alive().  Any number of threads _may_ call ._stop()
        # simultaneously (for example, if multiple threads are blocked in
        # .join() calls), and they're not serialized.  That's harmless -
        # they'll just make redundant rebindings of ._is_stopped and
        # ._tstate_lock.  Obscure:  we rebind ._tstate_lock last so that the
        # "assert self._is_stopped" in ._wait_for_tstate_lock() always works
        # (the assert is executed only if ._tstate_lock is None).
        #
        # Special case:  _main_thread releases ._tstate_lock via this
        # module's _shutdown() function.
        lock = self._tstate_lock
        if lock is not None:
            assert not lock.locked()
        self._is_stopped = True
        self._tstate_lock = None

    def _delete(self):
        "Remove current thread from the dict of currently running threads."

        # Notes about running with _dummy_thread:
        #
        # Must take care to not raise an exception if _dummy_thread is being
        # used (and thus this module is being used as an instance of
        # dummy_threading).  _dummy_thread.get_ident() always returns -1 since
        # there is only one thread if _dummy_thread is being used.  Thus
        # len(_active) is always <= 1 here, and any Thread instance created
        # overwrites the (if any) thread currently registered in _active.
        #
        # An instance of _MainThread is always created by 'threading'.  This
        # gets overwritten the instant an instance of Thread is created; both
        # threads return -1 from _dummy_thread.get_ident() and thus have the
        # same key in the dict.  So when the _MainThread instance created by
        # 'threading' tries to clean itself up when atexit calls this method
        # it gets a KeyError if another Thread instance was created.
        #
        # This all means that KeyError from trying to delete something from
        # _active if dummy_threading is being used is a red herring.  But
        # since it isn't if dummy_threading is *not* being used then don't
        # hide the exception.

        try:
            with _active_limbo_lock:
                del _active[get_ident()]
                # There must not be any python code between the previous line
                # and after the lock is released.  Otherwise a tracing function
                # could try to acquire the lock again in the same thread, (in
                # current_thread()), and would block.
        except KeyError:
            if 'dummy_threading' not in _sys.modules:
                raise

    def join(self, timeout=None):
        """Wait until the thread terminates.

        This blocks the calling thread until the thread whose join() method is
        called terminates -- either normally or through an unhandled exception
        or until the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof). As join() always returns None, you must call
        isAlive() after join() to decide whether a timeout happened -- if the
        thread is still alive, the join() call timed out.

        When the timeout argument is not present or None, the operation will
        block until the thread terminates.

        A thread can be join()ed many times.

        join() raises a RuntimeError if an attempt is made to join the current
        thread as that would cause a deadlock. It is also an error to join() a
        thread before it has been started and attempts to do so raises the same
        exception.

        """
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if not self._started.is_set():
            raise RuntimeError("cannot join thread before it is started")
        if self is current_thread():
            raise RuntimeError("cannot join current thread")

        if timeout is None:
            self._wait_for_tstate_lock()
        else:
            # the behavior of a negative timeout isn't documented, but
            # historically .join(timeout=x) for x<0 has acted as if timeout=0
            self._wait_for_tstate_lock(timeout=max(timeout, 0))

    def _wait_for_tstate_lock(self, block=True, timeout=-1):
        # Issue #18808: wait for the thread state to be gone.
        # At the end of the thread's life, after all knowledge of the thread
        # is removed from C data structures, C code releases our _tstate_lock.
        # This method passes its arguments to _tstate_lock.acquire().
        # If the lock is acquired, the C code is done, and self._stop() is
        # called.  That sets ._is_stopped to True, and ._tstate_lock to None.
        lock = self._tstate_lock
        if lock is None:  # already determined that the C code is done
            assert self._is_stopped
        elif lock.acquire(block, timeout):
            lock.release()
            self._stop()

    @property
    def name(self):
        """A string used for identification purposes only.

        It has no semantics. Multiple threads may be given the same name. The
        initial name is set by the constructor.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._name

    @name.setter
    def name(self, name):
        assert self._initialized, "Thread.__init__() not called"
        self._name = str(name)

    @property
    def ident(self):
        """Thread identifier of this thread or None if it has not been started.

        This is a nonzero integer. See the thread.get_ident() function. Thread
        identifiers may be recycled when a thread exits and another thread is
        created. The identifier is available even after the thread has exited.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._ident

    def is_alive(self):
        """Return whether the thread is alive.

        This method returns True just before the run() method starts until just
        after the run() method terminates. The module function enumerate()
        returns a list of all alive threads.

        """
        assert self._initialized, "Thread.__init__() not called"
        if self._is_stopped or not self._started.is_set():
            return False
        self._wait_for_tstate_lock(False)
        return not self._is_stopped

    isAlive = is_alive

    @property
    def daemon(self):
        """A boolean value indicating whether this thread is a daemon thread.

        This must be set before start() is called, otherwise RuntimeError is
        raised. Its initial value is inherited from the creating thread; the
        main thread is not a daemon thread and therefore all threads created in
        the main thread default to daemon = False.

        The entire Python program exits when no alive non-daemon threads are
        left.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._daemonic

    @daemon.setter
    def daemon(self, daemonic):
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if self._started.is_set():
            raise RuntimeError("cannot set daemon status of active thread")
        self._daemonic = daemonic

    def isDaemon(self):
        return self.daemon

    def setDaemon(self, daemonic):
        self.daemon = daemonic

    def getName(self):
        return self.name

    def setName(self, name):
        self.name = name

# The timer class was contributed by Itamar Shtull-Trauring

class Timer(Thread):
    """Call a function after a specified number of seconds:

            t = Timer(30.0, f, args=None, kwargs=None)
            t.start()
            t.cancel()     # stop the timer's action if it's still waiting

    """

    def __init__(self, interval, function, args=None, kwargs=None):
        Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.args = args if args is not None else []
        self.kwargs = kwargs if kwargs is not None else {}
        self.finished = Event()

    def cancel(self):
        """Stop the timer if it hasn't finished yet."""
        self.finished.set()

    def run(self):
        self.finished.wait(self.interval)
        if not self.finished.is_set():
            self.function(*self.args, **self.kwargs)
        self.finished.set()

# Special thread class to represent the main thread
# This is garbage collected through an exit handler

class _MainThread(Thread):

    def __init__(self):
        Thread.__init__(self, name="MainThread", daemon=False)
        self._set_tstate_lock()
        self._started.set()
        self._set_ident()
        with _active_limbo_lock:
            _active[self._ident] = self


# Dummy thread class to represent threads not started here.
# These aren't garbage collected when they die, nor can they be waited for.
# If they invoke anything in threading.py that calls current_thread(), they
# leave an entry in the _active dict forever after.
# Their purpose is to return *something* from current_thread().
# They are marked as daemon threads so we won't wait for them
# when we exit (conform previous semantics).

class _DummyThread(Thread):

    def __init__(self):
        Thread.__init__(self, name=_newname("Dummy-%d"), daemon=True)

        self._started.set()
        self._set_ident()
        with _active_limbo_lock:
            _active[self._ident] = self

    def _stop(self):
        pass

    def join(self, timeout=None):
        assert False, "cannot join a dummy thread"


# Global API functions

def current_thread():
    """Return the current Thread object, corresponding to the caller's thread of control.

    If the caller's thread of control was not created through the threading
    module, a dummy thread object with limited functionality is returned.

    """
    try:
        return _active[get_ident()]
    except KeyError:
        return _DummyThread()

currentThread = current_thread

def active_count():
    """Return the number of Thread objects currently alive.

    The returned count is equal to the length of the list returned by
    enumerate().

    """
    with _active_limbo_lock:
        return len(_active) + len(_limbo)

activeCount = active_count

def _enumerate():
    # Same as enumerate(), but without the lock. Internal use only.
    return list(_active.values()) + list(_limbo.values())

def enumerate():
    """Return a list of all Thread objects currently alive.

    The list includes daemonic threads, dummy thread objects created by
    current_thread(), and the main thread. It excludes terminated threads and
    threads that have not yet been started.

    """
    with _active_limbo_lock:
        return list(_active.values()) + list(_limbo.values())

from _thread import stack_size

# Create the main thread object,
# and make it available for the interpreter
# (Py_Main) as threading._shutdown.

_main_thread = _MainThread()

def _shutdown():
    # Obscure:  other threads may be waiting to join _main_thread.  That's
    # dubious, but some code does it.  We can't wait for C code to release
    # the main thread's tstate_lock - that won't happen until the interpreter
    # is nearly dead.  So we release it here.  Note that just calling _stop()
    # isn't enough:  other threads may already be waiting on _tstate_lock.
    tlock = _main_thread._tstate_lock
    # The main thread isn't finished yet, so its thread state lock can't have
    # been released.
    assert tlock is not None
    assert tlock.locked()
    tlock.release()
    _main_thread._stop()
    t = _pickSomeNonDaemonThread()
    while t:
        t.join()
        t = _pickSomeNonDaemonThread()
    _main_thread._delete()

def _pickSomeNonDaemonThread():
    for t in enumerate():
        if not t.daemon and t.is_alive():
            return t
    return None

def main_thread():
    """Return the main thread object.

    In normal conditions, the main thread is the thread from which the
    Python interpreter was started.
    """
    return _main_thread

# get thread-local implementation, either from the thread
# module, or from the python fallback

try:
    from _thread import _local as local
except ImportError:
    from _threading_local import local


def _after_fork():
    # This function is called by Python/ceval.c:PyEval_ReInitThreads which
    # is called from PyOS_AfterFork.  Here we cleanup threading module state
    # that should not exist after a fork.

    # Reset _active_limbo_lock, in case we forked while the lock was held
    # by another (non-forked) thread.  http://bugs.python.org/issue874900
    global _active_limbo_lock, _main_thread
    _active_limbo_lock = _allocate_lock()

    # fork() only copied the current thread; clear references to others.
    new_active = {}
    current = current_thread()
    _main_thread = current
    with _active_limbo_lock:
        # Dangling thread instances must still have their locks reset,
        # because someone may join() them.
        threads = set(_enumerate())
        threads.update(_dangling)
        for thread in threads:
            # Any lock/condition variable may be currently locked or in an
            # invalid state, so we reinitialize them.
            if thread is current:
                # There is only one active thread. We reset the ident to
                # its new value since it can have changed.
                thread._reset_internal_locks(True)
                ident = get_ident()
                thread._ident = ident
                new_active[ident] = thread
            else:
                # All the others are already stopped.
                thread._reset_internal_locks(False)
                thread._stop()

        _limbo.clear()
        _active.clear()
        _active.update(new_active)
        assert len(_active) == 1

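The Timer class shown in the pasted source above is just a Thread that delays its target. A minimal usage sketch (the 0.1-second interval and the `results` list are ours, chosen for illustration):

```python
import threading

results = []  # collects proof that the callback ran

# schedule the callback to fire after 0.1 seconds
t = threading.Timer(0.1, lambda: results.append("fired"))
t.start()
t.join()  # Timer subclasses Thread, so join() waits for it to finish
print(results)  # -> ['fired']
```

Calling `t.cancel()` before the interval elapses would set the internal `finished` event, so `run()` would skip the callback.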
Semaphore

 

__author__ = "Narwhale"

import threading,time

def run(n):
    semaphore.acquire()
    time.sleep(1)
    print('Thread %s is running!' % n)
    semaphore.release()

if __name__ == '__main__':
    semaphore = threading.BoundedSemaphore(5)      # at most 5 threads run at the same time
    for i in range(20):
        t = threading.Thread(target=run,args=(i,))
        t.start()

while threading.active_count() != 1:
    pass
else:
    print('All threads finished!')
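The example above polls `threading.active_count()` in a busy loop. An alternative sketch of the same idea (the worker and the `done` list are hypothetical, not from the example) uses the semaphore as a context manager and `join()` instead of polling:

```python
import threading, time

semaphore = threading.BoundedSemaphore(5)  # at most 5 workers inside at once
done = []

def run(n):
    with semaphore:          # acquire on entry, release on exit, even on error
        time.sleep(0.01)
        done.append(n)

threads = [threading.Thread(target=run, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # join() replaces the busy-wait on active_count()
print(len(done))             # -> 20
```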

    Thread examples:

Producer-consumer model

__author__ = "Narwhale"
import queue,time,threading
q = queue.Queue(10)

def producer(name):
    count = 0
    while True:
        print('%s produced bun %s' % (name, count))
        q.put('bun %s' % count)
        count += 1
        time.sleep(1)

def consumer(name):
    while True:
        print('%s took %s and ate it...' % (name, q.get()))
        time.sleep(1)


A1 = threading.Thread(target=producer,args=('A1',))
A1.start()

B1 = threading.Thread(target=consumer,args=('B1',))
B1.start()
B2 = threading.Thread(target=consumer,args=('B2',))
B2.start()
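The model above runs forever. A bounded variant (a sketch; the sentinel value and the `eaten` list are our additions) uses `Queue.task_done()` and `Queue.join()` so the producer side can wait until every item has actually been consumed:

```python
import queue, threading

q = queue.Queue(10)

def producer():
    for i in range(5):
        q.put('bun %d' % i)

def consumer(eaten):
    while True:
        item = q.get()
        if item is None:     # sentinel: no more work
            q.task_done()
            break
        eaten.append(item)
        q.task_done()        # tell the queue this item is fully processed

eaten = []
c = threading.Thread(target=consumer, args=(eaten,))
c.start()
producer()
q.put(None)                  # wake the consumer and tell it to stop
q.join()                     # blocks until task_done() matched every put()
c.join()
print(eaten)                 # -> ['bun 0', 'bun 1', 'bun 2', 'bun 3', 'bun 4']
```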

Traffic light (Event)

__author__ = "Narwhale"

import threading,time

event = threading.Event()

def light():
    event.set()
    count = 0
    while True:
        if count > 5 and count < 10:
            event.clear()                       # red: make the cars wait
            print('\033[41;1mRed light is on\033[0m')
        elif count >= 10:
            event.set()                         # back to green
            count = 0
        else:
            print('\033[42;1mGreen light is on\033[0m')
        time.sleep(1)
        count += 1


def car(n):
    while True:
        if event.is_set():
            print('\033[34;1mCar %s is running!\033[0m' % n)
            time.sleep(1)
        else:
            print('Car %s stopped' % n)
            event.wait()

light_thread = threading.Thread(target=light)   # avoid shadowing the light() function
light_thread.start()
car1 = threading.Thread(target=car,args=('Tesla',))
car1.start()
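The red/green loop above runs forever. The core Event mechanics can be seen in a tiny finite sketch (the `log` list is our addition, used to record the order of events):

```python
import threading

event = threading.Event()
log = []

def worker():
    log.append('waiting')
    event.wait()             # blocks until some thread calls event.set()
    log.append('go')

t = threading.Thread(target=worker)
t.start()
event.set()                  # release the waiting worker
t.join()
print(log)                   # -> ['waiting', 'go']
```

If `set()` happens before the worker reaches `wait()`, the wait simply returns immediately, so the order in `log` is the same either way.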

    Threads can be invoked in two ways:

    Direct call

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)


if __name__ == "__main__":
    start_time = time.time()
    t1 = threading.Thread(target=func,args=(6,))
    t2 = threading.Thread(target=func,args=(9,))
    t1.start()
    t2.start()
    end_time = time.time()
    run_time = end_time-start_time
    print("\033[34;1mThreaded run time:\033[0m", run_time)


    time1 = time.time()
    func(6)
    func(9)
    time2 = time.time()
    run_time2 = time2 - time1
    print("\033[32mSerial run time:\033[0m", run_time2)
Output:
The lucky num is  6
The lucky num is  9
Threaded run time: 0.00044083595275878906
The lucky num is  6
The lucky num is  9
Serial run time: 4.002933979034424

 

    As the code shows, we create threads with threading.Thread(target=func, args=(...,)). Starting the two threads takes almost no time, but that figure only measures thread startup: the IO (the sleep) has not actually completed. The main thread does not wait for it and simply continues, while the serial version runs line by line, so its times add up.

    So the first time above covers only thread startup, not the IO wait. While the threads sleep, the main thread keeps running. Regardless, the program still waits for all threads to finish before it exits.

    Inherit-style call

import threading,time

class MyThreading(threading.Thread):
    '''A custom thread class'''
    def __init__(self, num):                      # initialize the subclass
        super(MyThreading, self).__init__()       # must call the parent constructor before anything else
        self.num = num

    def run(self):
        print("The lucky num is", self.num)
        time.sleep(2)
        print("Started via a class: when does this line run?")

if __name__ == "__main__":
    start_time1 = time.time()
    t1 = MyThreading(6)
    t2 = MyThreading(9)
    t1.start()
    t2.start()
    end_time1 = time.time()
    run_time1 = end_time1 - start_time1
    print("Thread run time:", run_time1)

    start_time2 = time.time()
    t1.run()
    t2.run()
    end_time2 = time.time()
    run_time2 = end_time2 - start_time2
    print("Serial run time:", run_time2)
Output:
The lucky num is 6
The lucky num is 9
Thread run time: 0.0004470348358154297
The lucky num is 6
Started via a class: when does this line run?
Started via a class: when does this line run?
Started via a class: when does this line run?
The lucky num is 9
Started via a class: when does this line run?
Serial run time: 4.004571914672852

    The program above writes the thread as a class inheriting from threading.Thread.

    threading.Thread source code:

class Thread:
    """A class that represents a thread of control.

    This class can be safely subclassed in a limited fashion. There are two ways
    to specify the activity: by passing a callable object to the constructor, or
    by overriding the run() method in a subclass.

    """

    _initialized = False
    # Need to store a reference to sys.exc_info for printing
    # out exceptions when a thread tries to use a global var. during interp.
    # shutdown and thus raises an exception about trying to perform some
    # operation on/with a NoneType
    _exc_info = _sys.exc_info
    # Keep sys.exc_clear too to clear the exception just before
    # allowing .join() to return.
    #XXX __exc_clear = _sys.exc_clear

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None):
        """This constructor should always be called with keyword arguments. Arguments are:

        *group* should be None; reserved for future extension when a ThreadGroup
        class is implemented.

        *target* is the callable object to be invoked by the run()
        method. Defaults to None, meaning nothing is called.

        *name* is the thread name. By default, a unique name is constructed of
        the form "Thread-N" where N is a small decimal number.

        *args* is the argument tuple for the target invocation. Defaults to ().

        *kwargs* is a dictionary of keyword arguments for the target
        invocation. Defaults to {}.

        If a subclass overrides the constructor, it must make sure to invoke
        the base class constructor (Thread.__init__()) before doing anything
        else to the thread.

        """
        assert group is None, "group argument must be None for now"
        if kwargs is None:
            kwargs = {}
        self._target = target
        self._name = str(name or _newname())
        self._args = args
        self._kwargs = kwargs
        if daemon is not None:
            self._daemonic = daemon
        else:
            self._daemonic = current_thread().daemon
        self._ident = None
        self._tstate_lock = None
        self._started = Event()
        self._is_stopped = False
        self._initialized = True
        # sys.stderr is not stored in the class like
        # sys.exc_info since it can be changed between instances
        self._stderr = _sys.stderr
        # For debugging and _after_fork()
        _dangling.add(self)

    def _reset_internal_locks(self, is_alive):
        # private!  Called by _after_fork() to reset our internal locks as
        # they may be in an invalid state leading to a deadlock or crash.
        self._started._reset_internal_locks()
        if is_alive:
            self._set_tstate_lock()
        else:
            # The thread isn't alive after fork: it doesn't have a tstate
            # anymore.
            self._is_stopped = True
            self._tstate_lock = None

    def __repr__(self):
        assert self._initialized, "Thread.__init__() was not called"
        status = "initial"
        if self._started.is_set():
            status = "started"
        self.is_alive() # easy way to get ._is_stopped set when appropriate
        if self._is_stopped:
            status = "stopped"
        if self._daemonic:
            status += " daemon"
        if self._ident is not None:
            status += " %s" % self._ident
        return "<%s(%s, %s)>" % (self.__class__.__name__, self._name, status)

    def start(self):
        """Start the thread's activity.

        It must be called at most once per thread object. It arranges for the
        object's run() method to be invoked in a separate thread of control.

        This method will raise a RuntimeError if called more than once on the
        same thread object.

        """
        if not self._initialized:
            raise RuntimeError("thread.__init__() not called")

        if self._started.is_set():
            raise RuntimeError("threads can only be started once")
        with _active_limbo_lock:
            _limbo[self] = self
        try:
            _start_new_thread(self._bootstrap, ())
        except Exception:
            with _active_limbo_lock:
                del _limbo[self]
            raise
        self._started.wait()

    def run(self):
        """Method representing the thread's activity.

        You may override this method in a subclass. The standard run() method
        invokes the callable object passed to the object's constructor as the
        target argument, if any, with sequential and keyword arguments taken
        from the args and kwargs arguments, respectively.

        """
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs

    def _bootstrap(self):
        # Wrapper around the real bootstrap code that ignores
        # exceptions during interpreter cleanup.  Those typically
        # happen when a daemon thread wakes up at an unfortunate
        # moment, finds the world around it destroyed, and raises some
        # random exception *** while trying to report the exception in
        # _bootstrap_inner() below ***.  Those random exceptions
        # don't help anybody, and they confuse users, so we suppress
        # them.  We suppress them only when it appears that the world
        # indeed has already been destroyed, so that exceptions in
        # _bootstrap_inner() during normal business hours are properly
        # reported.  Also, we only suppress them for daemonic threads;
        # if a non-daemonic encounters this, something else is wrong.
        try:
            self._bootstrap_inner()
        except:
            if self._daemonic and _sys is None:
                return
            raise

    def _set_ident(self):
        self._ident = get_ident()

    def _set_tstate_lock(self):
        """
        Set a lock object which will be released by the interpreter when
        the underlying thread state (see pystate.h) gets deleted.
        """
        self._tstate_lock = _set_sentinel()
        self._tstate_lock.acquire()

    def _bootstrap_inner(self):
        try:
            self._set_ident()
            self._set_tstate_lock()
            self._started.set()
            with _active_limbo_lock:
                _active[self._ident] = self
                del _limbo[self]

            if _trace_hook:
                _sys.settrace(_trace_hook)
            if _profile_hook:
                _sys.setprofile(_profile_hook)

            try:
                self.run()
            except SystemExit:
                pass
            except:
                # If sys.stderr is no more (most likely from interpreter
                # shutdown) use self._stderr.  Otherwise still use sys (as in
                # _sys) in case sys.stderr was redefined since the creation of
                # self.
                if _sys and _sys.stderr is not None:
                    print("Exception in thread %s:\n%s" %
                          (self.name, _format_exc()), file=_sys.stderr)
                elif self._stderr is not None:
                    # Do the best job possible w/o a huge amt. of code to
                    # approximate a traceback (code ideas from
                    # Lib/traceback.py)
                    exc_type, exc_value, exc_tb = self._exc_info()
                    try:
                        print((
                            "Exception in thread " + self.name +
                            " (most likely raised during interpreter shutdown):"), file=self._stderr)
                        print((
                            "Traceback (most recent call last):"), file=self._stderr)
                        while exc_tb:
                            print((
                                '  File "%s", line %s, in %s' %
                                (exc_tb.tb_frame.f_code.co_filename,
                                    exc_tb.tb_lineno,
                                    exc_tb.tb_frame.f_code.co_name)), file=self._stderr)
                            exc_tb = exc_tb.tb_next
                        print(("%s: %s" % (exc_type, exc_value)), file=self._stderr)
                    # Make sure that exc_tb gets deleted since it is a memory
                    # hog; deleting everything else is just for thoroughness
                    finally:
                        del exc_type, exc_value, exc_tb
            finally:
                # Prevent a race in
                # test_threading.test_no_refcycle_through_target when
                # the exception keeps the target alive past when we
                # assert that it's dead.
                #XXX self._exc_clear()
                pass
        finally:
            with _active_limbo_lock:
                try:
                    # We don't call self._delete() because it also
                    # grabs _active_limbo_lock.
                    del _active[get_ident()]
                except:
                    pass

    def _stop(self):
        # After calling ._stop(), .is_alive() returns False and .join() returns
        # immediately.  ._tstate_lock must be released before calling ._stop().
        #
        # Normal case:  C code at the end of the thread's life
        # (release_sentinel in _threadmodule.c) releases ._tstate_lock, and
        # that's detected by our ._wait_for_tstate_lock(), called by .join()
        # and .is_alive().  Any number of threads _may_ call ._stop()
        # simultaneously (for example, if multiple threads are blocked in
        # .join() calls), and they're not serialized.  That's harmless -
        # they'll just make redundant rebindings of ._is_stopped and
        # ._tstate_lock.  Obscure:  we rebind ._tstate_lock last so that the
        # "assert self._is_stopped" in ._wait_for_tstate_lock() always works
        # (the assert is executed only if ._tstate_lock is None).
        #
        # Special case:  _main_thread releases ._tstate_lock via this
        # module's _shutdown() function.
        lock = self._tstate_lock
        if lock is not None:
            assert not lock.locked()
        self._is_stopped = True
        self._tstate_lock = None

    def _delete(self):
        "Remove current thread from the dict of currently running threads."

        # Notes about running with _dummy_thread:
        #
        # Must take care to not raise an exception if _dummy_thread is being
        # used (and thus this module is being used as an instance of
        # dummy_threading).  _dummy_thread.get_ident() always returns -1 since
        # there is only one thread if _dummy_thread is being used.  Thus
        # len(_active) is always <= 1 here, and any Thread instance created
        # overwrites the (if any) thread currently registered in _active.
        #
        # An instance of _MainThread is always created by 'threading'.  This
        # gets overwritten the instant an instance of Thread is created; both
        # threads return -1 from _dummy_thread.get_ident() and thus have the
        # same key in the dict.  So when the _MainThread instance created by
        # 'threading' tries to clean itself up when atexit calls this method
        # it gets a KeyError if another Thread instance was created.
        #
        # This all means that KeyError from trying to delete something from
        # _active if dummy_threading is being used is a red herring.  But
        # since it isn't if dummy_threading is *not* being used then don't
        # hide the exception.

        try:
            with _active_limbo_lock:
                del _active[get_ident()]
                # There must not be any python code between the previous line
                # and after the lock is released.  Otherwise a tracing function
                # could try to acquire the lock again in the same thread, (in
                # current_thread()), and would block.
        except KeyError:
            if 'dummy_threading' not in _sys.modules:
                raise

    def join(self, timeout=None):
        """Wait until the thread terminates.

        This blocks the calling thread until the thread whose join() method is
        called terminates -- either normally or through an unhandled exception
        or until the optional timeout occurs.

        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof). As join() always returns None, you must call
        isAlive() after join() to decide whether a timeout happened -- if the
        thread is still alive, the join() call timed out.

        When the timeout argument is not present or None, the operation will
        block until the thread terminates.

        A thread can be join()ed many times.

        join() raises a RuntimeError if an attempt is made to join the current
        thread as that would cause a deadlock. It is also an error to join() a
        thread before it has been started and attempts to do so raises the same
        exception.

        """
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if not self._started.is_set():
            raise RuntimeError("cannot join thread before it is started")
        if self is current_thread():
            raise RuntimeError("cannot join current thread")

        if timeout is None:
            self._wait_for_tstate_lock()
        else:
            # the behavior of a negative timeout isn't documented, but
            # historically .join(timeout=x) for x<0 has acted as if timeout=0
            self._wait_for_tstate_lock(timeout=max(timeout, 0))

    def _wait_for_tstate_lock(self, block=True, timeout=-1):
        # Issue #18808: wait for the thread state to be gone.
        # At the end of the thread's life, after all knowledge of the thread
        # is removed from C data structures, C code releases our _tstate_lock.
        # This method passes its arguments to _tstate_lock.acquire().
        # If the lock is acquired, the C code is done, and self._stop() is
        # called.  That sets ._is_stopped to True, and ._tstate_lock to None.
        lock = self._tstate_lock
        if lock is None:  # already determined that the C code is done
            assert self._is_stopped
        elif lock.acquire(block, timeout):
            lock.release()
            self._stop()

    @property
    def name(self):
        """A string used for identification purposes only.

        It has no semantics. Multiple threads may be given the same name. The
        initial name is set by the constructor.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._name

    @name.setter
    def name(self, name):
        assert self._initialized, "Thread.__init__() not called"
        self._name = str(name)

    @property
    def ident(self):
        """Thread identifier of this thread or None if it has not been started.

        This is a nonzero integer. See the thread.get_ident() function. Thread
        identifiers may be recycled when a thread exits and another thread is
        created. The identifier is available even after the thread has exited.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._ident

    def is_alive(self):
        """Return whether the thread is alive.

        This method returns True just before the run() method starts until just
        after the run() method terminates. The module function enumerate()
        returns a list of all alive threads.

        """
        assert self._initialized, "Thread.__init__() not called"
        if self._is_stopped or not self._started.is_set():
            return False
        self._wait_for_tstate_lock(False)
        return not self._is_stopped

    isAlive = is_alive

    @property
    def daemon(self):
        """A boolean value indicating whether this thread is a daemon thread.

        This must be set before start() is called, otherwise RuntimeError is
        raised. Its initial value is inherited from the creating thread; the
        main thread is not a daemon thread and therefore all threads created in
        the main thread default to daemon = False.

        The entire Python program exits when no alive non-daemon threads are
        left.

        """
        assert self._initialized, "Thread.__init__() not called"
        return self._daemonic

    @daemon.setter
    def daemon(self, daemonic):
        if not self._initialized:
            raise RuntimeError("Thread.__init__() not called")
        if self._started.is_set():
            raise RuntimeError("cannot set daemon status of active thread")
        self._daemonic = daemonic

    def isDaemon(self):
        return self.daemon

    def setDaemon(self, daemonic):
        self.daemon = daemonic

    def getName(self):
        return self.name

    def setName(self, name):
        self.name = name

    You can read a thread's name with getName() and set it yourself with setName(); by default threads are named Thread-1, Thread-2, and so on.
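A quick sketch of the name accessors (the `'worker-1'` name is our example; getName/setName are simply aliases for the `name` property shown in the source above):

```python
import threading

t = threading.Thread(target=lambda: None)
print(t.getName())          # default name in the 'Thread-N' style
t.setName('worker-1')       # equivalent to: t.name = 'worker-1'
print(t.name)               # -> worker-1
```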

    Here is an example:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("Thread finished sleeping!")


if __name__ == "__main__":
    start_time = time.time()
    for i in range(10):
        t1 = threading.Thread(target=func,args=("thread_%s" %i,))
        t1.start()
    end_time = time.time()

    print("------------------all thread is running done-----------------------")
    run_time = end_time-start_time
    print("\033[34;1mProgram run time:\033[0m", run_time)

    The output of the code above:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
------------------all thread is running done-----------------------
Program run time: 0.002081155776977539
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!

    Why is the run time above only about 0.002 seconds instead of 2 seconds? A closer look:

    A program always has at least one thread: the program itself is the main thread. The main thread starts the child threads, and they are all independent and run in parallel; the main thread keeps executing its own code while each child thread runs on its own.

    Next, we collect the threads in a list so we can wait for each of them to finish:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("Thread finished sleeping!")


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.start()
        lists.append(t)
    for w in lists:
        w.join()                                # join() waits for this thread to finish; looping joins every thread

    end_time = time.time()

    print("------------------all thread is running done-----------------------")
    run_time = end_time-start_time
    print("\033[34;1mProgram run time:\033[0m", run_time)
Output:
The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
Thread finished sleeping!
------------------all thread is running done-----------------------
Program run time: 2.0065605640411377

    In the program above, each thread is appended to a list right after it starts; we then iterate over the list and join() each thread, so the code below only runs after all threads have finished.

    The total time for all threads is about 2.0066 seconds. Because each thread runs independently after start(), the 2-second sleeps overlap instead of adding up as they would in a serial program.

    join() explained: "Wait until the thread terminates."
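As the join() docstring in the source above notes, join() always returns None, so after a join() with a timeout you must check is_alive() to find out whether the timeout expired. A small sketch (the 0.5 s sleep and 0.1 s timeout are illustrative values):

```python
import threading, time

t = threading.Thread(target=time.sleep, args=(0.5,))
t.start()
t.join(timeout=0.1)         # returns None whether or not the thread finished
timed_out = t.is_alive()    # still alive -> the join() timed out
t.join()                    # now wait for it to really finish
print(timed_out, t.is_alive())   # -> True False
```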

    We started 10 threads above; is the first of them the main thread? No. The main thread is the program itself: when the program starts it executes top to bottom as a thread of its own, and that thread is the main thread. Let's verify:

 

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("Thread finished sleeping! Which thread?", threading.current_thread())


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.start()
        lists.append(t)
    print("\033[31mRunning thread count: %s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                # join() waits for this thread to finish; looping joins every thread

    end_time = time.time()

    print("------------------all thread is running done-----------------------",threading.current_thread())
    print("当前运行的线程数:",threading.active_count())
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)

 

    上面程序中,我们加入了验证当前线程是否是主线程,在函数和主程序里面我们都加入了验证,并且在线程未结束和结束后加入了统计线程运行的个数,程序运行结果如下:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
运行的线程数:11
线程休眠了!,什么线程? <Thread(Thread-2, started 140013432059648)>
线程休眠了!,什么线程? <Thread(Thread-1, started 140013440452352)>
线程休眠了!,什么线程? <Thread(Thread-3, started 140013423666944)>
线程休眠了!,什么线程? <Thread(Thread-4, started 140013415274240)>
线程休眠了!,什么线程? <Thread(Thread-10, started 140013022988032)>
线程休眠了!,什么线程? <Thread(Thread-7, started 140013048166144)>
线程休眠了!,什么线程? <Thread(Thread-5, started 140013406881536)>
线程休眠了!,什么线程? <Thread(Thread-6, started 140013398488832)>
线程休眠了!,什么线程? <Thread(Thread-8, started 140013039773440)>
线程休眠了!,什么线程? <Thread(Thread-9, started 140013031380736)>
------------------all thread is running done----------------------- <_MainThread(MainThread, started 140013466183424)>
当前运行的线程数: 1
程序运行时间: 2.0047178268432617

    从上面程序的运行结果可以看出:10 个线程启动后,程序里一共有 11 个线程在运行,函数里打印出来的都是普通的 Thread 对象;而所有子线程执行完毕之后,剩下在运行的才是主线程 <_MainThread>。由此可以看出,程序本身就是主线程:启动程序,就开启了这个线程;启动的子线程执行结束后会自动退出、被回收(笔者在 Windows 上观察到的表现略有区别,线程结束后仍显示为激活状态)。

    threading.current_thread() 返回当前正在执行的线程对象(可以借此判断当前线程是否是主线程),threading.active_count() 统计当前存活线程的个数。

    守护线程:主线程(以及所有非守护线程)结束之后,守护线程会被直接终止,不管它是否执行完毕,常用来做后台辅助工作、帮忙管理资源。

    我们知道,如果没有 join(),主线程会一直往下执行,不等其他线程;但程序退出前仍会等待所有非守护线程结束。把线程设置为守护线程后,主程序就不再管守护线程是否执行完毕,只要主线程和其他非守护线程执行完毕,程序即退出。

    下面我们把线程设置为守护线程,如下:

import threading,time

def func(num):
    print("The lucky num is ",num)
    time.sleep(2)
    print("线程休眠了!,什么线程?",threading.current_thread())


if __name__ == "__main__":
    start_time = time.time()
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        t.setDaemon(True)    # Daemon:守护线程;必须在 start() 之前把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    print("当前执行线程:%s" %threading.current_thread())
    # for w in lists:
    #     w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕

    end_time = time.time()

    print("------------------all thread is running done-----------------------",threading.current_thread())
    print("当前运行的线程数:",threading.active_count())
    run_time = end_time-start_time
    print("\033[34;1m程序运行时间:\033[0m",run_time)

    上面程序中,我们启动了10个线程,并将其设置为守护线程,setDaemon(True),下面我们来看看程序的执行情况:

The lucky num is  thread_0
The lucky num is  thread_1
The lucky num is  thread_2
The lucky num is  thread_3
The lucky num is  thread_4
The lucky num is  thread_5
The lucky num is  thread_6
The lucky num is  thread_7
The lucky num is  thread_8
The lucky num is  thread_9
运行的线程数:11
当前执行线程:<_MainThread(MainThread, started 140558033020672)>
------------------all thread is running done----------------------- <_MainThread(MainThread, started 140558033020672)>
当前运行的线程数: 11
程序运行时间: 0.0032095909118652344

    从程序的执行结果可以看出:把启动的线程设置为守护线程之后,线程一进来就遇到 sleep 这种等待,守护线程还在等待时主程序已经执行完毕;程序结束时不等守护线程,直接把它们一起关闭,所以那 10 条"线程休眠了"始终没有打印出来。如果守护线程在主程序结束之前就执行完毕,则会正常打印结果;否则主线程结束,守护线程一起被关闭。

    setDaemon(True):把线程设置为守护线程,必须在 t.start() 启动线程之前调用。
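    补充一点:除了 setDaemon(True),也可以直接给线程对象的 daemon 属性赋值,两种写法等价(较新的 Python 版本更推荐属性写法);同样必须在 start() 之前设置,线程启动之后再改会抛出 RuntimeError。一个小示例:

```python
import threading,time

def worker():
    time.sleep(10)                 # 模拟一个长时间运行的后台任务

t = threading.Thread(target=worker)
t.daemon = True                    # 等价于 t.setDaemon(True),必须在 start() 之前
t.start()
print("是否守护线程:", t.daemon)    # True
# 主线程结束时,这个守护线程会被直接终止,程序不会等它睡满 10 秒
```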

    GIL(全局解释器锁):四核机器本可以同时做 4 件事,四核 CPU 同一时间真真正正有四件事情在执行;但在 CPython 中,无论 4 核还是 8 核,同一时间真正执行 Python 字节码的线程都只有一个,等同于单核串行,这是 CPython 解释器实现上的历史包袱。解释器底层调用的是 C 语言的线程接口,为了保护解释器内部的数据结构,同一时间只允许一个线程持有解释器、读写数据,其他线程只能等待。
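    GIL 并不是被一个线程一直占着:解释器每隔一个"切换间隔"就强制当前线程释放 GIL,让别的线程有机会运行。这个间隔可以通过 sys 模块查看和修改(默认值通常是 0.005 秒),一个小示意:

```python
import sys

print("当前 GIL 切换间隔:", sys.getswitchinterval())   # 默认一般是 0.005 秒
sys.setswitchinterval(0.001)                           # 调小间隔,线程切换更频繁
print("修改后的切换间隔:", sys.getswitchinterval())
sys.setswitchinterval(0.005)                           # 恢复默认值
```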

    线程锁(互斥锁Mutex)

    一个进程下可以启动多个线程,多个线程共享父进程的内存空间,也就意味着每个线程可以访问同一份数据,此时,如果2个线程同时要修改同一份数据,会出现什么状况?

    正常来讲,这个num结果应该是0,但在python2.7上多运行几次,会发现,最后打印出来的num结果不总是0,为什么每次运行结果不一样呢?哈哈,很简单,假设您有A,B两个线程,此时都要对num进行减1操作,由于2个线程是并发同时运行的,所以2个线程很有可能同时拿走了num=100这个初始变量交给CPU去运算,当A线程去处理完结果是99,但此时B线程运算完的结果也是99,两个线程同时CPU运算的结果赋值给num变量后,结果就都是99。那么怎么办呢?很简单,每个线程在要修改公共数据时,为了避免自己在还没改完的时候别人也来修改此数据,可以给这个数据加一把锁,这样其他线程想修改此数据时就必须等待您修改完毕并把锁释放之后才能再访问此数据。

    注:这段演示在 Python 3.x 上往往得到正确结果,并不是解释器自动加了锁,而是新版解释器的线程切换策略让这种竞争更难复现;竞态条件依然存在,只是出现的概率变低了。
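    num += 1 之所以不安全,是因为它在字节码层面不是一条指令,而是"读取—加一—写回"多步,线程随时可能在中间被切换出去。可以用标准库的 dis 模块看一下(不同 Python 版本的指令名略有差异):

```python
import dis

num = 0

def func():
    global num
    num += 1          # 读取 num、加 1、写回,对应多条独立的字节码指令

dis.dis(func)         # 打印字节码,能看到 LOAD…/…ADD…/STORE… 多条指令
```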

 

    线程之间共享进程的数据,是可以互相影响的,下面来看一个例子,让所有的线程修改同一份数据,如下:

 

import threading,time

def func(n):
    global num
    time.sleep(0.8)                            # sleep() 不占用 CPU,此时 CPU 会去执行其他线程
    num += 1                                   # 所有的线程共同修改 num 数据

if __name__ == "__main__":
    num = 0
    lists = []
    for i in range(1000):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        # t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                # join() 等待该线程执行完毕;遍历列表,等所有线程都结束后再继续

    print("------------------all thread is running done-----------------------")
    print("当前运行的线程数:",threading.active_count())

    print("num:",num)                           #所有的线程共同修改一个数据

 

    上面程序中,所有线程都会操作num,让num数量加1,正常结果就是1000,运行结果如下:

运行的线程数:1001
------------------all thread is running done-----------------------
当前运行的线程数: 1
num: 1000

    运行结果也是 1000。但在早期版本(如 Python 2.7)中,经常会出现结果不是 1000 而是 999 等接近的数,有些系统上总会出现,在 Python 3 中则很难复现,为什么会出现这种情况呢?

    原因在于:解释器同一时刻只放行一个线程执行,线程要先申请到解释器锁(GIL);线程的执行时间片到了,即使 num += 1 还没做完,也要释放 GIL。于是一个线程读到了旧值、还没来得及写回就被切换出去,另一个线程拿到的仍是未修改的旧值,两个线程各自加 1 后写回同一个结果,就丢失了一次累加。


 

    如何解决这个问题呢?手动加锁。GIL 是解释器自己加和释放的,保护的是解释器内部状态,并不保证我们的 num += 1 原子执行;我们要自己在程序里加锁、释放锁,让线程执行这段计算时不会因为 GIL 被释放而中途被打断,线程把这一步完整做完再释放锁,其他线程才能进来。如下:

 

import threading,time

def func(n):
    lock.acquire()                             # 加锁,保证下面的修改由本线程完整执行
    global num
    # time.sleep(0)                            #sleep()是不占用CPU的CPU会执行其他的
    num += 1                                   # 所有的线程共同修改 num 数据
    lock.release()

if __name__ == "__main__":
    lock = threading.Lock()                    #声明一个锁的变量
    num = 0
    lists = []
    for i in range(10):
        t = threading.Thread(target=func,args=("thread_%s" %i,))
        # t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
        t.start()
        lists.append(t)
    print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
    for w in lists:
        w.join()                                # join() 等待该线程执行完毕;遍历列表,等所有线程都结束后再继续

    print("------------------all thread is running done-----------------------")
    print("当前运行的线程数:",threading.active_count())

    print("num:",num)                           #所有的线程共同修改一个数据

 

     上面程序中,我们首先声明了一把锁 lock = threading.Lock(),然后在修改数据前加锁 lock.acquire(),改完后释放 lock.release()。要注意:加锁的代码段应尽量短,因为释放后别的线程才能进入,被加锁的部分等于变成了串行执行;锁内不要放 IO 等耗时操作,否则整个程序会变得很慢。加锁牺牲了一些效率,但保证了数据的准确性:锁要等本次线程把这一步执行完毕才释放,释放之后下一个线程才能进来。
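    加锁/释放还可以用 with 语句来写:Lock 支持上下文管理协议,进入 with 块时自动 acquire(),退出时自动 release(),即使中间抛了异常也保证释放,比手动成对调用更不容易漏:

```python
import threading

num = 0
lock = threading.Lock()

def func(n):
    global num
    with lock:                  # 进入时自动 lock.acquire(),退出时自动 lock.release()
        num += 1

threads = []
for i in range(100):
    t = threading.Thread(target=func, args=("thread_%s" % i,))
    t.start()
    threads.append(t)
for t in threads:
    t.join()
print("num:", num)              # 100
```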

    上面程序中,程序本身执行的时候,GIL LOCK会在系统申请锁,我们自己给程序也加了锁。

    递归锁:如果锁发生嵌套(一把锁里面又去 acquire 同一把锁),普通 Lock 会把程序锁死(死锁),这时要使用递归锁。先看一个会出问题的程序,如下:

import threading
'''一个锁嵌套导致死锁的例子(用普通 Lock)'''

def run1(num):
    lock.acquire()
    num += 1
    lock.release()
    return num

def run2(num):
    lock.acquire()
    num += 2
    lock.release()
    return num

def run3(x,y):
    lock.acquire()
    """执行run1"""
    res1 = run1(x)                                         # 调用 run1,run1 内部又 acquire 了同一把锁(嵌套在 run3 的锁里)
    '''执行run2'''
    res2 = run2(y)                                         # 调用 run2,同样嵌套加锁,与 run1 平级,没有上下级关系
    lock.release()
    print("res1:",res1,"res2:",res2)

if __name__ == "__main__":
    lock = threading.Lock()
    for i in range(10):
        t = threading.Thread(target=run3,args=(1,1,))       # 每个线程执行 run3,run3 内部会嵌套加锁
        t.start()
    while threading.active_count() != 1:                    #判断活跃线程个数,当其他线程都执行完毕,只剩主线程时,就是1
        print("\033[31m活跃的线程个数:%s\033[0m" %threading.active_count())
    else:
        print("All the threading task done!!!")

    上面,我们写了三个函数,run3 中调用 run1 和 run2;run3 里面加锁,run1 和 run2 里面也各自加了同一把锁,它们嵌套在 run3 的锁之内,run1 和 run2 彼此平级,不存在上下级关系。现在我们来执行程序,看是什么样的结果,如下:

活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
活跃的线程个数:11
......

    从上面执行结果可以看出,启动的 10 个线程一直卡着没有执行完:线程在 run3 里 acquire 了锁之后,进入 run1 再次 acquire 同一把普通 Lock,把自己锁死了。如何解决呢?要使用递归锁。何为递归锁?它会记录持有锁的线程并给重复加锁计数,相当于给自己加上标记。

import threading
'''写一个递归锁'''

def run1():
    lock.acquire()     #加锁
    global num1
    num1 += 1
    lock.release()
    return num1

def run2():
    '''加锁'''
    lock.acquire()
    global num2
    num2 += 2
    lock.release()
    return num2

def run3():
    lock.acquire()
    res1 = run1()
    '''执行第二个调用'''
    res2 = run2()
    lock.release()
    print(res1,res2)

if __name__ == "__main__":
    num1,num2 =1,2
    lock = threading.RLock()
    for i in range(10):
        t = threading.Thread(target=run3)
        t.start()

while threading.active_count() != 1:
    print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
else:
    print("All the thread has task done!!!!")
    print(num1,num2)

     上面代码中,我们进行了修改,把 threading.Lock() 换成了 threading.RLock()(R 即 recursion,递归/可重入),同一线程重复 acquire 不会阻塞,锁的获取和释放有了明确的出入口,这样就解决了问题,如下:

2 4
3 6
4 8
5 10
6 12
7 14
8 16
9 18
10 20
11 22
当前活跃的线程个数:2
All the thread has task done!!!!
11 22

    上面程序中,结果能够正确运行,嵌套加锁也没有出错,就是因为使用了递归锁 RLock()。顺带也复习了全局变量的用法:在函数中修改全局变量,要先用 global 声明,再赋值修改即可。
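    RLock 与普通 Lock 的关键区别在于"可重入":同一个线程可以对 RLock 重复 acquire 多次,内部用计数器记录次数,release 同样多次后锁才真正释放;换成普通 Lock,同一线程第二次 acquire 就会把自己卡死。一个最小对比:

```python
import threading

rlock = threading.RLock()
rlock.acquire()                         # 第一次加锁,计数 1
rlock.acquire()                         # 同一线程再次加锁,计数 2,不会阻塞
print("RLock 两次 acquire 都成功了")
rlock.release()                         # 计数减回 1
rlock.release()                         # 计数归零,锁真正释放

lock = threading.Lock()
lock.acquire()
# 普通 Lock 再次 acquire() 会永远阻塞;带 timeout 试一下,拿不到会返回 False
print("Lock 第二次 acquire:", lock.acquire(timeout=0.1))   # False
lock.release()
```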

    Semaphore(信号量)

    互斥锁 同时只允许一个线程更改数据,而Semaphore是同时允许一定数量的线程更改数据 ,比如厕所有3个坑,那最多只允许3个人上厕所,后面的人只能等里面有人出来了才能再进去。

    信号量:控制同一时间执行的线程数量。我们可以启动很多线程,但规定同一时间只允许其中几个执行,当有线程执行完毕之后,补充新的线程进去,直至所有线程执行完毕。

import threading,time
'''信号量(Semaphore)示例'''

def run1():
    global num1
    num1 += 1
    return num1

def run2():
    global num2
    num2 += 2
    return num2

def run3():
    semaphore.acquire()
    res1 = run1()
    '''执行第二个调用'''
    res2 = run2()
    semaphore.release()
    time.sleep(2)
    print(res1,res2)

if __name__ == "__main__":
    num1,num2 =1,2
    lock = threading.RLock()
    semaphore = threading.BoundedSemaphore(5)
    for i in range(10):
        t = threading.Thread(target=run3)
        t.start()

while threading.active_count() != 1:
    print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
else:
    print("All the thread has task done!!!!")
    print(num1,num2)

    上面程序使用了信号量,即同一时间只允许 5 个线程执行,虽然启动了 10 个线程。Bounded:有上界的;Semaphore:信号量;BoundedSemaphore:带上界的信号量,参数就是同一时间允许运行的线程数。上面程序的运行结果如下:

当前活跃的线程个数:11
当前活跃的线程个数:11
......
3 6
当前活跃的线程个数:11
......
4 8
当前活跃的线程个数:9
......
6 12
5 10
7 14
2 4
当前活跃的线程个数:5
8 16
当前活跃的线程个数:4
......
11 22
当前活跃的线程个数:3
......
10 20
当前活跃的线程个数:2
......
9 18
All the thread has task done!!!!
11 22

    从结果可以看出,执行是分批次执行的,同一时间只会有5个线程同时执行,当有线程执行完毕,会补充新的线程进来。
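    "同一时间最多只有 N 个线程在临界区里"这个结论,可以用一个受锁保护的计数器验证一下(一个小验证脚本,N 取 3):

```python
import threading,time

semaphore = threading.BoundedSemaphore(3)     # 同一时间最多允许 3 个线程进入
counter_lock = threading.Lock()
current = 0                                   # 当前正处于临界区内的线程数
peak = 0                                      # 观察到的最大并发数

def worker():
    global current, peak
    with semaphore:                           # 信号量同样支持 with 语句
        with counter_lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.1)                       # 停留一会,制造并发重叠
        with counter_lock:
            current -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("峰值并发数:", peak)                     # 不会超过 3
```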

    Events(事件)

    An event is a simple synchronization object:事件是一个简单的同步对象;

    the event represents an internal flag, and threads can wait for the flag to be set, or set or clear the flag themselves.(该事件代表一个内部标志,线程可以等待标志被设置,也可以自己设置或清除标志。)

    event = threading.Event()   # 声明一个事件(Event)对象

    event.wait()                #一个客户端线程可以等待标志被设置(a client thread can wait for the flag to be set),检测标志位

    event.set()                 #服务器线程可以设置或重置它(a server thread can set or reset it)

 

    event.clear()               # 清除标志位

    If the flag is set, the wait method doesn’t do anything.(如果设置了标志,则等待方法不执行任何操作。)

    If the flag is cleared, wait will block until it becomes set again.(如果标志位已清除,wait 将阻塞,直到标志再次被设置。)

    Any number of threads may wait for the same event.(任何数量的线程可以等待同一事件)
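    补充:wait() 也可以带 timeout 参数,返回值反映标志状态,标志被 set 返回 True,超时仍未 set 返回 False。一个最小的两线程交互示例:

```python
import threading,time

event = threading.Event()
results = []

def waiter():
    ok = event.wait(timeout=5)          # 阻塞等待标志被 set,最多等 5 秒
    results.append(ok)

t = threading.Thread(target=waiter)
t.start()
print("set 之前 is_set():", event.is_set())    # False
event.set()                             # 设置标志,唤醒所有在 wait() 的线程
t.join()
print("wait() 的返回值:", results[0])           # True
event.clear()                           # 清除标志,之后的 wait() 会重新阻塞
print("clear 之后 wait(0.1):", event.wait(timeout=0.1))   # False,超时返回
```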

    下面来看一个红绿灯的程序,可以转换红绿灯以便车辆通行,当红灯的时候,车的线程等待,当绿灯的时候,车辆通行,就是两个线程交互的情况,使用的是事件(event),如下:

 

import threading,time

def traffic_lights():
    counter = 0
    while True:
        if counter < 30:
            print("33[42m即将转为绿灯,准备通行!!!33[0m")
            event.set()                          #一分钟为一个轮回,30秒以内为绿灯
            print("33[32m绿灯,通行......33[0m")
        elif counter >= 30 and counter <= 60:
            print("33[41m即将转为红灯,请等待!!!33[0m")
            event.clear()                        # 清除标志,转为红灯
            print("33[31m红灯中,禁止通行......33[0m")
        elif counter > 60:
            counter = 0                          #超过60秒重新计数,重新下一次循环
        counter += 1
        time.sleep(1)                            #一秒一秒的运行

def car(name):
    '''定义车的线程,汽车就检测是否有红绿灯,通行和等待'''
    while True:
        if event.is_set():                       #存在标识位,说明是绿灯
            '''检测,如果存在标志位,说明是绿灯中,车可以通行'''
            print("[%s] is running!!!" %name)
        else:
            '''标识位不存在,说明是红灯过程中'''
            print("[%s] is waitting!!!" %name)
        time.sleep(1)

if __name__ == "__main__":
    try:
        event = threading.Event()
        lighter = threading.Thread(target=traffic_lights)
        lighter.start()
        '''启动多个车的线程'''
        for i in range(1):
            my_car = threading.Thread(target=car,args=("tesla",))
            my_car.start()
    except KeyboardInterrupt as e:
        print("线程断开了!!!")

    except Exception as e:
        print("线程断开了!!!")

 

    上面程序执行如下:

即将转为绿灯,准备通行!!!
绿灯,通行......
[tesla] is running!!!
即将转为绿灯,准备通行!!!
绿灯,通行......
[tesla] is running!!!
......
即将转为红灯,请等待!!!
红灯中,禁止通行......
[tesla] is waitting!!!
即将转为红灯,请等待!!!
红灯中,禁止通行......
[tesla] is waitting!!!
......

    上面,我们定义了两个线程,并且实现了交互,使用的是事件:event.set() 设置事件标志,代表可以通行;event.clear() 清除标志,代表等待,只有当标志再次被设置,才会通行。

import threading,time

def traffic_lights():
    '''设置红绿灯,会显示事件,以及由绿——黄——红、红———黄——绿的转换'''
    global counter                                                           #计时器
    counter = 0
    while True:
        if counter < 40:                                                     #绿灯通行中
            event.set()
            '''绿灯中,可以通行'''
            print("\033[42mThe light is on green light,running!!!\033[0m")
            print("剩余通行时间:%s" %(40-counter))
        elif counter >= 40 and counter <= 43:
            event.clear()
            '''黄灯中,是由绿灯转为红灯的'''
            print("Yellow light is on,waitting!!!即将转为红灯!")
        elif counter > 43 and counter <= 63:
            '''红灯,由黄灯转换为红灯'''
            print("\033[41mThe red light is on!!! Waitting\033[0m")
            print("剩余红灯时间:%s" %(63-counter))
        elif counter > 63 and counter <= 66:
            '''由红灯转换为红灯,即将转为绿灯'''
            print("The yellow light is on,Waitting!!!即将转为绿灯!!")
        elif counter > 66:
            counter = 0
        counter += 1
        time.sleep(1)

def go_through(name):
    '''通行线程,根据上面红绿灯判断是否通行'''
    while True:
        if event.is_set():
            """绿灯,可以通行"""
            print("[%s] is running!!!" %name)
        else:
            print("%s is waitting!!!" %name)
        time.sleep(1)

if __name__ == "__main__":
    event = threading.Event()
    lights = threading.Thread(target=traffic_lights)
    lights.start()

    car = threading.Thread(target=go_through,args=("tesla",))
    car.start()

    上面程序中,我们实现了剩余时间提示,跟现实世界的红绿灯很相似,并且由绿--黄--红至红--黄--绿,实现来回的转换,如下所示:

The light is on green light,running!!!
剩余通行时间:40
[tesla] is running!!!
The light is on green light,running!!!
剩余通行时间:39
[tesla] is running!!!
......
The light is on green light,running!!!
剩余通行时间:1
[tesla] is running!!!
Yellow light is on,waitting!!!即将转为红灯!
tesla is waitting!!!
......
The red light is on!!! Waitting
剩余红灯时间:19
tesla is waitting!!!
......
The red light is on!!! Waitting
剩余红灯时间:1
The red light is on!!! Waitting
剩余红灯时间:0
tesla is waitting!!!
The yellow light is on,Waitting!!!即将转为绿灯!!
tesla is waitting!!!
......
The light is on green light,running!!!
剩余通行时间:39
[tesla] is running!!!

    上面程序中,我们实现了红绿灯的交替,即事件标志的设置与清除;车辆线程根据标志状态判断:标志被 set 时是绿灯可以通行,其余时候都是等待。
