The previous post gave a brief introduction to ManimML along with some basic starter code; this post continues with more detail on ManimML and its various features. If this is the first post of mine you're seeing and you'd like a general overview of ManimML first, check out the basics in my previous post here: https://www.guyuehome.com/43764
1. Setup
In Manim, every visualization and animation belongs to a scene. You can create a scene by extending the Scene class, or the ThreeDScene class if your animation contains 3D content (as in the example below). Add the following code to a Python module named example.py:
from manim import *
# Import modules here

class BasicScene(ThreeDScene):
    def construct(self):
        # Your code goes here
        text = Text("Your first scene!")
        self.add(text)
To render the scene, run the following from the command line:
$ manim -pql example.py
This produces a low-quality render (the "l" in -pql); for a high-quality render use -pqh instead. (Note: for the rest of this article, each code snippet should be pasted into the body of the construct method.)
2. A Simple Feedforward Network
With ManimML we can easily visualize a feedforward neural network:
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer

nn = NeuralNetwork([
    FeedForwardLayer(num_nodes=3),
    FeedForwardLayer(num_nodes=5),
    FeedForwardLayer(num_nodes=3)
])
self.add(nn)
In the code above we create a NeuralNetwork object and pass it a list of layers, specifying the number of nodes for each feedforward layer. ManimML automatically stitches the layers together into a single network. We call self.add(nn) inside the body of the scene's construct method to add the network to the scene. Most ManimML neural-network objects and functions can be imported directly from manim_ml.neural_network. We can now render a still frame of the scene with:
$ manim -pql example.py
3. Forward Pass Animation
We can automatically render a forward pass of the neural network by creating the animation with neural_network.make_forward_pass_animation and playing it in the scene with self.play(animation).
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer

# Make the neural network
nn = NeuralNetwork([
    FeedForwardLayer(num_nodes=3),
    FeedForwardLayer(num_nodes=5),
    FeedForwardLayer(num_nodes=3)
])
self.add(nn)
# Make the animation
forward_pass_animation = nn.make_forward_pass_animation()
# Play the animation
self.play(forward_pass_animation)
Now we can render it with:
$ manim -pql example.py
(The output here should really be a GIF, but because of image-size limits only a still image is shown; do try it yourself to see the animation!)
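For readers curious what the animated forward pass actually computes, here is a minimal numerical sketch in plain NumPy. The weights below are random placeholders I made up for illustration, and ManimML itself only draws the animation; it does not run the network:

```python
import numpy as np

# Sketch of a 3 -> 5 -> 3 forward pass, matching the layer sizes above.
# Weights are random placeholders, not anything ManimML computes.
rng = np.random.default_rng(0)

def forward(x, sizes=(3, 5, 3)):
    """Propagate x through fully connected layers with a tanh nonlinearity."""
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(size=(n_out, n_in))  # placeholder weight matrix
        b = np.zeros(n_out)                 # placeholder bias
        x = np.tanh(W @ x + b)
    return x

out = forward(np.ones(3))
print(out.shape)  # (3,)
```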
4. Convolutional Neural Networks
ManimML supports visualizing convolutional neural networks. You specify the number of feature maps, the feature map size, and the filter size, as in Convolutional2DLayer(num_feature_maps, feature_map_size, filter_size); several other style parameters can also be changed. Below is a multi-layer convolutional neural network. If you're not yet familiar with CNNs, CNN Explainer is an excellent interactive tool for learning about them. (Note: when specifying a CNN, the feature map sizes and filter sizes of adjacent layers must be compatible.)
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer

nn = NeuralNetwork([
        Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),  # Note the default stride is 1.
        Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
        Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
        FeedForwardLayer(3),
        FeedForwardLayer(3),
    ],
    layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make and play a forward pass animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
Now we can run from the command line:
$ manim -pql example.py
And that's a convolutional neural network! (As with the previous example, the output is really a GIF animation.)
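The note above about adjacent layer sizes matching follows from the standard output-size formula for a "valid" convolution. A quick sanity check in plain Python (this is ordinary arithmetic, not ManimML code) for the 7 → 5 → 3 feature maps in the network above:

```python
def conv_output_size(in_size, filter_size, stride=1):
    """Side length of the output feature map for a 'valid' 2D convolution."""
    return (in_size - filter_size) // stride + 1

# Mirror the layer sizes used above: 7x7 -> 5x5 -> 3x3 with 3x3 filters, stride 1
print(conv_output_size(7, 3))  # 5
print(conv_output_size(5, 3))  # 3
```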
5. A Convolutional Neural Network with an Image
We can also animate an image being fed into the convolutional neural network by placing an ImageLayer before the first convolutional layer:
import numpy as np
from PIL import Image
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer, ImageLayer

image = Image.open("digit.jpeg")  # You will need to download an image of a digit.
numpy_image = np.asarray(image)

nn = NeuralNetwork([
        ImageLayer(numpy_image, height=1.5),
        Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),  # Note the default stride is 1.
        Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
        Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
        FeedForwardLayer(3),
        FeedForwardLayer(3),
    ],
    layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make and play a forward pass animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
Then run from the command line (this one is a GIF too!):
$ manim -pql example.py
6. Max Pooling
One of the most common operations in deep learning is 2D max pooling, which reduces the size of convolutional feature maps. We can visualize max pooling with MaxPooling2DLayer.
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, MaxPooling2DLayer

# Make the neural network
nn = NeuralNetwork([
        Convolutional2DLayer(1, 8),
        Convolutional2DLayer(3, 6, 3),
        MaxPooling2DLayer(kernel_size=2),
        Convolutional2DLayer(5, 2, 2),
    ],
    layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play the forward pass animation
forward_pass = nn.make_forward_pass_animation()
self.wait(1)
self.play(forward_pass)
Then run from the command line: (another GIF...)
$ manim -pql example.py
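For reference, here is what the pooling operation being animated does numerically, as a small NumPy sketch (illustrative only; this is not ManimML code):

```python
import numpy as np

def max_pool_2d(x, kernel_size=2):
    """2D max pooling with stride equal to kernel_size and no padding."""
    h, w = x.shape
    h_out, w_out = h // kernel_size, w // kernel_size
    x = x[:h_out * kernel_size, :w_out * kernel_size]  # drop any ragged edge
    # Split each axis into (blocks, within-block) and take the max per block.
    return x.reshape(h_out, kernel_size, w_out, kernel_size).max(axis=(1, 3))

fmap = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 3, 2],
    [6, 2, 0, 4],
])
pooled = max_pool_2d(fmap)
print(pooled)
# [[4 5]
#  [6 4]]
```

Each 2x2 window of the feature map collapses to its maximum, so a 4x4 map becomes 2x2, which is exactly the size reduction the MaxPooling2DLayer animation depicts.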
7. Activation Functions
Activation functions apply nonlinearities to the outputs of neural networks, and they come in a variety of shapes, so it is very useful to visualize them. ManimML supports visualizing activation functions on both Convolutional2DLayer and FeedForwardLayer, like so:
layer = FeedForwardLayer(num_nodes=3, activation_function="ReLU")
We can add them to a larger neural network as follows:
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, FeedForwardLayer

# Make the neural network
nn = NeuralNetwork([
        Convolutional2DLayer(1, 7, filter_spacing=0.32),
        Convolutional2DLayer(3, 5, 3, filter_spacing=0.32, activation_function="ReLU"),
        FeedForwardLayer(3, activation_function="Sigmoid"),
    ],
    layer_spacing=0.25,
)
self.add(nn)
# Play the forward pass animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
Then run from the command line: (another GIF...)
$ manim -pql example.py
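The two activation functions used above are easy to write down numerically. A plain-NumPy sketch (ManimML only draws their shapes next to the layers; it does not evaluate them on your data):

```python
import numpy as np

# The two activations referenced above, as plain functions.
def relu(x):
    """Rectified linear unit: zero out negative inputs."""
    return np.maximum(0, x)

def sigmoid(x):
    """Logistic sigmoid: squash inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-2.0, 0.0, 2.0])
print(relu(xs))      # [0. 0. 2.]
print(sigmoid(0.0))  # 0.5
```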
8. More Complex Animations: Neural Network Dropout
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
from manim_ml.neural_network.animations.dropout import make_neural_network_dropout_animation

# Make the neural network
nn = NeuralNetwork([
        FeedForwardLayer(3),
        FeedForwardLayer(5),
        FeedForwardLayer(3),
        FeedForwardLayer(5),
        FeedForwardLayer(4),
    ],
    layer_spacing=0.4,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play the dropout animation
self.play(
    make_neural_network_dropout_animation(
        nn, dropout_rate=0.25, do_forward_pass=True
    )
)
self.wait(1)
Then run from the command line: (another GIF...)
$ manim -pql example.py
Next, seed the dropouts: (another GIF...)
self.play(
    make_neural_network_dropout_animation(
        nn, dropout_rate=0.25, do_forward_pass=True, seed=4
    )
)
Then seed the dropouts with the first layer kept static:
self.play(
    make_neural_network_dropout_animation(
        nn, dropout_rate=0.25, do_forward_pass=True, seed=4, first_layer_stable=True
    )
)
You can also seed the dropouts with both the first and last layers kept static:
self.play(
    make_neural_network_dropout_animation(
        nn, dropout_rate=0.25, do_forward_pass=True, seed=4, first_layer_stable=True, last_layer_stable=True
    )
)
(I won't include the output images for these last two variants here; if you're interested, just copy the code and try it yourself!)
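Numerically, seeded dropout works by drawing a reproducible random mask over the nodes: the same seed always knocks out the same nodes, which is why the seeded animations above are repeatable. A small NumPy sketch (illustrative only; the dropout_rate and seed here just mirror the animation arguments, and this is not how ManimML implements it internally):

```python
import numpy as np

def dropout_mask(n_nodes, dropout_rate=0.25, seed=4):
    """Boolean keep-mask: each node survives with probability 1 - dropout_rate."""
    rng = np.random.default_rng(seed)
    return rng.random(n_nodes) >= dropout_rate  # True = node kept

mask = dropout_mask(5)
print(mask)  # same seed -> same mask every run
```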
That's everything for this post; I hope it helps. If you'd like more detailed code, leave a comment and I'll be happy to help. If you enjoyed this post, please like, bookmark, and share!
Please credit the source when reposting or quoting.