AI Face-Swap Source Code (Artificial Intelligence Source Code)

When faceswapping was first developed and published, the technology was groundbreaking; it was a huge step in AI development. Outside of academia it was also almost entirely ignored, because the code was confusing and fragmentary. It required a thorough understanding of complicated AI techniques and took a great deal of effort to figure out — until one individual brought the pieces together into a single, cohesive whole. It ran, it worked, and, as so often happens with new technology emerging on the internet, it was quickly used to create inappropriate content. Despite the inappropriate uses the software was initially put to, it was the first AI code that anyone could download, run, and learn from by experimentation, without needing a Ph.D. in math, computer theory, psychology, or anything else. Before "deepfakes", these techniques were like black magic, practiced only by those who could understand all of the inner workings described in esoteric and endlessly complicated books and papers.

"Deepfakes" changed all that: anyone could now participate in AI development. For us developers, the release of this code offered an excellent learning opportunity. It allowed us to build on ideas developed by others, collaborate with a variety of skilled programmers, experiment with AI while learning new skills, and ultimately contribute to an emerging technology that will only see more mainstream use as it matures.

Are there people out there doing horrible things with similar software? Yes. Because of this, the developers have followed strict ethical standards. Many of us do not even use it to create videos; we simply tinker with the code to see what it can do. Sadly, the media focuses only on the unethical uses of this software. That is, unfortunately, the nature of how it was first exposed to the public, but it does not represent why it was created, how we use it now, or what we see in its future. Like any technology, it can be used for good or it can be abused. Our intention is to develop FaceSwap in a way that minimizes its potential for abuse while maximizing its potential as a tool for learning and experimenting and, of course, as a legitimate faceswapping tool.

We are not trying to denigrate celebrities or demean anyone. We are programmers, we are engineers, we are Hollywood VFX artists, we are activists, we are hobbyists, we are human beings. To that end, we feel it is time to issue a standard statement about what this software is and is not.

FaceSwap is not for creating inappropriate content. FaceSwap is not for changing faces without consent or with the intent of hiding its use. FaceSwap is not for any illicit, unethical, or questionable purposes. FaceSwap exists to experiment with and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.
We are very troubled by the fact that FaceSwap can be used for unethical and disreputable things. However, we support the development of tools and techniques that can be used ethically, and we want to provide education and hands-on AI experience to anyone who wishes to learn it for themselves. We will take a zero-tolerance approach to anyone using this software for any unethical purposes and will actively discourage any such use.

Setting up and running the project
FaceSwap is a Python program that runs on multiple operating systems, including Windows, Linux, and macOS.

See INSTALL.md for full installation instructions. You will need a modern GPU with CUDA support for best performance; AMD GPUs are partially supported.

Overview
The project has multiple entry points. You will have to:
- Gather photos and/or videos
- Extract faces from your raw photos/videos
- Train a model on the extracted faces
- Convert your sources with the model

Extract
From your setup folder, run python faceswap.py extract. This will take photos from the src folder and extract faces into the extract folder.

Train
From your setup folder, run python faceswap.py train. This will take photos from two folders containing pictures of both faces and train a model, which will be saved inside the models folder.

Convert
From your setup folder, run python faceswap.py convert. This will take photos from the original folder and apply the new faces into the modified folder.

GUI
Alternatively, you can run the GUI with python faceswap.py gui

Note: all of the scripts mentioned have -h/--help options listing the arguments they accept.
Also note: there is a video conversion tool. It can be run with python tools.py effmpeg -h. Alternatively, you can use ffmpeg to convert a video into photos, process the images, and convert the images back into a video.

Some tips:
- Reusing an existing model trains much faster than starting from scratch.
- If there is not enough training data, start with someone who looks similar, then switch the data.

Original English documentation:

Workflow
Before attempting any of this, please make sure you have read, understood and completed the installation instructions. If you are experiencing issues, please raise them in the faceswap Forum or the FaceSwap Discord server instead of the main repo.

Contents: Introduction · Disclaimer · Getting Started · Extract (Gathering raw data · Extracting Faces · General Tips) · Training a model (General Tips) · Converting a video (General Tips) · GUI · Videos (EFFMPEG · Extracting video frames with FFMPEG · Generating a video) · Notes
This guide provides a high level overview of the faceswapping process. It does not aim to go into every available option, but will provide a useful entry point to using the software. There are many more options available that are not covered by this guide. These can be found, and explained, by passing the -h flag to the command line (eg: python faceswap.py extract -h) or by hovering over the options within the GUI.

Getting Started
So, you want to swap faces in pictures and videos? Well hold up, because first you gotta understand what this application will do, how it does it and what it can't currently do.

The basic operation of this script is simple. It trains a machine learning model to recognize and transform two faces based on pictures. The machine learning model is our little "bot" that we're teaching to do the actual swapping and the pictures are the "training data" that we use to train it. Note that the bot is primarily processing faces. Other objects might not work.

So here's our plan. We want to create a reality where Donald Trump lost the presidency to Nic Cage; we have his inauguration video; let's replace Trump with Cage.

Extract
Gathering raw data
In order to accomplish this, the bot needs to learn to recognize both face A (Trump) and face B (Nic Cage). By default, the bot doesn't know what a Trump or a Nic Cage looks like. So we need to show it lots of pictures and let it guess which is which. So we need pictures of both of these faces first.

A possible source is Google, DuckDuckGo or Bing image search. There are scripts to download large amounts of images. A better source of images are videos (from interviews, public speeches, or movies) as these will capture many more natural poses and expressions. Fortunately FaceSwap has you covered and can extract faces from both still images and video files. See Extracting video frames for more information.

Feel free to list your image sets in the faceswap Forum, or add more methods to this file.

So now we have a folder full of pictures/videos of Trump and a separate folder of Nic Cage. Let's save them in our directory where we put the FaceSwap project. Example: ~/faceswap/src/trump and ~/faceswap/src/cage
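The folder layout implied by those example paths can be sketched as below. This is only an illustration: the faceswap root is wherever you installed the project, and every subfolder name besides src is just the one used by this guide's example commands.

```python
# Create the folder layout used by the example commands in this guide.
# Paths are illustrative; put the root wherever your faceswap checkout lives.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp()) / "faceswap"
subdirs = [
    "src/trump",          # raw Trump photos/videos
    "src/cage",           # raw Cage photos/videos
    "faces/trump",        # extracted Trump faces
    "faces/cage",         # extracted Cage faces
    "trump_cage_model",   # saved training model
    "converted",          # final swapped output
]
for sub in subdirs:
    (root / sub).mkdir(parents=True)

layout = sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_dir())
print(layout)
```

Keeping sources, extracted faces, models, and converted output in separate sibling folders makes the later extract/train/convert commands easy to read.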

Extracting Faces
So here's a problem. We have a ton of pictures and videos of both our subjects, but these are just of them doing stuff or in an environment with other people. Their bodies are on there, they're on there with other people... It's a mess. We can only train our bot if the data we have is consistent and focuses on the subject we want to swap. This is where FaceSwap first comes in.

Command Line:

# To extract trump from photos in a folder:
python faceswap.py extract -i ~/faceswap/src/trump -o ~/faceswap/faces/trump
# To extract trump from a video file:
python faceswap.py extract -i ~/faceswap/src/trump.mp4 -o ~/faceswap/faces/trump
# To extract cage from photos in a folder:
python faceswap.py extract -i ~/faceswap/src/cage -o ~/faceswap/faces/cage
# To extract cage from a video file:
python faceswap.py extract -i ~/faceswap/src/cage.mp4 -o ~/faceswap/faces/cage
GUI:

To extract trump from photos in a folder (Right hand folder icon):

To extract cage from a video file (Left hand folder icon):

For input we either specify our photo directory or video file and for output we specify the folder where our extracted faces will be saved. The script will then try its best to recognize face landmarks, crop the images to a consistent size, and save the faces to the output folder. An alignments.json file will also be created and saved into your input folder. This file contains information about each of the faces that will be used by FaceSwap.
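The alignments.json file can be pictured as a per-frame record of detected faces. The structure below is a deliberately simplified stand-in (the real file written by FaceSwap stores far more detail, such as landmark coordinates), but it shows how such a record lets you spot problem frames before training:

```python
import json

# Simplified, hypothetical stand-in for alignments.json; the real format
# is more detailed. This only illustrates a per-frame record of detections.
alignments = {
    "frame-001.png": [{"x": 40, "y": 32, "w": 128, "h": 128}],
    "frame-002.png": [{"x": 44, "y": 30, "w": 126, "h": 126},
                      {"x": 300, "y": 90, "w": 110, "h": 110}],  # a second face
}

# Round-trip through JSON, as if reading the file back from the input folder.
data = json.loads(json.dumps(alignments))

# Frames with more than one detection usually deserve a manual look,
# since extra faces pollute the training data.
multi = [frame for frame, faces in data.items() if len(faces) > 1]
print(multi)  # ['frame-002.png']
```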

Note: this script will make grabbing test data much easier, but it is not perfect. It will (incorrectly) detect multiple faces in some photos and does not recognize if the face is the person whom we want to swap. Therefore: Always check your training data before you start training. The training data will influence how good your model will be at swapping.

General Tips
When extracting faces for training, you are looking to gather around 500 to 5000 faces for each subject you wish to train. These should be of a high quality and contain a wide variety of angles, expressions and lighting conditions.

You do not want to extract every single frame from a video for training as from frame to frame the faces will be very similar.
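One way to think about thinning the frames is a keep-every-Nth step derived from the source frame rate. A small sketch; the 25 fps and two-per-second figures are arbitrary example values, not recommendations from the FaceSwap documentation:

```python
# Keep roughly two frames per second from a 25 fps clip, since adjacent
# frames are nearly identical and add little variety to the training set.
src_fps = 25
wanted_per_second = 2
step = max(1, src_fps // wanted_per_second)  # keep every 12th frame

frames = [f"video-frame-{i}.png" for i in range(1, 101)]  # 100 extracted frames
kept = frames[::step]
print(len(kept), kept[:2])  # 9 ['video-frame-1.png', 'video-frame-13.png']
```

ffmpeg can achieve the same at extraction time with its fps filter (e.g. -vf fps=2), avoiding the need to extract every frame and delete most of them afterwards.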

If you plan to train with a mask or use the Warp to Landmarks option, then you will need to copy the output alignments.json file from your source frames folder into your output faces folder for training. If you have extracted from multiple sources, you can use the alignments tool to merge several alignments.json files together.
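The merge itself is done with FaceSwap's alignments tool, as noted above. Purely as an illustration of what merging per-frame records means, with made-up entries:

```python
# Two hypothetical per-frame face records from separate extraction runs.
run_a = {"frame-001.png": [{"x": 10}], "frame-002.png": [{"x": 20}]}
run_b = {"frame-002.png": [{"x": 25}], "frame-003.png": [{"x": 30}]}

# Combine them, keeping the face entries from both runs for shared frames.
merged = {}
for run in (run_a, run_b):
    for frame, faces in run.items():
        merged.setdefault(frame, []).extend(faces)

print(sorted(merged))                # ['frame-001.png', 'frame-002.png', 'frame-003.png']
print(len(merged["frame-002.png"]))  # 2 faces kept for the shared frame
```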

You can see the full list of arguments for extracting by hovering over the options in the GUI or passing the help flag. i.e:

python faceswap.py extract -h
Some of the plugins have configurable options. You can find the config options in: <faceswap_folder>/config/extract.ini. You will need to have run Extract or the GUI at least once for this file to be generated.

Training a model
Ok, now you have a folder full of Trump faces and a folder full of Cage faces. What now? It's time to train our bot! This creates a 'model' that contains information about what a Cage is and what a Trump is and how to swap between the two.

The training process will take the longest, how long depends on many factors; the model used, the number of images, your GPU etc. However, a ballpark figure is 12-48 hours on GPU and weeks if training on CPU.

We specify the folders where the two faces are, and where we will save our training model.

Command Line:

python faceswap.py train -A ~/faceswap/faces/trump -B ~/faceswap/faces/cage -m ~/faceswap/trump_cage_model/
# or -p to show a preview
python faceswap.py train -A ~/faceswap/faces/trump -B ~/faceswap/faces/cage -m ~/faceswap/trump_cage_model/ -p
GUI:

Once you run the command, it will start hammering the training data. If you have a preview up, then you will see a load of blotches appear. These are the faces it is learning. They don't look like much, but then your model hasn't learned anything yet. Over time these will more and more start to resemble trump and cage.

You want to leave your model learning until you are happy with the images in the preview. To stop training you can:

Command Line: press "Enter" in the preview window or in the console
GUI: press the Terminate button
When stopping training, the model will save and the process will exit. This can take a little while, so be patient. The model will also save every 100 iterations or so.

You can stop and resume training at any time. Just point FaceSwap at the same folders and carry on.

General Tips
If you are training with a mask or using Warp to Landmarks, you will need to pass in an alignments.json file for each of the face sets. See Extract - General Tips for more information.

The model is automatically backed up at every save iteration where the overall loss has dropped (i.e. the model has improved). If your model corrupts for some reason, you can go into the model folder and remove the .bk extension from the backups to restore the model from backup.
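Removing the .bk extension can be done by hand or scripted. Below is a sketch of the renaming step, run here against a throwaway folder with a made-up file name; against a real model, work on a copy of the folder first.

```python
from pathlib import Path
import shutil, tempfile

# Stand-in model folder with one fake backup file (the name is invented
# for this example; your model folder will contain different files).
model_dir = Path(tempfile.mkdtemp())
(model_dir / "original_model.h5.bk").write_bytes(b"weights")

# Strip the trailing ".bk" so the file is picked up as the model again.
for bk in model_dir.glob("*.bk"):
    bk.rename(bk.with_suffix(""))  # "original_model.h5.bk" -> "original_model.h5"

restored = sorted(p.name for p in model_dir.iterdir())
print(restored)  # ['original_model.h5']
shutil.rmtree(model_dir)  # clean up the throwaway folder
```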

You can see the full list of arguments for training by hovering over the options in the GUI or passing the help flag. i.e:

python faceswap.py train -h
Some of the plugins have configurable options. You can find the config options in: <faceswap_folder>/config/train.ini. You will need to have run Train or the GUI at least once for this file to be generated.

Converting a video
Now that we're happy with our trained model, we can convert our video. How does it work?

Well firstly we need to generate an alignments.json file for our swap. To do this, follow the steps in Extracting Faces, only this time you want to run extract for every face in your source video. This file tells the convert process where the face is on the source frame.

You are likely going to want to cleanup your alignments file, by deleting false positives, badly aligned faces etc. These will not look good on your final convert. There are tools to help with this.

Just like extract you can convert from a series of images or from a video file.

Remember those initial pictures we had of Trump? Let's try swapping a face there. We will use that directory as our input directory, create a new folder where the output will be saved, and tell FaceSwap which model to use.

Command Line:

python faceswap.py convert -i ~/faceswap/src/trump/ -o ~/faceswap/converted/ -m ~/faceswap/trump_cage_model/
GUI:

It should now start swapping faces of all these pictures.

General Tips
You can see the full list of arguments for converting by hovering over the options in the GUI or passing the help flag. i.e:

python faceswap.py convert -h
Some of the plugins have configurable options. You can find the config options in: <faceswap_folder>/config/convert.ini. You will need to have run Convert or the GUI at least once for this file to be generated.

GUI
All of the above commands and options can be run from the GUI. This is launched with:

python faceswap.py gui
The GUI allows a more user friendly interface into the scripts and also has some extended functionality. Hovering over options in the GUI will tell you more about what the option does.

Videos
A video is just a series of pictures in the form of frames. Therefore you can gather the raw images from them for your dataset or combine your results into a video.

EFFMPEG
You can perform various video processes with the built-in effmpeg tool. You can see the full list of arguments available by running:

python tools.py effmpeg -h
Extracting video frames with FFMPEG
Alternatively, you can split a video into separate frames using ffmpeg for instance. Below is an example command to process a video to separate frames.

ffmpeg -i /path/to/my/video.mp4 /path/to/output/video-frame-%d.png
Generating a video
If you split a video into frames, using ffmpeg for example, and used them as a target for swapping faces onto, you can combine these frames again. The command below stitches the png frames back into a single video.

ffmpeg -i video-frame-%d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
Notes
This guide is far from complete. Functionality may change over time, and new dependencies are added and removed as time goes on.

If you are experiencing issues, please raise them in the faceswap Forum or the FaceSwap Discord server. Usage questions raised in this repo are likely to be closed without response.

Download: https://github.com/qiucheng025/zao-/archive/master.zip


Posted by: 玄元一墨
