diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4K Video Downloader 4.4.11.2412 Full Repack Portable Extract Audio from 4K Videos Easily.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4K Video Downloader 4.4.11.2412 Full Repack Portable Extract Audio from 4K Videos Easily.md
deleted file mode 100644
index 52ba986ed11b47bc264b7b9b0ef990e1c048c933..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/4K Video Downloader 4.4.11.2412 Full Repack Portable Extract Audio from 4K Videos Easily.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
4K Video Downloader 4.4.11.2412 Full Repack Portable: A Review
-
Do you love watching videos online but hate the hassle of downloading them? Do you want to enjoy high-quality videos on your HD TV, iPad or other devices? Do you wish you could download videos in 3D or with subtitles? If you answered yes to any of these questions, then you might want to check out 4K Video Downloader, a freeware tool that lets you download videos, audio and subtitles from YouTube and other sites in a fast and easy way.
-
In this article, we will review 4K Video Downloader 4.4.11.2412 Full Repack Portable, a special version of the program that does not require installation and can be run from any folder or USB drive. We will explain what this program can do, what its features and benefits are, how to use it, and share some tips and tricks for getting the most out of it.
-
4K Video Downloader 4.4.11.2412 Full Repack Portable
4K Video Downloader is a program that lets you download video, audio and subtitles from YouTube and other sites in high quality, as fast as your computer and connection allow. You can choose from various formats and resolutions, such as MP4, MKV, M4A, MP3, FLV, 3GP, HD 1080p, HD 720p or 4K quality.
-
Download videos, audio and subtitles from YouTube and other sites
-
With 4K Video Downloader, you can download any video from YouTube, Vimeo, SoundCloud, Flickr, Facebook, DailyMotion and more than 300 other sites. You can also download entire playlists and channels from YouTube and save them in MP4, MKV, M4A, MP3, FLV or 3GP format.
-
-
Moreover, you can download subtitles and annotations from YouTube videos, either as separate .srt files or embedded in the video file, for a single video or an entire playlist in one click.
-
Convert videos for different devices and formats
-
4K Video Downloader also allows you to convert downloaded videos for different devices and formats. You can choose from a list of preset profiles for iPhone, iPad, iPod, Android devices, Windows Phone devices, Samsung devices, LG devices, Sony devices and more.
-
You can also customize the output format by choosing the video codec, resolution, bitrate, frame rate, audio codec, channels, sample rate and more.
-
Activate smart mode and download videos in 3D
-
With smart mode activated, you can choose the quality, format and output folder once and then download videos with a single click. You can also enable notifications to get informed when a video is downloaded or a playlist is updated.
-
Another cool feature of 4K Video Downloader is that you can download videos in 3D format. You will find a small special icon among available formats after video parsing. It's really impressive to watch live shows and cartoons in 3D. You can also create amazing slideshows with your downloaded photos.
-
What are the features and benefits of 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
4K Video Downloader 4.4.11.2412 Full Repack Portable is a special version of the program that does not require installation and can be run from any folder or USB drive. This means that you can use it on any computer without leaving any traces or affecting the system registry. You can also carry it with you wherever you go and enjoy your downloaded videos on any device.
-
No installation required and easy to use
-
The main benefit of 4K Video Downloader 4.4.11.2412 Full Repack Portable is that it does not need to be installed on your computer. You just need to download the repack or portable version from a reliable source, such as nsane.forums or solidtorrents, and extract the files to a folder of your choice. Then, you can run the program by double-clicking on the executable file.
-
The program has a simple and intuitive interface that makes it easy to use for anyone. You just need to copy the video link from your browser and click on 'Paste Url' in the program. Then, you can choose the desired quality, format and output folder and start the download.
-
Supports multiple languages and platforms
-
4K Video Downloader 4.4.11.2412 Full Repack Portable supports multiple languages, such as English, French, German, Spanish, Italian, Portuguese, Russian, Chinese, Japanese and more. You can change the language in the settings menu of the program.
-
The program also supports multiple platforms, such as Windows, Mac OS X and Linux. You can use 4K Video Downloader on your PC, Mac or Linux computer, regardless of what OS you prefer.
-
Free and safe
-
4K Video Downloader 4.4.11.2412 Full Repack Portable is completely free to use and does not have any limitations or hidden costs. You can download as many videos as you want without paying anything.
-
The program is also safe to use and does not contain any viruses, malware or adware. You can scan it with your antivirus software or check it online with services like VirusTotal to verify its safety.
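If you want to check the archive yourself before running it, one practical approach is to compute its SHA-256 hash and search for that hash on VirusTotal, which shows any existing scan reports without you having to upload the file. Here is a minimal Python sketch; the archive name below is only a placeholder for whatever file you actually downloaded.

```python
import hashlib
import sys

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder name -- pass the real archive path as the first argument.
    path = sys.argv[1] if len(sys.argv) > 1 else "4kvideodownloader_portable.zip"
    print(f"SHA-256 of {path}: {sha256_of_file(path)}")
```

Paste the printed hash into VirusTotal's search box; if the same file has been scanned before, the report shows how the various antivirus engines classified it.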
-
How to use 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
To use 4K Video Downloader 4.4.11.2412 Full Repack Portable, you need to follow these simple steps:
-
Download the repack or portable version from a reliable source
-
The first step is to download the repack or portable version of 4K Video Downloader 4.4.11.2412 from a reliable source, such as nsane.forums or solidtorrents. You can find the download links in these sites or search for them online.
-
You need to choose between the repack or portable version depending on your preference. The repack version is smaller in size but may have some modifications or additions by the repacker, while the portable version is larger in size but has no changes from the original program.
-
Run the program and paste the video link from your browser
-
The next step is to run the program by double-clicking on the executable file in the folder where you extracted it. You will see a simple interface with a big button that says 'Paste Url'. You need to copy the video link from your browser and click on this button.
-
The program will automatically parse the video and show you a list of available formats and resolutions for downloading. You can also see a preview of the video thumbnail and title.
-
Choose the desired quality, format and output folder
-
The final step is to choose the desired quality, format and output folder for your downloaded video. You can select from various options, such as MP4, MKV, M4A, MP3, FLV, 3GP, HD 1080p, HD 720p or 4K quality.
-
You can also choose to download only the audio or the subtitles from the video if you want, and decide whether to save subtitles as a separate .srt file or embed them in the video file.
-
You can also choose the output folder where you want to save your downloaded video by clicking the 'Browse' button next to the 'Save To' option.
-
Once you have made your choices, click the 'Download' button and wait for the download to finish.
-
What are some tips and tricks for using 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
To get the most out of 4K Video Downloader 4.4.11.2412 Full Repack Portable, you can use some of these tips and tricks:
-
Download entire playlists and channels with one click
-
If you want to download an entire playlist or channel from YouTube, you can do it with one click using 4K Video Downloader. You just need to copy the playlist or channel link from your browser and paste it in the program. The program will parse the playlist or channel and show you a list of all the videos in it. You can choose to download all of them or select only the ones you want.
-
You can also choose to save the playlist or channel as a .m3u file for playlists or a folder for channels. This way, you can organize your downloaded videos better.
-
Use proxy settings to bypass geo-restrictions
-
If you want to download videos that are not available in your country due to geo-restrictions, you can use proxy settings in 4K Video Downloader to bypass them. You just need to go to the settings menu of the program and click on 'Connection' tab. There, you can enter the proxy server address, port, username and password if required.
-
Once you have entered the proxy settings, you can download any video from any site without any problem.
-
Subscribe to YouTube channels within the program
-
If you want to stay updated with the latest videos from your favorite YouTube channels, you can subscribe to them within 4K Video Downloader. You just need to go to the 'Subscriptions' tab in the program and click on 'Add Subscription' button. There, you can enter the channel link or name and click on 'Subscribe' button.
-
The program will automatically download new videos from your subscribed channels as soon as they are uploaded. You can also choose to get notifications when a new video is downloaded or a playlist is updated.
-
Conclusion
-
4K Video Downloader 4.4.11.2412 Full Repack Portable is a great tool for downloading videos, audio and subtitles from YouTube and other sites in high-quality and as fast as possible. It does not require installation and can be run from any folder or USB drive. It supports multiple languages and platforms and has many features and benefits that make it easy and convenient to use.
-
If you are looking for a free and safe way to enjoy your favorite online videos on any device, you should give 4K Video Downloader 4.4.11.2412 Full Repack Portable a try. You will not regret it.
-
FAQs
-
Here are some frequently asked questions about 4K Video Downloader 4.4.11.2412 Full Repack Portable:
-
Q: Is 4K Video Downloader 4.4.11.2412 Full Repack Portable legal?
-
A: Yes, 4K Video Downloader 4.4.11.2412 Full Repack Portable is legal as long as you use it for personal and non-commercial purposes only. You should not download videos that are protected by copyright or violate any terms of service of the sites you download from.
-
Q: Is 4K Video Downloader 4.4.11.2412 Full Repack Portable safe?
-
A: Yes, 4K Video Downloader 4.4.11.2412 Full Repack Portable is safe to use and does not contain any viruses, malware or adware. You can scan it with your antivirus software or check it online with services like VirusTotal to verify its safety.
-
Q: How can I update 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
A: To update 4K Video Downloader 4.4.11.2412 Full Repack Portable, you need to download the latest version of the program from a reliable source, such as nsane.forums or solidtorrents, and replace the old files with the new ones. You can also check for updates within the program by going to the 'Help' menu and clicking on 'Check for updates'.
-
Q: How can I contact the developers of 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
A: If you have any questions, suggestions or feedback about 4K Video Downloader 4.4.11.2412 Full Repack Portable, you can contact the developers of the program by visiting their official website at https://www.4kdownload.com/products/product-videodownloader. There, you can find their email address, social media accounts and support forum.
-
Q: How can I support the development of 4K Video Downloader 4.4.11.2412 Full Repack Portable?
-
A: If you like 4K Video Downloader 4.4.11.2412 Full Repack Portable and want to support its development, you can do so by donating to the developers of the program via PayPal or Bitcoin. You can find the donation links on their official website at https://www.4kdownload.com/products/product-videodownloader.
-
You can also support them by sharing the program with your friends and family, writing a review or rating it online, or following them on social media.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Filmora 9 Features Pricing and Download - Everything You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Filmora 9 Features Pricing and Download - Everything You Need to Know.md
deleted file mode 100644
index c728493cca61722389741644d01fb8aac05449c6..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Filmora 9 Features Pricing and Download - Everything You Need to Know.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
Filmora 9: A Simple and Powerful Video Editor for PC
-
Are you looking for a video editor that can help you create stunning videos with ease? Do you want to edit your videos with rich templates, effects, music, text, filters, and more elements? If yes, then you should check out Filmora 9, a simple and powerful video editor for PC.
-
Filmora 9 is a video editing software developed by Wondershare, a leading software company that specializes in multimedia tools. Filmora 9 is designed to be easy to use for beginners and professionals alike. It has a user-friendly interface that lets you drag and drop your media files, trim and crop your clips, adjust the speed and volume, and apply transitions and animations. You can also use Filmora 9 to add titles, subtitles, stickers, emojis, and shapes to your videos.
But Filmora 9 is not just a basic video editor. It also has many advanced features that can take your videos to the next level. For example, you can use Filmora 9 to:
-
-
Record your screen and webcam simultaneously.
-
Use the green screen feature to change the background of your videos.
-
Use the split-screen feature to show multiple videos at once.
-
Use the motion tracking feature to track and attach an element to a moving object.
-
Use the keyframing feature to create custom animations for your elements.
-
Use the audio ducking feature to automatically lower the background music when someone is speaking.
-
Use the AI smart cutout feature to remove unwanted objects from your videos.
-
Use the auto reframe feature to optimize your videos for different aspect ratios.
-
-
Filmora 9 also has a huge library of royalty-free music, sound effects, stock footage, and images that you can use in your projects. You can also download more resources from Filmstock, an online store that offers thousands of video effects, music tracks, images, and more for Filmora users.
-
-
How to Download and Install Filmora 9
-
If you want to try Filmora 9 for yourself, you can download it for free from the official website. The free version has all the features of the paid version, but it will add a watermark to your exported videos. If you want to remove the watermark, you need to purchase a license that costs $69.99 for a lifetime or $39.99 for a year.
-
To download and install Filmora 9 on your PC, follow these steps:
-
-
Go to the Filmora website and click on Download.
-
Select your operating system (Windows or Mac) and click on Download again.
-
Wait for the installation file to download and then run it.
-
Follow the on-screen instructions to complete the installation process.
-
Launch Filmora 9 and start editing your videos.
-
-
-
Conclusion
-
Filmora 9 is a simple and powerful video editor for PC that can help you create stunning videos with ease. It has a user-friendly interface and a rich set of features that can suit any video editing needs. Whether you want to make a tutorial video, a vlog, a slideshow, or a movie, Filmora 9 can help you achieve your goals.
-
If you want to learn more about Filmora 9 and how to use it effectively, you can visit the official website or check out the user guide. You can also watch some tutorial videos on YouTube or join the online community of Filmora users. With Filmora 9, you can unleash your creativity and make amazing videos in no time.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FSX SP2 CRACK.zip.md b/spaces/1gistliPinn/ChatGPT4/Examples/FSX SP2 CRACK.zip.md
deleted file mode 100644
index a2a68dee92a6f42f75e96b7150eb253d10d53b92..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FSX SP2 CRACK.zip.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
How to Download and Install FSX SP2 CRACK.zip
-
If you are a fan of Microsoft Flight Simulator X, you might want to enhance your gaming experience with some additional features and fixes. One of the most popular ways to do that is by downloading and installing FSX SP2 CRACK.zip, a file that allows you to bypass the activation process and enjoy the full benefits of the Service Pack 2 patch.
But what is FSX SP2 CRACK.zip and how can you get it? In this article, we will explain everything you need to know about this file and how to use it safely and effectively.
-
What is FSX SP2 CRACK.zip?
-
FSX SP2 CRACK.zip is a file that contains a modified version of the dll.xml file, which is responsible for activating the Flight Simulator X game. By replacing the original file with the cracked one, you can avoid the activation process and play the game without any restrictions.
-
FSX SP2 CRACK.zip also enables you to use the Service Pack 2 patch, which is a free update that improves the performance and compatibility of the game. The patch optimizes FSX for multi-core CPUs, adds DirectX 10 (preview) support, fixes some bugs and glitches, and enhances features such as multiplayer mode, scenery, and aircraft models.
-
Where can I download FSX SP2 CRACK.zip?
-
There are many websites that offer FSX SP2 CRACK.zip for download, but not all of them are reliable or safe. Some of them may contain viruses, malware, or other unwanted programs that can harm your computer or compromise your privacy. Therefore, you should be careful when choosing a source for downloading this file.
-
One of the most trusted and popular websites for downloading FSX SP2 CRACK.zip is Ulož.to Disk, a file-sharing platform that allows you to upload and download files for free. You can find the file by searching for "Fsx Sp2 Crack .rar" on the website. The file size is 2 MB and it has been downloaded by thousands of users who have left positive comments and ratings.
-
To download the file from Ulož.to Disk, you just need to click on the download button and wait for a few seconds until the server prepares a download link. Then, you can click again on the download button and save the file to your computer.
-
-
How can I install FSX SP2 CRACK.zip?
-
Before installing FSX SP2 CRACK.zip, you need to make sure that you have installed the Service Pack 1 and Service Pack 2 patches for Flight Simulator X. You can download these patches for free from Fly Away Simulation, a website that provides downloads and add-ons for flight simulators. You can find the patches by searching for "Flight Simulator X Service Pack 2" on the website. The file size is 166.03 MB and it has been scanned for viruses and rated as clean.
-
To install the patches, you just need to run the fsx_sp2_ENU.msi file and follow the instructions on the screen. The patches will automatically update your game to the latest version.
-
After installing the patches, you can proceed to install FSX SP2 CRACK.zip. To do that, you need to extract the contents of the zip file using a program such as WinRAR or 7-Zip. You will get a folder named "FSX SP2 [PATCHED] Crack" that contains two files: dll.xml and readme.txt.
-
The readme.txt file contains some instructions on how to install the crack, but they are not very clear or detailed. Here is a simplified version of how to install FSX SP2 CRACK.zip:
-
-
Locate your Flight Simulator X installation folder. It is usually located in C:\Program Files (x86)\Microsoft Games\Microsoft Flight Simulator X.
-
Make a backup copy of your original dll.xml file and store it somewhere safe. You can do that by right-clicking on the file and selecting "Copy", then pasting it in another folder or on your desktop. (A small script that automates this backup is shown after the list.)
-
Copy the dll.xml file from the "FSX SP2 [PATCHED] Crack" folder and paste it in your Flight Simulator X installation folder. You can do that by right-clicking on the file and selecting "Copy", then going to your installation folder and right-clicking on an empty space and selecting "Paste".
-
Replace the original dll.xml file with the cracked one when prompted. You can do that by clicking on "Replace" or "Yes" when asked if you want to overwrite the existing file.
-
Launch Flight Simulator X and enjoy!
-
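Step 2 above (backing up the original dll.xml) is the part you really do not want to skip, and it is easy to automate. Below is a minimal Python sketch that only creates the backup; the installation path is the default one from step 1 and is an assumption you may need to adjust.

```python
import shutil
from pathlib import Path

# Default FSX installation folder from step 1 -- adjust if yours differs.
FSX_DIR = Path(r"C:\Program Files (x86)\Microsoft Games\Microsoft Flight Simulator X")

def backup_dll_xml(fsx_dir: Path) -> Path:
    """Copy dll.xml to dll.xml.bak in the same folder, refusing to overwrite an existing backup."""
    original = fsx_dir / "dll.xml"
    backup = fsx_dir / "dll.xml.bak"
    if not original.exists():
        raise FileNotFoundError(f"{original} not found - check the installation path")
    if backup.exists():
        raise FileExistsError(f"{backup} already exists - not overwriting it")
    shutil.copy2(original, backup)  # copy2 also preserves the file's timestamps
    return backup

if __name__ == "__main__":
    print(f"Backup written to {backup_dll_xml(FSX_DIR)}")
```

Because the folder lives under Program Files, the script normally has to be run from an elevated (administrator) prompt; alternatively, copy the backup to your desktop as described above.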
-
Conclusion
-
FSX SP2 CRACK.zip is a file that allows you to play Flight Simulator X without activation and with all the benefits of the Service Pack 2 patch. You can download it from Ulož.to Disk and install it by following some simple steps. However, you should be aware that using this file may violate Microsoft's terms of service and may cause some issues with your game or your computer. Therefore, use it at your own risk and discretion.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Coin Master Hack MOD APK 2022 Download and Enjoy Free Coins and Spins.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Coin Master Hack MOD APK 2022 Download and Enjoy Free Coins and Spins.md
deleted file mode 100644
index 8ef084564108fa7507547cb551b8c84959934e09..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Coin Master Hack MOD APK 2022 Download and Enjoy Free Coins and Spins.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Download Hacked Coin Master: Is It Worth It?
-
Coin Master is one of the most popular casual games on the market, with over 100 million downloads on Google Play and App Store. The game combines slot machines, raids, attacks, and card collections to create a fun and addictive experience. But what if you want to get unlimited coins, spins, and cards without spending real money? Is it possible to download a hacked Coin Master apk that gives you all these advantages? And if so, is it worth it?
-
In this article, we will answer these questions and more. We will explain what is Coin Master, what is a hacked Coin Master apk, how to download it safely, and what are the alternatives to hacking the game. By the end of this article, you will have a clear idea of whether downloading a hacked Coin Master apk is worth it or not.
What is Coin Master?
-
Coin Master is a casual game developed by Moon Active, an Israeli company. The game was released in 2016 and has since become one of the most popular games in the world. According to Sensor Tower, Coin Master generated $1.2 billion in revenue in 2020, making it the fourth highest-grossing mobile game of the year.
-
How to play Coin Master
-
The gameplay of Coin Master is simple and straightforward. You spin a slot machine to earn coins, which you can use to build and upgrade your village. You can also use the slot machine to get other rewards, such as attacks, raids, shields, and cards.
-
Attacks allow you to damage other players' villages and steal their coins. Raids allow you to dig for coins in other players' villages. Shields protect your village from attacks. Cards are collectible items that belong to different sets and themes. You can complete card collections to get more rewards and unlock new villages.
-
Why is Coin Master popular?
-
Coin Master is popular for several reasons. First of all, it has a simple and addictive gameplay that appeals to a wide range of audiences. Second, it has a social element that allows you to interact with your friends and other players around the world. You can invite your friends to join the game, send and receive gifts, chat with them, and compete with them on the leaderboard. Third, it has a variety of features and content that keep the game fresh and exciting. You can explore different worlds and themes, collect hundreds of cards, participate in events and tournaments, and enjoy daily bonuses and surprises.
-
What is a hacked Coin Master apk?
-
A hacked Coin Master apk is a modified version of the original game that gives you access to unlimited coins, spins, cards, and other resources. A hacked Coin Master apk is usually downloaded from third-party websites that claim to offer free hacks and cheats for the game.
-
How does a hacked Coin Master apk work?
-
A hacked Coin Master apk works by bypassing the security measures of the original game and altering its code. This allows you to get unlimited resources without paying or waiting for them. For example, you can spin the slot machine as many times as you want without running out of spins. You can also get any card you want without having to collect them or trade them.
-
What are the benefits of a hacked Coin Master apk?
-
The main benefit of a hacked Coin Master apk is that it gives you a competitive edge over other players. You can build and upgrade your village faster, complete card collections easier, and dominate the leaderboard. You can also enjoy the game without any limitations or interruptions. You don't have to watch ads, wait for spins to refill, or spend real money to get more resources.
-
What are the risks of a hacked Coin Master apk?
-
However, downloading a hacked Coin Master apk also comes with some serious risks. Here are some of the potential dangers of using a hacked Coin Master apk:
-
-
-
Malware infection: The hacked Coin Master apk file may contain viruses, spyware, or other malicious software that can harm your device and compromise your personal information. You may end up losing your data, exposing your passwords, or even getting your device locked or bricked.
-
Ban from the game: The hacked Coin Master apk may be detected by the game's anti-cheat system and result in your account being banned or suspended. You may lose all your progress, rewards, and friends in the game. You may also face legal consequences for violating the game's terms of service.
-
Poor game performance: The hacked Coin Master apk may not be compatible with the latest version of the game or your device's operating system. You may experience glitches, crashes, errors, or slow loading times. You may also miss out on the new features and updates that the official game offers.
-
-
How to download a hacked Coin Master apk safely
-
If you still want to download a hacked Coin Master apk despite the risks, you should follow some precautions to minimize the chances of getting into trouble. Here are some tips on how to download a hacked Coin Master apk safely:
-
Check the source of the download
-
Before you download a hacked Coin Master apk file, you should do some research on the website that offers it. You should look for reviews, ratings, comments, and feedback from other users who have downloaded it. You should also check the reputation and credibility of the website. You should avoid websites that look suspicious, shady, or unprofessional.
-
Scan the file for malware
-
After you download a hacked Coin Master apk file, you should scan it with a reliable antivirus or anti-malware program. You should make sure that the file is clean and safe before you install it on your device. You should also delete any unwanted or suspicious files that come with the download.
-
Backup your data and device
-
Before you install a hacked Coin Master apk file on your device, you should backup your data and device. You should copy your important files, photos, contacts, and settings to another device or cloud service. You should also create a restore point or a recovery mode for your device in case something goes wrong. This way, you can restore your device to its original state if you encounter any problems.
-
Alternatives to downloading a hacked Coin Master apk
-
If you want to enjoy Coin Master without risking your device, account, or legal status, you should consider some alternatives to downloading a hacked Coin Master apk. Here are some of them:
-
Use legitimate cheats and tips
-
Instead of using a hacked Coin Master apk, you can use some legitimate cheats and tips that can help you get more coins, spins, and cards in the game. For example, you can follow these steps:
-
-
Claim daily bonuses: You can get free coins and spins every day by logging into the game and claiming your rewards. You can also get more bonuses by following Coin Master on social media platforms like Facebook, Twitter, and Instagram.
-
Watch video ads: You can watch video ads in the game to get more spins and coins. You can also get free spins by inviting your friends to watch video ads with you.
-
Complete events and missions: You can participate in various events and missions in the game to get more rewards and prizes. You can also get free spins by completing card sets and advancing to new villages.
-
-
Join online communities and trade cards
-
Another way to enjoy Coin Master without hacking is to join online communities and trade cards with other players. You can find many groups and forums on platforms like Facebook, Reddit, Discord, and Telegram where you can chat with other Coin Master fans, share tips and tricks, and exchange cards. You can also get free spins by sending and receiving gifts from your friends in the game.
-
Conclusion
-
Coin Master is a fun and addictive casual game that combines slot machines, raids, attacks, and card collections. However, if you want to get unlimited resources in the game, you may be tempted to download a hacked Coin Master apk. However, this is not a wise decision, as it comes with many risks and drawbacks. You may end up infecting your device with malware, getting banned from the game, or missing out on the latest updates and features.
-
Instead of downloading a hacked Coin Master apk, you should try some alternatives that can help you enjoy the game without cheating. You can use some legitimate cheats and tips to get more coins, spins, and cards. You can also join online communities and trade cards with other players. These methods are safer, easier, and more fun than hacking the game.
-
So, is downloading a hacked Coin Master apk worth it? The answer is no. It is not worth risking your device, account, or legal status for some virtual resources. Coin Master is a game that is meant to be played fairly and honestly. It is a game that rewards you for your patience, strategy, and luck. It is a game that you can enjoy with your friends and other players around the world. Don't ruin the fun by hacking the game. Play Coin Master the right way and have a blast!
-
FAQs
-
Here are some frequently asked questions about downloading a hacked Coin Master apk:
-
-
Q: Where can I download a hacked Coin Master apk?
-
A: There are many websites that claim to offer free hacked Coin Master apk files. However, we do not recommend downloading them, as they may contain malware or viruses that can harm your device. They may also get you banned from the game or cause other problems.
-
Q: How can I get free spins and coins in Coin Master without hacking?
-
A: There are many ways to get free spins and coins in Coin Master without hacking. You can claim daily bonuses, watch video ads, complete events and missions, follow Coin Master on social media, invite your friends to join the game, send and receive gifts, and participate in card trading.
-
Q: How can I get rare cards in Coin Master without hacking?
-
A: There are several factors that affect the chances of getting rare cards in Coin Master. Some of them are the level of your village, the theme of the card set, the rarity of the card, and the luck of the draw. You can increase your chances of getting rare cards by advancing to higher villages, completing lower card sets first, buying chests during special events, and joining online communities where you can trade cards with other players.
-
Q: How can I update my hacked Coin Master apk?
-
A: If you have downloaded a hacked Coin Master apk, you may not be able to update it through the official app store. You may have to download a new hacked version from another website or uninstall the hacked version and install the original version. However, this may result in losing your progress or getting banned from the game.
-
Q: Is hacking Coin Master illegal?
-
A: Hacking Coin Master may not be illegal in some countries, but it is definitely unethical and unfair. It violates the terms of service of the game and infringes on the intellectual property rights of the developers. It also ruins the gaming experience for other players who play by the rules.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/AetherSX2 PS2 Emulator Everything You Need to Know About the Best PS2 Emulator for Android.md b/spaces/1phancelerku/anime-remove-background/AetherSX2 PS2 Emulator Everything You Need to Know About the Best PS2 Emulator for Android.md
deleted file mode 100644
index cdd0e51a06e471a986d21aa4fcee8b22603935db..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/AetherSX2 PS2 Emulator Everything You Need to Know About the Best PS2 Emulator for Android.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
How to Download AetherSX2 PS2 Emulator
-
If you are a fan of PlayStation 2 games and want to enjoy them on your Android device, you might have heard of AetherSX2. It is a new and fast PS2 emulator that is based on PCSX2, a well-known emulator for PC. AetherSX2 can run most PS2 games at full speed, with high graphics quality, and online play support. In this article, we will show you how to download and install AetherSX2, how to get the BIOS file from your PS2 console, how to configure the emulator settings and controls, and how to play your favorite PS2 games on your Android device.
-
What is AetherSX2 and why it is the best PS2 emulator for Android
-
AetherSX2 is a free PS2 emulator for Android that was released in late 2021. It is based on PCSX2, which is a long-running and well-established emulator for PC. AetherSX2 uses the same code as PCSX2, but optimizes it for Android devices. It also adds some features that are not available in PCSX2, such as a Vulkan graphics renderer, netplay support, and touch screen controls.
AetherSX2 is widely considered as the best PS2 emulator for Android because of its high compatibility, performance, and accuracy. It can run most PS2 games smoothly, with minimal glitches and bugs. It also supports various enhancements, such as custom resolutions, anti-aliasing, texture filtering, save states, cheats, patches, and more. You can also play online with other players using Nintendo Wi-Fi Connection or LAN.
-
What are the requirements and features of AetherSX2
-
To use AetherSX2, you need an Android device that meets the following requirements:
-
-
A 64-bit ARM (AArch64) CPU (e.g., Snapdragon 845 or better)
-
A GPU that supports Vulkan or OpenGL (e.g., Adreno or Mali)
-
At least 4 GB of RAM
-
At least 8 GB of storage space
-
An Android version of 7.0 or higher
-
-
Some of the features of AetherSX2 are:
-
-
High compatibility: It can run over 90% of the PS2 library, with many games being playable or perfect.
-
High performance: It can achieve full speed emulation on most devices, with low latency and battery consumption.
-
High accuracy: It can emulate the PS2 hardware faithfully, with accurate timing and synchronization.
-
Vulkan graphics renderer: It can improve the graphics quality and speed of the games, especially on devices with Mali GPUs.
-
Netplay support: It can connect to Nintendo Wi-Fi Connection servers or LAN networks for online multiplayer gaming.
-
Touch screen controls: It can emulate the PS2 controller using virtual buttons on the screen.
-
Custom resolutions: It can upscale the games to higher resolutions than the original PS2 output.
-
Anti-aliasing: It can smooth the jagged edges of polygons and textures in games.
To save your settings and controls, tap on "Apply" and then "Back".
-
-
How to play PS2 games on AetherSX2
-
To play PS2 games on AetherSX2, you need to have the game files in ISO format. You can either dump your own PS2 games from your discs or download them from the internet. However, downloading PS2 games may be illegal in some regions, so do it at your own risk. Here are the steps to play PS2 games on AetherSX2:
-
How to dump your PS2 games into ISO files
-
-
You need a PS2 console, a PS2 game disc, a USB flash drive, a PC, and a software called DVD Decrypter.
-
Download DVD Decrypter and install it on your PC.
-
Insert your USB flash drive into your PC and format it to exFAT (FAT32 has a 4 GB file-size limit, which many PS2 ISOs exceed).
-
Insert your PS2 game disc into your PC's DVD drive and launch DVD Decrypter.
-
Select "Mode" and then "ISO" and then "Read".
-
Select your DVD drive as the source and your USB flash drive as the destination.
-
Click on the green arrow button to start the dumping process. It may take some time depending on the size of the game.
-
Once the dumping is done, you will have an ISO file of your PS2 game on your USB flash drive. (A command-line alternative to DVD Decrypter is sketched after this list.)
-
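If you prefer to avoid the DVD Decrypter GUI, the same raw copy can be done from the command line. The sketch below is a minimal Python version of that idea, assuming a Linux machine whose optical drive appears as /dev/sr0 and can read the disc; the device path and output file name are assumptions to adjust for your setup.

```python
import shutil

# Assumed device node and output name -- adjust for your system.
DEVICE = "/dev/sr0"
OUTPUT = "ps2_game.iso"

def dump_disc(device: str, output: str, chunk_size: int = 1 << 20) -> None:
    """Copy the raw contents of an optical disc to an ISO file in 1 MiB chunks."""
    with open(device, "rb") as src, open(output, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk_size)

if __name__ == "__main__":
    dump_disc(DEVICE, OUTPUT)
    print(f"Wrote {OUTPUT} from {DEVICE}")
```

Reading the drive directly usually requires root privileges (for example, running the script with sudo), and this generally works for DVD-based PS2 games; CD-based titles may need a dedicated dumping tool. The loading steps in the next section are unchanged either way.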
-
How to load and run the games on the emulator
-
-
Launch AetherSX2 on your Android device.
-
To load a game, tap on the folder icon on the top right corner of the screen.
-
Navigate to the location where you have stored your ISO files. You can use your internal storage or an external SD card.
-
Select the ISO file of the game you want to play and tap on it.
-
The emulator will boot up and run the game. You will see the PS2 logo and then the game's intro screen.
-
To play the game, use the virtual controller on the screen. You can also connect a physical controller via Bluetooth or USB if you prefer.
-
-
How to use save states, cheats, and other options
-
To use save states, cheats, and other options, you need to access the emulator's menu while playing a game. Here are the steps to do that:
-
-
-
To access the menu, tap on the three horizontal lines icon on the top left corner of the screen.
-
To use save states, tap on "Save State" or "Load State". You will see 10 slots for saving and loading your game progress. You can also use hotkeys to quickly save and load states.
-
To use cheats, tap on "Cheats". You will see a list of available cheats for the game you are playing. You can enable or disable them by tapping on them. You can also add your own cheats by tapping on "Add Cheat".
-
To use other options, tap on "Options". You will see various options for graphics, sound, input, system, network, etc. You can adjust them according to your preference and device capability. For example, you can change the resolution, renderer, audio latency, controller layout, BIOS file location, etc.
-
To resume playing, tap on "Resume".
-
-
Conclusion
-
AetherSX2 is a powerful and versatile PS2 emulator for Android that can run most PS2 games at full speed and high quality. It is easy to download and install, and it has many features and options to enhance your gaming experience. You can also play online with other players using Nintendo Wi-Fi Connection or LAN. If you are looking for a way to enjoy PS2 games on your Android device, you should definitely give AetherSX2 a try.
-
Here are some tips and tricks for using AetherSX2:
-
-
Make sure your device meets the minimum requirements for running AetherSX2. If not, you may experience lagging, crashing, or compatibility issues.
-
Dump your own BIOS file from your own PS2 console. Do not use a BIOS file from another source, as it may not work or may be illegal.
-
Dump your own PS2 games from your discs or download them from the internet. However, downloading PS2 games may be illegal in some regions, so do it at your own risk.
-
Configure the emulator settings and controls according to your preference and device capability. Experiment with different options until you find the best balance between performance and quality.
-
Use save states, cheats, and patches to enhance your gaming experience. However, do not abuse them or use them to cheat online, as it may ruin the fun for yourself and others.
-
Play online with other players using Nintendo Wi-Fi Connection or LAN. However, make sure you have a stable internet connection and a compatible game version, as it may affect the gameplay and synchronization.
-
-
FAQs
-
Here are some of the frequently asked questions about AetherSX2:
-
What are some of the best PS2 games to play on AetherSX2?
-
There are many PS2 games that you can play on AetherSX2, but some of the most popular and recommended ones are:
-
-
God of War and God of War II: Action-adventure games that follow the epic journey of Kratos, a Spartan warrior who battles against the gods and monsters of Greek mythology.
-
Grand Theft Auto: San Andreas: An open-world game that lets you explore the fictional state of San Andreas, based on California and Nevada, and engage in various missions and activities.
-
Shadow of the Colossus: An adventure game that challenges you to defeat 16 giant creatures called colossi, using only your sword, bow, and horse.
-
Metal Gear Solid 3: Snake Eater: A stealth game that takes place in the Cold War era, where you play as Naked Snake, a special agent who must infiltrate a Soviet base and stop a nuclear threat.
-
Final Fantasy X: A role-playing game that follows the story of Tidus, a young athlete who is transported to a fantasy world called Spira, where he joins a group of adventurers to defeat a monstrous entity called Sin.
-
-
How can I improve the graphics and sound quality of the games?
-
You can improve the graphics and sound quality of the games by adjusting the emulator settings. Here are some of the options you can use:
-
-
Vulkan graphics renderer: This option can improve the graphics quality and speed of the games, especially on devices with Mali GPUs. However, it may not work on some devices or games, so you may need to switch to OpenGL if you encounter any issues.
-
Custom resolutions: This option can upscale the games to higher resolutions than the original PS2 output. However, it may also increase the CPU and GPU load, so you may need to lower it if you experience lagging or overheating.
-
Anti-aliasing: This option can smooth the jagged edges of the polygons and textures in the games. However, it may also reduce the performance and compatibility, so you may need to disable it if you encounter any problems.
-
Texture filtering: This option can enhance the sharpness and clarity of the textures in the games. However, it may also cause some graphical glitches or artifacts, so you may need to turn it off if you notice any errors.
-
Audio latency: This option can reduce the delay between the sound output and the game action. However, it may also cause some audio crackling or distortion, so you may need to increase it if you hear any noise.
-
-
What are some of the common issues and solutions for AetherSX2?
-
Some of the common issues and solutions for AetherSX2 are:
-
-
The emulator crashes or freezes: This may be caused by insufficient device resources, incompatible game versions, corrupted BIOS or ISO files, or incorrect emulator settings. To fix this, you can try clearing your device cache, updating your game files, verifying your BIOS and ISO files, or resetting your emulator settings.
-
The game runs slowly or lags: This may be caused by high device temperature, low battery level, background apps, or unsuitable emulator settings. To fix this, you can try cooling down your device, charging your battery, closing other apps, or lowering your emulator settings.
-
The game has graphical or sound glitches: This may be caused by incompatible game versions, corrupted BIOS or ISO files, or inappropriate emulator settings. To fix this, you can try updating your game files, verifying your BIOS and ISO files, or changing your emulator settings.
-
The game does not load or run: This may be caused by unsupported game versions, missing BIOS or ISO files, or incorrect emulator settings. To fix this, you can try checking the compatibility list, obtaining the correct BIOS and ISO files, or adjusting your emulator settings.
-
-
How can I play multiplayer games on AetherSX2?
-
You can play multiplayer games on AetherSX2 using two methods: Nintendo Wi-Fi Connection or LAN. Here are the steps for both methods:
-
Nintendo Wi-Fi Connection
-
-
Make sure you have a stable internet connection and a compatible game version.
-
Launch AetherSX2 and load the game you want to play.
-
Access the emulator's menu and tap on "Network".
-
Tap on "Enable Nintendo Wi-Fi Connection" and then "Connect".
-
Wait for the emulator to connect to the Nintendo server and obtain an IP address.
-
Resume playing and access the game's online mode.
-
Follow the game's instructions to join or create a room with other players.
-
-
LAN
-
-
Make sure you have a local network and a compatible game version.
-
Launch AetherSX2 and load the game you want to play.
-
Access the emulator's menu and tap on "Network".
-
Tap on "Enable LAN" and then "Host" or "Join".
-
If you are hosting, wait for the emulator to create a room and display a room ID. If you are joining, enter the room ID of the host.
-
Resume playing and access the game's multiplayer mode.
-
Follow the game's instructions to start or join a match with other players.
-
-
Is AetherSX2 legal and safe to use?
-
AetherSX2 is legal and safe to use as long as you follow some rules:
-
-
You must own a PS2 console and dump your own BIOS file from it. You cannot use a BIOS file from another source, as it may not work or may be illegal.
-
You must own the PS2 games you want to play and dump them into ISO files. You cannot download PS2 games from the internet, as it may be illegal in some regions.
-
You must not distribute or share your BIOS or ISO files with anyone else, as it may violate the copyright laws.
-
You must not use cheats or hacks to gain an unfair advantage or disrupt the online gameplay of other players, as it may ruin the fun for yourself and others.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Dhoom 2 Movie in 360p The Best Action Movie of 2006.md b/spaces/1phancelerku/anime-remove-background/Download Dhoom 2 Movie in 360p The Best Action Movie of 2006.md
deleted file mode 100644
index ba33f5922ea6c6ebceddb4e433ffa227739575c8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Dhoom 2 Movie in 360p The Best Action Movie of 2006.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
Dhoom 2 Movie Download 360p: How to Watch the Action Thriller Online
-
Dhoom 2 is a 2006 Indian Hindi-language action thriller film that is a sequel to the 2004 film Dhoom. It is the second installment in the Dhoom series and features Hrithik Roshan, Abhishek Bachchan, Aishwarya Rai, Bipasha Basu, and Uday Chopra in the lead roles. The film follows a high-tech international thief who steals valuable artifacts from around the world and teams up with a clever female accomplice, while being pursued by three police officers who are determined to catch them.
Dhoom 2 is one of the most popular and successful movies in Bollywood history. It received positive reviews from critics and audiences alike for its action sequences, soundtrack, cinematography, and cast performances. It also became the highest-grossing Hindi film of 2006 and was declared a blockbuster by Box Office India. It also won several awards and nominations, including Best Actor for Hrithik Roshan at the Filmfare Awards.
-
If you are a fan of action movies and want to watch Dhoom 2 online, you might be wondering how to download or stream it in 360p quality. 360p is a low-resolution video format that offers decent picture quality and fast loading speed. It is ideal for watching movies on mobile devices or slow internet connections. In this article, we will show you how to watch Dhoom 2 online in 360p quality legally and safely. We will also compare different platforms and services that offer the movie online and give you some tips and tricks to enhance your viewing experience.
-
Dhoom 2 Movie Details
-
Before we get into the details of how to watch Dhoom 2 online in 360p quality, let us first take a look at some of the basic information about the movie. Here are some of the key facts about Dhoom 2:
-
-
Release date: 24 November 2006
-
Runtime: 152 minutes
-
Budget: ₹350 million
-
Box office: ₹1.5 billion
-
Ratings: 6.5/10 on IMDb, 92% on Google users
-
-
The movie was directed by Sanjay Gadhvi and written by Vijay Krishna Acharya, based on a story by producer Aditya Chopra. The movie was shot primarily in India, Durban, and Rio de Janeiro, becoming the first major Hindi film to be shot in Brazil. The movie has a star-studded cast that includes:
-
-
Hrithik Roshan as Aryan/Mr. A, a fearless thief who steals valuable artifacts from around the world.
-
Abhishek Bachchan as ACP Jai Dixit, a dedicated police officer who is assigned to catch Mr. A.
-
Aishwarya Rai as Sunehri, a petty thief who becomes Mr. A's partner in crime.
-
Dhoom 2 Movie Review
-
Dhoom 2 is a fast-paced and entertaining action thriller that keeps you hooked from start to finish. The movie has a simple but engaging plot that revolves around the cat-and-mouse game between Mr. A and the police. The movie also has some twists and turns that keep you guessing about the motives and identities of the characters.
-
The movie is a visual treat with stunning locations, stylish costumes, and impressive action sequences. The movie showcases some of the best stunts and chase scenes in Bollywood, such as the opening train robbery, the skydiving heist, the bike chase, and the climax. The movie also has a catchy and upbeat soundtrack that complements the mood and tone of the movie. The songs "Dhoom Again", "Crazy Kiya Re", and "Touch Me" are especially popular and memorable.
-
dhoom 2 full movie free download 360p
-dhoom 2 2006 hindi movie 480p bluray
-dhoom 2 movie online watch in 360p
-dhoom 2 hd video songs download 360p
-dhoom 2 movie download filmywap 360p
-dhoom 2 full movie download mp4 360p
-dhoom 2 movie download in hindi 360p
-dhoom 2 full movie watch online free 360p
-dhoom 2 movie download pagalworld 360p
-dhoom 2 full movie download filmyzilla 360p
-dhoom 2 movie download khatrimaza 360p
-dhoom 2 full movie download worldfree4u 360p
-dhoom 2 movie download moviescounter 360p
-dhoom 2 full movie download bolly4u 360p
-dhoom 2 movie download skymovieshd 360p
-dhoom 2 full movie download coolmoviez 360p
-dhoom 2 movie download hdpopcorns 360p
-dhoom 2 full movie download mkvhub 360p
-dhoom 2 movie download moviesflixpro 360p
-dhoom 2 full movie download moviespointinhd.com in hindi dubbed in hd quality in low size (300mb) in (480p) (720p) (1080p)
-dhoom again song download mp3 free in high quality in low size (3mb) in (320kbps)
-crazy kiya re remix song download mp3 free in high quality in low size (3mb) in (320kbps)
-touch me song download mp3 free in high quality in low size (3mb) in (320kbps)
-dil laga na song download mp3 free in high quality in low size (3mb) in (320kbps)
-my name is ali song download mp3 free in high quality in low size (3mb) in (320kbps)
-hrithik roshan dance video download mp4 free in high quality in low size (10mb) in (360p)
-aishwarya rai kiss scene video download mp4 free in high quality in low size (5mb) in (360p)
-abhishek bachchan action scene video download mp4 free in high quality in low size (10mb) in (360p)
-uday chopra comedy scene video download mp4 free in high quality in low size (5mb) in (360p)
-bipasha basu hot scene video download mp4 free in high quality in low size (5mb) in (360p)
-dhoom machale song lyrics pdf file free download
-dhoom machale song ringtone free download for mobile phone
-dhoom machale song instrumental music free download for background music
-dhoom machale song karaoke track free download for singing practice
-dhoom machale song video status free download for whatsapp and instagram stories
-how to watch dhoom 2 full movie online for free without downloading, signing up or paying
-where to watch dhoom 2 full movie online for free legally and safely
-why to watch dhoom 2 full movie online: amazing performances by the star cast, especially Hrithik Roshan as a master thief who steals priceless artifacts from around the world, plus breathtaking stunts, thrilling chases, catchy songs, stylish costumes, exotic locations and great direction
-
The movie also boasts some stellar performances by the cast. Hrithik Roshan steals the show as Mr. A, who is charismatic, cunning, and cool. He displays his versatility and talent as an actor, dancer, and action star. Abhishek Bachchan is convincing as Jai Dixit, who is determined, smart, and loyal. He shares good chemistry and camaraderie with Uday Chopra, who plays his sidekick Ali. Aishwarya Rai is stunning as Sunehri, who is seductive, clever, and unpredictable. She also shares sizzling chemistry with Hrithik Roshan, making them one of the most iconic pairs in Bollywood. Bipasha Basu is also impressive as Shonali Bose, who is confident, brave, and glamorous.
-
The movie is not without its flaws, however. It has some logical loopholes and inconsistencies that might bother attentive viewers, as well as cheesy dialogue and corny humor that can feel dated or cringeworthy. Some scenes might also come across as offensive or insensitive to certain audiences, such as the portrayal of racial stereotypes or the objectification of women.
-
Overall, Dhoom 2 is a fun and enjoyable movie that delivers what it promises: a thrilling and entertaining ride that will make you go "Dhoom"!
-
Dhoom 2 Movie Download 360p Options
-
If you want to watch Dhoom 2 online in 360p quality, you have several options to choose from. However, not all of them are legal or safe: some might expose you to malware, viruses, or phishing scams, while others violate copyright laws or the terms of service of the platforms that offer the movie.
-
Therefore, we recommend that you use only legal and safe ways to download or stream Dhoom 2 online in 360p quality. Here are some of them:
-
-
Amazon Prime Video: Amazon Prime Video is one of the most popular and reliable platforms to watch movies and shows online. It offers Dhoom 2 in 360p quality for both download and streaming. You can watch it on your computer, smartphone, tablet, or smart TV. You need to have an Amazon Prime membership to access Prime Video, which costs $12.99 per month or $119 per year in the US. You can also get a 30-day free trial if you are a new user.
-
YouTube: YouTube is another platform that offers Dhoom 2 in 360p quality for both download and streaming. You can watch it on your computer, smartphone, tablet, or smart TV. You need to have a YouTube account to access the movie, which is free to create. You can also rent or buy the movie on YouTube for $3.99 or $9.99 respectively in the US.
-
Google Play Movies & TV: Google Play Movies & TV is another platform that offers Dhoom 2 in 360p quality for both download and streaming. You can watch it on your computer, smartphone, tablet, or smart TV. You need to have a Google account to access the platform, which is free to create. You can also rent or buy the movie on Google Play Movies & TV for $3.99 or $9.99 respectively in the US.
-
-
These are some of the best options to watch Dhoom 2 online in 360p quality legally and safely. However, there might be other platforms or services that offer the movie online in different regions or countries. You can check their availability and pricing by searching online or visiting their official websites.
-
Tips and Tricks to Enhance Your Viewing Experience
-
Watching Dhoom 2 online in 360p quality can be a smooth and enjoyable experience if you prepare a little beforehand. Use a stable Wi-Fi or mobile data connection and close other apps and downloads that compete for bandwidth. If you expect a patchy connection, download the movie in advance instead of streaming it. Since 360p looks sharper on small screens, a phone or tablet is a better fit than a large HD TV. Finally, stick to the legal platforms listed above to avoid malware, intrusive ads, and poor-quality copies.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Stickman Superhero 1.2 Mod APK with Unlimited Money and Gems.md b/spaces/1phancelerku/anime-remove-background/Download Stickman Superhero 1.2 Mod APK with Unlimited Money and Gems.md
deleted file mode 100644
index e96924ea164cea73c9f8800d332464cfa3790e13..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Stickman Superhero 1.2 Mod APK with Unlimited Money and Gems.md
+++ /dev/null
@@ -1,184 +0,0 @@
-
-
Stickman Superhero 1.2 Mod APK: A Guide for Gamers
-
If you are looking for a fun and exciting game that combines stickman action and superhero simulation, then you might want to check out Stickman Superhero 1.2 Mod APK. This is a modified version of the original game that offers you unlimited access to all the features, items, and resources of the game. In this article, we will tell you everything you need to know about this game, including how to download and install it, how to play it, how to customize and upgrade your stickman superhero, and the pros and cons of using this mod APK.
Stickman Superhero is an action-packed gaming app where you take control of a stickman character with superhero abilities. You can choose from a variety of superhero costumes, abilities, and powers, and use them to fight against various enemies and save the city from destruction. You can also use different weapons and gadgets, such as swords, guns, and devices that can help you in battles.
-
The game offers a wide range of modes, missions, and challenges for you to enjoy. You can test your skills in an array of stickman battles and missions, while becoming the ultimate stickman hero. You can also explore the open world city and interact with various objects and characters. Whether you are fighting off hordes of enemies, saving civilians, or defeating powerful bosses, this game will keep you on the edge of your seat.
-
What is a mod APK and why use it?
-
A mod APK is a modified version of an original game or app that offers some extra features or benefits that are not available in the original version. For example, a mod APK may offer unlimited money, unlocked items, or an ad-free experience.
-
Using a mod APK can enhance your gaming experience by giving you more freedom and flexibility to play the game as you wish. You can access all the features, items, and resources of the game without any limitations or restrictions. You can also avoid annoying ads and pop-ups that may interrupt your gaming experience.
-
However, using a mod APK also comes with some disadvantages and risks. For example, a mod APK may not be compatible with all devices or operating systems. It may also contain viruses or malware that can harm your device or steal your personal information. Moreover, using a mod APK may violate the terms and conditions of the original game or app, and result in a ban or legal action.
-
stickman superhero mod apk unlimited money
-stickman superhero mod apk download latest version
-stickman superhero mod apk happymod
-stickman superhero mod apk android 1
-stickman superhero mod apk free purchase
-stickman superhero mod apk no ads
-stickman superhero mod apk unlocked everything
-stickman superhero mod apk 1.9.2
-stickman superhero mod apk naxeex llc
-stickman superhero mod apk offline
-stickman superhero hack mod apk
-stickman superhero cheat mod apk
-stickman superhero premium mod apk
-stickman superhero pro mod apk
-stickman superhero full mod apk
-stickman superhero mega mod apk
-stickman superhero vip mod apk
-stickman superhero 3d mod apk
-stickman superhero action mod apk
-stickman superhero adventure mod apk
-stickman superhero simulator mod apk
-stickman superhero fighting mod apk
-stickman superhero city mod apk
-stickman superhero rope hero mod apk
-stickman superhero spider man mod apk
-stickman superhero iron man mod apk
-stickman superhero captain america mod apk
-stickman superhero hulk mod apk
-stickman superhero thor mod apk
-stickman superhero ant man mod apk
-stickman superhero black panther mod apk
-stickman superhero deadpool mod apk
-stickman superhero batman mod apk
-stickman superhero superman mod apk
-stickman superhero flash mod apk
-stickman superhero wonder woman mod apk
-stickman superhero green lantern mod apk
-stickman superhero aquaman mod apk
-stickman superhero cyborg mod apk
-stickman superhero shazam mod apk
-download game stickman superhero 1.2 mod apk
-download free android game stickman superhero 1.2 mod apk
-download best offline android game 2023 stickman superhero 1.2 mod apk
-download best android action adventure game 2023 stickman superhero 1.2 mod apk
-download latest android stickman superhero game 2023 stickman superhero 1.2 mod apk
-how to install an android game with obb and data files 2023 stickman superhero 1.2 mod apk
-how to cheat in android games without root and without apps 2023 stickman superhero 1.2 mod apk
-how to get unlimited money and gems in android games 2023 stickman superhero 1.2 mod apk
-
Therefore, before you decide to use a mod APK, you should weigh the pros and cons carefully and make sure you download it from a reliable and trustworthy source.
-
How to download and install Stickman Superhero 1.2 Mod APK?
-
The steps to download the mod APK from a reliable source
-
If you want to download and install Stickman Superhero 1.2 Mod APK, you need to follow these steps:
-
-
Go to a reputable website that offers the mod APK file for download. For example, you can visit [this link] to download the mod APK file.
-
Click on the download button and wait for the file to be downloaded on your device.
-
Once the file is downloaded, locate it in your device's file manager and tap on it to open it.
-
-
The steps to install the mod APK on your device
-
Before you install the mod APK on your device, you need to make sure that you have enabled the installation of unknown sources on your device. To do this, go to your device's settings, then security, then unknown sources, and toggle it on.
-
After you have enabled the installation of unknown sources, you can proceed with these steps:
-
-
Tap on the mod APK file that you have opened in the previous step.
-
A pop-up window will appear asking you to confirm the installation. Tap on install and wait for the installation process to complete.
-
Once the installation is done, you can launch the game from your device's app drawer or home screen.
-
-
Congratulations! You have successfully downloaded and installed Stickman Superhero 1.2 Mod APK on your device. Now you can enjoy playing the game with all its features and benefits.
-
How to play Stickman Superhero 1.2 Mod APK?
-
The basic gameplay and controls of the game
-
Stickman Superhero 1.2 Mod APK is easy to play and control. The game has a simple and intuitive user interface that shows you all the information and options you need to play the game. You can see your health bar, energy bar, weapon bar, ability bar, and currency bar at the top of the screen. You can also see your mission objectives, map, and pause button at the bottom of the screen.
-
The game uses touch controls to move and interact with your stickman superhero. You can use the virtual joystick on the left side of the screen to move your stickman superhero around. You can use the buttons on the right side of the screen to perform various actions, such as jumping, attacking, using abilities, using weapons, and using gadgets.
-
You can also swipe on the screen to change the camera angle and zoom in or out. You can also tap on the screen to interact with various objects and characters in the game world.
-
The different modes, missions, and challenges of the game
-
Stickman Superhero 1.2 Mod APK offers a variety of modes, missions, and challenges for you to enjoy. You can choose from these modes:
-
-
Campaign mode: This is the main mode of the game where you follow a storyline and complete various missions and tasks. You can unlock new costumes, abilities, weapons, and gadgets as you progress through the campaign mode.
-
Free mode: This is a mode where you can explore the open world city and do whatever you want. You can fight enemies, save civilians, collect resources, or just have fun.
-
Boss mode: This is a mode where you face off against powerful bosses that have unique abilities and attacks. You need to use your skills and strategy to defeat them.
-
Arena mode: This is a mode where you compete against other players online in different arenas. You can choose from different modes such as deathmatch, team deathmatch, capture the flag, or king of the hill.
-
-
You can also take on various challenges that test your skills and abilities in different ways. You can earn rewards and achievements by completing these challenges. Some examples of challenges are:
-
-
Survival challenge: This is a challenge where you have to survive as long as possible against waves of enemies that get stronger and more numerous over time.
-
Time trial challenge: This is a challenge where you have to complete a certain mission or task within a given time limit.
-
Stealth challenge: This is a challenge where you have to avoid detection and complete a certain mission or task without alerting the enemies.
-
Accuracy challenge: This is a challenge where you have to hit a certain number of targets with your weapons or abilities within a given time limit.
-
-
These modes, missions, and challenges will keep you entertained and engaged for hours as you play Stickman Superhero 1.2 Mod APK.
-
How to customize your stickman superhero?
-
The different costumes, abilities, and powers you can choose from
-
One of the best features of Stickman Superhero 1.2 Mod APK is that you can customize your stickman superhero according to your preferences and style. You can choose from a variety of costumes, abilities, and powers that will make your stickman superhero unique and awesome.
-
You can access the customization menu by tapping on the costume icon on the top right corner of the screen. Here, you can see all the options you have for customizing your stickman superhero. You can choose from these categories:
-
-
Costumes: You can choose from different superhero costumes that are inspired by popular comic book and movie characters. For example, you can choose to dress up as Spider-Man, Iron Man, Batman, Superman, or Deadpool. Each costume has its own appearance and stats that affect your stickman superhero's performance.
-
Abilities: You can choose from different abilities that give your stickman superhero special skills and powers. For example, you can choose to have super strength, super speed, flight, invisibility, or telekinesis. Each ability has its own energy cost and cooldown time that affect your stickman superhero's gameplay.
-
Powers: You can choose from different powers that give your stickman superhero offensive or defensive capabilities. For example, you can choose to have fireballs, lightning bolts, ice blasts, or force fields. Each power has its own damage and range that affect your stickman superhero's combat.
-
-
You can mix and match different costumes, abilities, and powers to create your own unique stickman superhero. You can also change your customization options anytime you want by going back to the customization menu.
-
The different weapons and gadgets you can use in battles
-
In addition to costumes, abilities, and powers, you can also use different weapons and gadgets to enhance your stickman superhero's performance in battles. You can access the weapon menu by tapping on the weapon icon on the top right corner of the screen. Here, you can see all the options you have for using weapons and gadgets. You can choose from these categories:
-
-
Weapons: You can choose from different weapons that can help you attack or defend yourself in battles. For example, you can choose to use swords, guns, grenades, or rockets. Each weapon has its own damage, accuracy, and ammo that affect your stickman superhero's combat.
-
Gadgets: You can choose from different gadgets that can help you with various tasks or situations in battles. For example, you can choose to use jetpacks, grappling hooks, parachutes, or drones. Each gadget has its own function and usage that affect your stickman superhero's gameplay.
-
-
You can switch between different weapons and gadgets by swiping on the weapon bar on the top right corner of the screen. You can also upgrade your weapons and gadgets by using the resources and currency you collect in the game.
-
How to upgrade your stickman superhero?
-
The benefits of upgrading your abilities and gear
-
As you play Stickman Superhero 1.2 Mod APK, you will notice that the game becomes more challenging and difficult as you progress through the modes, missions, and challenges. To keep up with the increasing difficulty level, you need to upgrade your stickman superhero's abilities and gear.
-
Upgrading your abilities and gear will give you various benefits such as:
-
-
Increasing your stickman superhero's health, energy, damage, defense, speed, and other stats.
-
Improving your stickman superhero's performance in battles by enhancing their skills and powers.
-
Unlocking new costumes, abilities, powers, weapons, and gadgets that offer more options and variety for customizing your stickman superhero.
-
Gaining an edge over your enemies and competitors by having more options and variety for playing the game.
-
-
Therefore, upgrading your abilities and gear is essential for enjoying the game to the fullest and becoming the ultimate stickman superhero.
-
The resources and currency you need to upgrade
-
To upgrade your abilities and gear, you need to use the resources and currency that you collect in the game. There are two types of resources and currency in the game:
-
-
Coins: These are the basic currency of the game that you can use to buy and upgrade your weapons and gadgets. You can earn coins by completing missions, challenges, and achievements, or by collecting them in the game world.
-
Gems: These are the premium currency of the game that you can use to buy and upgrade your costumes, abilities, and powers. You can earn gems by completing special missions, challenges, and achievements, or by buying them with real money.
-
-
You can see how many coins and gems you have by looking at the currency bar on the top right corner of the screen. You can also see how many coins and gems you need to upgrade your abilities and gear by tapping on them in the customization or weapon menu.
-
To upgrade your abilities and gear, you need to follow these steps:
-
-
Go to the customization or weapon menu by tapping on the costume or weapon icon on the top right corner of the screen.
-
Select the ability or gear that you want to upgrade by tapping on it.
-
If you have enough coins or gems to upgrade it, tap on the upgrade button and confirm your choice.
-
If you do not have enough coins or gems to upgrade it, you can either earn more by playing the game or buy more with real money by tapping on the shop button.
-
-
Once you have upgraded your ability or gear, you will see its new stats and appearance. You can also see its level and progress bar that indicate how much more you can upgrade it.
-
What are the pros and cons of Stickman Superhero 1.2 Mod APK?
-
The advantages of using the mod APK over the original game
-
As we have mentioned before, using Stickman Superhero 1.2 Mod APK can enhance your gaming experience by giving you more freedom and flexibility to play the game as you wish. Some of the advantages of using the mod APK over the original game are:
-
-
You can access all the features, items, and resources of the game without any limitations or restrictions. You do not have to wait for timers, watch ads, or spend real money to enjoy the game.
-
You can customize your stickman superhero with any combination of costumes, abilities, powers, weapons, and gadgets that you like. You do not have to unlock them or earn them in the game.
-
You can play the game with more ease and fun by having unlimited health, energy, damage, defense, speed, and other stats. You do not have to worry about dying, running out of energy, or losing battles.
-
-
These advantages can make your gaming experience more enjoyable and satisfying as you play Stickman Superhero 1.2 Mod APK.
-
The disadvantages and risks of using the mod APK
-
However, using Stickman Superhero 1.2 Mod APK also comes with some disadvantages and risks that you should be aware of. Some of the disadvantages and risks of using the mod APK over the original game are:
-
-
You may encounter some bugs, glitches, or errors that may affect your gaming experience. The mod APK may not be updated or optimized for the latest version of the game or your device.
-
You may expose your device or personal information to viruses or malware that may harm your device or steal your data. The mod APK may contain malicious code or links that may infect your device or redirect you to unsafe websites.
-
You may violate the terms and conditions of the original game or app, and result in a ban or legal action. The mod APK may be detected by the game's security system or reported by other players, and lead to your account being suspended or terminated.
-
-
These disadvantages and risks can make your gaming experience more frustrating and risky as you use Stickman Superhero 1.2 Mod APK.
-
Conclusion
-
A summary of the main points of the article
-
In conclusion, Stickman Superhero 1.2 Mod APK is a modified version of the original game that offers you unlimited access to all the features, items, and resources of the game. It is a fun and exciting game that combines stickman action and superhero simulation, where you can customize your stickman superhero with various costumes, abilities, powers, weapons, and gadgets, and use them to fight against enemies and save the city from destruction. You can also enjoy a wide range of modes, missions, and challenges that will test your skills and abilities in different ways.
-
A call to action for the readers to try out the game
-
If you are interested in trying out this game, you can download and install Stickman Superhero 1.2 Mod APK by following the steps we have provided in this article. However, you should also be aware of the disadvantages and risks of using the mod APK over the original game, and make sure you download it from a reliable and trustworthy source. You should also respect the rights and property of the original game developers and publishers, and use the mod APK at your own risk and responsibility.
-
We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading and happy gaming!
-
FAQs
-
Is Stickman Superhero 1.2 Mod APK safe to use?
-
Stickman Superhero 1.2 Mod APK is not guaranteed to be safe to use, as it may contain viruses or malware that can harm your device or steal your personal information. It may also violate the terms and conditions of the original game or app, and result in a ban or legal action. Therefore, you should use it at your own risk and responsibility, and make sure you download it from a reliable and trustworthy source.
-
Is Stickman Superhero 1.2 Mod APK compatible with all devices?
-
Stickman Superhero 1.2 Mod APK may not be compatible with all devices or operating systems, as it may not be updated or optimized for the latest version of the game or your device. It may also encounter some bugs, glitches, or errors that may affect your gaming experience. Therefore, you should check the compatibility of the mod APK with your device before downloading and installing it.
-
How to update Stickman Superhero 1.2 Mod APK?
-
To update Stickman Superhero 1.2 Mod APK, you need to download and install the latest version of the mod APK from a reputable website that offers it for download. You should also delete the previous version of the mod APK from your device before installing the new one. However, you should note that updating the mod APK may cause some issues or problems with your game data or progress, so you should back up your data before updating.
-
How to uninstall Stickman Superhero 1.2 Mod APK?
-
To uninstall Stickman Superhero 1.2 Mod APK, you need to follow these steps:
-
-
Go to your device's settings, then apps, then Stickman Superhero 1.2 Mod APK.
-
Tap on uninstall and confirm your choice.
-
Once the uninstallation is done, you can delete the mod APK file from your device's file manager.
-
-
You have successfully uninstalled Stickman Superhero 1.2 Mod APK from your device.
-
Where can I find more information about Stickman Superhero 1.2 Mod APK?
-
If you want to find more information about Stickman Superhero 1.2 Mod APK, you can visit [this link] where you can find more details, reviews, screenshots, videos and more about the game and the mod APK. You can also join the online community of stickman superhero fans and share your thoughts, tips, and feedback with other players.
-
-
\ No newline at end of file
diff --git a/spaces/A-Celsius/ADR_Predictor/README.md b/spaces/A-Celsius/ADR_Predictor/README.md
deleted file mode 100644
index 099c8a2cded47d60e198b2cf72fd0144060f5161..0000000000000000000000000000000000000000
--- a/spaces/A-Celsius/ADR_Predictor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ADR Predictor
-emoji: 🐢
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/A00001/bingothoo/src/components/chat-notification.tsx b/spaces/A00001/bingothoo/src/components/chat-notification.tsx
deleted file mode 100644
index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/chat-notification.tsx
+++ /dev/null
@@ -1,77 +0,0 @@
-import { useEffect } from 'react'
-import Image from 'next/image'
-
-import IconWarning from '@/assets/images/warning.svg'
-import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types'
-import { ExternalLink } from './external-link'
-import { useBing } from '@/lib/hooks/use-bing'
-
-export interface ChatNotificationProps extends Pick<ReturnType<typeof useBing>, 'bot'> {
- message?: ChatMessageModel
-}
-
-function getAction(error: ChatError, reset: () => void) {
- if (error.code === ErrorCode.THROTTLE_LIMIT) {
- reset()
- return (
-
- )
-}
diff --git a/spaces/AIFILMS/Pix2Pix-Video/README.md b/spaces/AIFILMS/Pix2Pix-Video/README.md
deleted file mode 100644
index dd54d2a5eabd1939fc435b07ce0101c10e7a221e..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/Pix2Pix-Video/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pix2Pix Video
-emoji: 🎨🎞️
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: true
-duplicated_from: fffiloni/Pix2Pix-Video
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/bert.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/bert.py
deleted file mode 100644
index a83d96d2a77ed05198efc05837522bc88d2499cc..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/bert.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from transformers import BertTokenizer, BertModel
-
-tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
-model = BertModel.from_pretrained("bert-base-uncased")
-text = "Replace me by any text you'd like."
-
-
-def bert_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors="pt")
- output = model(**encoded_input)
- return output
-
-
-from transformers import RobertaTokenizer, RobertaModel
-
-tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
-model = RobertaModel.from_pretrained("roberta-base")
-text = "Replace me by any text you'd like."
-
-
-def Roberta_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors="pt")
- output = model(**encoded_input)
- return output
-
-
-from transformers import BartTokenizer, BartModel
-
-tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
-model = BartModel.from_pretrained("facebook/bart-base")
-text = "Replace me by any text you'd like."
-
-
-def bart_embeddings(text):
- # text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors="pt")
- output = model(**encoded_input)
- return output
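-
-
-if __name__ == "__main__":
-    # Minimal usage sketch: assumes the three pretrained checkpoints above have
-    # downloaded successfully. Each helper returns the raw transformers output,
-    # whose last_hidden_state has shape (batch_size, sequence_length, hidden_size).
-    sample_text = "A dog barking in the distance."
-    print(bert_embeddings(sample_text).last_hidden_state.shape)
-    print(Roberta_embeddings(sample_text).last_hidden_state.shape)
-    print(bart_embeddings(sample_text).last_hidden_state.shape)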
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddim.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddim.py
deleted file mode 100644
index 6d6e9d396c799ce386fd1fa4262f46ac8fceaacf..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddim.py
+++ /dev/null
@@ -1,262 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from functools import partial
-
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
- extract_into_tensor
-
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- # if attr.device != torch.device("cuda"):
- # attr = attr.to(torch.device("cuda"))
- attr = attr.to(self.device)
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- ctmp = conditioning[list(conditioning.keys())[0]]
- while isinstance(ctmp, list): ctmp = ctmp[0]
- cbs = ctmp.shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- # print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
-
- # iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(time_range):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None):
- b, *_, device = *x.shape, x.device
-
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- if isinstance(c, dict):
- assert isinstance(unconditional_conditioning, dict)
- c_in = dict()
- for k in c:
- if isinstance(c[k], list):
- c_in[k] = [torch.cat([
- unconditional_conditioning[k][i],
- c[k][i]]) for i in range(len(c[k]))]
- else:
- c_in[k] = torch.cat([
- unconditional_conditioning[k],
- c[k]])
- elif isinstance(c, list):
- c_in = list()
- assert isinstance(unconditional_conditioning, list)
- for i in range(len(c)):
- c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))
- else:
- c_in = torch.cat([unconditional_conditioning, c])# c/uc shape [b,seq_len=77,dim=1024],c_in shape [b*2,seq_len,dim]
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
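-            # Classifier-free guidance: move the noise prediction from the unconditional estimate toward the conditional one, scaled by the guidance weight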
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- # print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- # iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(time_range):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- return x_dec
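-
-# Minimal usage sketch (comments only; `model`, `c` and `uc` are assumed to be a
-# trained LatentDiffusion instance and its conditional / unconditional embeddings,
-# and the latent shape is illustrative):
-#
-#   sampler = DDIMSampler(model)
-#   samples, _ = sampler.sample(
-#       S=50,                              # number of DDIM steps
-#       batch_size=4,
-#       shape=(4, 64, 64),                 # (C, H, W) of the latent space
-#       conditioning=c,
-#       unconditional_guidance_scale=3.0,
-#       unconditional_conditioning=uc,
-#       eta=0.0,                           # eta=0 gives deterministic DDIM
-#   )
-#   images = model.decode_first_stage(samples)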
\ No newline at end of file
diff --git a/spaces/AIML-TUDA/safe-stable-diffusion/README.md b/spaces/AIML-TUDA/safe-stable-diffusion/README.md
deleted file mode 100644
index f4e90273c7cc0caf0a03e7d9bf63f636ec72bf39..0000000000000000000000000000000000000000
--- a/spaces/AIML-TUDA/safe-stable-diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Safe Stable Diffusion
-colorFrom: blue
-colorTo: red
-emoji: 😇
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: true
-license: creativeml-openrail-m
----
-
diff --git a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/qasrl_model_pipeline.py b/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/qasrl_model_pipeline.py
deleted file mode 100644
index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000
--- a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/qasrl_model_pipeline.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from typing import Optional
-import json
-from argparse import Namespace
-from pathlib import Path
-from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer
-
-def get_markers_for_model(is_t5_model: bool) -> Namespace:
- special_tokens_constants = Namespace()
- if is_t5_model:
- # T5 model have 100 special tokens by default
-        # T5 models have 100 special tokens by default
- special_tokens_constants.separator_output_answers = ""
- special_tokens_constants.separator_output_questions = "" # if using only questions
- special_tokens_constants.separator_output_question_answer = ""
- special_tokens_constants.separator_output_pairs = ""
- special_tokens_constants.predicate_generic_marker = ""
- special_tokens_constants.predicate_verb_marker = ""
- special_tokens_constants.predicate_nominalization_marker = ""
-
- else:
- special_tokens_constants.separator_input_question_predicate = ""
- special_tokens_constants.separator_output_answers = ""
- special_tokens_constants.separator_output_questions = "" # if using only questions
- special_tokens_constants.separator_output_question_answer = ""
- special_tokens_constants.separator_output_pairs = ""
- special_tokens_constants.predicate_generic_marker = ""
- special_tokens_constants.predicate_verb_marker = ""
- special_tokens_constants.predicate_nominalization_marker = ""
- return special_tokens_constants
-
-def load_trained_model(name_or_path):
- import huggingface_hub as HFhub
- tokenizer = AutoTokenizer.from_pretrained(name_or_path)
- model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path)
- # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory
- kwargs_filename = None
- if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files
- kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json")
- elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists():
- kwargs_filename = Path(name_or_path) / "experiment_kwargs.json"
-
- if kwargs_filename:
- preprocessing_kwargs = json.load(open(kwargs_filename))
- # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing
- model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs)
- model.config.update(preprocessing_kwargs)
- return model, tokenizer
-
-
-class QASRL_Pipeline(Text2TextGenerationPipeline):
- def __init__(self, model_repo: str, **kwargs):
- model, tokenizer = load_trained_model(model_repo)
- super().__init__(model, tokenizer, framework="pt")
- self.is_t5_model = "t5" in model.config.model_type
- self.special_tokens = get_markers_for_model(self.is_t5_model)
- self.data_args = model.config.preprocessing_kwargs
-        # backward compatibility - default keyword values implemented in `run_summarization`, thus not saved in `preprocessing_kwargs`
- if "predicate_marker_type" not in vars(self.data_args):
- self.data_args.predicate_marker_type = "generic"
- if "use_bilateral_predicate_marker" not in vars(self.data_args):
- self.data_args.use_bilateral_predicate_marker = True
- if "append_verb_form" not in vars(self.data_args):
- self.data_args.append_verb_form = True
- self._update_config(**kwargs)
-
- def _update_config(self, **kwargs):
- " Update self.model.config with initialization parameters and necessary defaults. "
- # set default values that will always override model.config, but can overriden by __init__ kwargs
- kwargs["max_length"] = kwargs.get("max_length", 80)
- # override model.config with kwargs
- for k,v in kwargs.items():
- self.model.config.__dict__[k] = v
-
- def _sanitize_parameters(self, **kwargs):
- preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {}
- if "predicate_marker" in kwargs:
- preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"]
- if "predicate_type" in kwargs:
- preprocess_kwargs["predicate_type"] = kwargs["predicate_type"]
- if "verb_form" in kwargs:
- preprocess_kwargs["verb_form"] = kwargs["verb_form"]
- return preprocess_kwargs, forward_kwargs, postprocess_kwargs
-
- def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None):
-        # Here, inputs is a string or list of strings; apply string preprocessing
- if isinstance(inputs, str):
- processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form)
- elif hasattr(inputs, "__iter__"):
- processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs]
- else:
- raise ValueError("inputs must be str or Iterable[str]")
- # Now pass to super.preprocess for tokenization
- return super().preprocess(processed_inputs)
-
- def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str:
- sent_tokens = seq.split(" ")
- assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word"
- predicate_idx = sent_tokens.index(predicate_marker)
- sent_tokens.remove(predicate_marker)
- sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)])
- predicate = sent_tokens[predicate_idx]
- sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))])
-
- if self.data_args.predicate_marker_type == "generic":
- predicate_marker = self.special_tokens.predicate_generic_marker
- # In case we want special marker for each predicate type: """
- elif self.data_args.predicate_marker_type == "pred_type":
- assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it"
- assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'"
- predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker ,
- "nominal": self.special_tokens.predicate_nominalization_marker
- }[predicate_type]
-
- if self.data_args.use_bilateral_predicate_marker:
- seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}"
- else:
- seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}"
-
- # embed also verb_form
- if self.data_args.append_verb_form and verb_form is None:
- raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)")
- elif self.data_args.append_verb_form:
- seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} "
- else:
- seq = f"{seq} "
-
- # append source prefix (for t5 models)
- prefix = self._get_source_prefix(predicate_type)
-
- return prefix + seq
-
- def _get_source_prefix(self, predicate_type: Optional[str]):
- if not self.is_t5_model or self.data_args.source_prefix is None:
- return ''
- if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x
- return self.data_args.source_prefix
- if self.data_args.source_prefix == "":
- if predicate_type is None:
- raise ValueError("source_prefix is '' but input no `predicate_type`.")
- else:
- return f"Generate QAs for {predicate_type} QASRL: "
-
- def _forward(self, *args, **kwargs):
- outputs = super()._forward(*args, **kwargs)
- return outputs
-
-
- def postprocess(self, model_outputs):
- output_seq = self.tokenizer.decode(
- model_outputs["output_ids"].squeeze(),
- skip_special_tokens=False,
- clean_up_tokenization_spaces=False,
- )
- output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip()
- qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs)
- qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs]
- return {"generated_text": output_seq,
- "QAs": qas}
-
- def _postrocess_qa(self, seq: str) -> str:
- # split question and answers
- if self.special_tokens.separator_output_question_answer in seq:
- question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2]
- else:
- print("invalid format: no separator between question and answer found...")
- return None
- # question, answer = seq, '' # Or: backoff to only question
- # skip "_" slots in questions
- question = ' '.join(t for t in question.split(' ') if t != '_')
- answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)]
- return {"question": question, "answers": answers}
-
-
-if __name__ == "__main__":
- pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline")
- res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal")
- res2 = pipe(["The doctor was interested in Luke 's treatment .",
- "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10)
- res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal")
- print(res1)
- print(res2)
- print(res3)
-
\ No newline at end of file
diff --git a/spaces/Abdo1Kamr/Text_Translation_And_Text_Formatter_For_Palestinian_Case/app.py b/spaces/Abdo1Kamr/Text_Translation_And_Text_Formatter_For_Palestinian_Case/app.py
deleted file mode 100644
index 7b9ba642135e55fa941e87562b0635c5f534c57b..0000000000000000000000000000000000000000
--- a/spaces/Abdo1Kamr/Text_Translation_And_Text_Formatter_For_Palestinian_Case/app.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import os
-from langchain.llms import OpenAI
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts import PromptTemplate
-import gradio as gr
-import random
-import string
-
-
-
-openai_api_key = os.environ["OPEN_API_KEY"]
-
-
-llm = OpenAI(openai_api_key= openai_api_key, model_name="gpt-3.5-turbo", temperature= 0.0)
-
-template = """Translate the text.
-You are a very professional translator who focuses on political terminologies. Translate the given sentence into {target} language.
-
-Text: {query}
-
-Translated text: """
-
-prompt_template = PromptTemplate(
- input_variables=["target", "query"],
- template=template
-)
-
-
-def random_punctuation(text):
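-    # For each word longer than 3 characters (and for "غزة"/"غزه"), insert an underscore
-    # in the middle of the word and wrap it in asterisks; shorter words pass through unchanged.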
- # punctuation = "#$%&*+-<=>@^_~"
- punctuation = "_"
- new_text = ""
- for word in text.split():
- if (len(word) > 3) or (word == 'غزة') or (word == 'غزه'):
- result = ""
- middle = len(word) // 2
- middel_of_text = word[:middle] + random.choice(punctuation) + word[middle:]
- # result = random.choice(punctuation) + middel_of_text + random.choice(punctuation)
- result = '*' + middel_of_text + "*"
- new_text += result + " "
- else:
- new_text += word + " "
- return new_text.strip()
-
-def MT(query, target):
- translated_text = llm(prompt_template.format(target = target, query=query))
- return translated_text
-
-def gradio_func(query, target, style):
- if len(query) > 1000:
- return "Please make your text shorter than above | الرجاء تصغير النص للترجمه"
-
- if style == "Change the text | تغيير النص":
- return random_punctuation(query)
-
- translated_text = MT(query, target)
-
- if style == "Translate | الترجمه":
- return translated_text
-
- elif style == "Translate and change the text | الترجمه و تغير النص معاً":
- return random_punctuation(translated_text)
-
-
-
-gr.close_all()
-demo = gr.Interface(fn=gradio_func,
- inputs=[
- gr.Textbox(label="Your Text | النص الخاص بك", lines= 4),
-
- gr.Radio(["Arabic", "English", "Mandarin Chinese", "Spanish", "Hindi", "Bengali", "Portuguese", "Russian", "Japanese", "French"],
- label="Languages | اللغات",
- info= "Which language you want to translate? | ما هي اللغه التي تود الترجمه إليها؟"),
-
- gr.Radio(["Translate | الترجمه",
- "Change the text | تغيير النص",
- "Translate and change the text | الترجمه و تغير النص معاً"],
- label="What you want? | ماذا تريد؟")
- ],
- outputs=[
- gr.Textbox(label="Generated Text", lines=4)
-
- ],
- title="Text Translation And Formatter For Palestinian Case, Support Palestine 🇵🇸.",
- description="#### This Model By ChatGPT.",
- examples= [
- ["سكان غزة يتعرضون لإبادة جماعية وسط أنظار العالم الصامت الذي لا يمنع عدوان وإرهاب إسرائيل!", "English", "Translate | الترجمه"],
- ["سكان غزة يتعرضون لإبادة جماعية وسط أنظار العالم الصامت الذي لا يمنع عدوان وإرهاب إسرائيل!", "English", "Change the text | تغيير النص"],
- ["سكان غزة يتعرضون لإبادة جماعية وسط أنظار العالم الصامت الذي لا يمنع عدوان وإرهاب إسرائيل!", "English", "Translate and change the text | الترجمه و تغير النص معاً"]
- ]
- )
-demo.launch(share=True)
\ No newline at end of file
diff --git a/spaces/Adapting/TrendFlow/app.py b/spaces/Adapting/TrendFlow/app.py
deleted file mode 100644
index 868bda0169533f882a9dfe2692a2c698cd0ea90b..0000000000000000000000000000000000000000
--- a/spaces/Adapting/TrendFlow/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import sthelper as helper
-from mypages import (
-    welcome,
-    home,
-)
-
-
-session = helper.OpenSession(
- current_page='welcome',
- page_map=dict(
- welcome = welcome,
- home = home
- )
-)
-
-session.render()
-
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.d.ts
deleted file mode 100644
index aab7ee58391a2d2ffb6b6aac3db61d9c34158a3b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.d.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-import Orbit from './Orbit';
-import Base from '../base/Base';
-
-export default function Factory(
- config?: Base.IConfig
-): Orbit;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateGridSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateGridSizer.js
deleted file mode 100644
index 32b31b3cfb25b0a7ca99b8dc49f5b8369700f6b3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateGridSizer.js
+++ /dev/null
@@ -1,27 +0,0 @@
-import CreateAnySizer from './utils/CreateAnySizer.js';
-import GridSizer from '../../gridsizer/GridSizer.js';
-import Make from '../Make.js';
-
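-// CreateGridSizer builds a GridSizer from declarative data. When the data supplies a
-// createCellContainerCallback config containing a `$child` entry, that entry is detached and
-// used to build each cell's child via Make, while the remaining keys are copied into the
-// cell config handed back to the grid sizer.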
-var CreateGridSizer = function (scene, data, view, styles, customBuilders) {
- // Build createCellContainerCallback
- var createCellContainerCallbackConfig = data.createCellContainerCallback;
- if (createCellContainerCallbackConfig) {
- var childData = createCellContainerCallbackConfig.$child;
- delete createCellContainerCallbackConfig.$child;
-
- data.createCellContainerCallback = function (scene, x, y, config) {
- var child = Make(scene, childData, view, styles, customBuilders);
-
- // Copy config
- for (var key in createCellContainerCallbackConfig) {
- config[key] = createCellContainerCallbackConfig[key];
- }
-
- return child;
- }
- }
-
- return CreateAnySizer(scene, data, view, styles, customBuilders, GridSizer);
-}
-
-export default CreateGridSizer;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/methods/Build.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/methods/Build.js
deleted file mode 100644
index 95a064de33450f43bad5da0611a108a6900c0ce5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/namevaluelabel/methods/Build.js
+++ /dev/null
@@ -1,186 +0,0 @@
-import Sizer from '../../sizer/Sizer.js';
-import LineProgressCanvas from '../../lineprogresscanvas/LineProgressCanvas.js';
-import AddChildMask from '../../../../plugins/gameobjects/container/containerlite/mask/AddChildMask.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-const IsPlainObject = Phaser.Utils.Objects.IsPlainObject;
-
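-// Build assembles the name-value label: an optional background, an icon (optionally given a
-// circle mask), a vertical text sizer holding the name/value row and the progress bar, and
-// an optional action icon. Every created element is registered in the children map at the end.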
-var Build = function (scene, config) {
- // Add elements
- var background = GetValue(config, 'background', undefined);
- var icon = GetValue(config, 'icon', undefined);
- var iconMask = GetValue(config, 'iconMask', undefined);
- var nameText = GetValue(config, 'nameText', undefined);
- var valueText = GetValue(config, 'valueText', undefined);
- var bar = GetValue(config, 'bar', undefined);
- var action = GetValue(config, 'action', undefined);
- var actionMask = GetValue(config, 'actionMask', undefined);
-
- if (IsPlainObject(bar)) {
- bar = new LineProgressCanvas(scene, bar);
- scene.add.existing(bar);
- // Move bar game object below nameText and valueText
- if (nameText) {
- scene.children.moveBelow(bar, nameText);
- }
- if (valueText) {
- scene.children.moveBelow(bar, valueText);
- }
- }
-
- var hasTextSizer = nameText || valueText || bar;
-
- if (background) {
- this.addBackground(background);
- }
-
- if (icon) {
- var padding = undefined;
- if (this.orientation === 0) {
- if (hasTextSizer || action) {
- padding = {
- right: GetValue(config, 'space.icon', 0),
- top: GetValue(config, 'space.iconTop', 0),
- bottom: GetValue(config, 'space.iconBottom', 0),
- };
- }
- } else {
- if (hasTextSizer || action) {
- padding = {
- bottom: GetValue(config, 'space.icon', 0),
- left: GetValue(config, 'space.iconLeft', 0),
- right: GetValue(config, 'space.iconRight', 0),
- };
- }
- }
-
- this.add(
- icon,
- { proportion: 0, padding: padding, }
- );
-
- if (iconMask) {
- iconMask = AddChildMask.call(this, icon, icon, 1); // Circle mask
- }
- }
-
- if (hasTextSizer) {
- var textSizer = new Sizer(scene, {
- orientation: 1,
- })
-
- var nameValueSizer;
- if (nameText || valueText) {
- nameValueSizer = new Sizer(scene, {
- orientation: 0,
- })
-
- if (nameText) {
- // A space character to reserve text height
- if (nameText.text === '') {
- nameText.setText(' ');
- }
- nameText.setOrigin(0, nameText.originY);
- var padding = {
- left: GetValue(config, 'space.name', 0),
- }
- nameValueSizer.add(
- nameText,
- { padding: padding }
- );
- }
-
- if (valueText) {
- // A space character to reserve text height
- if (valueText.text === '') {
- valueText.setText(' ');
- }
- valueText.setOrigin(1, valueText.originY);
-
- nameValueSizer.addSpace();
-
- var padding = {
- right: GetValue(config, 'space.value', 0),
- }
- nameValueSizer.add(
- valueText,
- { padding: padding }
- );
-
- this.setValueTextFormatCallback(
- GetValue(config, 'valueTextFormatCallback', DefaultValueTextFormatCallback),
- GetValue(config, 'valueTextFormatCallbackScope', undefined)
- );
- }
-
- textSizer.add(
- nameValueSizer,
- { expand: true, }
- )
- }
-
- if (bar) {
- var padding = {
- top: (nameValueSizer) ? GetValue(config, 'space.bar', 0) : 0,
- bottom: GetValue(config, 'space.barBottom', 0),
- left: GetValue(config, 'space.barLeft', 0),
- right: GetValue(config, 'space.barRight', 0),
- };
- textSizer.add(
- bar,
- { expand: true, padding: padding }
- );
- }
-
- var padding = undefined;
- if (action) {
- padding = {
- right: GetValue(config, 'space.text', 0)
- };
- }
- var textAlign = GetValue(config, 'align.text', 'bottom');
- this.add(
- textSizer,
- { proportion: 1, align: textAlign, padding: padding }
- );
- }
-
- if (action) {
- var padding;
- if (this.orientation === 0) {
- padding = {
- top: GetValue(config, 'space.actionTop', 0),
- bottom: GetValue(config, 'space.actionBottom', 0),
- };
- } else {
- padding = {
- left: GetValue(config, 'space.actionLeft', 0),
- right: GetValue(config, 'space.actionRight', 0),
- };
- }
-
- this.add(
- action,
- { proportion: 0, padding: padding, }
- );
-
- if (actionMask) {
- actionMask = AddChildMask.call(this, action, action, 1); // Circle mask
- }
- }
-
- this.addChildrenMap('background', background);
- this.addChildrenMap('icon', icon);
- this.addChildrenMap('iconMask', iconMask);
- this.addChildrenMap('name', nameText);
- this.addChildrenMap('value', valueText);
- this.addChildrenMap('bar', bar);
- this.addChildrenMap('action', action);
- this.addChildrenMap('actionMask', actionMask);
-}
-
-var DefaultValueTextFormatCallback = function (value, min, max) {
- return value.toString();
-}
-
-export default Build;
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetExpandedChildHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetExpandedChildHeight.js
deleted file mode 100644
index 96667ba0f493ec613cb2e7210c4c0adad07e3d32..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetExpandedChildHeight.js
+++ /dev/null
@@ -1,16 +0,0 @@
-var GetExpandedChildHeight = function (child, parentHeight) {
- if (parentHeight === undefined) {
- parentHeight = this.height;
- }
-
- var childHeight;
- var childConfig = child.rexSizer;
- if (childConfig.expandHeight) {
- var innerHeight = parentHeight - this.space.top - this.space.bottom;
- var padding = childConfig.padding;
- childHeight = innerHeight - padding.top - padding.bottom;
- }
- return childHeight;
-}
-
-export default GetExpandedChildHeight;
\ No newline at end of file
diff --git a/spaces/AkiKagura/Marco-Generation/app.py b/spaces/AkiKagura/Marco-Generation/app.py
deleted file mode 100644
index 09f4cc99c06f745e4ca6979e62d3a4ebf611a510..0000000000000000000000000000000000000000
--- a/spaces/AkiKagura/Marco-Generation/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import gradio as gr
-import torch
-# from torch import autocast  # only for GPU
-
-from PIL import Image
-
-import os
-MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD')
-
-from diffusers import StableDiffusionPipeline
-#from diffusers import StableDiffusionImg2ImgPipeline
-
-def empty_checker(images, **kwargs):
-    return images, False
-
-print("start generating")
-
-YOUR_TOKEN=MY_SECRET_TOKEN
-
-device="cpu"
-
-pipe = StableDiffusionPipeline.from_pretrained("AkiKagura/mkgen-diffusion", use_auth_token=YOUR_TOKEN)
-pipe.safety_checker = empty_checker
-pipe.to(device)
-
-gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[1], height="auto")
-
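-# infer runs the Stable Diffusion pipeline on the CPU with the given prompt, guidance scale,
-# step count, seed, and output size, and returns the generated PIL images for the gallery.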
-def infer(prompt, guide, steps, seed, img_w, img_h):
- generator = torch.Generator('cpu').manual_seed(seed)
- #image = pipe(prompt, init_image=init_image)["sample"][0]
- images_list = pipe([prompt] * 1, guidance_scale=guide, num_inference_steps=steps, width=img_w, height=img_h) #TODO
- images = []
- for i, image in enumerate(images_list["images"]):
- images.append(image)
-
- return images
-
-print("okay")
-
-title="Marco Generation"
-description="Use 'mkmk woman' to get Marco pics. Warning: Slow process... about 10 min inference time."
-
-gr.Interface(fn=infer, inputs=["text",
- gr.Slider(2, 15, value = 7, label = 'Guidence Scale'),
- gr.Slider(10, 50, value = 25, step = 1, label = 'Number of Iterations'),
- gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True),
- gr.Slider(label='Width', minimum = 512, maximum = 768, step = 256, value = 512),
- gr.Slider(label='Height', minimum = 512, maximum = 768, step = 256, value = 512)], outputs=gallery,title=title,description=description).queue(max_size=100).launch(enable_queue=True)
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/download_video.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/download_video.py
deleted file mode 100644
index 05ccce79e03f8507ec6d40a29cae4789051e0a22..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/download_video.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import os
-import random
-import shutil
-from concurrent.futures import ThreadPoolExecutor
-from google.colab import files
-
-basepath = os.getcwd()
-uploaded = files.upload()  # upload the file
-for filename in uploaded.keys():
- assert (filename.endswith(".txt")), "speaker-videolink info could only be .txt file!"
- shutil.move(os.path.join(basepath, filename), os.path.join("./speaker_links.txt"))
-
-
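-# Each line of speaker_links.txt is expected to look like "speaker|video_link"; a random
-# numeric suffix is appended to the speaker name so repeated speakers get distinct file names.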
-def generate_infos():
- infos = []
- with open("./speaker_links.txt", 'r', encoding='utf-8') as f:
- lines = f.readlines()
- for line in lines:
- line = line.replace("\n", "").replace(" ", "")
- if line == "":
- continue
- speaker, link = line.split("|")
- filename = speaker + "_" + str(random.randint(0, 1000000))
- infos.append({"link": link, "filename": filename})
- return infos
-
-
-def download_video(info):
- link = info["link"]
- filename = info["filename"]
- os.system(f"youtube-dl -f 0 {link} -o ./video_data/{filename}.mp4")
-
-
-if __name__ == "__main__":
- infos = generate_infos()
- with ThreadPoolExecutor(max_workers=os.cpu_count()) as executor:
- executor.map(download_video, infos)
diff --git a/spaces/Alican/pixera/test.py b/spaces/Alican/pixera/test.py
deleted file mode 100644
index 9407cb628aea2e67d7e3894a5cee535e8f17790d..0000000000000000000000000000000000000000
--- a/spaces/Alican/pixera/test.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import os
-from options.test_options import TestOptions
-from data import create_dataset
-from models import create_model
-from util.visualizer import save_images
-from util import html
-
-try:
- import wandb
-except ImportError:
-    print('Warning: wandb package cannot be found. The option "--use_wandb" will result in an error.')
-
-
-if __name__ == '__main__':
- opt = TestOptions().parse() # get test options
- # hard-code some parameters for test
- opt.num_threads = 0 # test code only supports num_threads = 0
- opt.batch_size = 1 # test code only supports batch_size = 1
- opt.serial_batches = True # disable data shuffling; comment this line if results on randomly chosen images are needed.
- opt.no_flip = True # no flip; comment this line if results on flipped images are needed.
- opt.display_id = -1 # no visdom display; the test code saves the results to a HTML file.
- dataset = create_dataset(opt) # create a dataset given opt.dataset_mode and other options
- model = create_model(opt) # create a model given opt.model and other options
- model.setup(opt) # regular setup: load and print networks; create schedulers
-
- # initialize logger
- if opt.use_wandb:
- wandb_run = wandb.init(project=opt.wandb_project_name, name=opt.name, config=opt) if not wandb.run else wandb.run
- wandb_run._label(repo='CycleGAN-and-pix2pix')
-
- # create a website
- web_dir = os.path.join(opt.results_dir, opt.name, '{}_{}'.format(opt.phase, opt.epoch)) # define the website directory
- if opt.load_iter > 0: # load_iter is 0 by default
- web_dir = '{:s}_iter{:d}'.format(web_dir, opt.load_iter)
- print('creating web directory', web_dir)
- webpage = html.HTML(web_dir, 'Experiment = %s, Phase = %s, Epoch = %s' % (opt.name, opt.phase, opt.epoch))
- # test with eval mode. This only affects layers like batchnorm and dropout.
- # For [pix2pix]: we use batchnorm and dropout in the original pix2pix. You can experiment it with and without eval() mode.
- # For [CycleGAN]: It should not affect CycleGAN as CycleGAN uses instancenorm without dropout.
- if opt.eval:
- model.eval()
- for i, data in enumerate(dataset):
- if i >= opt.num_test: # only apply our model to opt.num_test images.
- break
- model.set_input(data) # unpack data from data loader
- model.test() # run inference
- visuals = model.get_current_visuals() # get image results
- img_path = model.get_image_paths() # get image paths
- if i % 5 == 0: # save images to an HTML file
- print('processing (%04d)-th image... %s' % (i, img_path))
- save_images(webpage, visuals, img_path, aspect_ratio=opt.aspect_ratio, width=opt.display_winsize, use_wandb=opt.use_wandb)
- webpage.save() # save the HTML
diff --git a/spaces/Amrrs/DragGan-Inversion/gradio_utils/__init__.py b/spaces/Amrrs/DragGan-Inversion/gradio_utils/__init__.py
deleted file mode 100644
index 6a54920c53b4373690fd0ca59ee59159d33d1f92..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/gradio_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from .utils import (ImageMask, draw_mask_on_image, draw_points_on_image,
- get_latest_points_pair, get_valid_mask,
- on_change_single_global_state)
-
-__all__ = [
- 'draw_mask_on_image', 'draw_points_on_image',
- 'on_change_single_global_state', 'get_latest_points_pair',
- 'get_valid_mask', 'ImageMask'
-]
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
deleted file mode 100644
index ddfcc0db7a4162bbfa443fdd3a4693cd9031e4fe..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
+++ /dev/null
@@ -1,799 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import warnings
-from typing import Any, Callable, Dict, List, Optional, Union
-
-import PIL
-import torch
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection
-
-from diffusers.utils.import_utils import is_accelerate_available
-
-from ...image_processor import VaeImageProcessor
-from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...models.embeddings import get_timestep_embedding
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import is_accelerate_version, logging, randn_tensor, replace_example_docstring
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import requests
- >>> import torch
- >>> from PIL import Image
- >>> from io import BytesIO
-
- >>> from diffusers import StableUnCLIPImg2ImgPipeline
-
- >>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
- ... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16
- ... ) # TODO update model path
- >>> pipe = pipe.to("cuda")
-
- >>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
-
- >>> response = requests.get(url)
- >>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
- >>> init_image = init_image.resize((768, 512))
-
- >>> prompt = "A fantasy landscape, trending on artstation"
-
- >>> images = pipe(prompt, init_image).images
- >>> images[0].save("fantasy_landscape.png")
- ```
-"""
-
-
-class StableUnCLIPImg2ImgPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
- """
- Pipeline for text-guided image-to-image generation using stable unCLIP.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- feature_extractor ([`CLIPImageProcessor`]):
- Feature extractor for image pre-processing before being encoded.
- image_encoder ([`CLIPVisionModelWithProjection`]):
- CLIP vision model for encoding images.
- image_normalizer ([`StableUnCLIPImageNormalizer`]):
- Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image
- embeddings after the noise has been applied.
- image_noising_scheduler ([`KarrasDiffusionSchedulers`]):
- Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined
- by the `noise_level`.
- tokenizer (`~transformers.CLIPTokenizer`):
-            A [`~transformers.CLIPTokenizer`].
- text_encoder ([`~transformers.CLIPTextModel`]):
- Frozen [`~transformers.CLIPTextModel`] text-encoder.
- unet ([`UNet2DConditionModel`]):
- A [`UNet2DConditionModel`] to denoise the encoded image latents.
- scheduler ([`KarrasDiffusionSchedulers`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- """
-
- _exclude_from_cpu_offload = ["image_normalizer"]
-
- # image encoding components
- feature_extractor: CLIPImageProcessor
- image_encoder: CLIPVisionModelWithProjection
-
- # image noising components
- image_normalizer: StableUnCLIPImageNormalizer
- image_noising_scheduler: KarrasDiffusionSchedulers
-
- # regular denoising components
- tokenizer: CLIPTokenizer
- text_encoder: CLIPTextModel
- unet: UNet2DConditionModel
- scheduler: KarrasDiffusionSchedulers
-
- vae: AutoencoderKL
-
- def __init__(
- self,
- # image encoding components
- feature_extractor: CLIPImageProcessor,
- image_encoder: CLIPVisionModelWithProjection,
- # image noising components
- image_normalizer: StableUnCLIPImageNormalizer,
- image_noising_scheduler: KarrasDiffusionSchedulers,
- # regular denoising components
- tokenizer: CLIPTokenizer,
- text_encoder: CLIPTextModel,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- # vae
- vae: AutoencoderKL,
- ):
- super().__init__()
-
- self.register_modules(
- feature_extractor=feature_extractor,
- image_encoder=image_encoder,
- image_normalizer=image_normalizer,
- image_noising_scheduler=image_noising_scheduler,
- tokenizer=tokenizer,
- text_encoder=text_encoder,
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- )
-
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
- time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
- Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
- iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.image_encoder, self.unet, self.vae]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- def _encode_image(
- self,
- image,
- device,
- batch_size,
- num_images_per_prompt,
- do_classifier_free_guidance,
- noise_level,
- generator,
- image_embeds,
- ):
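-        # Encode the conditioning image with the CLIP image encoder (unless precomputed
-        # `image_embeds` are supplied), add noise to the embeddings according to `noise_level`,
-        # repeat them to match the effective batch size and, under classifier-free guidance,
-        # prepend an all-zero embedding as the unconditional branch.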
- dtype = next(self.image_encoder.parameters()).dtype
-
- if isinstance(image, PIL.Image.Image):
-            # the image embedding should be repeated so it matches the total batch size of the prompt
- repeat_by = batch_size
- else:
- # assume the image input is already properly batched and just needs to be repeated so
- # it matches the num_images_per_prompt.
- #
-            # NOTE(will) this is probably missing a few side cases, e.g. batched/non-batched
- # `image_embeds`. If those happen to be common use cases, let's think harder about
- # what the expected dimensions of inputs should be and how we handle the encoding.
- repeat_by = num_images_per_prompt
-
- if image_embeds is None:
- if not isinstance(image, torch.Tensor):
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
-
- image = image.to(device=device, dtype=dtype)
- image_embeds = self.image_encoder(image).image_embeds
-
- image_embeds = self.noise_image_embeddings(
- image_embeds=image_embeds,
- noise_level=noise_level,
- generator=generator,
- )
-
- # duplicate image embeddings for each generation per prompt, using mps friendly method
- image_embeds = image_embeds.unsqueeze(1)
- bs_embed, seq_len, _ = image_embeds.shape
- image_embeds = image_embeds.repeat(1, repeat_by, 1)
- image_embeds = image_embeds.view(bs_embed * repeat_by, seq_len, -1)
- image_embeds = image_embeds.squeeze(1)
-
- if do_classifier_free_guidance:
- negative_prompt_embeds = torch.zeros_like(image_embeds)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- image_embeds = torch.cat([negative_prompt_embeds, image_embeds])
-
- return image_embeds
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_inputs(
- self,
- prompt,
- image,
- height,
- width,
- callback_steps,
- noise_level,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- image_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Please make sure to define only one of the two."
- )
-
- if prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
-
- if prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- "Provide either `negative_prompt` or `negative_prompt_embeds`. Cannot leave both `negative_prompt` and `negative_prompt_embeds` undefined."
- )
-
- if prompt is not None and negative_prompt is not None:
- if type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- if noise_level < 0 or noise_level >= self.image_noising_scheduler.config.num_train_timesteps:
- raise ValueError(
- f"`noise_level` must be between 0 and {self.image_noising_scheduler.config.num_train_timesteps - 1}, inclusive."
- )
-
- if image is not None and image_embeds is not None:
- raise ValueError(
- "Provide either `image` or `image_embeds`. Please make sure to define only one of the two."
- )
-
- if image is None and image_embeds is None:
- raise ValueError(
- "Provide either `image` or `image_embeds`. Cannot leave both `image` and `image_embeds` undefined."
- )
-
- if image is not None:
- if (
- not isinstance(image, torch.Tensor)
- and not isinstance(image, PIL.Image.Image)
- and not isinstance(image, list)
- ):
- raise ValueError(
- "`image` has to be of type `torch.FloatTensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is"
- f" {type(image)}"
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_unclip.StableUnCLIPPipeline.noise_image_embeddings
- def noise_image_embeddings(
- self,
- image_embeds: torch.Tensor,
- noise_level: int,
- noise: Optional[torch.FloatTensor] = None,
- generator: Optional[torch.Generator] = None,
- ):
- """
- Add noise to the image embeddings. The amount of noise is controlled by a `noise_level` input. A higher
- `noise_level` increases the variance in the final un-noised images.
-
- The noise is applied in two ways:
- 1. A noise schedule is applied directly to the embeddings.
- 2. A vector of sinusoidal time embeddings are appended to the output.
-
- In both cases, the amount of noise is controlled by the same `noise_level`.
-
- The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.
- """
- if noise is None:
- noise = randn_tensor(
- image_embeds.shape, generator=generator, device=image_embeds.device, dtype=image_embeds.dtype
- )
-
- noise_level = torch.tensor([noise_level] * image_embeds.shape[0], device=image_embeds.device)
-
- self.image_normalizer.to(image_embeds.device)
- image_embeds = self.image_normalizer.scale(image_embeds)
-
- image_embeds = self.image_noising_scheduler.add_noise(image_embeds, timesteps=noise_level, noise=noise)
-
- image_embeds = self.image_normalizer.unscale(image_embeds)
-
- noise_level = get_timestep_embedding(
- timesteps=noise_level, embedding_dim=image_embeds.shape[-1], flip_sin_to_cos=True, downscale_freq_shift=0
- )
-
-        # `get_timestep_embedding` does not contain any weights and will always return f32 tensors,
-        # but we might actually be running in fp16, so we need to cast here.
- # there might be better ways to encapsulate this.
- noise_level = noise_level.to(image_embeds.dtype)
-
- image_embeds = torch.cat((image_embeds, noise_level), 1)
-
- return image_embeds
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- image: Union[torch.FloatTensor, PIL.Image.Image] = None,
- prompt: Union[str, List[str]] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 20,
- guidance_scale: float = 10,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- noise_level: int = 0,
- image_embeds: Optional[torch.FloatTensor] = None,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, either `prompt_embeds` will be
- used or prompt is initialized to `""`.
- image (`torch.FloatTensor` or `PIL.Image.Image`):
- `Image` or tensor representing an image batch. The image is encoded to its CLIP embedding which the
- `unet` is conditioned on. The image is _not_ encoded by the `vae` and then used as the latents in the
- denoising process like it is in the standard Stable Diffusion text-guided image variation process.
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 20):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 10.0):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
- provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
- callback (`Callable`, *optional*):
-                A function called every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- noise_level (`int`, *optional*, defaults to `0`):
- The amount of noise to add to the image embeddings. A higher `noise_level` increases the variance in
- the final un-noised images. See [`StableUnCLIPPipeline.noise_image_embeddings`] for more details.
- image_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated CLIP embeddings to condition the `unet` on. These latents are not used in the denoising
- process. If you want to provide pre-generated latents, pass them to `__call__` as `latents`.
-
- Examples:
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`:
-                [`~pipelines.ImagePipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning
- a tuple, the first element is a list with the generated images.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- if prompt is None and prompt_embeds is None:
- prompt = len(image) * [""] if isinstance(image, list) else ""
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt=prompt,
- image=image,
- height=height,
- width=width,
- callback_steps=callback_steps,
- noise_level=noise_level,
- negative_prompt=negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- image_embeds=image_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- batch_size = batch_size * num_images_per_prompt
-
- device = self._execution_device
-
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- prompt_embeds = self._encode_prompt(
- prompt=prompt,
- device=device,
- num_images_per_prompt=num_images_per_prompt,
- do_classifier_free_guidance=do_classifier_free_guidance,
- negative_prompt=negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
-        # 4. Encode the input image
- noise_level = torch.tensor([noise_level], device=device)
- image_embeds = self._encode_image(
- image=image,
- device=device,
- batch_size=batch_size,
- num_images_per_prompt=num_images_per_prompt,
- do_classifier_free_guidance=do_classifier_free_guidance,
- noise_level=noise_level,
- generator=generator,
- image_embeds=image_embeds,
- )
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 6. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size=batch_size,
- num_channels_latents=num_channels_latents,
- height=height,
- width=width,
- dtype=prompt_embeds.dtype,
- device=device,
- generator=generator,
- latents=latents,
- )
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 8. Denoising loop
- for i, t in enumerate(self.progress_bar(timesteps)):
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- class_labels=image_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 9. Post-processing
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- else:
- image = latents
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/controlnet/test_controlnet_inpaint.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/controlnet/test_controlnet_inpaint.py
deleted file mode 100644
index cb9b53e612e95dc03be7034075334935c7097976..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/controlnet/test_controlnet_inpaint.py
+++ /dev/null
@@ -1,596 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This model implementation is heavily based on:
-
-import gc
-import random
-import tempfile
-import unittest
-
-import numpy as np
-import torch
-from PIL import Image
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- ControlNetModel,
- DDIMScheduler,
- StableDiffusionControlNetInpaintPipeline,
- UNet2DConditionModel,
-)
-from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, randn_tensor, slow, torch_device
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
-from ..pipeline_params import (
- TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
- TEXT_GUIDED_IMAGE_INPAINTING_PARAMS,
- TEXT_TO_IMAGE_IMAGE_PARAMS,
-)
-from ..test_pipelines_common import (
- PipelineKarrasSchedulerTesterMixin,
- PipelineLatentTesterMixin,
- PipelineTesterMixin,
-)
-
-
-enable_full_determinism()
-
-
-class ControlNetInpaintPipelineFastTests(
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
-):
- pipeline_class = StableDiffusionControlNetInpaintPipeline
- params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
- image_params = frozenset({"control_image"}) # skip `image` and `mask` for now, only test for control_image
- image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=9,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- torch.manual_seed(0)
- controlnet = ControlNetModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- in_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- cross_attention_dim=32,
- conditioning_embedding_out_channels=(16, 32),
- )
- torch.manual_seed(0)
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "controlnet": controlnet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- controlnet_embedder_scale_factor = 2
- control_image = randn_tensor(
- (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor),
- generator=generator,
- device=torch.device(device),
- )
- init_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- init_image = init_image.cpu().permute(0, 2, 3, 1)[0]
-
- image = Image.fromarray(np.uint8(init_image)).convert("RGB").resize((64, 64))
- mask_image = Image.fromarray(np.uint8(init_image + 4)).convert("RGB").resize((64, 64))
-
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "generator": generator,
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- "image": image,
- "mask_image": mask_image,
- "control_image": control_image,
- }
-
- return inputs
-
- def test_attention_slicing_forward_pass(self):
- return self._test_attention_slicing_forward_pass(expected_max_diff=2e-3)
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3)
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(expected_max_diff=2e-3)
-
-
-class ControlNetSimpleInpaintPipelineFastTests(ControlNetInpaintPipelineFastTests):
- pipeline_class = StableDiffusionControlNetInpaintPipeline
- params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
- image_params = frozenset([])
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- torch.manual_seed(0)
- controlnet = ControlNetModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- in_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- cross_attention_dim=32,
- conditioning_embedding_out_channels=(16, 32),
- )
- torch.manual_seed(0)
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "controlnet": controlnet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
-
-class MultiControlNetInpaintPipelineFastTests(
- PipelineTesterMixin, PipelineKarrasSchedulerTesterMixin, unittest.TestCase
-):
- pipeline_class = StableDiffusionControlNetInpaintPipeline
- params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=9,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- torch.manual_seed(0)
-
- def init_weights(m):
- if isinstance(m, torch.nn.Conv2d):
- torch.nn.init.normal_(m.weight)
- m.bias.data.fill_(1.0)
-
- controlnet1 = ControlNetModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- in_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- cross_attention_dim=32,
- conditioning_embedding_out_channels=(16, 32),
- )
- controlnet1.controlnet_down_blocks.apply(init_weights)
-
- torch.manual_seed(0)
- controlnet2 = ControlNetModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- in_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- cross_attention_dim=32,
- conditioning_embedding_out_channels=(16, 32),
- )
- controlnet2.controlnet_down_blocks.apply(init_weights)
-
- torch.manual_seed(0)
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- controlnet = MultiControlNetModel([controlnet1, controlnet2])
-
- components = {
- "unet": unet,
- "controlnet": controlnet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- controlnet_embedder_scale_factor = 2
-
- control_image = [
- randn_tensor(
- (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor),
- generator=generator,
- device=torch.device(device),
- ),
- randn_tensor(
- (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor),
- generator=generator,
- device=torch.device(device),
- ),
- ]
- init_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- init_image = init_image.cpu().permute(0, 2, 3, 1)[0]
-
- image = Image.fromarray(np.uint8(init_image)).convert("RGB").resize((64, 64))
- mask_image = Image.fromarray(np.uint8(init_image + 4)).convert("RGB").resize((64, 64))
-
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "generator": generator,
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- "image": image,
- "mask_image": mask_image,
- "control_image": control_image,
- }
-
- return inputs
-
- def test_control_guidance_switch(self):
- components = self.get_dummy_components()
- pipe = self.pipeline_class(**components)
- pipe.to(torch_device)
-
- scale = 10.0
- steps = 4
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["num_inference_steps"] = steps
- inputs["controlnet_conditioning_scale"] = scale
- output_1 = pipe(**inputs)[0]
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["num_inference_steps"] = steps
- inputs["controlnet_conditioning_scale"] = scale
- output_2 = pipe(**inputs, control_guidance_start=0.1, control_guidance_end=0.2)[0]
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["num_inference_steps"] = steps
- inputs["controlnet_conditioning_scale"] = scale
- output_3 = pipe(**inputs, control_guidance_start=[0.1, 0.3], control_guidance_end=[0.2, 0.7])[0]
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["num_inference_steps"] = steps
- inputs["controlnet_conditioning_scale"] = scale
- output_4 = pipe(**inputs, control_guidance_start=0.4, control_guidance_end=[0.5, 0.8])[0]
-
- # make sure that all outputs are different
- assert np.sum(np.abs(output_1 - output_2)) > 1e-3
- assert np.sum(np.abs(output_1 - output_3)) > 1e-3
- assert np.sum(np.abs(output_1 - output_4)) > 1e-3
-
- def test_attention_slicing_forward_pass(self):
- return self._test_attention_slicing_forward_pass(expected_max_diff=2e-3)
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3)
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(expected_max_diff=2e-3)
-
- def test_save_pretrained_raise_not_implemented_exception(self):
- components = self.get_dummy_components()
- pipe = self.pipeline_class(**components)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- with tempfile.TemporaryDirectory() as tmpdir:
- try:
- # save_pretrained is not implemented for Multi-ControlNet
- pipe.save_pretrained(tmpdir)
- except NotImplementedError:
- pass
-
-
-@slow
-@require_torch_gpu
-class ControlNetInpaintPipelineSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_canny(self):
- controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
-
- pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting", safety_checker=None, controlnet=controlnet
- )
- pipe.enable_model_cpu_offload()
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image = load_image(
- "https://huggingface.co/lllyasviel/sd-controlnet-canny/resolve/main/images/bird.png"
- ).resize((512, 512))
-
- mask_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_mask.png"
- ).resize((512, 512))
-
- prompt = "pitch black hole"
-
- control_image = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
- ).resize((512, 512))
-
- output = pipe(
- prompt,
- image=image,
- mask_image=mask_image,
- control_image=control_image,
- generator=generator,
- output_type="np",
- num_inference_steps=3,
- )
-
- image = output.images[0]
-
- assert image.shape == (512, 512, 3)
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/inpaint.npy"
- )
-
- assert np.abs(expected_image - image).max() < 9e-2
-
- def test_inpaint(self):
- controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint")
-
- pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", safety_checker=None, controlnet=controlnet
- )
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.enable_model_cpu_offload()
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.Generator(device="cpu").manual_seed(33)
-
- init_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
- )
- init_image = init_image.resize((512, 512))
-
- mask_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
- )
- mask_image = mask_image.resize((512, 512))
-
- prompt = "a handsome man with ray-ban sunglasses"
-
- def make_inpaint_condition(image, image_mask):
- image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
- image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
-
- assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size"
- image[image_mask > 0.5] = -1.0 # set as masked pixel
- image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return image
-
- control_image = make_inpaint_condition(init_image, mask_image)
-
- output = pipe(
- prompt,
- image=init_image,
- mask_image=mask_image,
- control_image=control_image,
- guidance_scale=9.0,
- eta=1.0,
- generator=generator,
- num_inference_steps=20,
- output_type="np",
- )
- image = output.images[0]
-
- assert image.shape == (512, 512, 3)
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/boy_ray_ban.npy"
- )
-
- assert np.abs(expected_image - image).max() < 9e-2
-
- def test_load_local(self):
- controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
- pipe_1 = StableDiffusionControlNetInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5", safety_checker=None, controlnet=controlnet
- )
-
- controlnet = ControlNetModel.from_single_file(
- "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"
- )
- pipe_2 = StableDiffusionControlNetInpaintPipeline.from_single_file(
- "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors",
- safety_checker=None,
- controlnet=controlnet,
- )
- control_image = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
- ).resize((512, 512))
- image = load_image(
- "https://huggingface.co/lllyasviel/sd-controlnet-canny/resolve/main/images/bird.png"
- ).resize((512, 512))
- mask_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_mask.png"
- ).resize((512, 512))
-
- pipes = [pipe_1, pipe_2]
- images = []
- for pipe in pipes:
- pipe.enable_model_cpu_offload()
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- prompt = "bird"
- output = pipe(
- prompt,
- image=image,
- control_image=control_image,
- mask_image=mask_image,
- strength=0.9,
- generator=generator,
- output_type="np",
- num_inference_steps=3,
- )
- images.append(output.images[0])
-
- del pipe
- gc.collect()
- torch.cuda.empty_cache()
-
- assert np.abs(images[0] - images[1]).sum() < 1e-3
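The multi-ControlNet tests above exercise `control_guidance_start`/`control_guidance_end`, which restrict each ControlNet to a window of the denoising schedule. As a rough sketch (my reading of the gating rule, not code taken from the test suite), this is how the fractional windows `[0.1, 0.3]` / `[0.2, 0.7]` used in `test_control_guidance_switch` map onto concrete steps:

```python
# Hedged sketch: approximate the per-step gating of each ControlNet.
num_inference_steps = 20
control_guidance_start = [0.1, 0.3]
control_guidance_end = [0.2, 0.7]

for i in range(num_inference_steps):
    # A ControlNet contributes only while the current step fraction lies
    # inside its [start, end] window; outside it, its residuals are dropped.
    keeps = [
        0.0 if (i / num_inference_steps < s or (i + 1) / num_inference_steps > e) else 1.0
        for s, e in zip(control_guidance_start, control_guidance_end)
    ]
    print(f"step {i:2d}: controlnet residuals kept -> {keeps}")
```

With these windows the first ControlNet only influences steps 2-3 and the second steps 6-13, which is why the four outputs compared in the test are expected to differ.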
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/shap_e/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/shap_e/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py
deleted file mode 100644
index edaffaf1fa252857e1a660ea14a613e2466fb52c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import mmcv
-import torch
-
-from ..builder import BBOX_CODERS
-from .base_bbox_coder import BaseBBoxCoder
-
-
-@BBOX_CODERS.register_module()
-class TBLRBBoxCoder(BaseBBoxCoder):
- """TBLR BBox coder.
-
- Following the practice in `FSAF <https://arxiv.org/abs/1903.00621>`_,
- this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left,
- right) and decodes them back to the original format.
-
- Args:
- normalizer (list | float): Normalization factor to be
- divided with when coding the coordinates. If it is a list, it should
- have length of 4 indicating normalization factor in tblr dims.
- Otherwise it is a unified float factor for all dims. Default: 4.0
- clip_border (bool, optional): Whether clip the objects outside the
- border of the image. Defaults to True.
- """
-
- def __init__(self, normalizer=4.0, clip_border=True):
- super(BaseBBoxCoder, self).__init__()
- self.normalizer = normalizer
- self.clip_border = clip_border
-
- def encode(self, bboxes, gt_bboxes):
- """Get box regression transformation deltas that can be used to
- transform the ``bboxes`` into the ``gt_bboxes`` in the (top, left,
- bottom, right) order.
-
- Args:
- bboxes (torch.Tensor): source boxes, e.g., object proposals.
- gt_bboxes (torch.Tensor): target of the transformation, e.g.,
- ground truth boxes.
-
- Returns:
- torch.Tensor: Box transformation deltas
- """
- assert bboxes.size(0) == gt_bboxes.size(0)
- assert bboxes.size(-1) == gt_bboxes.size(-1) == 4
- encoded_bboxes = bboxes2tblr(
- bboxes, gt_bboxes, normalizer=self.normalizer)
- return encoded_bboxes
-
- def decode(self, bboxes, pred_bboxes, max_shape=None):
- """Apply transformation `pred_bboxes` to `boxes`.
-
- Args:
- bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4)
- pred_bboxes (torch.Tensor): Encoded boxes with shape
- (B, N, 4) or (N, 4)
- max_shape (Sequence[int] or torch.Tensor or Sequence[
- Sequence[int]],optional): Maximum bounds for boxes, specifies
- (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then
- the max_shape should be a Sequence[Sequence[int]]
- and the length of max_shape should also be B.
-
- Returns:
- torch.Tensor: Decoded boxes.
- """
- decoded_bboxes = tblr2bboxes(
- bboxes,
- pred_bboxes,
- normalizer=self.normalizer,
- max_shape=max_shape,
- clip_border=self.clip_border)
-
- return decoded_bboxes
-
-
-@mmcv.jit(coderize=True)
-def bboxes2tblr(priors, gts, normalizer=4.0, normalize_by_wh=True):
- """Encode ground truth boxes to tblr coordinate.
-
- It first convert the gt coordinate to tblr format,
- (top, bottom, left, right), relative to prior box centers.
- The tblr coordinate may be normalized by the side length of prior bboxes
- if `normalize_by_wh` is specified as True, and it is then normalized by
- the `normalizer` factor.
-
- Args:
- priors (Tensor): Prior boxes in point form
- Shape: (num_proposals,4).
- gts (Tensor): Coords of ground truth for each prior in point-form
- Shape: (num_proposals, 4).
- normalizer (Sequence[float] | float): normalization parameter of
- encoded boxes. If it is a list, it has to have length = 4.
- Default: 4.0
- normalize_by_wh (bool): Whether to normalize tblr coordinate by the
- side length (wh) of prior bboxes.
-
- Return:
- encoded boxes (Tensor), Shape: (num_proposals, 4)
- """
-
- # dist b/t match center and prior's center
- if not isinstance(normalizer, float):
- normalizer = torch.tensor(normalizer, device=priors.device)
- assert len(normalizer) == 4, 'Normalizer must have length = 4'
- assert priors.size(0) == gts.size(0)
- prior_centers = (priors[:, 0:2] + priors[:, 2:4]) / 2
- xmin, ymin, xmax, ymax = gts.split(1, dim=1)
- top = prior_centers[:, 1].unsqueeze(1) - ymin
- bottom = ymax - prior_centers[:, 1].unsqueeze(1)
- left = prior_centers[:, 0].unsqueeze(1) - xmin
- right = xmax - prior_centers[:, 0].unsqueeze(1)
- loc = torch.cat((top, bottom, left, right), dim=1)
- if normalize_by_wh:
- # Normalize tblr by anchor width and height
- wh = priors[:, 2:4] - priors[:, 0:2]
- w, h = torch.split(wh, 1, dim=1)
- loc[:, :2] /= h # tb is normalized by h
- loc[:, 2:] /= w # lr is normalized by w
- # Normalize tblr by the given normalization factor
- return loc / normalizer
-
-
-@mmcv.jit(coderize=True)
-def tblr2bboxes(priors,
- tblr,
- normalizer=4.0,
- normalize_by_wh=True,
- max_shape=None,
- clip_border=True):
- """Decode tblr outputs to prediction boxes.
-
- The process includes 3 steps: 1) De-normalize tblr coordinates by
- multiplying it with `normalizer`; 2) De-normalize tblr coordinates by the
- prior bbox width and height if `normalize_by_wh` is `True`; 3) Convert
- tblr (top, bottom, left, right) pair relative to the center of priors back
- to (xmin, ymin, xmax, ymax) coordinate.
-
- Args:
- priors (Tensor): Prior boxes in point form (x0, y0, x1, y1)
- Shape: (N,4) or (B, N, 4).
- tblr (Tensor): Coords of network output in tblr form
- Shape: (N, 4) or (B, N, 4).
- normalizer (Sequence[float] | float): Normalization parameter of
- encoded boxes. By list, it represents the normalization factors at
- tblr dims. By float, it is the unified normalization factor at all
- dims. Default: 4.0
- normalize_by_wh (bool): Whether the tblr coordinates have been
- normalized by the side length (wh) of prior bboxes.
- max_shape (Sequence[int] or torch.Tensor or Sequence[
- Sequence[int]],optional): Maximum bounds for boxes, specifies
- (H, W, C) or (H, W). If priors shape is (B, N, 4), then
- the max_shape should be a Sequence[Sequence[int]]
- and the length of max_shape should also be B.
- clip_border (bool, optional): Whether clip the objects outside the
- border of the image. Defaults to True.
-
- Return:
- Decoded boxes (Tensor): Boxes with shape (N, 4) or (B, N, 4)
- """
- if not isinstance(normalizer, float):
- normalizer = torch.tensor(normalizer, device=priors.device)
- assert len(normalizer) == 4, 'Normalizer must have length = 4'
- assert priors.size(0) == tblr.size(0)
- if priors.ndim == 3:
- assert priors.size(1) == tblr.size(1)
-
- loc_decode = tblr * normalizer
- prior_centers = (priors[..., 0:2] + priors[..., 2:4]) / 2
- if normalize_by_wh:
- wh = priors[..., 2:4] - priors[..., 0:2]
- w, h = torch.split(wh, 1, dim=-1)
- # In-place operations on slices would fail when exporting to ONNX
- th = h * loc_decode[..., :2] # tb
- tw = w * loc_decode[..., 2:] # lr
- loc_decode = torch.cat([th, tw], dim=-1)
- # `loc_decode.split(1, dim=-1)` cannot be exported to ONNX, so split with explicit sizes
- top, bottom, left, right = loc_decode.split((1, 1, 1, 1), dim=-1)
- xmin = prior_centers[..., 0].unsqueeze(-1) - left
- xmax = prior_centers[..., 0].unsqueeze(-1) + right
- ymin = prior_centers[..., 1].unsqueeze(-1) - top
- ymax = prior_centers[..., 1].unsqueeze(-1) + bottom
-
- bboxes = torch.cat((xmin, ymin, xmax, ymax), dim=-1)
-
- if clip_border and max_shape is not None:
- if not isinstance(max_shape, torch.Tensor):
- max_shape = priors.new_tensor(max_shape)
- max_shape = max_shape[..., :2].type_as(priors)
- if max_shape.ndim == 2:
- assert bboxes.ndim == 3
- assert max_shape.size(0) == bboxes.size(0)
-
- min_xy = priors.new_tensor(0)
- max_xy = torch.cat([max_shape, max_shape],
- dim=-1).flip(-1).unsqueeze(-2)
- bboxes = torch.where(bboxes < min_xy, min_xy, bboxes)
- bboxes = torch.where(bboxes > max_xy, max_xy, bboxes)
-
- return bboxes
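Since the module above only defines the coder and its two helper functions, a small round-trip check makes the TBLR convention concrete. This is a minimal sketch assuming mmdet's usual import layout for this file; the box values are arbitrary.

```python
# Minimal round-trip sketch for TBLRBBoxCoder (import path assumed).
import torch
from mmdet.core.bbox.coder import TBLRBBoxCoder

coder = TBLRBBoxCoder(normalizer=4.0)
priors = torch.tensor([[10., 10., 50., 50.],
                       [20., 20., 80., 100.]])
gts = torch.tensor([[12., 8., 48., 52.],
                    [25., 18., 78., 95.]])

deltas = coder.encode(priors, gts)                # (top, bottom, left, right), wh- and 4.0-normalized
decoded = coder.decode(priors, deltas, max_shape=(120, 120))
assert torch.allclose(decoded, gts, atol=1e-4)    # encode/decode is an exact inverse (up to clipping)
```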
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index d7eb668f39bbd22a1f42628428bc19d1645e9865..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './ccnet_r50-d8_512x512_40k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_680x680_60k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_680x680_60k_cityscapes.py
deleted file mode 100644
index 2b2f8eefb7dbecf81fcd2db54644493480825246..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/cgnet/cgnet_680x680_60k_cityscapes.py
+++ /dev/null
@@ -1,50 +0,0 @@
-_base_ = [
- '../_base_/models/cgnet.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py'
-]
-
-# optimizer
-optimizer = dict(type='Adam', lr=0.001, eps=1e-08, weight_decay=0.0005)
-optimizer_config = dict()
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
-# runtime settings
-total_iters = 60000
-checkpoint_config = dict(by_epoch=False, interval=4000)
-evaluation = dict(interval=4000, metric='mIoU')
-
-img_norm_cfg = dict(
- mean=[72.39239876, 82.90891754, 73.15835921], std=[1, 1, 1], to_rgb=True)
-crop_size = (680, 680)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 1024),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=8,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
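For context, such a config file is not run directly; it is loaded and turned into model and dataset objects by the training tools. A rough sketch of that flow, with API names assumed from the mmcv/mmsegmentation versions this config targets:

```python
# Hedged sketch: loading and consuming the config above (paths/APIs assumed).
from mmcv import Config
from mmseg.models import build_segmentor

cfg = Config.fromfile("configs/cgnet/cgnet_680x680_60k_cityscapes.py")
model = build_segmentor(cfg.model)   # `model` itself comes from the inherited _base_ cgnet file
print(cfg.total_iters, cfg.data.samples_per_gpu, cfg.optimizer.type)
```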
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py
deleted file mode 100644
index e4b623aca9ce1138baa259cbdd02920a47765f8d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,8 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)),
- decode_head=dict(dilation=6),
- auxiliary_head=dict(dilation=6))
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/networks.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/networks.py
deleted file mode 100644
index 6dd75237b493c32f5748b71e729d9645bc24895f..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/networks.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-import numpy as np
-import functools
-from . import base_function
-from .stylegan_ops import style_function
-from .transformer_ops import transformer_function
-
-
-##################################################################################
-# Networks
-##################################################################################
-def define_D(opt, img_size):
- """Create a discriminator"""
- norm_value = base_function.get_norm_layer(opt.norm)
- if 'patch' in opt.netD:
- net = NLayerDiscriminator(opt.img_nc, opt.ndf, opt.n_layers_D, norm_value, use_attn=opt.attn_D)
- elif 'style' in opt.netD:
- net = StyleDiscriminator(img_size, ndf=opt.ndf, use_attn=opt.attn_D)
- else:
- raise NotImplementedError('Discriminator model name [%s] is not recognized' % opt.netD)
-
- return base_function.init_net(net, opt.init_type, opt.init_gain, initialize_weights=('style' not in opt.netD))
-
-
-def define_G(opt):
- """Create a decoder"""
- if 'diff' in opt.netG:
- net = base_function.DiffDecoder(opt.img_nc, opt.ngf, opt.kernel_G, opt.embed_dim, opt.n_layers_G, opt.num_res_blocks,
- word_size=opt.word_size, activation=opt.activation, norm=opt.norm,
- add_noise=opt.add_noise, use_attn=opt.attn_G, use_pos=opt.use_pos_G)
- elif 'linear' in opt.netG:
- net = base_function.LinearDecoder(opt.img_nc, opt.ngf, opt.kernel_G, opt.embed_dim, opt.activation, opt.norm)
- elif 'refine' in opt.netG:
- net = RefinedGenerator(opt.img_nc, opt.ngf, opt.embed_dim, opt.down_layers, opt.mid_layers, opt.num_res_blocks,
- activation=opt.activation, norm=opt.norm)
- else:
- raise NotImplementedError('Decoder model name [%s] is not recognized' % opt.netG)
-
- return base_function.init_net(net, opt.init_type, opt.init_gain, initialize_weights=('style' not in opt.netG))
-
-
-def define_E(opt):
- """Create a encoder"""
- if 'diff' in opt.netE:
- net = base_function.DiffEncoder(opt.img_nc, opt.ngf, opt.kernel_E, opt.embed_dim, opt.n_layers_G, opt.num_res_blocks,
- activation=opt.activation, norm=opt.norm, use_attn=opt.attn_E)
- elif 'linear' in opt.netE:
- net = base_function.LinearEncoder(opt.img_nc, opt.kernel_E, opt.embed_dim)
- else:
- raise NotImplementedError('Encoder model name [%s] is not recognized' % opt.netE)
-
- return base_function.init_net(net, opt.init_type, opt.init_gain, initialize_weights=('style' not in opt.netE))
-
-
-def define_T(opt):
- """Create a transformer"""
- if "original" in opt.netT:
- e_d_f = int(opt.ngf * (2 ** opt.n_layers_G))
- net = transformer_function.Transformer(e_d_f, opt.embed_dim, e_d_f, kernel=opt.kernel_T,
- n_encoders=opt.n_encoders, n_decoders=opt.n_decoders, embed_type=opt.embed_type)
- else:
- raise NotImplementedError('Transformer model name [%s] is not recognized' % opt.netT)
- return net
-
-
-##################################################################################
-# Discriminator
-##################################################################################
-class NLayerDiscriminator(nn.Module):
- """Defines a PatchGAN discriminator"""
-
- def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_attn=False):
- """Construct a PatchGAN discriminator
-
- Parameters:
- input_nc (int) -- the number of channels in input examples
- ndf (int) -- the number of filters in the last conv layer
- n_layers (int) -- the number of conv layers in the discriminator
- norm_layer -- normalization layer
- """
- super(NLayerDiscriminator, self).__init__()
- if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
- use_bias = norm_layer.func == nn.InstanceNorm2d
- else:
- use_bias = norm_layer == nn.InstanceNorm2d
-
- kw = 4
- padw = 1
-
- sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
-
- nf_mult = 1
- for n in range(1, n_layers): # gradually increase the number of filters
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)]
- if n == 2 and use_attn:
- sequence += [
- nn.Conv2d(ndf * nf_mult, ndf * nf_mult, kernel_size=1, stride=1, bias=use_bias),
- base_function.AttnAware(ndf * nf_mult)
- ]
-
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n_layers, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
- self.model = nn.Sequential(*sequence)
-
- def forward(self, input):
- """Standard forward."""
- return self.model(input)
-
-
-class StyleDiscriminator(nn.Module):
- def __init__(self, img_size, ndf=32, blur_kernel=[1, 3, 3, 1], use_attn=False):
- super(StyleDiscriminator, self).__init__()
-
- channel_multiplier = ndf / 64
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: int(512 * channel_multiplier),
- 64: int(256 * channel_multiplier),
- 128: int(128 * channel_multiplier),
- 256: int(64 * channel_multiplier),
- 512: int(32 * channel_multiplier),
- 1024: int(16 * channel_multiplier),
- }
-
- convs = [style_function.ConvLayer(3, channels[img_size], 1)]
-
- log_size = int(np.log2(img_size))
-
- in_channel = channels[img_size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2**(i-1)]
- if i == log_size - 3 and use_attn:
- convs.append(base_function.AttnAware(in_channel))
- convs.append(style_function.StyleBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = style_function.ConvLayer(in_channel+1, channels[4], 3)
- self.final_linear = nn.Sequential(
- style_function.EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- style_function.EqualLinear(channels[4], 1),
- )
-
- def forward(self, x):
-
- out = self.convs(x)
-
- b, c, h, w = out.shape
- group = min(b, self.stddev_group)
- stddev = out.view(group, -1, self.stddev_feat, c // self.stddev_feat, h, w)
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, h, w)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
- out = out.view(b, -1)
- out = self.final_linear(out)
-
- return out
-
-
-##################################################################################
-# Generator
-##################################################################################
-class RefinedGenerator(nn.Module):
- def __init__(self, input_nc, ngf=64, embed_dim=512, down_layers=3, mid_layers=6, num_res_blocks=1, dropout=0.0,
- resample_with_conv=True, activation='gelu', norm='pixel'):
- super(RefinedGenerator, self).__init__()
-
- activation_layer = base_function.get_nonlinearity_layer(activation)
- norm_layer = base_function.get_norm_layer(norm)
- self.down_layers = down_layers
- self.mid_layers = mid_layers
- self.num_res_blocks = num_res_blocks
- out_dims = []
- # start
- self.encode = base_function.PartialConv2d(input_nc, ngf, kernel_size=3, stride=1, padding=1)
- # down
- self.down = nn.ModuleList()
- out_dim = ngf
- for i in range(self.down_layers):
- block = nn.ModuleList()
- down = nn.Module()
- in_dim = out_dim
- out_dims.append(out_dim)
- out_dim = min(int(in_dim * 2), embed_dim)
- down.downsample = base_function.DownSample(in_dim, resample_with_conv, kernel_size=3)
- for i_block in range(self.num_res_blocks):
- block.append(base_function.ResnetBlock(in_dim, out_dim, 3, dropout, activation, norm))
- in_dim = out_dim
- down.block = block
- self.down.append(down)
- # middle
- self.mid = nn.ModuleList()
- for i in range(self.mid_layers):
- self.mid.append(base_function.ResnetBlock(out_dim, out_dim, 3, dropout, activation, norm))
- # up
- self.up = nn.ModuleList()
- for i in range(self.down_layers):
- block = nn.ModuleList()
- up = nn.Module()
- in_dim = out_dim
- out_dim = max(out_dims[-i-1], ngf)
- for i_block in range(self.num_res_blocks):
- block.append(base_function.ResnetBlock(in_dim, out_dim, 3, dropout, activation, norm))
- in_dim = out_dim
- if i == self.down_layers - 3:
- up.attn = base_function.AttnAware(out_dim, activation, norm)
- up.block = block
- upsample = True if i != 0 else False
- up.out = base_function.ToRGB(out_dim, input_nc, upsample, activation, norm)
- up.upsample = base_function.UpSample(out_dim, resample_with_conv, kernel_size=3)
- self.up.append(up)
- # end
- self.decode = base_function.ToRGB(out_dim, input_nc, True, activation, norm)
-
- def forward(self, x, mask=None):
- # start
- x = self.encode(x)
- pre = None
- # down
- for i in range(self.down_layers):
- x = self.down[i].downsample(x)
- if i == 2:
- pre = x
- for i_block in range(self.num_res_blocks):
- x = self.down[i].block[i_block](x)
- # middle
- for i in range(self.mid_layers):
- x = self.mid[i](x)
- # up
- skip = None
- for i in range(self.down_layers):
- for i_block in range(self.num_res_blocks):
- x = self.up[i].block[i_block](x)
- if i == self.down_layers - 3:
- mask = F.interpolate(mask, size=x.size()[2:], mode='bilinear', align_corners=True) if mask is not None else None
- x = self.up[i].attn(x, pre=pre, mask=mask)
- skip = self.up[i].out(x, skip)
- x = self.up[i].upsample(x)
- # end
- x = self.decode(x, skip)
-
- return x
\ No newline at end of file
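As a quick sanity check of the PatchGAN discriminator defined above, the following minimal sketch (module path assumed from this repository's layout) runs a dummy batch through `NLayerDiscriminator` and inspects the patch-level prediction map:

```python
import torch
from model.networks import NLayerDiscriminator  # import path assumed

netD = NLayerDiscriminator(input_nc=3, ndf=64, n_layers=3, use_attn=False)
x = torch.randn(1, 3, 256, 256)                 # dummy RGB batch
with torch.no_grad():
    patch_logits = netD(x)
# One logit per receptive-field patch rather than per image,
# e.g. roughly (1, 1, 30, 30) for a 256x256 input with n_layers=3.
print(patch_logits.shape)
```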
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/checkpoint.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/checkpoint.py
deleted file mode 100644
index 6af3fae43ac4b35532641a81eb13557edfc7dfba..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/checkpoint.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import warnings
-
-from annotator.uniformer.mmcv.fileio import FileClient
-from ..dist_utils import allreduce_params, master_only
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class CheckpointHook(Hook):
- """Save checkpoints periodically.
-
- Args:
- interval (int): The saving period. If ``by_epoch=True``, interval
- indicates epochs, otherwise it indicates iterations.
- Default: -1, which means "never".
- by_epoch (bool): Saving checkpoints by epoch or by iteration.
- Default: True.
- save_optimizer (bool): Whether to save optimizer state_dict in the
- checkpoint. It is usually used for resuming experiments.
- Default: True.
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, ``runner.work_dir`` will be used by default. If
- specified, the ``out_dir`` will be the concatenation of ``out_dir``
- and the last level directory of ``runner.work_dir``.
- `Changed in version 1.3.16.`
- max_keep_ckpts (int, optional): The maximum checkpoints to keep.
- In some cases we want only the latest few checkpoints and would
- like to delete old ones to save the disk space.
- Default: -1, which means unlimited.
- save_last (bool, optional): Whether to force the last checkpoint to be
- saved regardless of interval. Default: True.
- sync_buffer (bool, optional): Whether to synchronize buffers in
- different gpus. Default: False.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
- `New in version 1.3.16.`
-
- .. warning::
- Before v1.3.16, the ``out_dir`` argument indicates the path where the
- checkpoint is stored. However, since v1.3.16, ``out_dir`` indicates the
- root directory and the final path to save checkpoint is the
- concatenation of ``out_dir`` and the last level directory of
- ``runner.work_dir``. Suppose the value of ``out_dir`` is "/path/of/A"
- and the value of ``runner.work_dir`` is "/path/of/B", then the final
- path will be "/path/of/A/B".
- """
-
- def __init__(self,
- interval=-1,
- by_epoch=True,
- save_optimizer=True,
- out_dir=None,
- max_keep_ckpts=-1,
- save_last=True,
- sync_buffer=False,
- file_client_args=None,
- **kwargs):
- self.interval = interval
- self.by_epoch = by_epoch
- self.save_optimizer = save_optimizer
- self.out_dir = out_dir
- self.max_keep_ckpts = max_keep_ckpts
- self.save_last = save_last
- self.args = kwargs
- self.sync_buffer = sync_buffer
- self.file_client_args = file_client_args
-
- def before_run(self, runner):
- if not self.out_dir:
- self.out_dir = runner.work_dir
-
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
-
- # if `self.out_dir` is not equal to `runner.work_dir`, it means that
- # `self.out_dir` is set so the final `self.out_dir` is the
- # concatenation of `self.out_dir` and the last level directory of
- # `runner.work_dir`
- if self.out_dir != runner.work_dir:
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
-
- runner.logger.info((f'Checkpoints will be saved to {self.out_dir} by '
- f'{self.file_client.name}.'))
-
- # disable the create_symlink option because some file backends do not
- # allow to create a symlink
- if 'create_symlink' in self.args:
- if self.args[
- 'create_symlink'] and not self.file_client.allow_symlink:
- self.args['create_symlink'] = False
- warnings.warn(
- ('create_symlink is set as True by the user but is changed'
- 'to be False because creating symbolic link is not '
- f'allowed in {self.file_client.name}'))
- else:
- self.args['create_symlink'] = self.file_client.allow_symlink
-
- def after_train_epoch(self, runner):
- if not self.by_epoch:
- return
-
- # save checkpoint for following cases:
- # 1. every ``self.interval`` epochs
- # 2. reach the last epoch of training
- if self.every_n_epochs(
- runner, self.interval) or (self.save_last
- and self.is_last_epoch(runner)):
- runner.logger.info(
- f'Saving checkpoint at {runner.epoch + 1} epochs')
- if self.sync_buffer:
- allreduce_params(runner.model.buffers())
- self._save_checkpoint(runner)
-
- @master_only
- def _save_checkpoint(self, runner):
- """Save the current checkpoint and delete unwanted checkpoint."""
- runner.save_checkpoint(
- self.out_dir, save_optimizer=self.save_optimizer, **self.args)
- if runner.meta is not None:
- if self.by_epoch:
- cur_ckpt_filename = self.args.get(
- 'filename_tmpl', 'epoch_{}.pth').format(runner.epoch + 1)
- else:
- cur_ckpt_filename = self.args.get(
- 'filename_tmpl', 'iter_{}.pth').format(runner.iter + 1)
- runner.meta.setdefault('hook_msgs', dict())
- runner.meta['hook_msgs']['last_ckpt'] = self.file_client.join_path(
- self.out_dir, cur_ckpt_filename)
- # remove other checkpoints
- if self.max_keep_ckpts > 0:
- if self.by_epoch:
- name = 'epoch_{}.pth'
- current_ckpt = runner.epoch + 1
- else:
- name = 'iter_{}.pth'
- current_ckpt = runner.iter + 1
- redundant_ckpts = range(
- current_ckpt - self.max_keep_ckpts * self.interval, 0,
- -self.interval)
- filename_tmpl = self.args.get('filename_tmpl', name)
- for _step in redundant_ckpts:
- ckpt_path = self.file_client.join_path(
- self.out_dir, filename_tmpl.format(_step))
- if self.file_client.isfile(ckpt_path):
- self.file_client.remove(ckpt_path)
- else:
- break
-
- def after_train_iter(self, runner):
- if self.by_epoch:
- return
-
- # save checkpoint for following cases:
- # 1. every ``self.interval`` iterations
- # 2. reach the last iteration of training
- if self.every_n_iters(
- runner, self.interval) or (self.save_last
- and self.is_last_iter(runner)):
- runner.logger.info(
- f'Saving checkpoint at {runner.iter + 1} iterations')
- if self.sync_buffer:
- allreduce_params(runner.model.buffers())
- self._save_checkpoint(runner)
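In practice this hook is rarely constructed by hand; it is declared through a `checkpoint_config` dict in an mmcv-style config (compare the cgnet/fcn configs earlier in this diff). A minimal sketch of such an entry:

```python
# Hedged sketch: the runner converts this dict into a CheckpointHook.
checkpoint_config = dict(
    by_epoch=False,        # interpret `interval` as iterations, not epochs
    interval=4000,         # save every 4000 iterations
    max_keep_ckpts=3,      # keep only the 3 most recent checkpoints
    save_optimizer=True,   # store optimizer state so training can resume
)
```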
diff --git a/spaces/Ariharasudhan/YoloV5/utils/callbacks.py b/spaces/Ariharasudhan/YoloV5/utils/callbacks.py
deleted file mode 100644
index 166d8938322d4b35783be4068ae9561f66c94749..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/callbacks.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Callback utils
-"""
-
-import threading
-
-
-class Callbacks:
- """"
- Handles all registered callbacks for YOLOv5 Hooks
- """
-
- def __init__(self):
- # Define the available callbacks
- self._callbacks = {
- 'on_pretrain_routine_start': [],
- 'on_pretrain_routine_end': [],
- 'on_train_start': [],
- 'on_train_epoch_start': [],
- 'on_train_batch_start': [],
- 'optimizer_step': [],
- 'on_before_zero_grad': [],
- 'on_train_batch_end': [],
- 'on_train_epoch_end': [],
- 'on_val_start': [],
- 'on_val_batch_start': [],
- 'on_val_image_end': [],
- 'on_val_batch_end': [],
- 'on_val_end': [],
- 'on_fit_epoch_end': [], # fit = train + val
- 'on_model_save': [],
- 'on_train_end': [],
- 'on_params_update': [],
- 'teardown': [],}
- self.stop_training = False # set True to interrupt training
-
- def register_action(self, hook, name='', callback=None):
- """
- Register a new action to a callback hook
-
- Args:
- hook: The callback hook name to register the action to
- name: The name of the action for later reference
- callback: The callback to fire
- """
- assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}"
- assert callable(callback), f"callback '{callback}' is not callable"
- self._callbacks[hook].append({'name': name, 'callback': callback})
-
- def get_registered_actions(self, hook=None):
- """"
- Returns all the registered actions by callback hook
-
- Args:
- hook: The name of the hook to check, defaults to all
- """
- return self._callbacks[hook] if hook else self._callbacks
-
- def run(self, hook, *args, thread=False, **kwargs):
- """
- Loop through the registered actions for a hook and fire each callback, either on the main thread or in a daemon thread
-
- Args:
- hook: The name of the hook whose callbacks should be fired
- args: Arguments to receive from YOLOv5
- thread: (boolean) Run callbacks in daemon thread
- kwargs: Keyword Arguments to receive from YOLOv5
- """
-
- assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}"
- for logger in self._callbacks[hook]:
- if thread:
- threading.Thread(target=logger['callback'], args=args, kwargs=kwargs, daemon=True).start()
- else:
- logger['callback'](*args, **kwargs)
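A small usage sketch built only from the API above: register an action on one of the predefined hooks and fire it (the `log_start` callback is illustrative, not part of YOLOv5):

```python
from utils.callbacks import Callbacks  # path as in this repository

def log_start(epoch=None):
    print(f"training started, epoch={epoch}")

callbacks = Callbacks()
callbacks.register_action('on_train_start', name='log_start', callback=log_start)
callbacks.run('on_train_start', epoch=0)                   # -> training started, epoch=0
print(callbacks.get_registered_actions('on_train_start'))  # [{'name': 'log_start', 'callback': ...}]
```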
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/rule.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/rule.py
deleted file mode 100644
index fd00ce6e4cea506f3ab08e6412d2eb6443ef582c..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/rule.py
+++ /dev/null
@@ -1,130 +0,0 @@
-from typing import Union
-
-from .align import AlignMethod
-from .cells import cell_len, set_cell_size
-from .console import Console, ConsoleOptions, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .style import Style
-from .text import Text
-
-
-class Rule(JupyterMixin):
- """A console renderable to draw a horizontal rule (line).
-
- Args:
- title (Union[str, Text], optional): Text to render in the rule. Defaults to "".
- characters (str, optional): Character(s) used to draw the line. Defaults to "─".
- style (StyleType, optional): Style of Rule. Defaults to "rule.line".
- end (str, optional): Character at end of Rule. Defaults to "\\n".
- align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center".
- """
-
- def __init__(
- self,
- title: Union[str, Text] = "",
- *,
- characters: str = "─",
- style: Union[str, Style] = "rule.line",
- end: str = "\n",
- align: AlignMethod = "center",
- ) -> None:
- if cell_len(characters) < 1:
- raise ValueError(
- "'characters' argument must have a cell width of at least 1"
- )
- if align not in ("left", "center", "right"):
- raise ValueError(
- f'invalid value for align, expected "left", "center", "right" (not {align!r})'
- )
- self.title = title
- self.characters = characters
- self.style = style
- self.end = end
- self.align = align
-
- def __repr__(self) -> str:
- return f"Rule({self.title!r}, {self.characters!r})"
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
- width = options.max_width
-
- characters = (
- "-"
- if (options.ascii_only and not self.characters.isascii())
- else self.characters
- )
-
- chars_len = cell_len(characters)
- if not self.title:
- yield self._rule_line(chars_len, width)
- return
-
- if isinstance(self.title, Text):
- title_text = self.title
- else:
- title_text = console.render_str(self.title, style="rule.text")
-
- title_text.plain = title_text.plain.replace("\n", " ")
- title_text.expand_tabs()
-
- required_space = 4 if self.align == "center" else 2
- truncate_width = max(0, width - required_space)
- if not truncate_width:
- yield self._rule_line(chars_len, width)
- return
-
- rule_text = Text(end=self.end)
- if self.align == "center":
- title_text.truncate(truncate_width, overflow="ellipsis")
- side_width = (width - cell_len(title_text.plain)) // 2
- left = Text(characters * (side_width // chars_len + 1))
- left.truncate(side_width - 1)
- right_length = width - cell_len(left.plain) - cell_len(title_text.plain)
- right = Text(characters * (side_width // chars_len + 1))
- right.truncate(right_length)
- rule_text.append(left.plain + " ", self.style)
- rule_text.append(title_text)
- rule_text.append(" " + right.plain, self.style)
- elif self.align == "left":
- title_text.truncate(truncate_width, overflow="ellipsis")
- rule_text.append(title_text)
- rule_text.append(" ")
- rule_text.append(characters * (width - rule_text.cell_len), self.style)
- elif self.align == "right":
- title_text.truncate(truncate_width, overflow="ellipsis")
- rule_text.append(characters * (width - title_text.cell_len - 1), self.style)
- rule_text.append(" ")
- rule_text.append(title_text)
-
- rule_text.plain = set_cell_size(rule_text.plain, width)
- yield rule_text
-
- def _rule_line(self, chars_len: int, width: int) -> Text:
- rule_text = Text(self.characters * ((width // chars_len) + 1), self.style)
- rule_text.truncate(width)
- rule_text.plain = set_cell_size(rule_text.plain, width)
- return rule_text
-
- def __rich_measure__(
- self, console: Console, options: ConsoleOptions
- ) -> Measurement:
- return Measurement(1, 1)
-
-
-if __name__ == "__main__": # pragma: no cover
- import sys
-
- from pip._vendor.rich.console import Console
-
- try:
- text = sys.argv[1]
- except IndexError:
- text = "Hello, World"
- console = Console()
- console.print(Rule(title=text))
-
- console = Console()
- console.print(Rule("foo"), width=4)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/errors.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/errors.py
deleted file mode 100644
index ec7fb3b6c4856708dc6bc3b0c35fd8df73156029..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/errors.py
+++ /dev/null
@@ -1,58 +0,0 @@
-"""setuptools.errors
-
-Provides exceptions used by setuptools modules.
-"""
-
-from distutils import errors as _distutils_errors
-
-
-# Re-export errors from distutils to facilitate the migration to PEP632
-
-ByteCompileError = _distutils_errors.DistutilsByteCompileError
-CCompilerError = _distutils_errors.CCompilerError
-ClassError = _distutils_errors.DistutilsClassError
-CompileError = _distutils_errors.CompileError
-ExecError = _distutils_errors.DistutilsExecError
-FileError = _distutils_errors.DistutilsFileError
-InternalError = _distutils_errors.DistutilsInternalError
-LibError = _distutils_errors.LibError
-LinkError = _distutils_errors.LinkError
-ModuleError = _distutils_errors.DistutilsModuleError
-OptionError = _distutils_errors.DistutilsOptionError
-PlatformError = _distutils_errors.DistutilsPlatformError
-PreprocessError = _distutils_errors.PreprocessError
-SetupError = _distutils_errors.DistutilsSetupError
-TemplateError = _distutils_errors.DistutilsTemplateError
-UnknownFileError = _distutils_errors.UnknownFileError
-
-# The root error class in the hierarchy
-BaseError = _distutils_errors.DistutilsError
-
-
-class RemovedCommandError(BaseError, RuntimeError):
- """Error used for commands that have been removed in setuptools.
-
- Since ``setuptools`` is built on ``distutils``, simply removing a command
- from ``setuptools`` will make the behavior fall back to ``distutils``; this
- error is raised if a command exists in ``distutils`` but has been actively
- removed in ``setuptools``.
- """
-
-
-class PackageDiscoveryError(BaseError, RuntimeError):
- """Impossible to perform automatic discovery of packages and/or modules.
-
- The current project layout or given discovery options can lead to problems when
- scanning the project directory.
-
- Setuptools might also refuse to complete auto-discovery if an error prone condition
- is detected (e.g. when a project is organised as a flat-layout but contains
- multiple directories that can be taken as top-level packages inside a single
- distribution [*]_). In these situations the users are encouraged to be explicit
- about which packages to include or to make the discovery parameters more specific.
-
- .. [*] Since multi-package distributions are uncommon it is very likely that the
- developers did not intend for all the directories to be packaged, and are just
- leaving auxiliary code in the repository top-level, such as maintenance-related
- scripts.
- """
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Dragon Ball Z Shin Budokai 7 Ppsspp.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Dragon Ball Z Shin Budokai 7 Ppsspp.md
deleted file mode 100644
index 3bf8a82c29b9b5d1ce9f54bb90754be8dd9642bb..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Dragon Ball Z Shin Budokai 7 Ppsspp.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
-How to Download Dragon Ball Z Shin Budokai 7 PPSSPP for Android
-
-Dragon Ball Z Shin Budokai 7 PPSSPP is a fighting game based on the Dragon Ball Z anime series, developed by Dimps and released for the PlayStation Portable in 2006. It is the sequel to Dragon Ball Z: Shin Budokai and features a story mode that follows the events of the movie Fusion Reborn, in which Goku and Vegeta must face a powerful enemy called Janemba.
-
-If you are a Dragon Ball Z fan, or if you enjoy fast and intense fighting games, you should definitely try Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device. You can play as your favorite characters from the series, such as Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Majin Buu, Broly and more. You can also customize your profile card with items from the in-game shop and challenge your friends in wireless multiplayer battles.
-
-In this article, I will show you how to download and install Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device, along with some tips and tricks on how to play it. I will also cover some of the game's features and requirements. So, without further ado, let's get started!
-
-How to Download Dragon Ball Z Shin Budokai 7 PPSSPP
-
-To play Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device, you need two things: the game's ISO file and the PPSSPP emulator. The ISO file is a compressed copy of the original game disc that contains all of the game's data and graphics. The PPSSPP emulator is software that lets you run PSP games on your Android device.
-
-Here are the steps to download and install Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device:
-
-First, download the game's ISO file from a reliable source. You can use this link to download it from Geeksblogger.com. The file size is about 300 MB.
-
-Next, download the PPSSPP emulator app for Android.
-
-After downloading both files, you need to install them on your Android device. To do this, enable unknown sources in your device settings, which allows you to install apps from sources other than the Google Play Store.
-
-Once you have enabled unknown sources, open your file manager and locate the downloaded files. Tap them one by one and follow the instructions to install them.
-
-After installing, extract the game's ISO file from its ZIP folder. You can use any app that can unzip files, such as RAR or ZArchiver. Simply tap the ZIP folder and choose "extract here" or extract to a specific location.
-
-Now that you have extracted the game's ISO file, you are ready to play. Launch the PPSSPP emulator app and tap Games. Then browse to the folder where you extracted the ISO file and tap it. You should see the game's icon and title on the screen; tap it to start playing.
-
-Congratulations! You have successfully downloaded and installed Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device. Now, let's look at how to play it.
-
-How to Play Dragon Ball Z Shin Budokai 7 PPSSPP
-
-Dragon Ball Z Shin Budokai 7 PPSSPP is a fun and addictive fighting game that will keep you entertained for hours. It has several modes to choose from, such as story mode, arcade mode, Z trial mode, network battle mode and profile card mode. Each mode has its own objectives and challenges that you can complete to unlock new items and characters.
-
-The game also has a simple, intuitive control scheme that you can customize to your liking. You can use the on-screen virtual buttons or touch gestures to perform actions such as moving, attacking, blocking, charging, transforming and using special moves. You can also adjust the sensitivity and layout of the controls in the settings menu.
-
-To access the game menu, tap the pause button in the top-right corner of the screen. From there you can save or load your progress, change the game settings, view your profile card or quit the game.
-
-To switch characters during a battle, tap the character icon in the bottom-left corner of the screen. You can also tag in your partner by pressing the L + R buttons at the same time.
-
-To perform a fusion, you need two compatible characters on your team, such as Goku and Vegeta, or Goten and Trunks. Fill your Ki gauge by attacking or charging, and once it is full, press L + R + X at the same time to start the fusion. You will then transform into a more powerful character, such as Gogeta or Gotenks.
-
-To use a special move, you need enough Ki in your gauge. Press the O button to trigger it. You can also use different variations of a special move by pressing O plus a directional button. For example, Goku can use Kamehameha, Super Kamehameha or Instant Kamehameha depending on the direction you press.
-
-To use an ultimate move, you need at least three Ki bars in your gauge. Press O + X at the same time to unleash it. Ultimate moves are very powerful and can deal massive damage to your opponent, but they also consume a lot of Ki and leave you vulnerable for a while.
-
-These are some basic tips and tricks for playing Dragon Ball Z Shin Budokai 7 PPSSPP. Of course, there are more advanced techniques and strategies you can learn as you play; the best way to master the game is to practice and experiment with different characters and moves.
-
-
-Features of Dragon Ball Z Shin Budokai 7 PPSSPP
-
-Characters: The game features 22 playable characters from the Dragon Ball Z series, each with their own moves and transformations. You can play as heroes such as Goku, Vegeta, Gohan, Piccolo, Krillin, Gotenks, Gogeta, Vegito and more, or as villains such as Frieza, Cell, Majin Buu, Broly, Janemba, Cooler and more.
-
-Stages: The game features 11 stages from the Dragon Ball Z series, each with its own background music and environmental effects. You can fight in locations such as Earth, Namek, Hell, the World Tournament Arena, the Hyperbolic Time Chamber, the Supreme Kai's Planet and more.
-
-Graphics: The game features high-quality visuals that resemble the Dragon Ball Z anime style. The characters are well designed and animated with smooth movements and expressions, the stages are colorful and detailed with realistic lighting and shadows, and the special effects are flashy and impressive, with sparks, explosions and energy beams. The game also supports HD resolution and 60 FPS for a smooth, immersive experience.
-
-Sound: The game features high-quality audio that enhances the atmosphere and mood. The music is composed by Kenji Yamamoto, who also worked on the Dragon Ball Z anime; it is catchy and energetic, matching the game's tone and pace. The sound effects are realistic and satisfying, adding impact and feedback, and the voice acting is performed by the original Japanese cast of the anime, giving the game authenticity and emotion.
-
-These are some of the features that make Dragon Ball Z Shin Budokai 7 PPSSPP a great game to play on your Android device. There are more features you can discover and enjoy as you play; the game is full of surprises and secrets that will keep you hooked for hours.
-
-
Dragon Ball Z Shin Budokai 7 PPSSPP es un juego relativamente ligero que puede ejecutarse en la mayoría de los dispositivos Android. Sin embargo, para garantizar una experiencia de juego suave y óptima, debe verificar los requisitos mínimos y recomendados para ejecutar el juego. Aquí están:
-
-
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| Android version | 4.0 or higher | 6.0 or higher |
| RAM | 1 GB or more | 2 GB or more |
| Storage space | 500 MB or more | 1 GB or more |
| Processor speed | 1 GHz or higher | 2 GHz or higher |
| Graphics quality | Low or medium | High or ultra |
| FPS (frames per second) | 30 or higher | 60 or higher |
-
-
If your device meets or exceeds these requirements, you should be able to play Dragon Ball Z Shin Budokai 7 PPSSPP without any issues. If it falls below them, you may experience lag, stuttering, or crashes while playing. In that case, try lowering the graphics quality, resolution, or FPS in the PPSSPP emulator's settings menu to improve performance.
-
Conclusion
-
In short, Dragon Ball Z Shin Budokai 7 PPSSPP is an impressive fighting game that you can play on your Android device using the PPSSPP emulator. It offers plenty of features that will appeal to fans of Dragon Ball Z and of fighting games in general: you can play as your favorite characters from the series, fight in various stages from the anime, enjoy high-quality graphics and sound, and challenge your friends in multiplayer mode.
-
-
I hope you found this article helpful and informative. If you have any questions or comments about Dragon Ball Z Shin Budokai 7 PPSSPP, feel free to leave a comment below. I'd love to hear from you!
-
So what are you waiting for? Go ahead, download Dragon Ball Z Shin Budokai 7 PPSSPP on your Android device, and enjoy this awesome game!
-
Frequently Asked Questions
-
Here are some frequently asked questions and answers about Dragon Ball Z Shin Budokai 7 PPSSPP:
-
-
Is Dragon Ball Z Shin Budokai 7 PPSSPP an official game?
-
No, Dragon Ball Z Shin Budokai 7 PPSSPP is not an official game. It is a modified version of Dragon Ball Z: Shin Budokai - Another Road, an official game released for the PlayStation Portable in 2007. The mod adds new characters, stages, graphics, and sound to the original game.
-
Is Dragon Ball Z Shin Budokai 7 PPSSPP safe to download?
-
Yes, Dragon Ball Z Shin Budokai 7 PPSSPP is safe to download as long as you use the links provided in this article, which come from trusted sources that have tested and verified the game files. That said, you should always be careful when downloading any file from the internet and scan it with antivirus software before opening it.
-
Can I play Dragon Ball Z Shin Budokai 7 PPSSPP offline?
-
Yes, you can play Dragon Ball Z Shin Budokai 7 PPSSPP without an internet connection. You only need to be online to download the game files and the PPSSPP emulator app; once they are installed on your device, you can play offline anytime, anywhere.
-
Can I play Dragon Ball Z Shin Budokai 7 PPSSPP with a controller?
-
-
Can I play Dragon Ball Z Shin Budokai 7 PPSSPP with other players?
-
Yes, you can play Dragon Ball Z Shin Budokai 7 PPSSPP with other players in network battle mode. This mode lets you fight other players online or locally over a wireless connection; you can also chat with them and view their profile cards. To access it, you need an internet connection and a valid IP address.
-
How can I unlock more characters and items in Dragon Ball Z Shin Budokai 7 PPSSPP?
-
You can unlock more characters and items in Dragon Ball Z Shin Budokai 7 PPSSPP by completing the game's various modes and challenges. For example, you can unlock new characters by finishing story mode, arcade mode, or Z trial mode, and new items by earning Zeni (the in-game currency) and spending it in the in-game shop. You can also use cheats or mods to unlock everything at once, but that can spoil the fun and the challenge of the game.
-
-
I hope this FAQ has answered some of your questions about Dragon Ball Z Shin Budokai 7 PPSSPP. If you have any others, feel free to leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/__init__.py
deleted file mode 100644
index 6001b27b37430efbf22057efde79637b340fa1db..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/version.py
deleted file mode 100644
index c5e9d85cd75884b129d4ab8d0453c0e50d0c1f68..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/version.py
+++ /dev/null
@@ -1,9 +0,0 @@
-"""
-This module exists only to simplify retrieving the version number of chardet
-from within setuptools and from chardet subpackages.
-
-:author: Dan Blanchard (dan.blanchard@gmail.com)
-"""
-
-__version__ = "5.1.0"
-VERSION = __version__.split(".")
diff --git a/spaces/BucketHeadP65/confusion_matrix/confusion_matrix.py b/spaces/BucketHeadP65/confusion_matrix/confusion_matrix.py
deleted file mode 100644
index 444db6bd7b8e96c26ee18e6e68d5320f8f34a325..0000000000000000000000000000000000000000
--- a/spaces/BucketHeadP65/confusion_matrix/confusion_matrix.py
+++ /dev/null
@@ -1,149 +0,0 @@
-"""Confusion Matrix metric."""
-
-import datasets
-import evaluate
-from sklearn.metrics import confusion_matrix
-
-_DESCRIPTION = """
-Compute confusion matrix to evaluate the accuracy of a classification.
-By definition a confusion matrix :math:`C` is such that :math:`C_{i, j}`
-is equal to the number of observations known to be in group :math:`i` and
-predicted to be in group :math:`j`.
-
-Thus in binary classification, the count of true negatives is
-:math:`C_{0,0}`, false negatives is :math:`C_{1,0}`, true positives is
-:math:`C_{1,1}` and false positives is :math:`C_{0,1}`.
-
-Read more in the :ref:`User Guide <confusion_matrix>`.
-"""
-
-
-_KWARGS_DESCRIPTION = """
-Args:
-
- y_true : array-like of shape (n_samples,)
- Ground truth (correct) target values.
-
- y_pred : array-like of shape (n_samples,)
- Estimated targets as returned by a classifier.
-
- labels : array-like of shape (n_classes), default=None
- List of labels to index the matrix. This may be used to reorder
- or select a subset of labels.
- If ``None`` is given, those that appear at least once
- in ``y_true`` or ``y_pred`` are used in sorted order.
-
- sample_weight : array-like of shape (n_samples,), default=None
- Sample weights.
-
- .. versionadded:: 0.18
-
- normalize : {'true', 'pred', 'all'}, default=None
- Normalizes confusion matrix over the true (rows), predicted (columns)
- conditions or all the population. If None, confusion matrix will not be
- normalized.
-
-Returns:
-
- C : ndarray of shape (n_classes, n_classes)
- Confusion matrix whose i-th row and j-th
- column entry indicates the number of
- samples with true label being i-th class
- and predicted label being j-th class.
-
-See Also:
-
- ConfusionMatrixDisplay.from_estimator : Plot the confusion matrix
- given an estimator, the data, and the label.
- ConfusionMatrixDisplay.from_predictions : Plot the confusion matrix
- given the true and predicted labels.
- ConfusionMatrixDisplay : Confusion Matrix visualization.
-
-References:
-
- .. [1] `Wikipedia entry for the Confusion matrix
-           <https://en.wikipedia.org/wiki/Confusion_matrix>`_
- (Wikipedia and other references may use a different
- convention for axes).
-
-Examples:
-
- >>> from sklearn.metrics import confusion_matrix
- >>> y_true = [2, 0, 2, 2, 0, 1]
- >>> y_pred = [0, 0, 2, 2, 0, 2]
- >>> confusion_matrix(y_true, y_pred)
- array([[2, 0, 0],
- [0, 0, 1],
- [1, 0, 2]])
-
- >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
- >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
- >>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
- array([[2, 0, 0],
- [0, 0, 1],
- [1, 0, 2]])
-
- In the binary case, we can extract true positives, etc as follows:
-
- >>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
- >>> (tn, fp, fn, tp)
- (0, 2, 1, 1)
-"""
-
-
-_CITATION = """
-@article{scikit-learn,
- title={Scikit-learn: Machine Learning in {P}ython},
- author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
- and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
- and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
- Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
- journal={Journal of Machine Learning Research},
- volume={12},
- pages={2825--2830},
- year={2011}
-}
-"""
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class ConfusionMatrix(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=datasets.Features(
- {
- "predictions": datasets.Sequence(datasets.Value("int32")),
- "references": datasets.Sequence(datasets.Value("int32")),
- }
- if self.config_name == "multilabel"
- else {
- "predictions": datasets.Value("int32"),
- "references": datasets.Value("int32"),
- }
- ),
- reference_urls=[
- "https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html"
- ],
- )
-
- def _compute(
- self,
- predictions,
- references,
- *,
- labels=None,
- sample_weight=None,
- normalize=None
- ):
- return {
- "confusion_matrix": confusion_matrix(
- y_true=references,
- y_pred=predictions,
- labels=labels,
- sample_weight=sample_weight,
- normalize=normalize,
- )
- }
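-
-# Usage sketch (not part of the original module): one way this metric might be invoked
-# through the `evaluate` API, assuming the script is loadable as a local path (the
-# "./confusion_matrix.py" path below is hypothetical) or from the Space that hosts it.
-if __name__ == "__main__":
-    cm_metric = evaluate.load("./confusion_matrix.py")
-    result = cm_metric.compute(
-        predictions=[0, 0, 2, 2, 0, 2],
-        references=[2, 0, 2, 2, 0, 1],
-    )
-    print(result["confusion_matrix"])
-    # expected output, matching the sklearn example in the docstring above:
-    # [[2 0 0]
-    #  [0 0 1]
-    #  [1 0 2]]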
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/training.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/training.md
deleted file mode 100644
index 00e3ebec432eed718648921f0192d284da32afe6..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/training.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Training
-
-From the previous tutorials, you may now have a custom model and data loader.
-
-You are free to create your own optimizer and write the training logic: it's
-usually easy with PyTorch, and it allows researchers to see the entire training
-logic more clearly and have full control.
-One such example is provided in [tools/plain_train_net.py](../../tools/plain_train_net.py).
-
-We also provide a standardized "trainer" abstraction with a
-[minimal hook system](../modules/engine.html#detectron2.engine.HookBase)
-that helps simplify the standard types of training.
-
-You can use
-[SimpleTrainer().train()](../modules/engine.html#detectron2.engine.SimpleTrainer)
-which provides minimal abstraction for single-cost single-optimizer single-data-source training.
-The builtin `train_net.py` script uses
-[DefaultTrainer().train()](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer),
-which includes more standard default behaviors that one might want to opt in to,
-including default configurations for logging, evaluation, checkpointing, etc.
-This also means that it's less likely to support some non-standard behavior
-you might want during research.
-
-To customize the training loops, you can:
-
-1. If your customization is similar to what `DefaultTrainer` is already doing,
-you can look at the source code of [DefaultTrainer](../../detectron2/engine/defaults.py)
-and overwrite some of its behaviors with new parameters or new hooks (see the sketch after this list).
-2. If you need something very novel, you can start from [tools/plain_train_net.py](../../tools/plain_train_net.py) to implement them yourself.
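-
-For the first option, a minimal sketch (assuming a recent detectron2 where `DefaultTrainer`
-exposes a `build_evaluator` classmethod and `COCOEvaluator` accepts `output_dir`; adapt the
-config setup to your own model and datasets) might look like:
-
-```
-from detectron2.config import get_cfg
-from detectron2.engine import DefaultTrainer
-from detectron2.evaluation import COCOEvaluator
-
-class MyTrainer(DefaultTrainer):
-    @classmethod
-    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
-        # change only which evaluator is used; everything else keeps DefaultTrainer's behavior
-        return COCOEvaluator(dataset_name, output_dir=output_folder or cfg.OUTPUT_DIR)
-
-cfg = get_cfg()
-# ... merge your model / dataset / solver settings into cfg here ...
-trainer = MyTrainer(cfg)
-trainer.resume_or_load(resume=False)
-trainer.train()
-```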
-
-### Logging of Metrics
-
-During training, metrics are saved to a centralized [EventStorage](../modules/utils.html#detectron2.utils.events.EventStorage).
-You can use the following code to access it and log metrics to it:
-```
-from detectron2.utils.events import get_event_storage
-
-# inside the model:
-if self.training:
- value = # compute the value from inputs
- storage = get_event_storage()
- storage.put_scalar("some_accuracy", value)
-```
-
-Refer to its documentation for more details.
-
-Metrics are then saved to various destinations with [EventWriter](../modules/utils.html#module-detectron2.utils.events).
-DefaultTrainer enables a few `EventWriter` with default configurations.
-See above for how to customize them.
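-
-As a rough illustration (assuming your detectron2 version provides the `build_writers` hook
-and the writer classes shown below), replacing the default writers could look like:
-
-```
-from detectron2.engine import DefaultTrainer
-from detectron2.utils.events import CommonMetricPrinter, JSONWriter, TensorboardXWriter
-
-class MyTrainer(DefaultTrainer):
-    def build_writers(self):
-        # console printer plus JSON and TensorBoard logs in a custom folder
-        return [
-            CommonMetricPrinter(self.max_iter),
-            JSONWriter("./output/metrics.json"),
-            TensorboardXWriter("./output"),
-        ]
-```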
diff --git a/spaces/CVPR/lama-example/fetch_data/places_standard_train_prepare.sh b/spaces/CVPR/lama-example/fetch_data/places_standard_train_prepare.sh
deleted file mode 100644
index b5389e7096bade08526162733658e221808716fd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/fetch_data/places_standard_train_prepare.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-mkdir -p places_standard_dataset/train
-
-# untar without folder structure
-tar -xvf train_large_places365standard.tar --transform='s/.*\///' -C places_standard_dataset/train
-
-# create location config places.yaml
-PWD=$(pwd)
-DATASET=${PWD}/places_standard_dataset
-PLACES=${PWD}/configs/training/location/places_standard.yaml
-
-touch $PLACES
-echo "# @package _group_" >> $PLACES
-echo "data_root_dir: ${DATASET}/" >> $PLACES
-echo "out_root_dir: ${PWD}/experiments/" >> $PLACES
-echo "tb_dir: ${PWD}/tb_logs/" >> $PLACES
-echo "pretrained_models: ${PWD}/" >> $PLACES
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/luxun_say/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/luxun_say/__init__.py
deleted file mode 100644
index 44db464aeded0cb248d917e30dd630fa25037856..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/luxun_say/__init__.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-
-def luxun_say(images, texts: List[str], args):
- text = texts[0]
- frame = BuildImage.open(img_dir / "0.jpg")
- try:
- frame.draw_text(
- (40, frame.height - 200, frame.width - 40, frame.height - 100),
- text,
- allow_wrap=True,
- max_fontsize=40,
- min_fontsize=30,
- fill="white",
- )
- except ValueError:
- raise TextOverLength(text)
- frame.draw_text((320, 400), "--鲁迅", fontsize=30, fill="white")
- return frame.save_jpg()
-
-
-add_meme(
- "luxun_say",
- luxun_say,
- min_texts=1,
- max_texts=1,
- default_texts=["我没有说过这句话"],
- keywords=["鲁迅说", "鲁迅说过"],
-)
diff --git a/spaces/CjangCjengh/Shanghainese-TTS/app.py b/spaces/CjangCjengh/Shanghainese-TTS/app.py
deleted file mode 100644
index 6897c5ecadcd287763c66d8610d9a7976bd7798c..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Shanghainese-TTS/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import torch
-import librosa
-import commons
-import utils
-from models import SynthesizerTrn
-from text import text_to_sequence
-import numpy as np
-from mel_processing import spectrogram_torch
-import gradio as gr
-from text.cleaners import shanghainese_cleaners
-
-
-DEFAULT_TEXT='阿拉小人天天辣辣白相,书一眼也勿看,拿我急煞脱了。侬讲是𠲎?'
-
-
-def clean_text(text,ipa_input):
- if ipa_input:
- return shanghainese_cleaners(text)
- return text
-
-
-def get_text(text, hps, cleaned=False):
- if cleaned:
- text_norm = text_to_sequence(text, hps.symbols, [])
- else:
- text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-
-def speech_synthesize(text, cleaned, length_scale):
- text=text.replace('\n','')
- print(text)
- stn_tst = get_text(text, hps_ms, cleaned)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.LongTensor([0])
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=0.667, noise_scale_w=0.8, length_scale=length_scale)[0][0,0].data.cpu().float().numpy()
- return (hps_ms.data.sampling_rate, audio)
-
-
-if __name__=='__main__':
- hps_ms = utils.get_hparams_from_file('model/config.json')
- n_speakers = hps_ms.data.n_speakers
- n_symbols = len(hps_ms.symbols)
- speakers = hps_ms.speakers
-
- net_g_ms = SynthesizerTrn(
- n_symbols,
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=n_speakers,
- **hps_ms.model)
- _ = net_g_ms.eval()
- utils.load_checkpoint('model/model.pth', net_g_ms)
-
- with gr.Blocks() as app:
- gr.Markdown('# Shanghainese Text to Speech\n'
- '')
-        gr.Markdown('')
- text_input = gr.TextArea(label='Text', placeholder='Type your text here',value=DEFAULT_TEXT)
- cleaned_text=gr.Checkbox(label='IPA Input',default=True)
- length_scale=gr.Slider(0.5,2,1,step=0.1,label='Speaking Speed',interactive=True)
- tts_button = gr.Button('Synthesize')
- audio_output = gr.Audio(label='Speech Synthesized')
- cleaned_text.change(clean_text,[text_input,cleaned_text],[text_input])
- tts_button.click(speech_synthesize,[text_input,cleaned_text,length_scale],[audio_output])
- gr.Markdown('## Based on\n'
- '- [https://github.com/jaywalnut310/vits](https://github.com/jaywalnut310/vits)\n\n'
- '## Dataset\n'
- '- [http://shh.dict.cn/](http://shh.dict.cn/)\n\n'
- '## Lexicon\n'
- '- [https://www.wugniu.com/](https://www.wugniu.com/)\n\n'
- '- [https://github.com/MaigoAkisame/MCPDict](https://github.com/MaigoAkisame/MCPDict)\n\n'
- '- [https://github.com/edward-martyr/rime-yahwe_zaonhe](https://github.com/edward-martyr/rime-yahwe_zaonhe)')
-
- app.launch()
\ No newline at end of file
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/base_dataset_builder.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/base_dataset_builder.py
deleted file mode 100644
index 86c2cf688e9bcac67138aa32c58927e4a5ddebba..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/base_dataset_builder.py
+++ /dev/null
@@ -1,236 +0,0 @@
-"""
- This file is from
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import logging
-import os
-import shutil
-import warnings
-
-from omegaconf import OmegaConf
-import torch.distributed as dist
-from torchvision.datasets.utils import download_url
-
-import video_llama.common.utils as utils
-from video_llama.common.dist_utils import is_dist_avail_and_initialized, is_main_process
-from video_llama.common.registry import registry
-from video_llama.processors.base_processor import BaseProcessor
-
-
-
-class BaseDatasetBuilder:
- train_dataset_cls, eval_dataset_cls = None, None
-
- def __init__(self, cfg=None):
- super().__init__()
-
- if cfg is None:
- # help to create datasets from default config.
- self.config = load_dataset_config(self.default_config_path())
- elif isinstance(cfg, str):
- self.config = load_dataset_config(cfg)
- else:
- # when called from task.build_dataset()
- self.config = cfg
-
- self.data_type = self.config.data_type
-
- self.vis_processors = {"train": BaseProcessor(), "eval": BaseProcessor()}
- self.text_processors = {"train": BaseProcessor(), "eval": BaseProcessor()}
-
- def build_datasets(self):
- # download, split, etc...
- # only called on 1 GPU/TPU in distributed
-
- if is_main_process():
- self._download_data()
-
- if is_dist_avail_and_initialized():
- dist.barrier()
-
- # at this point, all the annotations and image/videos should be all downloaded to the specified locations.
- logging.info("Building datasets...")
- datasets = self.build() # dataset['train'/'val'/'test']
-
- return datasets
-
- def build_processors(self):
- vis_proc_cfg = self.config.get("vis_processor")
- txt_proc_cfg = self.config.get("text_processor")
-
- if vis_proc_cfg is not None:
- vis_train_cfg = vis_proc_cfg.get("train")
- vis_eval_cfg = vis_proc_cfg.get("eval")
-
- self.vis_processors["train"] = self._build_proc_from_cfg(vis_train_cfg)
- self.vis_processors["eval"] = self._build_proc_from_cfg(vis_eval_cfg)
-
- if txt_proc_cfg is not None:
- txt_train_cfg = txt_proc_cfg.get("train")
- txt_eval_cfg = txt_proc_cfg.get("eval")
-
- self.text_processors["train"] = self._build_proc_from_cfg(txt_train_cfg)
- self.text_processors["eval"] = self._build_proc_from_cfg(txt_eval_cfg)
-
- @staticmethod
- def _build_proc_from_cfg(cfg):
- return (
- registry.get_processor_class(cfg.name).from_config(cfg)
- if cfg is not None
- else None
- )
-
- @classmethod
- def default_config_path(cls, type="default"):
- return utils.get_abs_path(cls.DATASET_CONFIG_DICT[type])
-
- def _download_data(self):
- self._download_ann()
- self._download_vis()
-
- def _download_ann(self):
- """
- Download annotation files if necessary.
- All the vision-language datasets should have annotations of unified format.
-
- storage_path can be:
- (1) relative/absolute: will be prefixed with env.cache_root to make full path if relative.
- (2) basename/dirname: will be suffixed with base name of URL if dirname is provided.
-
- Local annotation paths should be relative.
- """
- anns = self.config.build_info.annotations
-
- splits = anns.keys()
-
- cache_root = registry.get_path("cache_root")
-
- for split in splits:
- info = anns[split]
-
- urls, storage_paths = info.get("url", None), info.storage
-
- if isinstance(urls, str):
- urls = [urls]
- if isinstance(storage_paths, str):
- storage_paths = [storage_paths]
-
- assert len(urls) == len(storage_paths)
-
- for url_or_filename, storage_path in zip(urls, storage_paths):
- # if storage_path is relative, make it full by prefixing with cache_root.
- if not os.path.isabs(storage_path):
- storage_path = os.path.join(cache_root, storage_path)
-
- dirname = os.path.dirname(storage_path)
- if not os.path.exists(dirname):
- os.makedirs(dirname)
-
- if os.path.isfile(url_or_filename):
- src, dst = url_or_filename, storage_path
- if not os.path.exists(dst):
- shutil.copyfile(src=src, dst=dst)
- else:
- logging.info("Using existing file {}.".format(dst))
- else:
- if os.path.isdir(storage_path):
- # if only dirname is provided, suffix with basename of URL.
- raise ValueError(
- "Expecting storage_path to be a file path, got directory {}".format(
- storage_path
- )
- )
- else:
- filename = os.path.basename(storage_path)
-
- download_url(url=url_or_filename, root=dirname, filename=filename)
-
- def _download_vis(self):
-
- storage_path = self.config.build_info.get(self.data_type).storage
- storage_path = utils.get_cache_path(storage_path)
-
- if not os.path.exists(storage_path):
- warnings.warn(
- f"""
- The specified path {storage_path} for visual inputs does not exist.
- Please provide a correct path to the visual inputs or
- refer to datasets/download_scripts/README.md for downloading instructions.
- """
- )
-
- def build(self):
- """
- Create by split datasets inheriting torch.utils.data.Datasets.
-
- # build() can be dataset-specific. Overwrite to customize.
- """
- self.build_processors()
-
- build_info = self.config.build_info
-
- ann_info = build_info.annotations
- vis_info = build_info.get(self.data_type)
-
- datasets = dict()
- for split in ann_info.keys():
- if split not in ["train", "val", "test"]:
- continue
-
- is_train = split == "train"
-
- # processors
- vis_processor = (
- self.vis_processors["train"]
- if is_train
- else self.vis_processors["eval"]
- )
- text_processor = (
- self.text_processors["train"]
- if is_train
- else self.text_processors["eval"]
- )
-
- # annotation path
- ann_paths = ann_info.get(split).storage
- if isinstance(ann_paths, str):
- ann_paths = [ann_paths]
-
- abs_ann_paths = []
- for ann_path in ann_paths:
- if not os.path.isabs(ann_path):
- ann_path = utils.get_cache_path(ann_path)
- abs_ann_paths.append(ann_path)
- ann_paths = abs_ann_paths
-
- # visual data storage path
- vis_path = os.path.join(vis_info.storage, split)
-
- if not os.path.isabs(vis_path):
- # vis_path = os.path.join(utils.get_cache_path(), vis_path)
- vis_path = utils.get_cache_path(vis_path)
-
- if not os.path.exists(vis_path):
- warnings.warn("storage path {} does not exist.".format(vis_path))
-
- # create datasets
- dataset_cls = self.train_dataset_cls if is_train else self.eval_dataset_cls
- datasets[split] = dataset_cls(
- vis_processor=vis_processor,
- text_processor=text_processor,
- ann_paths=ann_paths,
- vis_root=vis_path,
- )
-
- return datasets
-
-
-def load_dataset_config(cfg_path):
- cfg = OmegaConf.load(cfg_path).datasets
- cfg = cfg[list(cfg.keys())[0]]
-
- return cfg
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py
deleted file mode 100644
index a88a907917dce5dace64fd1e38df86246c8e0305..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py
+++ /dev/null
@@ -1,225 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-
-##
-# Image plugin for Palm pixmap images (output only).
-##
-
-from . import Image, ImageFile
-from ._binary import o8
-from ._binary import o16be as o16b
-
-# fmt: off
-_Palm8BitColormapValues = (
- (255, 255, 255), (255, 204, 255), (255, 153, 255), (255, 102, 255),
- (255, 51, 255), (255, 0, 255), (255, 255, 204), (255, 204, 204),
- (255, 153, 204), (255, 102, 204), (255, 51, 204), (255, 0, 204),
- (255, 255, 153), (255, 204, 153), (255, 153, 153), (255, 102, 153),
- (255, 51, 153), (255, 0, 153), (204, 255, 255), (204, 204, 255),
- (204, 153, 255), (204, 102, 255), (204, 51, 255), (204, 0, 255),
- (204, 255, 204), (204, 204, 204), (204, 153, 204), (204, 102, 204),
- (204, 51, 204), (204, 0, 204), (204, 255, 153), (204, 204, 153),
- (204, 153, 153), (204, 102, 153), (204, 51, 153), (204, 0, 153),
- (153, 255, 255), (153, 204, 255), (153, 153, 255), (153, 102, 255),
- (153, 51, 255), (153, 0, 255), (153, 255, 204), (153, 204, 204),
- (153, 153, 204), (153, 102, 204), (153, 51, 204), (153, 0, 204),
- (153, 255, 153), (153, 204, 153), (153, 153, 153), (153, 102, 153),
- (153, 51, 153), (153, 0, 153), (102, 255, 255), (102, 204, 255),
- (102, 153, 255), (102, 102, 255), (102, 51, 255), (102, 0, 255),
- (102, 255, 204), (102, 204, 204), (102, 153, 204), (102, 102, 204),
- (102, 51, 204), (102, 0, 204), (102, 255, 153), (102, 204, 153),
- (102, 153, 153), (102, 102, 153), (102, 51, 153), (102, 0, 153),
- (51, 255, 255), (51, 204, 255), (51, 153, 255), (51, 102, 255),
- (51, 51, 255), (51, 0, 255), (51, 255, 204), (51, 204, 204),
- (51, 153, 204), (51, 102, 204), (51, 51, 204), (51, 0, 204),
- (51, 255, 153), (51, 204, 153), (51, 153, 153), (51, 102, 153),
- (51, 51, 153), (51, 0, 153), (0, 255, 255), (0, 204, 255),
- (0, 153, 255), (0, 102, 255), (0, 51, 255), (0, 0, 255),
- (0, 255, 204), (0, 204, 204), (0, 153, 204), (0, 102, 204),
- (0, 51, 204), (0, 0, 204), (0, 255, 153), (0, 204, 153),
- (0, 153, 153), (0, 102, 153), (0, 51, 153), (0, 0, 153),
- (255, 255, 102), (255, 204, 102), (255, 153, 102), (255, 102, 102),
- (255, 51, 102), (255, 0, 102), (255, 255, 51), (255, 204, 51),
- (255, 153, 51), (255, 102, 51), (255, 51, 51), (255, 0, 51),
- (255, 255, 0), (255, 204, 0), (255, 153, 0), (255, 102, 0),
- (255, 51, 0), (255, 0, 0), (204, 255, 102), (204, 204, 102),
- (204, 153, 102), (204, 102, 102), (204, 51, 102), (204, 0, 102),
- (204, 255, 51), (204, 204, 51), (204, 153, 51), (204, 102, 51),
- (204, 51, 51), (204, 0, 51), (204, 255, 0), (204, 204, 0),
- (204, 153, 0), (204, 102, 0), (204, 51, 0), (204, 0, 0),
- (153, 255, 102), (153, 204, 102), (153, 153, 102), (153, 102, 102),
- (153, 51, 102), (153, 0, 102), (153, 255, 51), (153, 204, 51),
- (153, 153, 51), (153, 102, 51), (153, 51, 51), (153, 0, 51),
- (153, 255, 0), (153, 204, 0), (153, 153, 0), (153, 102, 0),
- (153, 51, 0), (153, 0, 0), (102, 255, 102), (102, 204, 102),
- (102, 153, 102), (102, 102, 102), (102, 51, 102), (102, 0, 102),
- (102, 255, 51), (102, 204, 51), (102, 153, 51), (102, 102, 51),
- (102, 51, 51), (102, 0, 51), (102, 255, 0), (102, 204, 0),
- (102, 153, 0), (102, 102, 0), (102, 51, 0), (102, 0, 0),
- (51, 255, 102), (51, 204, 102), (51, 153, 102), (51, 102, 102),
- (51, 51, 102), (51, 0, 102), (51, 255, 51), (51, 204, 51),
- (51, 153, 51), (51, 102, 51), (51, 51, 51), (51, 0, 51),
- (51, 255, 0), (51, 204, 0), (51, 153, 0), (51, 102, 0),
- (51, 51, 0), (51, 0, 0), (0, 255, 102), (0, 204, 102),
- (0, 153, 102), (0, 102, 102), (0, 51, 102), (0, 0, 102),
- (0, 255, 51), (0, 204, 51), (0, 153, 51), (0, 102, 51),
- (0, 51, 51), (0, 0, 51), (0, 255, 0), (0, 204, 0),
- (0, 153, 0), (0, 102, 0), (0, 51, 0), (17, 17, 17),
- (34, 34, 34), (68, 68, 68), (85, 85, 85), (119, 119, 119),
- (136, 136, 136), (170, 170, 170), (187, 187, 187), (221, 221, 221),
- (238, 238, 238), (192, 192, 192), (128, 0, 0), (128, 0, 128),
- (0, 128, 0), (0, 128, 128), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0))
-# fmt: on
-
-
-# so build a prototype image to be used for palette resampling
-def build_prototype_image():
- image = Image.new("L", (1, len(_Palm8BitColormapValues)))
- image.putdata(list(range(len(_Palm8BitColormapValues))))
- palettedata = ()
- for colormapValue in _Palm8BitColormapValues:
- palettedata += colormapValue
- palettedata += (0, 0, 0) * (256 - len(_Palm8BitColormapValues))
- image.putpalette(palettedata)
- return image
-
-
-Palm8BitColormapImage = build_prototype_image()
-
-# OK, we now have in Palm8BitColormapImage,
-# a "P"-mode image with the right palette
-#
-# --------------------------------------------------------------------
-
-_FLAGS = {"custom-colormap": 0x4000, "is-compressed": 0x8000, "has-transparent": 0x2000}
-
-_COMPRESSION_TYPES = {"none": 0xFF, "rle": 0x01, "scanline": 0x00}
-
-
-#
-# --------------------------------------------------------------------
-
-##
-# (Internal) Image save plugin for the Palm format.
-
-
-def _save(im, fp, filename):
- if im.mode == "P":
- # we assume this is a color Palm image with the standard colormap,
- # unless the "info" dict has a "custom-colormap" field
-
- rawmode = "P"
- bpp = 8
- version = 1
-
- elif im.mode == "L":
- if im.encoderinfo.get("bpp") in (1, 2, 4):
- # this is 8-bit grayscale, so we shift it to get the high-order bits,
- # and invert it because
- # Palm does greyscale from white (0) to black (1)
- bpp = im.encoderinfo["bpp"]
- im = im.point(
- lambda x, shift=8 - bpp, maxval=(1 << bpp) - 1: maxval - (x >> shift)
- )
- elif im.info.get("bpp") in (1, 2, 4):
- # here we assume that even though the inherent mode is 8-bit grayscale,
- # only the lower bpp bits are significant.
- # We invert them to match the Palm.
- bpp = im.info["bpp"]
- im = im.point(lambda x, maxval=(1 << bpp) - 1: maxval - (x & maxval))
- else:
- msg = f"cannot write mode {im.mode} as Palm"
- raise OSError(msg)
-
- # we ignore the palette here
- im.mode = "P"
- rawmode = "P;" + str(bpp)
- version = 1
-
- elif im.mode == "1":
- # monochrome -- write it inverted, as is the Palm standard
- rawmode = "1;I"
- bpp = 1
- version = 0
-
- else:
- msg = f"cannot write mode {im.mode} as Palm"
- raise OSError(msg)
-
- #
- # make sure image data is available
- im.load()
-
- # write header
-
- cols = im.size[0]
- rows = im.size[1]
-
- rowbytes = int((cols + (16 // bpp - 1)) / (16 // bpp)) * 2
- transparent_index = 0
- compression_type = _COMPRESSION_TYPES["none"]
-
- flags = 0
- if im.mode == "P" and "custom-colormap" in im.info:
- flags = flags & _FLAGS["custom-colormap"]
- colormapsize = 4 * 256 + 2
- colormapmode = im.palette.mode
- colormap = im.getdata().getpalette()
- else:
- colormapsize = 0
-
- if "offset" in im.info:
- offset = (rowbytes * rows + 16 + 3 + colormapsize) // 4
- else:
- offset = 0
-
- fp.write(o16b(cols) + o16b(rows) + o16b(rowbytes) + o16b(flags))
- fp.write(o8(bpp))
- fp.write(o8(version))
- fp.write(o16b(offset))
- fp.write(o8(transparent_index))
- fp.write(o8(compression_type))
- fp.write(o16b(0)) # reserved by Palm
-
- # now write colormap if necessary
-
- if colormapsize > 0:
- fp.write(o16b(256))
- for i in range(256):
- fp.write(o8(i))
- if colormapmode == "RGB":
- fp.write(
- o8(colormap[3 * i])
- + o8(colormap[3 * i + 1])
- + o8(colormap[3 * i + 2])
- )
- elif colormapmode == "RGBA":
- fp.write(
- o8(colormap[4 * i])
- + o8(colormap[4 * i + 1])
- + o8(colormap[4 * i + 2])
- )
-
- # now convert data to raw form
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, rowbytes, 1))])
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-#
-# --------------------------------------------------------------------
-
-Image.register_save("Palm", _save)
-
-Image.register_extension("Palm", ".palm")
-
-Image.register_mime("Palm", "image/palm")
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dotenv/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dotenv/__main__.py
deleted file mode 100644
index 3977f55a8b1e94e67bab364885e502e6c89e3bc5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dotenv/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-"""Entry point for cli, enables execution with `python -m dotenv`"""
-
-from .cli import cli
-
-if __name__ == "__main__":
- cli()
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/V_V_A_R_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/V_V_A_R_.py
deleted file mode 100644
index a3665fea5ecc6bd4bf50b447de551994330aaca4..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/V_V_A_R_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_V_V_A_R_(BaseTTXConverter):
- pass
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Model.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Model.ts
deleted file mode 100644
index 754210be0dddd380c8a65ed423254a4b610f14ba..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/types/Model.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import type { BackendModel } from "$lib/server/models";
-
-export type Model = Pick<
- BackendModel,
- | "id"
- | "name"
- | "displayName"
- | "websiteUrl"
- | "datasetName"
- | "promptExamples"
- | "parameters"
- | "description"
->;
diff --git a/spaces/DataWizard9742/LessonPlanGenerator/README.md b/spaces/DataWizard9742/LessonPlanGenerator/README.md
deleted file mode 100644
index f1436de9dbff65dfc7df795858e5f544b7300e08..0000000000000000000000000000000000000000
--- a/spaces/DataWizard9742/LessonPlanGenerator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LessonPlanGenerator
-emoji: 🐨
-colorFrom: green
-colorTo: green
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DeeKayG/COCO-Google/README.md b/spaces/DeeKayG/COCO-Google/README.md
deleted file mode 100644
index 63f9bc5d208ddb4434738b35bb70633a402acd42..0000000000000000000000000000000000000000
--- a/spaces/DeeKayG/COCO-Google/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Zero-Shot COCO-Google
-sdk: gradio
-colorFrom: gray
-colorTo: indigo
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-license: openrail
-emoji: 🚀
----
\ No newline at end of file
diff --git a/spaces/ECCV2022/bytetrack/yolox/models/yolo_pafpn.py b/spaces/ECCV2022/bytetrack/yolox/models/yolo_pafpn.py
deleted file mode 100644
index c419de3204f466c81f7e50fe5c7ffd17e51d63b3..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/models/yolo_pafpn.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python
-# -*- encoding: utf-8 -*-
-# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.
-
-import torch
-import torch.nn as nn
-
-from .darknet import CSPDarknet
-from .network_blocks import BaseConv, CSPLayer, DWConv
-
-
-class YOLOPAFPN(nn.Module):
- """
- YOLOv3 model. Darknet 53 is the default backbone of this model.
- """
-
- def __init__(
- self,
- depth=1.0,
- width=1.0,
- in_features=("dark3", "dark4", "dark5"),
- in_channels=[256, 512, 1024],
- depthwise=False,
- act="silu",
- ):
- super().__init__()
- self.backbone = CSPDarknet(depth, width, depthwise=depthwise, act=act)
- self.in_features = in_features
- self.in_channels = in_channels
- Conv = DWConv if depthwise else BaseConv
-
- self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
- self.lateral_conv0 = BaseConv(
- int(in_channels[2] * width), int(in_channels[1] * width), 1, 1, act=act
- )
- self.C3_p4 = CSPLayer(
- int(2 * in_channels[1] * width),
- int(in_channels[1] * width),
- round(3 * depth),
- False,
- depthwise=depthwise,
- act=act,
- ) # cat
-
- self.reduce_conv1 = BaseConv(
- int(in_channels[1] * width), int(in_channels[0] * width), 1, 1, act=act
- )
- self.C3_p3 = CSPLayer(
- int(2 * in_channels[0] * width),
- int(in_channels[0] * width),
- round(3 * depth),
- False,
- depthwise=depthwise,
- act=act,
- )
-
- # bottom-up conv
- self.bu_conv2 = Conv(
- int(in_channels[0] * width), int(in_channels[0] * width), 3, 2, act=act
- )
- self.C3_n3 = CSPLayer(
- int(2 * in_channels[0] * width),
- int(in_channels[1] * width),
- round(3 * depth),
- False,
- depthwise=depthwise,
- act=act,
- )
-
- # bottom-up conv
- self.bu_conv1 = Conv(
- int(in_channels[1] * width), int(in_channels[1] * width), 3, 2, act=act
- )
- self.C3_n4 = CSPLayer(
- int(2 * in_channels[1] * width),
- int(in_channels[2] * width),
- round(3 * depth),
- False,
- depthwise=depthwise,
- act=act,
- )
-
- def forward(self, input):
- """
- Args:
- inputs: input images.
-
- Returns:
- Tuple[Tensor]: FPN feature.
- """
-
- # backbone
- out_features = self.backbone(input)
- features = [out_features[f] for f in self.in_features]
- [x2, x1, x0] = features
-
- fpn_out0 = self.lateral_conv0(x0) # 1024->512/32
- f_out0 = self.upsample(fpn_out0) # 512/16
- f_out0 = torch.cat([f_out0, x1], 1) # 512->1024/16
- f_out0 = self.C3_p4(f_out0) # 1024->512/16
-
- fpn_out1 = self.reduce_conv1(f_out0) # 512->256/16
- f_out1 = self.upsample(fpn_out1) # 256/8
- f_out1 = torch.cat([f_out1, x2], 1) # 256->512/8
- pan_out2 = self.C3_p3(f_out1) # 512->256/8
-
- p_out1 = self.bu_conv2(pan_out2) # 256->256/16
- p_out1 = torch.cat([p_out1, fpn_out1], 1) # 256->512/16
- pan_out1 = self.C3_n3(p_out1) # 512->512/16
-
- p_out0 = self.bu_conv1(pan_out1) # 512->512/32
- p_out0 = torch.cat([p_out0, fpn_out0], 1) # 512->1024/32
- pan_out0 = self.C3_n4(p_out0) # 1024->1024/32
-
- outputs = (pan_out2, pan_out1, pan_out0)
- return outputs
diff --git a/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/deit.py b/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/deit.py
deleted file mode 100644
index 9a13bb0a8514df29fb4b0ec58c3726ba9c221a8a..0000000000000000000000000000000000000000
--- a/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/deit.py
+++ /dev/null
@@ -1,476 +0,0 @@
-"""
-Mostly copy-paste from DINO and timm library:
-https://github.com/facebookresearch/dino
-https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
-"""
-import warnings
-
-import math
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import trunc_normal_, drop_path, to_2tuple
-from functools import partial
-
-def _cfg(url='', **kwargs):
- return {
- 'url': url,
- 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
- 'crop_pct': .9, 'interpolation': 'bicubic',
- 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
- **kwargs
- }
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
-
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
- def extra_repr(self) -> str:
- return 'p={}'.format(self.drop_prob)
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- q, k, v = self.qkv(x).reshape(B, N, 3, self.num_heads,
- C // self.num_heads).permute(2, 0, 3, 1, 4)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(
- drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim,
- act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
-
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
-
- self.window_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
-
- self.num_patches_w, self.num_patches_h = self.window_size
-
- self.num_patches = self.window_size[0] * self.window_size[1]
- self.img_size = img_size
- self.patch_size = patch_size
-
- self.proj = nn.Conv2d(in_chans, embed_dim,
- kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x):
- x = self.proj(x)
- return x
-
-
-class HybridEmbed(nn.Module):
- """ CNN Feature Map Embedding
- Extract feature map from CNN, flatten, project to embedding dim.
- """
-
- def __init__(self, backbone, img_size=224, feature_size=None, in_chans=3, embed_dim=768):
- super().__init__()
- assert isinstance(backbone, nn.Module)
- img_size = to_2tuple(img_size)
- self.img_size = img_size
- self.backbone = backbone
- if feature_size is None:
- with torch.no_grad():
- # FIXME this is hacky, but most reliable way of determining the exact dim of the output feature
- # map for all networks, the feature metadata has reliable channel and stride info, but using
- # stride to calc feature dim requires info about padding of each stage that isn't captured.
- training = backbone.training
- if training:
- backbone.eval()
- o = self.backbone(torch.zeros(
- 1, in_chans, img_size[0], img_size[1]))[-1]
- feature_size = o.shape[-2:]
- feature_dim = o.shape[1]
- backbone.train(training)
- else:
- feature_size = to_2tuple(feature_size)
- feature_dim = self.backbone.feature_info.channels()[-1]
- self.num_patches = feature_size[0] * feature_size[1]
- self.proj = nn.Linear(feature_dim, embed_dim)
-
- def forward(self, x):
- x = self.backbone(x)[-1]
- x = x.flatten(2).transpose(1, 2)
- x = self.proj(x)
- return x
-
-
-class ViT(nn.Module):
- """ Vision Transformer with support for patch or hybrid CNN input stage
- """
-
- def __init__(self,
- model_name='vit_base_patch16_224',
- img_size=384,
- patch_size=16,
- in_chans=3,
- embed_dim=1024,
- depth=24,
- num_heads=16,
- num_classes=19,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.1,
- attn_drop_rate=0.,
- drop_path_rate=0.,
- hybrid_backbone=None,
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- norm_cfg=None,
- pos_embed_interp=False,
- random_init=False,
- align_corners=False,
- use_checkpoint=False,
- num_extra_tokens=1,
- out_features=None,
- **kwargs,
- ):
-
- super(ViT, self).__init__()
- self.model_name = model_name
- self.img_size = img_size
- self.patch_size = patch_size
- self.in_chans = in_chans
- self.embed_dim = embed_dim
- self.depth = depth
- self.num_heads = num_heads
- self.num_classes = num_classes
- self.mlp_ratio = mlp_ratio
- self.qkv_bias = qkv_bias
- self.qk_scale = qk_scale
- self.drop_rate = drop_rate
- self.attn_drop_rate = attn_drop_rate
- self.drop_path_rate = drop_path_rate
- self.hybrid_backbone = hybrid_backbone
- self.norm_layer = norm_layer
- self.norm_cfg = norm_cfg
- self.pos_embed_interp = pos_embed_interp
- self.random_init = random_init
- self.align_corners = align_corners
- self.use_checkpoint = use_checkpoint
- self.num_extra_tokens = num_extra_tokens
- self.out_features = out_features
- self.out_indices = [int(name[5:]) for name in out_features]
-
- # self.num_stages = self.depth
- # self.out_indices = tuple(range(self.num_stages))
-
- if self.hybrid_backbone is not None:
- self.patch_embed = HybridEmbed(
- self.hybrid_backbone, img_size=self.img_size, in_chans=self.in_chans, embed_dim=self.embed_dim)
- else:
- self.patch_embed = PatchEmbed(
- img_size=self.img_size, patch_size=self.patch_size, in_chans=self.in_chans, embed_dim=self.embed_dim)
- self.num_patches = self.patch_embed.num_patches
-
- self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
-
- if self.num_extra_tokens == 2:
- self.dist_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
-
- self.pos_embed = nn.Parameter(torch.zeros(
- 1, self.num_patches + self.num_extra_tokens, self.embed_dim))
- self.pos_drop = nn.Dropout(p=self.drop_rate)
-
- # self.num_extra_tokens = self.pos_embed.shape[-2] - self.num_patches
- dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate,
- self.depth)] # stochastic depth decay rule
- self.blocks = nn.ModuleList([
- Block(
- dim=self.embed_dim, num_heads=self.num_heads, mlp_ratio=self.mlp_ratio, qkv_bias=self.qkv_bias,
- qk_scale=self.qk_scale,
- drop=self.drop_rate, attn_drop=self.attn_drop_rate, drop_path=dpr[i], norm_layer=self.norm_layer)
- for i in range(self.depth)])
-
- # NOTE as per official impl, we could have a pre-logits representation dense layer + tanh here
- # self.repr = nn.Linear(embed_dim, representation_size)
- # self.repr_act = nn.Tanh()
-
- if patch_size == 16:
- self.fpn1 = nn.Sequential(
- nn.ConvTranspose2d(embed_dim, embed_dim, kernel_size=2, stride=2),
- nn.SyncBatchNorm(embed_dim),
- nn.GELU(),
- nn.ConvTranspose2d(embed_dim, embed_dim, kernel_size=2, stride=2),
- )
-
- self.fpn2 = nn.Sequential(
- nn.ConvTranspose2d(embed_dim, embed_dim, kernel_size=2, stride=2),
- )
-
- self.fpn3 = nn.Identity()
-
- self.fpn4 = nn.MaxPool2d(kernel_size=2, stride=2)
- elif patch_size == 8:
- self.fpn1 = nn.Sequential(
- nn.ConvTranspose2d(embed_dim, embed_dim, kernel_size=2, stride=2),
- )
-
- self.fpn2 = nn.Identity()
-
- self.fpn3 = nn.Sequential(
- nn.MaxPool2d(kernel_size=2, stride=2),
- )
-
- self.fpn4 = nn.Sequential(
- nn.MaxPool2d(kernel_size=4, stride=4),
- )
-
- trunc_normal_(self.pos_embed, std=.02)
- trunc_normal_(self.cls_token, std=.02)
- if self.num_extra_tokens==2:
- trunc_normal_(self.dist_token, std=0.2)
- self.apply(self._init_weights)
- # self.fix_init_weight()
-
- def fix_init_weight(self):
- def rescale(param, layer_id):
- param.div_(math.sqrt(2.0 * layer_id))
-
- for layer_id, layer in enumerate(self.blocks):
- rescale(layer.attn.proj.weight.data, layer_id + 1)
- rescale(layer.mlp.fc2.weight.data, layer_id + 1)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- '''
- def init_weights(self):
- logger = get_root_logger()
-
- trunc_normal_(self.pos_embed, std=.02)
- trunc_normal_(self.cls_token, std=.02)
- self.apply(self._init_weights)
-
- if self.init_cfg is None:
- logger.warn(f'No pre-trained weights for '
- f'{self.__class__.__name__}, '
- f'training start from scratch')
- else:
- assert 'checkpoint' in self.init_cfg, f'Only support ' \
- f'specify `Pretrained` in ' \
- f'`init_cfg` in ' \
- f'{self.__class__.__name__} '
- logger.info(f"Will load ckpt from {self.init_cfg['checkpoint']}")
- load_checkpoint(self, filename=self.init_cfg['checkpoint'], strict=False, logger=logger)
- '''
-
- def get_num_layers(self):
- return len(self.blocks)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def _conv_filter(self, state_dict, patch_size=16):
- """ convert patch embedding weight from manual patchify + linear proj to conv"""
- out_dict = {}
- for k, v in state_dict.items():
- if 'patch_embed.proj.weight' in k:
- v = v.reshape((v.shape[0], 3, patch_size, patch_size))
- out_dict[k] = v
- return out_dict
-
- def to_2D(self, x):
- n, hw, c = x.shape
- h = w = int(math.sqrt(hw))
- x = x.transpose(1, 2).reshape(n, c, h, w)
- return x
-
- def to_1D(self, x):
- n, c, h, w = x.shape
- x = x.reshape(n, c, -1).transpose(1, 2)
- return x
-
- def interpolate_pos_encoding(self, x, w, h):
- npatch = x.shape[1] - self.num_extra_tokens
- N = self.pos_embed.shape[1] - self.num_extra_tokens
- if npatch == N and w == h:
- return self.pos_embed
-
- class_ORdist_pos_embed = self.pos_embed[:, 0:self.num_extra_tokens]
-
- patch_pos_embed = self.pos_embed[:, self.num_extra_tokens:]
-
- dim = x.shape[-1]
- w0 = w // self.patch_embed.patch_size[0]
- h0 = h // self.patch_embed.patch_size[1]
- # we add a small number to avoid floating point error in the interpolation
- # see discussion at https://github.com/facebookresearch/dino/issues/8
- w0, h0 = w0 + 0.1, h0 + 0.1
- patch_pos_embed = nn.functional.interpolate(
- patch_pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2),
- scale_factor=(w0 / math.sqrt(N), h0 / math.sqrt(N)),
- mode='bicubic',
- )
- assert int(w0) == patch_pos_embed.shape[-2] and int(h0) == patch_pos_embed.shape[-1]
- patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim)
-
- return torch.cat((class_ORdist_pos_embed, patch_pos_embed), dim=1)
-
- def prepare_tokens(self, x, mask=None):
- B, nc, w, h = x.shape
- # patch linear embedding
- x = self.patch_embed(x)
-
- # mask image modeling
- if mask is not None:
- x = self.mask_model(x, mask)
- x = x.flatten(2).transpose(1, 2)
-
- # add the [CLS] token to the embed patch tokens
- all_tokens = [self.cls_token.expand(B, -1, -1)]
-
- if self.num_extra_tokens == 2:
- dist_tokens = self.dist_token.expand(B, -1, -1)
- all_tokens.append(dist_tokens)
- all_tokens.append(x)
-
- x = torch.cat(all_tokens, dim=1)
-
- # add positional encoding to each token
- x = x + self.interpolate_pos_encoding(x, w, h)
-
- return self.pos_drop(x)
-
- def forward_features(self, x):
- # print(f"==========shape of x is {x.shape}==========")
- B, _, H, W = x.shape
- Hp, Wp = H // self.patch_size, W // self.patch_size
- x = self.prepare_tokens(x)
-
- features = []
- for i, blk in enumerate(self.blocks):
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- if i in self.out_indices:
- xp = x[:, self.num_extra_tokens:, :].permute(0, 2, 1).reshape(B, -1, Hp, Wp)
- features.append(xp.contiguous())
-
- ops = [self.fpn1, self.fpn2, self.fpn3, self.fpn4]
- for i in range(len(features)):
- features[i] = ops[i](features[i])
-
- feat_out = {}
-
- for name, value in zip(self.out_features, features):
- feat_out[name] = value
-
- return feat_out
-
- def forward(self, x):
- x = self.forward_features(x)
- return x
-
-
-def deit_base_patch16(pretrained=False, **kwargs):
- model = ViT(
- patch_size=16,
- drop_rate=0.,
- embed_dim=768,
- depth=12,
- num_heads=12,
- num_classes=1000,
- mlp_ratio=4.,
- qkv_bias=True,
- use_checkpoint=True,
- num_extra_tokens=2,
- **kwargs)
- model.default_cfg = _cfg()
- return model
-
-def mae_base_patch16(pretrained=False, **kwargs):
- model = ViT(
- patch_size=16,
- drop_rate=0.,
- embed_dim=768,
- depth=12,
- num_heads=12,
- num_classes=1000,
- mlp_ratio=4.,
- qkv_bias=True,
- use_checkpoint=True,
- num_extra_tokens=1,
- **kwargs)
- model.default_cfg = _cfg()
- return model
\ No newline at end of file
diff --git a/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/hpe.py b/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/hpe.py
deleted file mode 100644
index cca6655dea29af282ff14d0baf857d68ba501109..0000000000000000000000000000000000000000
--- a/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/hpe.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import math
-import os
-import numpy as np
-import tensorflow as tf
-
-from utils.my_utils import normalize_wrt_maximum_distance_point, retrieve_interest_points
-
-
-def head_pose_estimation(kpt, detector, gaze_model, id_list=None):
- fps, shape = 20, (1280, 720)
-
- yaw_list, pitch_list, roll_list, yaw_u_list, pitch_u_list, roll_u_list = [], [], [], [], [], []
- center_xy = []
-
- for j, kpt_person in enumerate(kpt):
- # TODO here change order if openpose
- face_kpt = retrieve_interest_points(kpt_person, detector=detector)
-
- tdx = np.mean([face_kpt[k] for k in range(0, 15, 3) if face_kpt[k] != 0.0])
- tdy = np.mean([face_kpt[k + 1] for k in range(0, 15, 3) if face_kpt[k + 1] != 0.0])
- if math.isnan(tdx) or math.isnan(tdy):
- tdx = -1
- tdy = -1
-
- center_xy.append([tdx, tdy])
- face_kpt_normalized = np.array(normalize_wrt_maximum_distance_point(face_kpt))
- # print(type(face_kpt_normalized), face_kpt_normalized)
-
- aux = tf.cast(np.expand_dims(face_kpt_normalized, 0), tf.float32)
-
- yaw, pitch, roll = gaze_model(aux, training=False)
- # print(yaw[0].numpy()[0], pitch, roll)
- yaw_list.append(yaw[0].numpy()[0])
- pitch_list.append(pitch[0].numpy()[0])
- roll_list.append(roll[0].numpy()[0])
-
- yaw_u_list.append(yaw[0].numpy()[1])
- pitch_u_list.append(pitch[0].numpy()[1])
- roll_u_list.append(roll[0].numpy()[1])
- # print(id_lists[j])
- # print('yaw: ', yaw[0].numpy()[0], 'yaw unc: ', yaw[0].numpy()[1], 'pitch: ', pitch[0].numpy()[0],
- # 'pitch unc: ', pitch[0].numpy()[1], 'roll: ', roll[0].numpy()[0], 'roll unc: ', roll[0].numpy()[1])
- # draw_axis(yaw.numpy(), pitch.numpy(), roll.numpy(), im_pose, tdx, tdy)
- return center_xy, yaw_list, pitch_list, roll_list
-
-def hpe(gaze_model, kpt_person, detector):
- # TODO here change order if openpose
- face_kpt = retrieve_interest_points(kpt_person, detector=detector)
-
- tdx = np.mean([face_kpt[k] for k in range(0, 15, 3) if face_kpt[k] != 0.0])
- tdy = np.mean([face_kpt[k + 1] for k in range(0, 15, 3) if face_kpt[k + 1] != 0.0])
- if math.isnan(tdx) or math.isnan(tdy):
- tdx = -1
- tdy = -1
-
- # center_xy.append([tdx, tdy])
- face_kpt_normalized = np.array(normalize_wrt_maximum_distance_point(face_kpt))
- # print(type(face_kpt_normalized), face_kpt_normalized)
-
- aux = tf.cast(np.expand_dims(face_kpt_normalized, 0), tf.float32)
-
- yaw, pitch, roll = gaze_model(aux, training=False)
-
- return yaw, pitch, roll, tdx, tdy
-
-def project_ypr_in2d(yaw, pitch, roll):
- """ Project yaw pitch roll on image plane. Result is NOT normalised.
-
- :param yaw:
- :param pitch:
- :param roll:
- :return:
- """
- pitch = pitch * np.pi / 180
- yaw = -(yaw * np.pi / 180)
- roll = roll * np.pi / 180
-
-    x3 = math.sin(yaw)
-    y3 = -math.cos(yaw) * math.sin(pitch)
-
-    # NOTE: the components are intentionally left unnormalised (see docstring);
-    # the vector length would be np.sqrt(x3 ** 2 + y3 ** 2) if normalisation were needed.
-    return [x3, y3]
-
-
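For context, a hypothetical follow-up step (not part of the original module) is to combine the face centre returned by `hpe()` with `project_ypr_in2d()` to obtain an endpoint for drawing a gaze arrow. The angles, centre coordinates and arrow length below are illustrative values only:

```python
# Hypothetical usage sketch: all numbers are illustrative, not from the app.
yaw_deg, pitch_deg, roll_deg = 20.0, -5.0, 0.0   # e.g. yaw[0].numpy()[0], ...
tdx, tdy = 640.0, 360.0                          # face centre in pixels

gx, gy = project_ypr_in2d(yaw_deg, pitch_deg, roll_deg)

arrow_len = 100  # pixels, arbitrary drawing scale
end_point = (tdx + arrow_len * gx, tdy + arrow_len * gy)
print(end_point)
```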
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/__init__.py
deleted file mode 100644
index 72656e4b5f61df8cd0838588b0c6488fcc886e16..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import torch
-
-from facelib.utils import load_file_from_url
-from .bisenet import BiSeNet
-from .parsenet import ParseNet
-
-
-def init_parsing_model(model_name='bisenet', half=False, device='cuda'):
- if model_name == 'bisenet':
- model = BiSeNet(num_class=19)
- model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_bisenet.pth'
- elif model_name == 'parsenet':
- model = ParseNet(in_size=512, out_size=512, parsing_ch=19)
- model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth'
- else:
- raise NotImplementedError(f'{model_name} is not implemented.')
-
- model_path = load_file_from_url(url=model_url, model_dir='weights/facelib', progress=True, file_name=None)
- load_net = torch.load(model_path, map_location=lambda storage, loc: storage)
- model.load_state_dict(load_net, strict=True)
- model.eval()
- model = model.to(device)
- return model
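A minimal usage sketch of the deleted helper above. It assumes the `facelib` package is importable and the pretrained weights can be downloaded; the 512x512 input size and the `[0]` indexing of the network output follow common face-parsing usage and are assumptions here, not taken from this file:

```python
import torch
from facelib.parsing import init_parsing_model

device = 'cuda' if torch.cuda.is_available() else 'cpu'
net = init_parsing_model(model_name='parsenet', device=device)

with torch.no_grad():
    face = torch.randn(1, 3, 512, 512, device=device)  # dummy aligned face crop
    logits = net(face)[0]                               # 19-channel parsing logits (assumed)
    parsing_map = logits.argmax(dim=1)                   # per-pixel class indices
```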
diff --git a/spaces/Felladrin/LaMini-Flan-T5-248M-Candle-Wasm/style.css b/spaces/Felladrin/LaMini-Flan-T5-248M-Candle-Wasm/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/Felladrin/LaMini-Flan-T5-248M-Candle-Wasm/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Weuseing.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Weuseing.py
deleted file mode 100644
index ba79e8b9c2573418720495a20d4c1c8d5a6ca7e9..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Weuseing.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://api.gptplus.one'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- 'Accept': '*/*',
- 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4',
- }
- data = {
- 'messages': messages,
- 'model': model,
- }
- response = requests.post('https://api.gptplus.one/chat-process', json=data, stream=True)
- print(response)
-
- for token in response.iter_content(chunk_size=None):
- yield (token.decode('utf-8'))
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
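Illustrative only: the deleted provider exposes `_create_completion` as a generator, so streaming a reply amounts to iterating over it. Whether the upstream endpoint still responds is not guaranteed:

```python
# Hedged usage sketch of the generator defined above.
messages = [{'role': 'user', 'content': 'Hello!'}]
for chunk in _create_completion(model='gpt-3.5-turbo', messages=messages, stream=True):
    print(chunk, end='', flush=True)
```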
diff --git a/spaces/GT6242Causion/Causion/src/map_viz_pred.py b/spaces/GT6242Causion/Causion/src/map_viz_pred.py
deleted file mode 100644
index 63edbeab97ff0c8bd771f1aa0ef22c392e723667..0000000000000000000000000000000000000000
--- a/spaces/GT6242Causion/Causion/src/map_viz_pred.py
+++ /dev/null
@@ -1,108 +0,0 @@
-from __future__ import division, print_function
-from six import StringIO
-from svgpath2mpl import parse_path
-from collections import defaultdict
-from src.pred_plot import hour_rounder
-import xml.etree.ElementTree as etree
-import re
-import matplotlib as mpl
-import matplotlib.pyplot as plt
-import numpy as np
-import requests
-import pandas as pd
-import datetime
-
-import warnings
-warnings.filterwarnings("ignore")
-
-def calling_pred_map_viz(counts_df1, input_day = None, input_hour = None):
- r = "svg/snazzy-image-01.svg"
- tree = etree.parse(r)
- root = tree.getroot()
- path_elems = root.findall('.//{http://www.w3.org/2000/svg}path')
-
- paths = [parse_path(elem.attrib['d']) for elem in path_elems]
- facecolors = []
- edgecolors = []
- linewidths = []
- for elem in path_elems:
- facecolors.append(dict(item.split(":") for item in elem.attrib.get('style', 'none').split(";")).get("fill", "none"))
- edgecolors.append(dict(item.split(":") for item in elem.attrib.get('style', 'none').split(";")).get("stroke", "none"))
- linewidths.append(dict(item.split(":") for item in elem.attrib.get('style', 'none').split(";")).get("stroke-width", "none").replace("px", ""))
-
- path_id = defaultdict(int)
- for i, elem in enumerate(path_elems):
-        try:
-            path_id[elem.attrib['id']] = i
-        except KeyError:
-            # some SVG paths have no 'id' attribute; skip them
-            continue
-
- counts_df1['total'] = counts_df1['car'] + counts_df1['motorcycle'] + counts_df1['large_vehicle']
- counts_df1.loc[:,'date_time'] = pd.to_datetime(counts_df1.loc[:,'date'] + " "+ counts_df1.loc[:,'time'], format='%Y-%m-%d %H:%M:%S')
- counts_df1.loc[:,'hour'] = counts_df1.loc[:,'date_time'].apply(hour_rounder)
- counts_df1.loc[:,'day_name'] = counts_df1.loc[:,'date_time'].dt.day_name()
-
-    if (input_day is not None) and (input_hour is not None):
-        filtered_day = input_day
-        filtered_hour = input_hour
-    else:
-        # fall back to the most recent observation
-        filtered_date = counts_df1.iloc[-1]['date']
-        filtered_time = counts_df1.iloc[-1]['time']
-        filtered_day = counts_df1.iloc[-1]['day_name']
-        filtered_hour = counts_df1.iloc[-1]['hour']
-
-
- day_hour_view_group = counts_df1.groupby(by=['view', 'day_name', 'hour'])['total'].mean().reset_index()
- count_max = day_hour_view_group['total'].max()
- count_min = day_hour_view_group['total'].min()
-
-
- count_dict = {"woodlands_to_sg" :day_hour_view_group.loc[(day_hour_view_group['view'] == 'Woodlands - to SG') & (day_hour_view_group['day_name'] == filtered_day) & (day_hour_view_group['hour'] == filtered_hour), "total" ].iloc[0],
- "woodlands_to_jh" :day_hour_view_group.loc[(day_hour_view_group['view'] == 'Woodlands - to Johor') & (day_hour_view_group['day_name'] == filtered_day) & (day_hour_view_group['hour'] == filtered_hour), "total" ].iloc[0],
- "tuas_to_sg" :day_hour_view_group.loc[(day_hour_view_group['view'] == 'Tuas - to SG') & (day_hour_view_group['day_name'] == filtered_day) & (day_hour_view_group['hour'] == filtered_hour), "total" ].iloc[0],
- "tuas_to_jh" :day_hour_view_group.loc[(day_hour_view_group['view'] == 'Tuas - to Johor') & (day_hour_view_group['day_name'] == filtered_day) & (day_hour_view_group['hour'] == filtered_hour), "total" ].iloc[0]
- }
-
- values = np.array([0., 0.25, 1.])
- values = np.sort(np.array(values))
- values = np.interp(values, (values.min(), values.max()), (0., 1.))
- colors = ["#539f6b", "#ffc835", "#bf0000"]
- cmap = mpl.colors.LinearSegmentedColormap.from_list("custom", list(zip(values, colors)))
-
- norm = mpl.colors.Normalize(vmin=count_min, vmax=count_max)
-
-
-
- hex_dict = {k: mpl.colors.to_hex(cmap(norm(v))) for k, v in count_dict.items()}
- color_dict = defaultdict(str)
-
- for k, i in path_id.items():
- #print(k, i)
- color_dict[i] = hex_dict[k]
-
- for k, i in color_dict.items():
- #print(k,i)
- facecolors[k] = i
-
- collection = mpl.collections.PathCollection(paths,
- edgecolors=edgecolors,
- linewidths=[int(i)/100 for i in linewidths if i != 'none'],
- facecolors=[i.strip() for i in facecolors])
-
-
-
- fig = plt.figure(figsize=(10,10))
- ax = fig.add_subplot(111)
- collection.set_transform(ax.transData)
- ax.add_artist(collection)
- ax.set_xlim([100, 1900])
- ax.set_ylim([1800, 0])
- ax.set_title(filtered_day+ " | " + filtered_hour + " SGT", fontname = 'Georgia')
- return fig
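The core technical step above is mapping mean traffic counts to hex fills for the SVG paths. The following standalone sketch reproduces that count-to-colour step with matplotlib; the min/max counts and sample values are illustrative, not taken from the dataset:

```python
# Counts are normalised to [0, 1] and mapped through a green-yellow-red colormap.
import matplotlib as mpl

values = [0.0, 0.25, 1.0]
colors = ["#539f6b", "#ffc835", "#bf0000"]
cmap = mpl.colors.LinearSegmentedColormap.from_list("custom", list(zip(values, colors)))
norm = mpl.colors.Normalize(vmin=10, vmax=200)  # example min/max counts

for count in (10, 60, 200):
    print(count, mpl.colors.to_hex(cmap(norm(count))))
```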
diff --git a/spaces/Gabriel/Swe_summarizer/text.py b/spaces/Gabriel/Swe_summarizer/text.py
deleted file mode 100644
index 3ad95f7678bc9a8a443c1457060c9f34bbcd9162..0000000000000000000000000000000000000000
--- a/spaces/Gabriel/Swe_summarizer/text.py
+++ /dev/null
@@ -1,52 +0,0 @@
-sum_app_text_tab_1= """
-
-The Summarization Task
-
-The goal of text summarization is to condense long documents into summaries while maintaining the key information found in the original text. This is one of the most challenging NLP tasks, as it requires a range of abilities, such as understanding long passages and generating coherent text that captures the main topics of a document. When done well, however, text summarization is a powerful tool that can speed up various business processes by relieving domain experts of the burden of reading long documents in detail.
-
-Text summarization methods fall into two categories: extractive and abstractive. An extractive method does what it sounds like: it concatenates the most important sentences or paragraphs without interpreting their meaning, and it does not create any new phrases. For instance, if you presented a page of text to an extractive model, it would simply act as a text “highlighter”, see Figure 1. Abstractive summarization, on the other hand, generates new text that tries to capture the meaning of the page it is presented with; it puts words together in a coherent way and includes the most important facts found in the text.
-
-
-
- Figure 1 - The two different approaches to text summarization: Extractive and Abstractive.
-
-
-
-Abstractive Model
-
-The underlying engine for the abstractive part is the transformer-based model BART, a sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The BART model was pre-trained as KBLab/bart-base-swedish-cased (link) to learn general knowledge about the language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:
-
-- [Gabriel/xsum_swe](https://huggingface.co/datasets/Gabriel/xsum_swe)
-- [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe)
-
-For more depth on the training, see the model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe). The core idea behind the training procedure is sequential adaptation through transfer learning, i.e. multiple phases of fine-tuning a pre-trained model on different datasets. It should be noted that the machine-translated (MT) datasets will not teach the model Swedish perfectly, but they give a better basis for further fine-tuning on a more domain-specific use case. For more information on this topic, read: [Sequential Adaptation](https://arxiv.org/pdf/1811.01088v2.pdf)
-"""
-
-sum_app_text_tab_2= """
-
-🤗
-
-Figure 2 below illustrates how the skill level of the model increases at each step:
-
-
-
- Figure 2 - Model progression across three aspects during sequential adaptation: domain language, task and language.
-
-
-The main benefits of transfer learning in general include the saving of resources and improved efficiency when training new models, so feel free to adopt this model for your type of problem!
-
-
-Extractive Model
-
-The extractive models in this app use sentence-transformer models, which are essentially bi-encoders that determine how similar two sentences are. These models convert texts into vectors (embeddings) that capture semantic information. Additionally, LexRank, an unsupervised graph-based algorithm, is used to compute centrality scores as a post-processing step for the summary. The main idea is that sentences "recommend" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it is likely a sentence of great importance. The importance of this sentence also stems from the importance of the sentences "recommending" it. Hence, to be ranked highly and placed in a summary, a sentence must be similar to many sentences that are in turn also similar to many other sentences.
-
-
-
- Figure 3 - In the similarity graph on the right, the connections have been filtered with a corresponding threshold.
-
-
-Figure 3 above showcases how LexRank builds similarity graphs from the pairwise sentence similarities of the vector embeddings. Notice that the most "recommended" sentences that are extracted (the right graph) are derived from a threshold value which filters out "weaker" connections in the similarity graph.
-For more information on this topic, read: [LexRank](https://www.aaai.org/Papers/JAIR/Vol22/JAIR-2214.pdf)
-"""
-
-abstractive_example_text_1= """Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under lördagens VM semifinal i Paris. Simon Shaw lastar av trots att Raphael Ibanez, vänster, och Sebastien Chabal. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra-rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Chabal började matchen på ersättningsbänken, men kom i 26: e minuten att ersätta den skadade Fabien Pelous under värd Frankrikes 14-9 nederlag. Om han blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes. Samtidigt, Frankrike tränare Bernard Laporte sade att nederlaget var svårare att ta än Englands 24-7 seger i 2003 semifinalen. "År 2003 var de bättre än oss. I själva verket var de bättre än alla", sade Laporte, som lämnar sin roll att tillträda posten som junior idrottsminister i den franska regeringen. "De var som Nya Zeeland i denna turnering - favoriten, förutom att de gick hela vägen. Den här gången är det svårare för igår var det 50-50." Samtidigt, England -- försöker bli den första nationen att försvara VM-titeln -- avslöjade att stjärna kicker Jonny Wilkinson återigen hade problem med matchbollarna under semifinalen. Flughalvan, som uttryckte sin oro efter att ha kämpat med stöveln mot Australien, avvisade en boll innan han sparkade en vital trepoängare mot Frankrike. "Vi sa det inte förra veckan men en icke-match bollen kom ut på fältet i Marseille som Jonny sparkade," chef för rugby Rob Andrew sade. "Han tänkte inte på det när han sparkade det. Matchbollarna är märkta, numrerade ett till sex. Igår kväll hade de "World Cup semifinal England vs Frankrike" skrivet på dem. På matchkvällen var Jonny vaksam när han sparkade för mål att de faktiskt var matchbollar han sparkade. "Träningsbollarna förlorar tryck och form. Hela frågan förra veckan, arrangörerna accepterade alla sex matchbollar bör användas av båda sidor på torsdagen före matchen. " E-post till en vän."""
-
-abstractive_example_text_2="""Man enades om målet för ett stimulanspaket värt nästan 39 miljoner pund som en del av den walesiska regeringens budgetavtal med liberaldemokraterna. Finansminister Jane Hutt sa att det skulle bidra till att skapa omedelbara fördelar för ekonomin. Men Plaid Cymru sade att det var "helt otillräckligt" och de konservativa sade att det skulle gå till rådet skattebetalare. Labour och Lib Dems tillkännagav ett budgetavtal på fredag kväll och avslutade veckor av förhandlingar mellan ministrar och oppositionspartier. Med 30 av församlingens 60 platser behöver Labour hjälp av minst en annan part för att godkänna sina utgiftsplaner. Den 38,9 miljoner pund stora nedgången - som skulle tillbringas över två år - utgjorde också en del av budgetdiskussionerna. Pengarna kommer från statskassan till följd av ett skattestopp i England. Ett program för att hjälpa företag att anställa unga rekryter finns bland projekt som får finansiering. Regeringen sa att en extra £4.9m skulle skapa 1800 fler lärlingsplatser. Omkring 9 miljoner pund kommer att gå till att uppgradera skolbyggnader, med samma belopp som spenderas på att leverera ytterligare 130 bostäder. Regeringen kommer att spendera £3.5 förbättra vägar på platser där den planerar att skapa företagsområden. Fem delar av Wales har öronmärkts som områden där företag kommer att få hjälp att växa. Förste minister Carwyn Jones har sagt att kopiera den brittiska regeringen genom att använda pengarna för att hålla nere rådets skatt skulle inte i någon större utsträckning gynna ekonomin, tillägger att skatteräkningar för band D hem var lägre i genomsnitt i Wales. Labour har kritiserats av motståndare, särskilt Plaid Cymru, för att inte göra tillräckligt för att reagera på en försämrad ekonomisk situation. Hutt pekade på andra åtaganden från regeringens sida som syftar till att främja tillväxten. Hon sa att hon hade övervägt förslag om att spendera pengarna från hela regeringen. Hon sade: "Detta paket bygger på dessa åtgärder för att stimulera ekonomin och utveckla offentliga tjänster, vilket ger omedelbara fördelar för vår ekonomi samtidigt som det kompletterar våra långsiktiga mål." Konservativ skuggfinansminister Paul Davies sade att han var besviken ministrar använde ytterligare resurser för att "stoppa upp" befintlig politik. Han sade: "Det finns inget nytt i detta paket annat än ett nytt försök av walesiska arbetsmarknadsministrar att agera på ekonomin, samtidigt som man spenderar pengar som skulle användas bättre av skattebetalarna själva." Welsh Lib Dem ledare Kirsty Williams sade att hennes parti kommer också att arbeta med regeringen om hur man ska spendera eventuella pengar som tilldelats Wales som ett resultat av tisdagens höst uttalande av förbundskansler George Osborne. "Wales Liberal Democrats strategi kommer att vara att fortsätta att få vår ekonomi i rörelse och förbättra livskvaliteten för människor i Wales", sade hon. Plaid Cymru ekonomi talesman Alun Ffred Jones sade: " I över sex månader har Labour lutat sig tillbaka och inte gjort någonting - utsätta Wales för den fulla kraften i denna ekonomiska kris. "Nu försöker de desperat att skapa intrycket att denna lilla summa pengar kommer att göra vad som behövs. Helt enkelt kommer det inte att göra det."""
-
-extractive_example_text_1= """Man enades om målet för ett stimulanspaket värt nästan 39 miljoner pund som en del av den walesiska regeringens budgetavtal med liberaldemokraterna. Finansminister Jane Hutt sa att det skulle bidra till att skapa omedelbara fördelar för ekonomin. Men Plaid Cymru sade att det var "helt otillräckligt" och de konservativa sade att det skulle gå till rådet skattebetalare. Labour och Lib Dems tillkännagav ett budgetavtal på fredag kväll och avslutade veckor av förhandlingar mellan ministrar och oppositionspartier. Med 30 av församlingens 60 platser behöver Labour hjälp av minst en annan part för att godkänna sina utgiftsplaner. Den 38,9 miljoner pund stora nedgången - som skulle tillbringas över två år - utgjorde också en del av budgetdiskussionerna. Pengarna kommer från statskassan till följd av ett skattestopp i England. Ett program för att hjälpa företag att anställa unga rekryter finns bland projekt som får finansiering. Regeringen sa att en extra £4.9m skulle skapa 1800 fler lärlingsplatser. Omkring 9 miljoner pund kommer att gå till att uppgradera skolbyggnader, med samma belopp som spenderas på att leverera ytterligare 130 bostäder. Regeringen kommer att spendera £3.5 förbättra vägar på platser där den planerar att skapa företagsområden. Fem delar av Wales har öronmärkts som områden där företag kommer att få hjälp att växa. Förste minister Carwyn Jones har sagt att kopiera den brittiska regeringen genom att använda pengarna för att hålla nere rådets skatt skulle inte i någon större utsträckning gynna ekonomin, tillägger att skatteräkningar för band D hem var lägre i genomsnitt i Wales. Labour har kritiserats av motståndare, särskilt Plaid Cymru, för att inte göra tillräckligt för att reagera på en försämrad ekonomisk situation. Hutt pekade på andra åtaganden från regeringens sida som syftar till att främja tillväxten. Hon sa att hon hade övervägt förslag om att spendera pengarna från hela regeringen. Hon sade: "Detta paket bygger på dessa åtgärder för att stimulera ekonomin och utveckla offentliga tjänster, vilket ger omedelbara fördelar för vår ekonomi samtidigt som det kompletterar våra långsiktiga mål." Konservativ skuggfinansminister Paul Davies sade att han var besviken ministrar använde ytterligare resurser för att "stoppa upp" befintlig politik. Han sade: "Det finns inget nytt i detta paket annat än ett nytt försök av walesiska arbetsmarknadsministrar att agera på ekonomin, samtidigt som man spenderar pengar som skulle användas bättre av skattebetalarna själva." Welsh Lib Dem ledare Kirsty Williams sade att hennes parti kommer också att arbeta med regeringen om hur man ska spendera eventuella pengar som tilldelats Wales som ett resultat av tisdagens höst uttalande av förbundskansler George Osborne. "Wales Liberal Democrats strategi kommer att vara att fortsätta att få vår ekonomi i rörelse och förbättra livskvaliteten för människor i Wales", sade hon. Plaid Cymru ekonomi talesman Alun Ffred Jones sade: " I över sex månader har Labour lutat sig tillbaka och inte gjort någonting - utsätta Wales för den fulla kraften i denna ekonomiska kris. "Nu försöker de desperat att skapa intrycket att denna lilla summa pengar kommer att göra vad som behövs. Helt enkelt kommer det inte att göra det."""
\ No newline at end of file
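The extractive pipeline described in the strings above (bi-encoder embeddings plus a LexRank-style centrality score) can be sketched in a few lines. The model name and the 0.3 threshold below are assumptions for illustration, not values taken from the app:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def extractive_summary(sentences, top_k=3, threshold=0.3):
    # Encode sentences with a Swedish bi-encoder (model name is an assumption).
    model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")
    emb = model.encode(sentences, convert_to_numpy=True, normalize_embeddings=True)
    sim = emb @ emb.T                        # cosine similarities (unit-length embeddings)
    adj = (sim >= threshold).astype(float)   # keep only "strong" recommendations
    centrality = adj.sum(axis=1)             # how many sentences "recommend" each sentence
    top = sorted(np.argsort(-centrality)[:top_k])
    return [sentences[i] for i in top]       # return the top sentences in document order
```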
diff --git a/spaces/GitHunter0/100_prisoners_problem_app/functions/streamlit_basic.py b/spaces/GitHunter0/100_prisoners_problem_app/functions/streamlit_basic.py
deleted file mode 100644
index bccc0ebacd010fea244ca3969b8c60543753ff39..0000000000000000000000000000000000000000
--- a/spaces/GitHunter0/100_prisoners_problem_app/functions/streamlit_basic.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#%%% get_binary_file_downloader_html()
-
-def get_binary_file_downloader_html(
- file_path,
- file_name = None,
- file_label = 'Click to Download File',
- button_bgcolor = ["inherit", "rgb(72, 47, 142)"][0],
- button_bordercolor = ["inherit", "rgba(250, 250, 250, 0.2)"][0]
-):
- """
-    Create an HTML component that generates a download link for any file, so that the user can download it just by clicking on it.
-    Args:
-        file_path (str): path of the file to offer for download.
-        file_name (str, optional): name used for the downloaded file. Defaults to the basename of file_path.
-        file_label (str): label to display. Defaults to 'Click to Download File'.
-    Returns:
-        str: HTML anchor element.
- Details: See the discussions
- -
- -
- """
- import os
- import base64
- #
- if file_name is None:
- file_download_name = os.path.basename(file_path)
- else:
- file_download_name = file_name
- #
- with open(file_path, 'rb') as f:
- data = f.read()
- #
- bin_str = base64.b64encode(data).decode()
- #
-    # NOTE: the anchor markup below is a reconstruction of the stripped HTML; the
-    # data-URI download link is the standard pattern for this kind of helper, and the
-    # inline styling of button_bgcolor / button_bordercolor is an assumption.
-    href = f'''
-        <a href="data:application/octet-stream;base64,{bin_str}" download="{file_download_name}"
-           style="background-color: {button_bgcolor}; border: 1px solid {button_bordercolor};
-                  padding: 0.4em 0.8em; border-radius: 0.3em; text-decoration: none;">{file_label}</a>
-    '''
- return href
-
-# Test
-if False:
-    import pandas as pd  # needed for the example; not imported at module level
-    # Save dataframe as a .xlsx file (writing .xlsx also requires openpyxl)
-    file_name = "dataframe.xlsx"
-    df_download = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
- df_download.to_excel(file_name)
- # HTML Link element
- html_link = get_binary_file_downloader_html(
- file_path = file_name,
- file_label = 'Clique para efetuar o download da tabela em .xlsx'
- )
- # Display Link
- import streamlit as st
- st.markdown(html_link, unsafe_allow_html=True)
\ No newline at end of file
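As a side note (a sketch, not part of the original helper): recent Streamlit versions ship a built-in alternative to the base64 anchor approach shown above:

```python
import streamlit as st

# Serve an existing file with Streamlit's native download widget.
with open("dataframe.xlsx", "rb") as f:
    st.download_button(
        label="Download table (.xlsx)",
        data=f,
        file_name="dataframe.xlsx",
    )
```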
diff --git a/spaces/Giuliano/image_classification/README.md b/spaces/Giuliano/image_classification/README.md
deleted file mode 100644
index 4d8867e5352f1b0c6f0f2db3c8c310b7e5b8a37f..0000000000000000000000000000000000000000
--- a/spaces/Giuliano/image_classification/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image Classification
-emoji: 🐢
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py
deleted file mode 100644
index 1099165b2a7a7af5cee60cf757ef674e768c6a8a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/fast_rcnn_r50_fpn.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# model settings
-model = dict(
- type='FastRCNN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- roi_head=dict(
- type='StandardRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)))
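A hedged sketch of how a config like the one above is turned into a model object with the MMCV 1.x / MMDetection 2.x API; the config path is a placeholder for wherever the file lives in your checkout:

```python
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile("configs/_base_/models/fast_rcnn_r50_fpn.py")
model = build_detector(cfg.model)  # train_cfg/test_cfg are already nested inside cfg.model
print(type(model).__name__)        # expected: FastRCNN
```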
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py
deleted file mode 100644
index 85fa2f5d73a896e09d7b1f72202d0a100eaca821..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py
+++ /dev/null
@@ -1,167 +0,0 @@
-_base_ = '../_base_/default_runtime.py'
-
-# model settings
-model = dict(
- type='RetinaNet',
- pretrained='open-mmlab://detectron2/resnet101_caffe',
- backbone=dict(
- type='ResNet',
- depth=101,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs=True,
- num_outs=5),
- bbox_head=dict(
- type='GARetinaHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- approx_anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=4,
- scales_per_octave=3,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- square_anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- scales=[4],
- strides=[8, 16, 32, 64, 128]),
- anchor_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loc_filter_thr=0.01,
- loss_loc=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)))
-# training and testing settings
-train_cfg = dict(
- ga_assigner=dict(
- type='ApproxMaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0.4,
- ignore_iof_thr=-1),
- ga_sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.0,
- ignore_iof_thr=-1),
- allowed_border=-1,
- pos_weight=-1,
- center_ratio=0.2,
- ignore_ratio=0.5,
- debug=False)
-test_cfg = dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 480), (1333, 960)],
- keep_ratio=True,
- multiscale_mode='range'),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-evaluation = dict(interval=1, metric='bbox')
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=1.0 / 3,
- step=[16, 22])
-checkpoint_config = dict(interval=1)
-# yapf:disable
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- # dict(type='TensorboardLoggerHook')
- ])
-# yapf:enable
-# runtime settings
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/smooth_l1_loss.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/smooth_l1_loss.py
deleted file mode 100644
index ec9c98a52d1932d6ccff18938c17c36755bf1baf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/smooth_l1_loss.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import mmcv
-import torch
-import torch.nn as nn
-
-from ..builder import LOSSES
-from .utils import weighted_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def smooth_l1_loss(pred, target, beta=1.0):
- """Smooth L1 loss.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning target of the prediction.
- beta (float, optional): The threshold in the piecewise function.
- Defaults to 1.0.
-
- Returns:
- torch.Tensor: Calculated loss
- """
- assert beta > 0
- assert pred.size() == target.size() and target.numel() > 0
- diff = torch.abs(pred - target)
- loss = torch.where(diff < beta, 0.5 * diff * diff / beta,
- diff - 0.5 * beta)
- return loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def l1_loss(pred, target):
- """L1 loss.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning target of the prediction.
-
- Returns:
- torch.Tensor: Calculated loss
- """
- assert pred.size() == target.size() and target.numel() > 0
- loss = torch.abs(pred - target)
- return loss
-
-
-@LOSSES.register_module()
-class SmoothL1Loss(nn.Module):
- """Smooth L1 loss.
-
- Args:
- beta (float, optional): The threshold in the piecewise function.
- Defaults to 1.0.
- reduction (str, optional): The method to reduce the loss.
- Options are "none", "mean" and "sum". Defaults to "mean".
- loss_weight (float, optional): The weight of loss.
- """
-
- def __init__(self, beta=1.0, reduction='mean', loss_weight=1.0):
- super(SmoothL1Loss, self).__init__()
- self.beta = beta
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None,
- **kwargs):
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning target of the prediction.
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_bbox = self.loss_weight * smooth_l1_loss(
- pred,
- target,
- weight,
- beta=self.beta,
- reduction=reduction,
- avg_factor=avg_factor,
- **kwargs)
- return loss_bbox
-
-
-@LOSSES.register_module()
-class L1Loss(nn.Module):
- """L1 loss.
-
- Args:
- reduction (str, optional): The method to reduce the loss.
- Options are "none", "mean" and "sum".
- loss_weight (float, optional): The weight of loss.
- """
-
- def __init__(self, reduction='mean', loss_weight=1.0):
- super(L1Loss, self).__init__()
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning target of the prediction.
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_bbox = self.loss_weight * l1_loss(
- pred, target, weight, reduction=reduction, avg_factor=avg_factor)
- return loss_bbox
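A quick numerical check of the piecewise definition, assuming the module above is importable as shown (beta = 1.0, elementwise reduction): below the threshold the loss is quadratic, above it linear.

```python
import torch

#   |diff| <  beta -> 0.5 * diff**2 / beta   (quadratic near zero)
#   |diff| >= beta -> |diff| - 0.5 * beta    (linear in the tails)
criterion = SmoothL1Loss(beta=1.0, reduction='none')
pred = torch.tensor([0.2, 3.0])
target = torch.tensor([0.0, 0.0])
print(criterion(pred, target))  # tensor([0.0200, 2.5000])
```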
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes.py
deleted file mode 100644
index be6bf16a2fd234f3526bf8fb8c30179f1ef9df78..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './ocrnet_hr18_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w18_small',
- backbone=dict(
- extra=dict(
- stage1=dict(num_blocks=(2, )),
- stage2=dict(num_blocks=(2, 2)),
- stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
- stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/dataset_wrappers.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/dataset_wrappers.py
deleted file mode 100644
index d6a5e957ec3b44465432617cf6e8f0b86a8a5efa..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/dataset_wrappers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from torch.utils.data.dataset import ConcatDataset as _ConcatDataset
-
-from .builder import DATASETS
-
-
-@DATASETS.register_module()
-class ConcatDataset(_ConcatDataset):
- """A wrapper of concatenated dataset.
-
- Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but
- concat the group flag for image aspect ratio.
-
- Args:
- datasets (list[:obj:`Dataset`]): A list of datasets.
- """
-
- def __init__(self, datasets):
- super(ConcatDataset, self).__init__(datasets)
- self.CLASSES = datasets[0].CLASSES
- self.PALETTE = datasets[0].PALETTE
-
-
-@DATASETS.register_module()
-class RepeatDataset(object):
- """A wrapper of repeated dataset.
-
- The length of repeated dataset will be `times` larger than the original
- dataset. This is useful when the data loading time is long but the dataset
- is small. Using RepeatDataset can reduce the data loading time between
- epochs.
-
- Args:
- dataset (:obj:`Dataset`): The dataset to be repeated.
- times (int): Repeat times.
- """
-
- def __init__(self, dataset, times):
- self.dataset = dataset
- self.times = times
- self.CLASSES = dataset.CLASSES
- self.PALETTE = dataset.PALETTE
- self._ori_len = len(self.dataset)
-
- def __getitem__(self, idx):
- """Get item from original dataset."""
- return self.dataset[idx % self._ori_len]
-
- def __len__(self):
- """The length is multiplied by ``times``"""
- return self.times * self._ori_len
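A minimal illustration of the index wrapping in `RepeatDataset`, using a stand-in dataset rather than a real mmseg dataset (the CLASSES/PALETTE values are dummies):

```python
class ToyDataset:
    CLASSES = ('road', 'car')
    PALETTE = [[128, 64, 128], [0, 0, 142]]

    def __len__(self):
        return 3

    def __getitem__(self, idx):
        return idx

repeated = RepeatDataset(ToyDataset(), times=4)
print(len(repeated))  # 12
print(repeated[7])    # 1, i.e. the underlying item at index 7 % 3
```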
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec768L12.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec768L12.py
deleted file mode 100644
index 0d1591c8843b920d5685e822354e8e6adc9a9e19..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec768L12.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import torch
-from fairseq import checkpoint_utils
-
-class ContentVec768L12(SpeechEncoder):
-    def __init__(self, vec_path="pretrain/checkpoint_best_legacy_500.pt", device=None):
- print("load model(s) from {}".format(vec_path))
- self.hidden_dim = 768
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [vec_path],
- suffix="",
- )
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.model = models[0].to(self.dev)
- self.model.eval()
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.to(wav.device),
- "padding_mask": padding_mask.to(wav.device),
- "output_layer": 12, # layer 12
- }
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- return logits[0].transpose(1, 2)
\ No newline at end of file
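A hedged usage sketch of the encoder above: it expects a mono waveform tensor and returns 768-dimensional content features. It requires fairseq plus the referenced checkpoint file; the dummy waveform and sample rate are illustrative only.

```python
import torch

encoder = ContentVec768L12(vec_path="pretrain/checkpoint_best_legacy_500.pt")
wav = torch.randn(16000, device=encoder.dev)  # ~1 s of mono 16 kHz audio (dummy)
with torch.no_grad():
    feats = encoder.encoder(wav)              # (1, 768, n_frames) content features
print(feats.shape)
```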
diff --git a/spaces/H2o6O2/Something/README.md b/spaces/H2o6O2/Something/README.md
deleted file mode 100644
index 8268262cd830e29777dab1a95464467209728105..0000000000000000000000000000000000000000
--- a/spaces/H2o6O2/Something/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Something
-emoji: 🏆
-colorFrom: yellow
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
deleted file mode 100644
index c523d92634d9b61b97bbcdbfd17dfc33465bfc09..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-SCRIPT=`realpath $0`
-MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2
-
-export PATH=$PATH:"$MECAB/bin":"$MECAB/lib"
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib"
-
-cat - | mecab -O wakati
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model_gottbert.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model_gottbert.py
deleted file mode 100644
index 2e8c66354ac7ce7309226bb091a7baa4776fbfdc..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/model_gottbert.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-GottBERT: a pure German Language Model
-"""
-
-from fairseq.models import register_model
-
-from .hub_interface import RobertaHubInterface
-from .model import RobertaModel
-
-
-@register_model('gottbert')
-class GottbertModel(RobertaModel):
-
- @classmethod
- def hub_models(cls):
- return {
- 'gottbert-base': 'https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz',
- }
-
- @classmethod
- def from_pretrained(cls,
- model_name_or_path,
- checkpoint_file='model.pt',
- data_name_or_path='.',
- bpe='hf_byte_bpe',
- bpe_vocab='vocab.json',
- bpe_merges='merges.txt',
- bpe_add_prefix_space=False,
- **kwargs
- ):
- from fairseq import hub_utils
-
- x = hub_utils.from_pretrained(
- model_name_or_path,
- checkpoint_file,
- data_name_or_path,
- archive_map=cls.hub_models(),
- bpe=bpe,
- load_checkpoint_heads=True,
- bpe_vocab=bpe_vocab,
- bpe_merges=bpe_merges,
- bpe_add_prefix_space=bpe_add_prefix_space,
- **kwargs,
- )
- return RobertaHubInterface(x['args'], x['task'], x['models'][0])
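A hedged sketch of loading the hub model registered above; this downloads the gottbert-base archive on first use and requires a matching fairseq install:

```python
gottbert = GottbertModel.from_pretrained('gottbert-base')
gottbert.eval()

tokens = gottbert.encode('Ein Beispielsatz.')   # BPE-encode a German sentence
features = gottbert.extract_features(tokens)    # (1, n_tokens, hidden_dim)
print(features.shape)
```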
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/berard.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/berard.py
deleted file mode 100644
index c505e3acaa84e5f3263ccbfaf9556f77123f09fc..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/berard.py
+++ /dev/null
@@ -1,606 +0,0 @@
-#!/usr/bin/env python3
-
-from ast import literal_eval
-from typing import List, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
- register_model_architecture,
-)
-
-
-@register_model("s2t_berard")
-class BerardModel(FairseqEncoderDecoderModel):
- """Implementation of a model similar to https://arxiv.org/abs/1802.04200
-
- Paper title: End-to-End Automatic Speech Translation of Audiobooks
- An implementation is available in tensorflow at
- https://github.com/eske/seq2seq
- Relevant files in this implementation are the config
- (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml)
- and the model code
- (https://github.com/eske/seq2seq/blob/master/translate/models.py).
- The encoder and decoder try to be close to the original implementation.
- The attention is an MLP as in Bahdanau et al.
- (https://arxiv.org/abs/1409.0473).
- There is no state initialization by averaging the encoder outputs.
- """
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- parser.add_argument(
- "--input-layers",
- type=str,
- metavar="EXPR",
- help="List of linear layer dimensions. These "
- "layers are applied to the input features and "
- "are followed by tanh and possibly dropout.",
- )
- parser.add_argument(
- "--dropout",
- type=float,
- metavar="D",
- help="Dropout probability to use in the encoder/decoder. "
- "Note that this parameters control dropout in various places, "
- "there is no fine-grained control for dropout for embeddings "
- "vs LSTM layers for example.",
- )
- parser.add_argument(
- "--in-channels",
- type=int,
- metavar="N",
- help="Number of encoder input channels. " "Typically value is 1.",
- )
- parser.add_argument(
- "--conv-layers",
- type=str,
- metavar="EXPR",
- help="List of conv layers " "(format: (channels, kernel, stride)).",
- )
- parser.add_argument(
- "--num-blstm-layers",
- type=int,
- metavar="N",
- help="Number of encoder bi-LSTM layers.",
- )
- parser.add_argument(
- "--lstm-size", type=int, metavar="N", help="LSTM hidden size."
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="Embedding dimension of the decoder target tokens.",
- )
- parser.add_argument(
- "--decoder-hidden-dim",
- type=int,
- metavar="N",
- help="Decoder LSTM hidden dimension.",
- )
- parser.add_argument(
- "--decoder-num-layers",
- type=int,
- metavar="N",
- help="Number of decoder LSTM layers.",
- )
- parser.add_argument(
- "--attention-dim",
- type=int,
- metavar="N",
- help="Hidden layer dimension in MLP attention.",
- )
- parser.add_argument(
- "--output-layer-dim",
- type=int,
- metavar="N",
- help="Hidden layer dim for linear layer prior to output projection.",
- )
- parser.add_argument(
- "--load-pretrained-encoder-from",
- type=str,
- metavar="STR",
- help="model to take encoder weights from (for initialization)",
- )
- parser.add_argument(
- "--load-pretrained-decoder-from",
- type=str,
- metavar="STR",
- help="model to take decoder weights from (for initialization)",
- )
-
- @classmethod
- def build_encoder(cls, args, task):
- encoder = BerardEncoder(
- input_layers=literal_eval(args.input_layers),
- conv_layers=literal_eval(args.conv_layers),
- in_channels=args.input_channels,
- input_feat_per_channel=args.input_feat_per_channel,
- num_blstm_layers=args.num_blstm_layers,
- lstm_size=args.lstm_size,
- dropout=args.dropout,
- )
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task):
- decoder = LSTMDecoder(
- dictionary=task.target_dictionary,
- embed_dim=args.decoder_embed_dim,
- num_layers=args.decoder_num_layers,
- hidden_size=args.decoder_hidden_dim,
- dropout=args.dropout,
- encoder_output_dim=2 * args.lstm_size, # bidirectional
- attention_dim=args.attention_dim,
- output_layer_dim=args.output_layer_dim,
- )
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- encoder = cls.build_encoder(args, task)
- decoder = cls.build_decoder(args, task)
-
- return cls(encoder, decoder)
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = super().get_normalized_probs(net_output, log_probs, sample)
- # lprobs is a (B, T, D) tensor
- lprobs.batch_first = True
- return lprobs
-
-
-class BerardEncoder(FairseqEncoder):
- def __init__(
- self,
- input_layers: List[int],
- conv_layers: List[Tuple[int]],
- in_channels: int,
- input_feat_per_channel: int,
- num_blstm_layers: int,
- lstm_size: int,
- dropout: float,
- ):
- """
- Args:
- input_layers: list of linear layer dimensions. These layers are
- applied to the input features and are followed by tanh and
- possibly dropout.
- conv_layers: list of conv2d layer configurations. A configuration is
- a tuple (out_channels, conv_kernel_size, stride).
- in_channels: number of input channels.
- input_feat_per_channel: number of input features per channel. These
- are speech features, typically 40 or 80.
- num_blstm_layers: number of bidirectional LSTM layers.
- lstm_size: size of the LSTM hidden (and cell) size.
- dropout: dropout probability. Dropout can be applied after the
- linear layers and LSTM layers but not to the convolutional
- layers.
- """
- super().__init__(None)
-
- self.input_layers = nn.ModuleList()
- in_features = input_feat_per_channel
- for out_features in input_layers:
- if dropout > 0:
- self.input_layers.append(
- nn.Sequential(
- nn.Linear(in_features, out_features), nn.Dropout(p=dropout)
- )
- )
- else:
- self.input_layers.append(nn.Linear(in_features, out_features))
- in_features = out_features
-
- self.in_channels = in_channels
- self.input_dim = input_feat_per_channel
- self.conv_kernel_sizes_and_strides = []
- self.conv_layers = nn.ModuleList()
- lstm_input_dim = input_layers[-1]
- for conv_layer in conv_layers:
- out_channels, conv_kernel_size, conv_stride = conv_layer
- self.conv_layers.append(
- nn.Conv2d(
- in_channels,
- out_channels,
- conv_kernel_size,
- stride=conv_stride,
- padding=conv_kernel_size // 2,
- )
- )
- self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride))
- in_channels = out_channels
- lstm_input_dim //= conv_stride
-
- lstm_input_dim *= conv_layers[-1][0]
- self.lstm_size = lstm_size
- self.num_blstm_layers = num_blstm_layers
- self.lstm = nn.LSTM(
- input_size=lstm_input_dim,
- hidden_size=lstm_size,
- num_layers=num_blstm_layers,
- dropout=dropout,
- bidirectional=True,
- )
- self.output_dim = 2 * lstm_size # bidirectional
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- """
- Args
- src_tokens: padded tensor (B, T, C * feat)
- src_lengths: tensor of original lengths of input utterances (B,)
- """
- bsz, max_seq_len, _ = src_tokens.size()
- # (B, C, T, feat)
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
-
- for input_layer in self.input_layers:
- x = input_layer(x)
- x = torch.tanh(x)
-
- for conv_layer in self.conv_layers:
- x = conv_layer(x)
-
- bsz, _, output_seq_len, _ = x.size()
-
- # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) ->
- # (T, B, C * feat)
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
-
- input_lengths = src_lengths.clone()
- for k, s in self.conv_kernel_sizes_and_strides:
- p = k // 2
- input_lengths = (input_lengths.float() + 2 * p - k) / s + 1
- input_lengths = input_lengths.floor().long()
-
- packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths)
-
- h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- packed_outs, _ = self.lstm(packed_x, (h0, c0))
-
- # unpack outputs and apply dropout
- x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs)
- if self.dropout is not None:
- x = self.dropout(x)
-
- encoder_padding_mask = (
- lengths_to_padding_mask(output_lengths).to(src_tokens.device).t()
- )
-
- return {
- "encoder_out": x, # (T, B, C)
- "encoder_padding_mask": encoder_padding_mask, # (T, B)
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(1, new_order)
- return encoder_out
-
-
-class MLPAttention(nn.Module):
- """The original attention from Badhanau et al. (2014)
-
- https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron.
- The attention score between position i in the encoder and position j in the
- decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a)
- """
-
- def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim):
- super().__init__()
-
- self.context_dim = context_dim
- self.attention_dim = attention_dim
- # W_ae and b_a
- self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True)
- # W_ad
- self.decoder_proj = nn.Linear(
- decoder_hidden_state_dim, self.attention_dim, bias=False
- )
- # V_a
- self.to_scores = nn.Linear(self.attention_dim, 1, bias=False)
-
- def forward(self, decoder_state, source_hids, encoder_padding_mask):
- """The expected input dimensions are:
- decoder_state: bsz x decoder_hidden_state_dim
- source_hids: src_len x bsz x context_dim
- encoder_padding_mask: src_len x bsz
- """
- src_len, bsz, _ = source_hids.size()
- # (src_len*bsz) x context_dim (to feed through linear)
- flat_source_hids = source_hids.view(-1, self.context_dim)
- # (src_len*bsz) x attention_dim
- encoder_component = self.encoder_proj(flat_source_hids)
- # src_len x bsz x attention_dim
- encoder_component = encoder_component.view(src_len, bsz, self.attention_dim)
- # 1 x bsz x attention_dim
- decoder_component = self.decoder_proj(decoder_state).unsqueeze(0)
- # Sum with broadcasting and apply the non linearity
- # src_len x bsz x attention_dim
- hidden_att = torch.tanh(
- (decoder_component + encoder_component).view(-1, self.attention_dim)
- )
- # Project onto the reals to get attentions scores (src_len x bsz)
- attn_scores = self.to_scores(hidden_att).view(src_len, bsz)
-
- # Mask + softmax (src_len x bsz)
- if encoder_padding_mask is not None:
- attn_scores = (
- attn_scores.float()
- .masked_fill_(encoder_padding_mask, float("-inf"))
- .type_as(attn_scores)
- ) # FP16 support: cast to float and back
- # srclen x bsz
- normalized_masked_attn_scores = F.softmax(attn_scores, dim=0)
-
- # Sum weighted sources (bsz x context_dim)
- attn_weighted_context = (
- source_hids * normalized_masked_attn_scores.unsqueeze(2)
- ).sum(dim=0)
-
- return attn_weighted_context, normalized_masked_attn_scores
-
-
-class LSTMDecoder(FairseqIncrementalDecoder):
- def __init__(
- self,
- dictionary,
- embed_dim,
- num_layers,
- hidden_size,
- dropout,
- encoder_output_dim,
- attention_dim,
- output_layer_dim,
- ):
- """
- Args:
- dictionary: target text dictionary.
- embed_dim: embedding dimension for target tokens.
- num_layers: number of LSTM layers.
- hidden_size: hidden size for LSTM layers.
- dropout: dropout probability. Dropout can be applied to the
- embeddings, the LSTM layers, and the context vector.
- encoder_output_dim: encoder output dimension (hidden size of
- encoder LSTM).
- attention_dim: attention dimension for MLP attention.
- output_layer_dim: size of the linear layer prior to output
- projection.
- """
- super().__init__(dictionary)
- self.num_layers = num_layers
- self.hidden_size = hidden_size
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx)
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- self.layers = nn.ModuleList()
- for layer_id in range(num_layers):
- input_size = embed_dim if layer_id == 0 else encoder_output_dim
- self.layers.append(
- nn.LSTMCell(input_size=input_size, hidden_size=hidden_size)
- )
-
- self.context_dim = encoder_output_dim
- self.attention = MLPAttention(
- decoder_hidden_state_dim=hidden_size,
- context_dim=encoder_output_dim,
- attention_dim=attention_dim,
- )
-
- self.deep_output_layer = nn.Linear(
- hidden_size + encoder_output_dim + embed_dim, output_layer_dim
- )
- self.output_projection = nn.Linear(output_layer_dim, num_embeddings)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- encoder_padding_mask = encoder_out["encoder_padding_mask"]
- encoder_outs = encoder_out["encoder_out"]
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- bsz, seqlen = prev_output_tokens.size()
-
- srclen = encoder_outs.size(0)
-
- # embed tokens
- embeddings = self.embed_tokens(prev_output_tokens)
- x = embeddings
- if self.dropout is not None:
- x = self.dropout(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # initialize previous states (or get from cache during incremental
- # generation)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is not None:
- prev_hiddens, prev_cells = cached_state
- else:
- prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers
- prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers
-
- attn_scores = x.new_zeros(bsz, srclen)
- attention_outs = []
- outs = []
- for j in range(seqlen):
- input = x[j, :, :]
- attention_out = None
- for i, layer in enumerate(self.layers):
- # the previous state is one layer below except for the bottom
- # layer where the previous state is the state emitted by the
- # top layer
- hidden, cell = layer(
- input,
- (
- prev_hiddens[(i - 1) % self.num_layers],
- prev_cells[(i - 1) % self.num_layers],
- ),
- )
- if self.dropout is not None:
- hidden = self.dropout(hidden)
- prev_hiddens[i] = hidden
- prev_cells[i] = cell
- if attention_out is None:
- attention_out, attn_scores = self.attention(
- hidden, encoder_outs, encoder_padding_mask
- )
- if self.dropout is not None:
- attention_out = self.dropout(attention_out)
- attention_outs.append(attention_out)
- input = attention_out
-
- # collect the output of the top layer
- outs.append(hidden)
-
- # cache previous states (no-op except during incremental generation)
- utils.set_incremental_state(
- self, incremental_state, "cached_state", (prev_hiddens, prev_cells)
- )
-
- # collect outputs across time steps
- x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size)
- attention_outs_concat = torch.cat(attention_outs, dim=0).view(
- seqlen, bsz, self.context_dim
- )
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
- attention_outs_concat = attention_outs_concat.transpose(0, 1)
-
- # concat LSTM output, attention output and embedding
- # before output projection
- x = torch.cat((x, attention_outs_concat, embeddings), dim=2)
- x = self.deep_output_layer(x)
- x = torch.tanh(x)
- if self.dropout is not None:
- x = self.dropout(x)
- # project back to size of vocabulary
- x = self.output_projection(x)
-
- # to return the full attn_scores tensor, we need to fix the decoder
- # to account for subsampling input frames
- # return x, attn_scores
- return x, None
-
- def reorder_incremental_state(self, incremental_state, new_order):
- super().reorder_incremental_state(incremental_state, new_order)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is None:
- return
-
- def reorder_state(state):
- if isinstance(state, list):
- return [reorder_state(state_i) for state_i in state]
- return state.index_select(0, new_order)
-
- new_state = tuple(map(reorder_state, cached_state))
- utils.set_incremental_state(self, incremental_state, "cached_state", new_state)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard")
-def berard(args):
- """The original version: "End-to-End Automatic Speech Translation of
- Audiobooks" (https://arxiv.org/abs/1802.04200)
- """
- args.input_layers = getattr(args, "input_layers", "[256, 128]")
- args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]")
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 256)
- args.dropout = getattr(args, "dropout", 0.2)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 128)
- args.load_pretrained_encoder_from = getattr(
- args, "load_pretrained_encoder_from", None
- )
- args.load_pretrained_decoder_from = getattr(
- args, "load_pretrained_decoder_from", None
- )
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3")
-def berard_256_3_3(args):
- """Used in
- * "Harnessing Indirect Training Data for End-to-End Automatic Speech
- Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515)
- * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"
- (https://arxiv.org/pdf/2002.01320.pdf)
- * "Self-Supervised Representations Improve End-to-End Speech Translation"
- (https://arxiv.org/abs/2006.12124)
- """
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2")
-def berard_512_3_2(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3")
-def berard_512_5_3(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 5)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
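-
-# Note: each named sub-architecture above only fills in defaults via getattr()
-# and then delegates to berard() for the remaining fields, so any value supplied
-# explicitly (e.g. alongside --arch s2t_berard_512_5_3) still takes precedence
-# over these defaults.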
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/models/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/models/__init__.py
deleted file mode 100644
index 5ca74d790a95a2b14d3fbb0cf9f0a9959416d305..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/models/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .ofa import OFAModel, ofa_base_architecture, ofa_large_architecture, ofa_huge_architecture
\ No newline at end of file
diff --git a/spaces/HighCWu/GFPGAN-1.3/tests/test_stylegan2_clean_arch.py b/spaces/HighCWu/GFPGAN-1.3/tests/test_stylegan2_clean_arch.py
deleted file mode 100644
index 78bb920e73ce28cfec9ea89a4339cc5b87981b47..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GFPGAN-1.3/tests/test_stylegan2_clean_arch.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import torch
-
-from gfpgan.archs.stylegan2_clean_arch import StyleGAN2GeneratorClean
-
-
-def test_stylegan2generatorclean():
- """Test arch: StyleGAN2GeneratorClean."""
-
- # model init and forward (gpu)
- if torch.cuda.is_available():
- net = StyleGAN2GeneratorClean(
- out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=0.5).cuda().eval()
- style = torch.rand((1, 512), dtype=torch.float32).cuda()
- output = net([style], input_is_latent=False)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # -------------------- with return_latents ----------------------- #
- output = net([style], input_is_latent=True, return_latents=True)
- assert output[0].shape == (1, 3, 32, 32)
- assert len(output[1]) == 1
- # check latent
- assert output[1][0].shape == (8, 512)
-
- # -------------------- with randomize_noise = False ----------------------- #
- output = net([style], randomize_noise=False)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # -------------------- with truncation = 0.5 and mixing----------------------- #
- output = net([style, style], truncation=0.5, truncation_latent=style)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # ------------------ test make_noise ----------------------- #
- out = net.make_noise()
- assert len(out) == 7
- assert out[0].shape == (1, 1, 4, 4)
- assert out[1].shape == (1, 1, 8, 8)
- assert out[2].shape == (1, 1, 8, 8)
- assert out[3].shape == (1, 1, 16, 16)
- assert out[4].shape == (1, 1, 16, 16)
- assert out[5].shape == (1, 1, 32, 32)
- assert out[6].shape == (1, 1, 32, 32)
-
- # ------------------ test get_latent ----------------------- #
- out = net.get_latent(style)
- assert out.shape == (1, 512)
-
- # ------------------ test mean_latent ----------------------- #
- out = net.mean_latent(2)
- assert out.shape == (1, 512)
diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle_main.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle_main.py
deleted file mode 100644
index b7d9442495a99a8b81164ebc8fdc4c8de2cb633d..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle_main.py
+++ /dev/null
@@ -1,619 +0,0 @@
-import os
-import numpy as np
-
-import torch
-import torch.random
-from torch.optim import AdamW
-from torch.utils.data import DataLoader
-import pytorch_lightning as pl
-from pytorch_lightning import seed_everything
-from pytorch_lightning.trainer import Trainer
-
-from dataloader import CellLoader
-from celle import VQGanVAE, CELLE
-from omegaconf import OmegaConf
-import argparse, sys, datetime, glob
-
-from celle.celle import gumbel_sample, top_k
-
-torch.random.manual_seed(42)
-np.random.seed(42)
-
-from celle_taming_main import (
- instantiate_from_config,
- nondefault_trainer_args,
- get_parser,
-)
-
-
-class CellDataModule(pl.LightningDataModule):
- def __init__(
- self,
- data_csv,
- dataset,
- sequence_mode="standard",
- vocab="bert",
- crop_size=256,
- resize=600,
- batch_size=1,
- threshold="median",
- text_seq_len=1000,
- num_workers=1,
- **kwargs,
- ):
- super().__init__()
-
- self.data_csv = data_csv
- self.dataset = dataset
- self.protein_sequence_length = 0
- self.image_folders = []
- self.crop_size = crop_size
- self.resize = resize
- self.batch_size = batch_size
- self.sequence_mode = sequence_mode
- self.threshold = threshold
- self.text_seq_len = int(text_seq_len)
- self.vocab = vocab
- self.num_workers = num_workers if num_workers is not None else batch_size * 2
-
- def setup(self, stage=None):
- # called on every GPU
- self.cell_dataset_train = CellLoader(
- data_csv=self.data_csv,
- dataset=self.dataset,
- crop_size=self.crop_size,
- resize=self.resize,
- split_key="train",
- crop_method="random",
- sequence_mode=self.sequence_mode,
- vocab=self.vocab,
- text_seq_len=self.text_seq_len,
- threshold=self.threshold,
- )
-
- self.cell_dataset_val = CellLoader(
- data_csv=self.data_csv,
- dataset=self.dataset,
- crop_size=self.crop_size,
- resize=self.resize,
- crop_method="center",
- split_key="val",
- sequence_mode=self.sequence_mode,
- vocab=self.vocab,
- text_seq_len=self.text_seq_len,
- threshold=self.threshold,
- )
-
- def prepare_data(self):
-
- pass
-
- def train_dataloader(self):
- return DataLoader(
- self.cell_dataset_train,
- num_workers=self.num_workers,
- shuffle=True,
- batch_size=self.batch_size,
- )
-
- def val_dataloader(self):
- return DataLoader(
- self.cell_dataset_val,
- num_workers=self.num_workers,
- batch_size=self.batch_size,
- )
-
- # def test_dataloader(self):
- # transforms = ...
- # return DataLoader(self.test, batch_size=64)
-
-
-class CELLE_trainer(pl.LightningModule):
- def __init__(
- self,
- vqgan_model_path,
- vqgan_config_path,
- ckpt_path=None,
- image_key="threshold",
- condition_model_path=None,
- condition_config_path=None,
- num_images=2,
- dim=2,
- num_text_tokens=30,
- text_seq_len=1000,
- depth=16,
- heads=16,
- dim_head=64,
- attn_dropout=0.1,
- ff_dropout=0.1,
- attn_types="full",
- loss_img_weight=7,
- stable=False,
- rotary_emb=True,
- text_embedding="bert",
- fixed_embedding=True,
- loss_cond_weight=1,
- learning_rate=3e-4,
- monitor="val_loss",
- ):
- super().__init__()
-
- vae = VQGanVAE(
- vqgan_model_path=vqgan_model_path, vqgan_config_path=vqgan_config_path
- )
-
- self.image_key = image_key
-
- if condition_config_path:
- condition_vae = VQGanVAE(
- vqgan_model_path=condition_model_path,
- vqgan_config_path=condition_config_path,
- )
- else:
- condition_vae = None
-
- self.celle = CELLE(
- dim=dim,
- vae=vae, # automatically infer (1) image sequence length and (2) number of image tokens
- condition_vae=condition_vae,
- num_images=num_images,
- num_text_tokens=num_text_tokens, # vocab size for text
- text_seq_len=text_seq_len, # text sequence length
- depth=depth, # should aim to be 64
- heads=heads, # attention heads
- dim_head=dim_head, # attention head dimension
- attn_dropout=attn_dropout, # attention dropout
- ff_dropout=ff_dropout, # feedforward dropout
- loss_img_weight=loss_img_weight,
- stable=stable,
- rotary_emb=rotary_emb,
- text_embedding=text_embedding,
- fixed_embedding=fixed_embedding,
- loss_cond_weight=loss_cond_weight,
- )
-
- self.learning_rate = learning_rate
- self.num_text_tokens = num_text_tokens
- self.num_images = num_images
-
- if monitor is not None:
- self.monitor = monitor
-
- ignore_keys = []
-
- if condition_model_path:
- ignore_keys.append("celle.condition_vae")
-
- if vqgan_model_path:
- ignore_keys.append("celle.vae")
-
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- ckpt = sd.copy()
- for k in sd.keys():
- for ik in ignore_keys:
- if k.startswith(ik):
- # print("Deleting key {} from state_dict.".format(k))
- del ckpt[k]
- self.load_state_dict(ckpt, strict=False)
- print(f"Restored from {path}")
-
- def forward(self, text, condition, target, return_loss=True):
-
- return self.celle(
- text=text, condition=condition, image=target, return_loss=return_loss
- )
-
- def get_input(self, batch):
- text = batch["sequence"].squeeze(1)
- condition = batch["nucleus"]
- target = batch[self.image_key]
-
- return text, condition, target
-
- def get_image_from_logits(self, logits, temperature=0.9):
-
- filtered_logits = top_k(logits, thres=0.5)
- sample = gumbel_sample(filtered_logits, temperature=temperature, dim=-1)
-
- self.celle.vae.eval()
- out = self.celle.vae.decode(
- sample[:, self.celle.text_seq_len + self.celle.condition_seq_len :]
- - (self.celle.num_text_tokens + self.celle.num_condition_tokens)
- )
-
- return out
-
- def get_loss(self, text, condition, target):
-
- loss_dict = {}
-
- loss, loss_dict, logits = self(text, condition, target, return_loss=True)
-
- return loss, loss_dict
-
- def total_loss(
- self,
- loss,
- loss_dict,
- mode="train",
- ):
-
- loss_dict = {f"{mode}/{key}": value for key, value in loss_dict.items()}
-
- for key, value in loss_dict.items():
- self.log(
- key,
- value,
- prog_bar=True,
- logger=True,
- on_step=True,
- on_epoch=True,
- sync_dist=True,
- )
-
- return loss
-
- def training_step(self, batch, batch_idx):
-
- text, condition, target = self.get_input(batch)
- loss, log_dict = self.get_loss(text, condition, target)
-
- loss = self.total_loss(loss, log_dict, mode="train")
-
- return loss
-
- def validation_step(self, batch, batch_idx):
-
- with torch.no_grad():
-
- text, condition, target = self.get_input(batch)
- loss, log_dict = self.get_loss(text, condition, target)
-
- loss = self.total_loss(loss, log_dict, mode="val")
-
- return loss
-
- def configure_optimizers(self):
-
- optimizer = AdamW(self.parameters(), lr=self.learning_rate, betas=(0.9, 0.95))
-
- return optimizer
-
- def scale_image(self, image):
-
- for tensor in image:
- # shift each tensor so its minimum is 0, then scale it to [0, 1] in place
- tensor -= torch.min(tensor)
- tensor /= torch.max(tensor)
-
- return image
-
- @torch.no_grad()
- def log_images(self, batch, **kwargs):
-
- log = dict() # keys are assigned below, so this must be a dict
-
- text, condition, target = self.get_input(batch)
- text = text.squeeze(1).to(self.device)
- condition = condition.to(self.device)
-
- out = self.celle.generate_images(text=text, condition=condition)
-
- log["condition"] = self.scale_image(condition)
- log["output"] = self.scale_image(out)
- if self.image_key == "threshold":
- log["threshold"] = self.scale_image(target)
- log["target"] = self.scale_image(batch["target"])
- else:
- log["target"] = self.scale_image(target)
-
- return log
-
-
-# from https://github.com/CompVis/taming-transformers/blob/master/celle_main.py
-
-if __name__ == "__main__":
- # custom parser to specify config files, train, test and debug mode,
- # postfix, resume.
- # `--key value` arguments are interpreted as arguments to the trainer.
- # `nested.key=value` arguments are interpreted as config parameters.
- # configs are merged from left-to-right followed by command line parameters.
-
- # model:
- # learning_rate: float
- # target: path to lightning module
- # params:
- # key: value
- # data:
- # target: celle_main.DataModuleFromConfig
- # params:
- # batch_size: int
- # wrap: bool
- # train:
- # target: path to train dataset
- # params:
- # key: value
- # validation:
- # target: path to validation dataset
- # params:
- # key: value
- # test:
- # target: path to test dataset
- # params:
- # key: value
- # lightning: (optional, has sane defaults and can be specified on cmdline)
- # trainer:
- # additional arguments to trainer
- # logger:
- # logger to instantiate
- # modelcheckpoint:
- # modelcheckpoint to instantiate
- # callbacks:
- # callback1:
- # target: importpath
- # params:
- # key: value
-
- now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
-
- # add cwd for convenience and to make classes in this file available when
- # running as `python celle_main.py`
- # (in particular `celle_main.DataModuleFromConfig`)
- sys.path.append(os.getcwd())
-
- parser = get_parser()
- parser = Trainer.add_argparse_args(parser)
-
- opt, unknown = parser.parse_known_args()
- if opt.name and opt.resume:
- raise ValueError(
- "-n/--name and -r/--resume cannot be specified both."
- "If you want to resume training in a new log folder, "
- "use -n/--name in combination with --resume_from_checkpoint"
- )
- if opt.resume:
- if not os.path.exists(opt.resume):
- raise ValueError("Cannot find {}".format(opt.resume))
- if os.path.isfile(opt.resume):
- paths = opt.resume.split("/")
- idx = len(paths) - paths[::-1].index("logs") + 1
- logdir = "/".join(paths[:idx])
- ckpt = opt.resume
- else:
- assert os.path.isdir(opt.resume), opt.resume
- logdir = opt.resume.rstrip("/")
- ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
-
- opt.resume_from_checkpoint = ckpt
- base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml")))
- opt.base = base_configs + opt.base
- _tmp = logdir.split("/")
- nowname = _tmp[_tmp.index("logs") + 1]
- else:
- if opt.name:
- name = "_" + opt.name
- elif opt.base:
- cfg_fname = os.path.split(opt.base[0])[-1]
- cfg_name = os.path.splitext(cfg_fname)[0]
- name = "_" + cfg_name
- else:
- name = ""
- nowname = now + name + opt.postfix
- logdir = os.path.join("logs", nowname)
-
- ckptdir = os.path.join(logdir, "checkpoints")
- cfgdir = os.path.join(logdir, "configs")
- seed_everything(opt.seed)
-
- try:
- # init and save configs
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- config = OmegaConf.merge(*configs, cli)
- lightning_config = config.pop("lightning", OmegaConf.create())
- # merge trainer cli with config
- trainer_config = lightning_config.get("trainer", OmegaConf.create())
- # default to ddp
- # trainer_config["distributed_backend"] = "ddp"
- for k in nondefault_trainer_args(opt):
- trainer_config[k] = getattr(opt, k)
- if not "gpus" in trainer_config:
- del trainer_config["distributed_backend"]
- cpu = True
- else:
- gpuinfo = trainer_config["gpus"]
- print(f"Running on GPUs {gpuinfo}")
- cpu = False
- trainer_opt = argparse.Namespace(**trainer_config)
- lightning_config.trainer = trainer_config
-
- # model
- # model = instantiate_from_config(config.model)
- model = instantiate_from_config(config.model)
- # trainer and callbacks
- trainer_kwargs = dict()
-
- # default logger configs
- # NOTE wandb < 0.10.0 interferes with shutdown
- # wandb >= 0.10.0 seems to fix it but still interferes with pudb
- # debugging (wrongly sized pudb ui)
- # thus prefer testtube for now
- default_logger_cfgs = {
- "wandb": {
- "target": "pytorch_lightning.loggers.WandbLogger",
- "params": {
- "name": nowname,
- "save_dir": logdir,
- "offline": opt.debug,
- "id": nowname,
- },
- },
- "testtube": {
- # "target": "pytorch_lightning.loggers.TestTubeLogger",
- "target": "pytorch_lightning.loggers.TensorBoardLogger",
- "params": {
- "name": "testtube",
- "save_dir": logdir,
- },
- },
- }
- default_logger_cfg = default_logger_cfgs["testtube"]
- # logger_cfg = lightning_config.logger or OmegaConf.create()
- try:
- logger_cfg = lightning_config.logger
- except:
- logger_cfg = OmegaConf.create()
- logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg)
- trainer_kwargs["logger"] = instantiate_from_config(logger_cfg)
-
- # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to
- # specify which metric is used to determine best models
- default_modelckpt_cfg = {
- "checkpoint_callback": {
- "target": "pytorch_lightning.callbacks.ModelCheckpoint",
- "params": {
- "dirpath": ckptdir,
- "filename": "{epoch:06}",
- "verbose": True,
- "save_last": True,
- },
- }
- }
- if hasattr(model, "monitor"):
- print(f"Monitoring {model.monitor} as checkpoint metric.")
- default_modelckpt_cfg["checkpoint_callback"]["params"][
- "monitor"
- ] = model.monitor
- default_modelckpt_cfg["checkpoint_callback"]["params"]["save_top_k"] = 3
- try:
- modelckpt_cfg = lightning_config.modelcheckpoint
- except:
- modelckpt_cfg = OmegaConf.create()
- modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg)
- # trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg)
-
- # add callback which sets up log directory
- default_callbacks_cfg = {
- "setup_callback": {
- "target": "celle_taming_main.SetupCallback",
- "params": {
- "resume": opt.resume,
- "now": now,
- "logdir": logdir,
- "ckptdir": ckptdir,
- "cfgdir": cfgdir,
- "config": config,
- "lightning_config": lightning_config,
- },
- },
- # "image_logger": {
- # "target": "celle_taming_main.ImageLogger",
- # "params": {
- # "batch_frequency": 0,
- # "max_images": 0,
- # "clamp": False,
- # "increase_log_steps": False,
- # },
- # },
- # "learning_rate_logger": {
- # "target": "celle_taming_main.LearningRateMonitor",
- # "params": {
- # "logging_interval": "step",
- # # "log_momentum": True
- # },
- # },
- }
- try:
- callbacks_cfg = lightning_config.callbacks
- except:
- callbacks_cfg = OmegaConf.create()
- callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)
- callbacks_cfg = OmegaConf.merge(modelckpt_cfg, callbacks_cfg)
- trainer_kwargs["callbacks"] = [
- instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg
- ]
-
- trainer = Trainer.from_argparse_args(
- trainer_opt, **trainer_kwargs, profiler="simple"
- )
-
- # data
- data = instantiate_from_config(config.data)
- # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
- # calling these ourselves should not be necessary but it is.
- # lightning still takes care of proper multiprocessing though
- data.prepare_data()
- data.setup()
-
- # configure learning rate
- bs, lr = config.data.params.batch_size, config.model.learning_rate
-
- if not cpu:
- ngpu = len(lightning_config.trainer.gpus.strip(",").split(","))
- else:
- ngpu = 1
- try:
- accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches
- except:
- accumulate_grad_batches = 1
- print(f"accumulate_grad_batches = {accumulate_grad_batches}")
- lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches
- model.learning_rate = accumulate_grad_batches * ngpu * bs * lr
-
- print(
- "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (lr)".format(
- model.learning_rate, accumulate_grad_batches, ngpu, bs, lr
- )
- )
-
- # allow checkpointing via USR1
- def melk(*args, **kwargs):
- # run all checkpoint hooks
- if trainer.global_rank == 0:
- print("Summoning checkpoint.")
- ckpt_path = os.path.join(ckptdir, "last.ckpt")
- trainer.save_checkpoint(ckpt_path)
-
- def divein(*args, **kwargs):
- if trainer.global_rank == 0:
- import pudb
-
- pudb.set_trace()
-
- import signal
-
- signal.signal(signal.SIGUSR1, melk)
- signal.signal(signal.SIGUSR2, divein)
-
- # run
- if opt.train:
- try:
- # model = torch.compile(model, mode="reduce_overhead")
- # note: compiling the return value of trainer.fit() is a no-op; compile the model above instead
- trainer.fit(model, data)
- except Exception:
- melk()
- raise
- if not opt.no_test and not trainer.interrupted:
- trainer.test(model, data)
- except Exception:
- if opt.debug and trainer.global_rank == 0:
- try:
- import pudb as debugger
- except ImportError:
- import pdb as debugger
- debugger.post_mortem()
- raise
- finally:
- # move newly created debug project to debug_runs
- if opt.debug and not opt.resume and trainer.global_rank == 0:
- dst, name = os.path.split(logdir)
- dst = os.path.join(dst, "debug_runs", name)
- os.makedirs(os.path.split(dst)[0], exist_ok=True)
- os.rename(logdir, dst)
diff --git a/spaces/HugsVision/Skin-Cancer/README.md b/spaces/HugsVision/Skin-Cancer/README.md
deleted file mode 100644
index 2ae1d9489daaf13f2a97d5676dcfde51b04c7ab6..0000000000000000000000000000000000000000
--- a/spaces/HugsVision/Skin-Cancer/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Hippocrate - Skin-Cancer Detection
-emoji: ⚕️
-colorFrom: red
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: true
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/README.md b/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/README.md
deleted file mode 100644
index 9096a165471b6f7638b6287d2264f209c6c57b66..0000000000000000000000000000000000000000
--- a/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGPT Ko Translation Prompt
-emoji: 📊
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
deleted file mode 100644
index e18fb62df52ab85d7802615d8619b0fd94a08f8c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <iostream>
-#include "fstext/fstext-lib.h" // @manual
-#include "util/common-utils.h" // @manual
-
-/*
- * This program is to modify a FST without self-loop by:
- * for each incoming arc with non-eps input symbol, add a self-loop arc
- * with that non-eps symbol as input and eps as output.
- *
- * This is to make sure the resultant FST can do deduplication for repeated
- * symbols, which is very common in acoustic model
- *
- */
-namespace {
-int32 AddSelfLoopsSimple(fst::StdVectorFst* fst) {
- typedef fst::MutableArcIterator<fst::StdVectorFst> IterType;
-
- int32 num_states_before = fst->NumStates();
- fst::MakePrecedingInputSymbolsSame(false, fst);
- int32 num_states_after = fst->NumStates();
- KALDI_LOG << "There are " << num_states_before
- << " states in the original FST; "
- << " after MakePrecedingInputSymbolsSame, there are "
- << num_states_after << " states " << std::endl;
-
- auto weight_one = fst::StdArc::Weight::One();
-
- int32 num_arc_added = 0;
-
- fst::StdArc self_loop_arc;
- self_loop_arc.weight = weight_one;
-
- int32 num_states = fst->NumStates();
- std::vector<std::set<int32>> incoming_non_eps_label_per_state(num_states);
-
- for (int32 state = 0; state < num_states; state++) {
- for (IterType aiter(fst, state); !aiter.Done(); aiter.Next()) {
- fst::StdArc arc(aiter.Value());
- if (arc.ilabel != 0) {
- incoming_non_eps_label_per_state[arc.nextstate].insert(arc.ilabel);
- }
- }
- }
-
- for (int32 state = 0; state < num_states; state++) {
- if (!incoming_non_eps_label_per_state[state].empty()) {
- auto& ilabel_set = incoming_non_eps_label_per_state[state];
- for (auto it = ilabel_set.begin(); it != ilabel_set.end(); it++) {
- self_loop_arc.ilabel = *it;
- self_loop_arc.olabel = 0;
- self_loop_arc.nextstate = state;
- fst->AddArc(state, self_loop_arc);
- num_arc_added++;
- }
- }
- }
- return num_arc_added;
-}
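-
-// Illustrative example (not from the original source): if state 5 has incoming
-// arcs with non-eps input labels {3, 7}, the loop above adds two self-loops at
-// state 5, namely 3:eps and 7:eps with weight One(), so repeated acoustic-model
-// symbols can be absorbed without emitting extra output.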
-
-void print_usage() {
- std::cout << "add-self-loop-simple usage:\n"
- "\tadd-self-loop-simple \n";
-}
-} // namespace
-
-int main(int argc, char** argv) {
- if (argc != 3) {
- print_usage();
- exit(1);
- }
-
- auto input = argv[1];
- auto output = argv[2];
-
- auto fst = fst::ReadFstKaldi(input);
- auto num_states = fst->NumStates();
- KALDI_LOG << "Loading FST from " << input << " with " << num_states
- << " states." << std::endl;
-
- int32 num_arc_added = AddSelfLoopsSimple(fst);
- KALDI_LOG << "Adding " << num_arc_added << " self-loop arcs " << std::endl;
-
- fst::WriteFstKaldi(*fst, std::string(output));
- KALDI_LOG << "Writing FST to " << output << std::endl;
-
- delete fst;
-}
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
deleted file mode 100644
index 17387b2f85c0ee76db1a003091331b46de8d8def..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import hashlib
-import logging
-import math
-
-import numpy as np
-from fairseq.data import SampledMultiDataset
-
-from .sampled_multi_dataset import CollateFormat, default_virtual_size_func
-
-
-logger = logging.getLogger(__name__)
-
-
-class SampledMultiEpochDataset(SampledMultiDataset):
- """Samples from multiple sub-datasets according to sampling ratios
- using virtual epoch sizes to speed up dataloading.
- Args:
- datasets (
- List[~torch.utils.data.Dataset]
- or OrderedDict[str, ~torch.utils.data.Dataset]
- ): datasets
- sampling_ratios (List[float]): list of probability of each dataset to be sampled
- (default: None, which corresponds to concatenating all datasets together).
- seed (int): RNG seed to use (default: 2).
- epoch (int): starting epoch number (default: 1).
- eval_key (str, optional): a key used at evaluation time that causes
- this instance to pass-through batches from *datasets[eval_key]*.
- collate_format (CollateFormat): collater output format, either CollateFormat.ordered_dict or
- CollateFormat.single (default: CollateFormat.single) where CollateFormat.single configures
- the collater to output batches of data mixed from all sub-datasets,
- and CollateFormat.ordered_dict configures the collater to output a dictionary of batches indexed by keys
- of sub-datasets.
- Note that not all sub-datasets will present in a single batch in both formats.
- virtual_size (int, or callable): the expected virtual size of the dataset (default: default_virtual_size_func).
- split (str): the split of the data, e.g. 'train', 'valid' or 'test'.
- virtual_epoch_size (int): virtual epoch size, the dataset will go through the data by
- this virtual epoch size one by one to speed up data loading, e.g. indexing and filtering
- can be performed whenever a virtual epoch is loaded without waiting for the whole dataset to be loaded.
- shared_collater (bool): whether or not all sub-datasets have the same collater.
- shard_epoch (int): the real epoch number for shard selection.
- shuffle (bool): whether or not to shuffle data (default: True).
- """
-
- def __init__(
- self,
- datasets,
- sampling_ratios=None,
- seed=2,
- epoch=1,
- eval_key=None,
- collate_format=CollateFormat.single,
- virtual_size=default_virtual_size_func,
- split="",
- virtual_epoch_size=None,
- shared_collater=False,
- shard_epoch=1,
- shuffle=True,
- ):
- self.virtual_epoch_size = virtual_epoch_size
- self._current_epoch_start_index = None
- self._random_global_indices = None
- self.shard_epoch = shard_epoch if shard_epoch is not None else 1
- self.load_next_shard = None
- self._epoch_sizes = None
- super().__init__(
- datasets=datasets,
- sampling_ratios=sampling_ratios,
- seed=seed,
- epoch=epoch,
- eval_key=eval_key,
- collate_format=collate_format,
- virtual_size=virtual_size,
- split=split,
- shared_collater=shared_collater,
- shuffle=shuffle,
- )
-
- def _setup(self, epoch):
- self.virtual_epoch_size = (
- self.virtual_epoch_size
- if self.virtual_epoch_size is not None
- else self.virtual_size
- )
- if self.virtual_epoch_size > self.virtual_size:
- logger.warning(
- f"virtual epoch size {self.virtual_epoch_size} "
- f"is greater than virtual dataset size {self.virtual_size}"
- )
- self.virtual_epoch_size = self.virtual_size
- self.num_virtual_epochs = math.ceil(self.virtual_size / self.virtual_epoch_size)
- self._current_epoch_start_index = self._get_epoch_start_index(epoch)
- logger.info(
- f"virtual epoch size {self.virtual_epoch_size}; virtual dataset size {self.virtual_size}"
- )
-
- def _map_epoch_index_to_global(self, index):
- index = self._current_epoch_start_index + index
- # add randomness
- return self._random_global_indices[index]
-
- @property
- def sizes(self):
- if self._epoch_sizes is not None:
- return self._epoch_sizes
- _sizes = super().sizes
- indices = self._random_global_indices[
- self._current_epoch_start_index : self._current_epoch_start_index
- + len(self)
- ]
- self._epoch_sizes = _sizes[indices]
- # del super()._sizes to save memory
- del self._sizes
- self._sizes = None
- return self._epoch_sizes
-
- def _get_dataset_and_index(self, index):
- i = self._map_epoch_index_to_global(index)
- return super()._get_dataset_and_index(i)
-
- def __len__(self):
- return (
- self.virtual_epoch_size
- if self._current_epoch_start_index + self.virtual_epoch_size
- < self.virtual_size
- else self.virtual_size - self._current_epoch_start_index
- )
-
- def set_epoch(self, epoch):
- if self._current_epoch_start_index is None:
- # initializing epoch indices of a virtual dataset
- self._setup(epoch)
- self._next_virtual_epoch(epoch)
- else:
- # working on already initialized epoch indices
- if epoch == self._cur_epoch:
- # re-enter so return
- return
- self._next_virtual_epoch(epoch)
-
- def _get_epoch_start_index(self, epoch):
- assert epoch >= 1 # fairseq is using 1-based epoch everywhere
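- # e.g. with num_virtual_epochs=4 and virtual_epoch_size=1000, epoch=6 maps to
- # start index ((6 - 1) % 4) * 1000 = 1000, i.e. the second virtual epoch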
- return ((epoch - 1) % self.num_virtual_epochs) * self.virtual_epoch_size
-
- def _next_global_indices(self, epoch):
- rng = np.random.RandomState(
- [
- int(
- hashlib.sha1(
- str(self.__class__.__name__).encode("utf-8")
- ).hexdigest(),
- 16,
- )
- % (2 ** 32),
- self.seed % (2 ** 32), # global seed
- epoch, # epoch index,
- ]
- )
- del self._random_global_indices
- self._random_global_indices = rng.choice(
- self.virtual_size, self.virtual_size, replace=False
- )
- if self.load_next_shard is None:
- self.load_next_shard = False
- else:
- # increase shard epoch for next loading
- self.shard_epoch += 1
- self.load_next_shard = True
- logger.info(
- "to load next epoch/shard in next load_dataset: "
- f"epoch={epoch}/shard_epoch={self.shard_epoch}"
- )
-
- def _next_virtual_epoch(self, epoch):
- index = self._get_epoch_start_index(epoch)
- if index == 0 or self._random_global_indices is None:
- # need to start from the beginning,
- # so call super().set_epoch(epoch) to establish the global virtual indices
- logger.info(
- "establishing a new set of global virtual indices for "
- f"epoch={epoch}/shard_epoch={self.shard_epoch}"
- )
- super().set_epoch(epoch)
- self._next_global_indices(epoch)
- else:
- self._cur_epoch = epoch
-
- # reset cache sizes and ordered_indices for the epoch after moving to a new epoch
- self._clean_if_not_none(
- [
- self._epoch_sizes,
- ]
- )
- self._epoch_sizes = None
- self._current_epoch_start_index = index
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/distributed/module_proxy_wrapper.py b/spaces/ICML2022/OFA/fairseq/fairseq/distributed/module_proxy_wrapper.py
deleted file mode 100644
index fc2c6f8c718f2ac8ece308e50f7ba74a05474f4a..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/distributed/module_proxy_wrapper.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from torch import nn
-
-
-class ModuleProxyWrapper(nn.Module):
- """
- Wrap a DistributedDataParallel module and forward requests for missing
- attributes to the module wrapped by DDP (the twice-wrapped module).
- Also forward calls to :func:`state_dict` and :func:`load_state_dict`.
-
- Usage::
-
- module.xyz = "hello world"
- wrapped_module = DistributedDataParallel(module, **ddp_args)
- wrapped_module = ModuleProxyWrapper(wrapped_module)
- assert wrapped_module.xyz == "hello world"
- assert wrapped_module.state_dict().keys() == module.state_dict().keys()
-
- Args:
- module (nn.Module): module to wrap
- """
-
- def __init__(self, module: nn.Module):
- super().__init__()
- assert hasattr(module, "module"), \
- "ModuleProxyWrapper expects input to wrap another module"
- self.module = module
-
- def __getattr__(self, name):
- """Forward missing attributes to twice-wrapped module."""
- try:
- # defer to nn.Module's logic
- return super().__getattr__(name)
- except AttributeError:
- try:
- # forward to the once-wrapped module
- return getattr(self.module, name)
- except AttributeError:
- # forward to the twice-wrapped module
- return getattr(self.module.module, name)
-
- def state_dict(self, *args, **kwargs):
- """Forward to the twice-wrapped module."""
- return self.module.module.state_dict(*args, **kwargs)
-
- def load_state_dict(self, *args, **kwargs):
- """Forward to the twice-wrapped module."""
- return self.module.module.load_state_dict(*args, **kwargs)
-
- def forward(self, *args, **kwargs):
- return self.module(*args, **kwargs)
diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_resample.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_resample.py
deleted file mode 100644
index dfde81ee19204a7993fd1c3cd21055a51418231b..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/conv2d_resample.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# python3.7
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""2D convolution with optional up/downsampling.
-
-Please refer to https://github.com/NVlabs/stylegan3
-"""
-
-# pylint: disable=line-too-long
-
-import torch
-
-from . import misc
-from . import conv2d_gradfix
-from . import upfirdn2d
-from .upfirdn2d import _parse_padding
-from .upfirdn2d import _get_filter_size
-
-#----------------------------------------------------------------------------
-
-def _get_weight_shape(w):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- shape = [int(sz) for sz in w.shape]
- misc.assert_shape(w, shape)
- return shape
-
-#----------------------------------------------------------------------------
-
-def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True, impl='cuda'):
- """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations.
- """
- _out_channels, _in_channels_per_group, kh, kw = _get_weight_shape(w)
-
- # Flip weight if requested.
- # Note: conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False).
- if not flip_weight and (kw > 1 or kh > 1):
- w = w.flip([2, 3])
-
- # Execute using conv2d_gradfix.
- op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
- return op(x, w, stride=stride, padding=padding, groups=groups, impl=impl)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False, impl='cuda'):
- r"""2D convolution with optional up/downsampling.
-
- Padding is performed only once at the beginning, not between the operations.
-
- Args:
- x: Input tensor of shape
- `[batch_size, in_channels, in_height, in_width]`.
- w: Weight tensor of shape
- `[out_channels, in_channels//groups, kernel_height, kernel_width]`.
- f: Low-pass filter for up/downsampling. Must be prepared beforehand by
- calling upfirdn2d.setup_filter(). None = identity (default).
- up: Integer upsampling factor (default: 1).
- down: Integer downsampling factor (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- groups: Split input channels into N groups (default: 1).
- flip_weight: False = convolution, True = correlation (default: True).
- flip_filter: False = convolution, True = correlation (default: False).
- impl: Implementation mode, 'cuda' for CUDA implementation, and 'ref' for
- native PyTorch implementation (default: 'cuda').
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and (x.ndim == 4)
- assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype)
- assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32)
- assert isinstance(up, int) and (up >= 1)
- assert isinstance(down, int) and (down >= 1)
- assert isinstance(groups, int) and (groups >= 1)
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
- fw, fh = _get_filter_size(f)
- px0, px1, py0, py1 = _parse_padding(padding)
-
- # Adjust padding to account for up/downsampling.
- if up > 1:
- px0 += (fw + up - 1) // 2
- px1 += (fw - up) // 2
- py0 += (fh + up - 1) // 2
- py1 += (fh - up) // 2
- if down > 1:
- px0 += (fw - down + 1) // 2
- px1 += (fw - down) // 2
- py0 += (fh - down + 1) // 2
- py1 += (fh - down) // 2
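- # e.g. for a 4-tap filter (fw = fh = 4) and up=2, the four paddings grow by
- # (4+2-1)//2 = 2, (4-2)//2 = 1, 2, 1 respectively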
-
- # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
- if kw == 1 and kh == 1 and (down > 1 and up == 1):
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter, impl=impl)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight, impl=impl)
- return x
-
- # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
- if kw == 1 and kh == 1 and (up > 1 and down == 1):
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight, impl=impl)
- x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter, impl=impl)
- return x
-
- # Fast path: downsampling only => use strided convolution.
- if down > 1 and up == 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter, impl=impl)
- x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight, impl=impl)
- return x
-
- # Fast path: upsampling with optional downsampling => use transpose strided convolution.
- if up > 1:
- if groups == 1:
- w = w.transpose(0, 1)
- else:
- w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw)
- w = w.transpose(1, 2)
- w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw)
- px0 -= kw - 1
- px1 -= kw - up
- py0 -= kh - 1
- py1 -= kh - up
- pxt = max(min(-px0, -px1), 0)
- pyt = max(min(-py0, -py1), 0)
- x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight), impl=impl)
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter, impl=impl)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter, impl=impl)
- return x
-
- # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
- if up == 1 and down == 1:
- if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
- return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight, impl=impl)
-
- # Fallback: Generic reference implementation.
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter, impl=impl)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight, impl=impl)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter, impl=impl)
- return x
-
-#----------------------------------------------------------------------------
-
-# pylint: enable=line-too-long
diff --git a/spaces/Ibtehaj10/cheating-detection/fps_example.py b/spaces/Ibtehaj10/cheating-detection/fps_example.py
deleted file mode 100644
index a22f0d68930a1d219485f91331dc7f6db1b3aa23..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/fps_example.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import cv2
-import datetime
-import imutils
-
-
-def main():
- cap = cv2.VideoCapture('test_video.mp4')
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
-
- while True:
- ret, frame = cap.read()
- if not ret: # stop when the stream ends or a frame cannot be read
- break
- frame = imutils.resize(frame, width=800)
- total_frames = total_frames + 1
-
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- cv2.imshow("Application", frame)
- key = cv2.waitKey(1)
- if key == ord('q'):
- break
-
- cap.release() # release the capture before closing windows
- cv2.destroyAllWindows()
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_recurrent_gan_model.py b/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_recurrent_gan_model.py
deleted file mode 100644
index 74cf81145c50ffafb220d22b51e56746dee5ba41..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/models/video_recurrent_gan_model.py
+++ /dev/null
@@ -1,180 +0,0 @@
-import torch
-from collections import OrderedDict
-
-from basicsr.archs import build_network
-from basicsr.losses import build_loss
-from basicsr.utils import get_root_logger
-from basicsr.utils.registry import MODEL_REGISTRY
-from .video_recurrent_model import VideoRecurrentModel
-
-
-@MODEL_REGISTRY.register()
-class VideoRecurrentGANModel(VideoRecurrentModel):
-
- def init_training_settings(self):
- train_opt = self.opt['train']
-
- self.ema_decay = train_opt.get('ema_decay', 0)
- if self.ema_decay > 0:
- logger = get_root_logger()
- logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}')
- # build network net_g with Exponential Moving Average (EMA)
- # net_g_ema only used for testing on one GPU and saving.
- # There is no need to wrap with DistributedDataParallel
- self.net_g_ema = build_network(self.opt['network_g']).to(self.device)
- # load pretrained model
- load_path = self.opt['path'].get('pretrain_network_g', None)
- if load_path is not None:
- self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')
- else:
- self.model_ema(0) # copy net_g weight
- self.net_g_ema.eval()
-
- # define network net_d
- self.net_d = build_network(self.opt['network_d'])
- self.net_d = self.model_to_device(self.net_d)
- self.print_network(self.net_d)
-
- # load pretrained models
- load_path = self.opt['path'].get('pretrain_network_d', None)
- if load_path is not None:
- param_key = self.opt['path'].get('param_key_d', 'params')
- self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True), param_key)
-
- self.net_g.train()
- self.net_d.train()
-
- # define losses
- if train_opt.get('pixel_opt'):
- self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)
- else:
- self.cri_pix = None
-
- if train_opt.get('perceptual_opt'):
- self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)
- else:
- self.cri_perceptual = None
-
- if train_opt.get('gan_opt'):
- self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)
-
- self.net_d_iters = train_opt.get('net_d_iters', 1)
- self.net_d_init_iters = train_opt.get('net_d_init_iters', 0)
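- # generator losses are only accumulated when current_iter % net_d_iters == 0
- # and current_iter > net_d_init_iters (see optimize_parameters), i.e. the
- # generator side waits for the discriminator warm-up and can update less often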
-
- # set up optimizers and schedulers
- self.setup_optimizers()
- self.setup_schedulers()
-
- def setup_optimizers(self):
- train_opt = self.opt['train']
- if train_opt['fix_flow']:
- normal_params = []
- flow_params = []
- for name, param in self.net_g.named_parameters():
- if 'spynet' in name: # The fix_flow now only works for spynet.
- flow_params.append(param)
- else:
- normal_params.append(param)
-
- optim_params = [
- { # add flow params first
- 'params': flow_params,
- 'lr': train_opt['lr_flow']
- },
- {
- 'params': normal_params,
- 'lr': train_opt['optim_g']['lr']
- },
- ]
- else:
- optim_params = self.net_g.parameters()
-
- # optimizer g
- optim_type = train_opt['optim_g'].pop('type')
- self.optimizer_g = self.get_optimizer(optim_type, optim_params, **train_opt['optim_g'])
- self.optimizers.append(self.optimizer_g)
- # optimizer d
- optim_type = train_opt['optim_d'].pop('type')
- self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d'])
- self.optimizers.append(self.optimizer_d)
-
- def optimize_parameters(self, current_iter):
- logger = get_root_logger()
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- if self.fix_flow_iter:
- if current_iter == 1:
- logger.info(f'Fix flow network and feature extractor for {self.fix_flow_iter} iters.')
- for name, param in self.net_g.named_parameters():
- if 'spynet' in name or 'edvr' in name:
- param.requires_grad_(False)
- elif current_iter == self.fix_flow_iter:
- logger.warning('Train all the parameters.')
- self.net_g.requires_grad_(True)
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- _, _, c, h, w = self.output.size()
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, self.gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output.view(-1, c, h, w), self.gt.view(-1, c, h, w))
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output.view(-1, c, h, w))
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- # reshape to (b*n, c, h, w)
- real_d_pred = self.net_d(self.gt.view(-1, c, h, w))
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- # reshape to (b*n, c, h, w)
- fake_d_pred = self.net_d(self.output.view(-1, c, h, w).detach())
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- def save(self, epoch, current_iter):
- if self.ema_decay > 0:
- self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])
- else:
- self.save_network(self.net_g, 'net_g', current_iter)
- self.save_network(self.net_d, 'net_d', current_iter)
- self.save_training_state(epoch, current_iter)
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/cluster/train_cluster.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/cluster/train_cluster.py
deleted file mode 100644
index 4ac025d400414226e66849407f477ae786c3d5d3..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/cluster/train_cluster.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import os
-from glob import glob
-from pathlib import Path
-import torch
-import logging
-import argparse
-import numpy as np
-from sklearn.cluster import KMeans, MiniBatchKMeans
-import tqdm
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-import time
-import random
-
-def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False):
-
- logger.info(f"Loading features from {in_dir}")
- features = []
- nums = 0
- for path in tqdm.tqdm(in_dir.glob("*.soft.pt")):
- features.append(torch.load(path).squeeze(0).numpy().T)
- nums += 1 # count the loaded feature files
- # print(features[-1].shape)
- features = np.concatenate(features, axis=0)
- print(nums, "files,", features.nbytes / 1024**2, "MB , shape:", features.shape, features.dtype)
- features = features.astype(np.float32)
- logger.info(f"Clustering features of shape: {features.shape}")
- t = time.time()
- if use_minibatch:
- kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features)
- else:
- kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features)
- print(time.time()-t, "s")
-
- x = {
- "n_features_in_": kmeans.n_features_in_,
- "_n_threads": kmeans._n_threads,
- "cluster_centers_": kmeans.cluster_centers_,
- }
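- # only the attributes needed to reuse the centroids are stored; a consumer can
- # assign a feature vector to a cluster with a nearest-centroid lookup, e.g.
- # np.argmin(np.linalg.norm(x["cluster_centers_"] - feat, axis=1)) (illustrative)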
- print("end")
-
- return x
-
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser()
- parser.add_argument('--dataset', type=Path, default="./dataset/44k",
- help='path of training data directory')
- parser.add_argument('--output', type=Path, default="logs/44k",
- help='path of model output directory')
-
- args = parser.parse_args()
-
- checkpoint_dir = args.output
- dataset = args.dataset
- n_clusters = 10000
-
- ckpt = {}
- for spk in os.listdir(dataset):
- if os.path.isdir(dataset/spk):
- print(f"train kmeans for {spk}...")
- in_dir = dataset/spk
- x = train_cluster(in_dir, n_clusters, verbose=False)
- ckpt[spk] = x
-
- checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt"
- checkpoint_path.parent.mkdir(exist_ok=True, parents=True)
- torch.save(
- ckpt,
- checkpoint_path,
- )
-
-
- # import cluster
- # for spk in tqdm.tqdm(os.listdir("dataset")):
- # if os.path.isdir(f"dataset/{spk}"):
- # print(f"start kmeans inference for {spk}...")
- # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)):
- # mel_path = feature_path.replace(".discrete.npy",".mel.npy")
- # mel_spectrogram = np.load(mel_path)
- # feature_len = mel_spectrogram.shape[-1]
- # c = np.load(feature_path)
- # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy()
- # feature = c.T
- # feature_class = cluster.get_cluster_result(feature, spk)
- # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class)
-
-
diff --git a/spaces/Intoval/privateChatGPT/modules/pdf_func.py b/spaces/Intoval/privateChatGPT/modules/pdf_func.py
deleted file mode 100644
index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000
--- a/spaces/Intoval/privateChatGPT/modules/pdf_func.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from types import SimpleNamespace
-import pdfplumber
-import logging
-from llama_index import Document
-
-def prepare_table_config(crop_page):
- """Prepare table查找边界, 要求page为原始page
-
- From https://github.com/jsvine/pdfplumber/issues/242
- """
- page = crop_page.root_page # root/parent
- cs = page.curves + page.edges
- def curves_to_edges():
- """See https://github.com/jsvine/pdfplumber/issues/127"""
- edges = []
- for c in cs:
- edges += pdfplumber.utils.rect_to_edges(c)
- return edges
- edges = curves_to_edges()
- return {
- "vertical_strategy": "explicit",
- "horizontal_strategy": "explicit",
- "explicit_vertical_lines": edges,
- "explicit_horizontal_lines": edges,
- "intersection_y_tolerance": 10,
- }
-
-def get_text_outside_table(crop_page):
- ts = prepare_table_config(crop_page)
- if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0:
- return crop_page
-
- ### Get the bounding boxes of the tables on the page.
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)]
- def not_within_bboxes(obj):
- """Check if the object is in any of the table's bbox."""
- def obj_in_bbox(_bbox):
- """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404"""
- v_mid = (obj["top"] + obj["bottom"]) / 2
- h_mid = (obj["x0"] + obj["x1"]) / 2
- x0, top, x1, bottom = _bbox
- return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom)
- return not any(obj_in_bbox(__bbox) for __bbox in bboxes)
-
- return crop_page.filter(not_within_bboxes)
-# Note: express formulas in LaTeX; wrap inline formulas in $ and display formulas in $$.
-
-extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"])
-# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size'])
-
-def get_title_with_cropped_page(first_page):
-    title = []  # collected title words
-    x0, top, x1, bottom = first_page.bbox  # page bounding box
-    title_bottom = top  # fallback in case no large-font title is found
-
-    for word in extract_words(first_page):
-        word = SimpleNamespace(**word)
-
-        if word.size >= 14:  # large font: part of the title
-            title.append(word.text)
-            title_bottom = word.bottom
-        elif word.text == "Abstract":  # locate the Abstract heading
-            top = word.top
-
-    user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0, title_bottom, x1, top)))]
-    # crop away the header area; within_bbox keeps fully contained objects, crop keeps partially contained ones
-    return title, user_info, first_page.within_bbox((x0, top, x1, bottom))
-
-def get_column_cropped_pages(pages, two_column=True):
- new_pages = []
- for page in pages:
- if two_column:
- left = page.within_bbox((0, 0, page.width/2, page.height),relative=True)
- right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True)
- new_pages.append(left)
- new_pages.append(right)
- else:
- new_pages.append(page)
-
- return new_pages
-
-def parse_pdf(filename, two_column = True):
- level = logging.getLogger().level
- if level == logging.getLevelName("DEBUG"):
- logging.getLogger().setLevel("INFO")
-
- with pdfplumber.open(filename) as pdf:
- title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0])
- new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column)
-
- chapters = []
- # tuple (chapter_name, [pageid] (start,stop), chapter_text)
- create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace(
- name=[],
- name_top=name_top,
- name_bottom=name_bottom,
- record_chapter_name = True,
-
- page_start=page_start,
- page_stop=None,
-
- text=[],
- )
- cur_chapter = None
-
-    # iterate over the PDF page by page
-    for idx, page in enumerate(new_pages):
-        page = get_text_outside_table(page)
-
-        # iterate over the words on the page
-        for word in extract_words(page):
-            word = SimpleNamespace(**word)
-
-            # words printed in a large (>= 11 pt) font are treated as the start of a new chapter
-            if word.size >= 11:  # a chapter name appears
-                if cur_chapter is None:
-                    cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-                elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != word.bottom and cur_chapter.name_top != word.top):
-                    # stop appending to the current chapter name
-                    cur_chapter.page_stop = page.page_number  # stop id
-                    chapters.append(cur_chapter)
-                    # reset the current chapter info
-                    cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-
-                # print(word.size, word.top, word.bottom, word.text)
-                cur_chapter.name.append(word.text)
-            else:
-                cur_chapter.record_chapter_name = False  # chapter name finished
-                cur_chapter.text.append(word.text)
-    else:
-        # handle the last chapter once the page loop finishes
-        cur_chapter.page_stop = page.page_number  # stop id
-        chapters.append(cur_chapter)
-
- for i in chapters:
- logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}")
- logging.debug(" ".join(i.text))
-
- title = " ".join(title)
- user_info = " ".join(user_info)
- text = f"Article Title: {title}, Information:{user_info}\n"
- for idx, chapter in enumerate(chapters):
- chapter.name = " ".join(chapter.name)
- text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n"
-
- logging.getLogger().setLevel(level)
- return Document(text=text, extra_info={"title": title})
-
-BASE_POINTS = """
-1. Who are the authors?
-2. What is the process of the proposed method?
-3. What is the performance of the proposed method? Please note down its performance metrics.
-4. What are the baseline models and their performances? Please note down these baseline methods.
-5. What dataset did this paper use?
-"""
-
-READING_PROMPT = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-While reading, you need to focus on these key points:{}
-"""
-
-READING_PROMT_V2 = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-While reading, you need to focus on these key points:{}
-
-You also need to generate a brief but informative title for this part.
-Your return format:
-- title: '...'
-- summary: '...'
-"""
-
-SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper."
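-
-# Illustrative sketch (an assumption, not from this module): the reading prompt is presumably
-# instantiated with the key points above before being sent to the model, e.g.
-#   system_prompt = READING_PROMPT.format(BASE_POINTS)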
-
-
-if __name__ == '__main__':
- # Test code
- z = parse_pdf("./build/test.pdf")
-    print(z.extra_info["title"])
-    print(z.text[:1000])  # parse_pdf returns a llama_index Document, not a dict
\ No newline at end of file
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddim/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddim/__init__.py
deleted file mode 100644
index 8fd31868a88ac0d9ec7118574f21a9d8a1d4069b..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddim/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .pipeline_ddim import DDIMPipeline
diff --git a/spaces/Jamkonams/AutoGPT/run_continuous.bat b/spaces/Jamkonams/AutoGPT/run_continuous.bat
deleted file mode 100644
index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/run_continuous.bat
+++ /dev/null
@@ -1,3 +0,0 @@
-@echo off
-set argument=--continuous
-call run.bat %argument%
diff --git a/spaces/Jeff2323/ai-comic-factory/src/app/engine/forbidden.ts b/spaces/Jeff2323/ai-comic-factory/src/app/engine/forbidden.ts
deleted file mode 100644
index 512b65e22b18f3bd39f6aec58198576b2ffc67f5..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/app/engine/forbidden.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-
-// the NSFW word list has to contain bad words, but keeping them in plain text might get
-// the code flagged or attract unwanted attention, so we hash them
-export const forbidden = [
- // TODO implement this
-]
\ No newline at end of file
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/ChuanhuChatbot.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/ChuanhuChatbot.py
deleted file mode 100644
index d498359af5c02037247406830672bcbbdbb7006b..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/ChuanhuChatbot.py
+++ /dev/null
@@ -1,559 +0,0 @@
-# -*- coding:utf-8 -*-
-import logging
-logging.basicConfig(
- level=logging.INFO,
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
-)
-
-import colorama
-import gradio as gr
-
-from modules import config
-from modules.config import *
-from modules.utils import *
-from modules.presets import *
-from modules.overwrites import *
-from modules.webui import *
-from modules.repo import *
-from modules.train_func import *
-from modules.models.models import get_model
-
-logging.getLogger("httpx").setLevel(logging.WARNING)
-
-gr.Chatbot._postprocess_chat_messages = postprocess_chat_messages
-gr.Chatbot.postprocess = postprocess
-
-# with open("web_assets/css/ChuanhuChat.css", "r", encoding="utf-8") as f:
-# ChuanhuChatCSS = f.read()
-
-def create_new_model():
- return get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0]
-
-with gr.Blocks(theme=small_and_beautiful_theme) as demo:
- user_name = gr.State("")
- promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2))
- user_question = gr.State("")
-    assert isinstance(my_api_key, str)
- user_api_key = gr.State(my_api_key)
- current_model = gr.State(create_new_model)
-
- topic = gr.State(i18n("未命名对话历史记录"))
-
- with gr.Row():
- gr.HTML(CHUANHU_TITLE, elem_id="app-title")
- status_display = gr.Markdown(get_geoip(), elem_id="status-display")
- with gr.Row(elem_id="float-display"):
- user_info = gr.Markdown(value="getting user info...", elem_id="user-info")
- config_info = gr.HTML(get_html("config_info.html").format(bot_avatar=config.bot_avatar, user_avatar=config.user_avatar), visible=False, elem_id="config-info")
- update_info = gr.HTML(get_html("update.html").format(
- current_version=repo_tag_html(),
- version_time=version_time(),
- cancel_btn=i18n("取消"),
- update_btn=i18n("更新"),
- seenew_btn=i18n("详情"),
- ok_btn=i18n("好"),
- ), visible=check_update)
-
- with gr.Row(equal_height=True):
- with gr.Column(scale=5):
- with gr.Row():
- chatbot = gr.Chatbot(label="Chuanhu Chat", elem_id="chuanhu-chatbot", latex_delimiters=latex_delimiters_set, height=700)
- with gr.Row():
- with gr.Column(min_width=225, scale=12):
- user_input = gr.Textbox(
- elem_id="user-input-tb",
- show_label=False, placeholder=i18n("在这里输入"),
- container=False
- )
- with gr.Column(min_width=42, scale=1):
- submitBtn = gr.Button(value="", variant="primary", elem_id="submit-btn")
- cancelBtn = gr.Button(value="", variant="secondary", visible=False, elem_id="cancel-btn")
- with gr.Row(elem_id="chatbot-buttons"):
- with gr.Column(min_width=120, scale=1):
- emptyBtn = gr.Button(
- i18n("🧹 新的对话"), elem_id="empty-btn"
- )
- with gr.Column(min_width=120, scale=1):
- retryBtn = gr.Button(i18n("🔄 重新生成"))
- with gr.Column(min_width=120, scale=1):
- delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话"))
- with gr.Column(min_width=120, scale=1):
- delLastBtn = gr.Button(i18n("🗑️ 删除最新对话"))
- with gr.Row(visible=False) as like_dislike_area:
- with gr.Column(min_width=20, scale=1):
- likeBtn = gr.Button(i18n("👍"))
- with gr.Column(min_width=20, scale=1):
- dislikeBtn = gr.Button(i18n("👎"))
-
- with gr.Column():
- with gr.Column(min_width=50, scale=1):
- with gr.Tab(label=i18n("模型")):
- keyTxt = gr.Textbox(
- show_label=True,
- placeholder=f"Your API-key...",
- value=hide_middle_chars(user_api_key.value),
- type="password",
- visible=not HIDE_MY_KEY,
- label="API-Key",
- )
- if multi_api_key:
- usageTxt = gr.Markdown(i18n("多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing)
- else:
- usageTxt = gr.Markdown(i18n("**发送消息** 或 **提交key** 以显示额度"), elem_id="usage-display", elem_classes="insert-block", visible=show_api_billing)
- model_select_dropdown = gr.Dropdown(
- label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True
- )
- lora_select_dropdown = gr.Dropdown(
- label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False
- )
- with gr.Row():
- single_turn_checkbox = gr.Checkbox(label=i18n("单轮对话"), value=False, elem_classes="switch-checkbox")
- use_websearch_checkbox = gr.Checkbox(label=i18n("使用在线搜索"), value=False, elem_classes="switch-checkbox")
- language_select_dropdown = gr.Dropdown(
- label=i18n("选择回复语言(针对搜索&索引功能)"),
- choices=REPLY_LANGUAGES,
- multiselect=False,
- value=REPLY_LANGUAGES[0],
- )
- index_files = gr.Files(label=i18n("上传"), type="file", elem_id="upload-index-file")
- two_column = gr.Checkbox(label=i18n("双栏pdf"), value=advance_docs["pdf"].get("two_column", False))
- summarize_btn = gr.Button(i18n("总结"))
-                    # TODO: formula OCR
- # formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False))
-
- with gr.Tab(label="Prompt"):
- systemPromptTxt = gr.Textbox(
- show_label=True,
- placeholder=i18n("在这里输入System Prompt..."),
- label="System prompt",
- value=INITIAL_SYSTEM_PROMPT,
- lines=10
- )
- with gr.Accordion(label=i18n("加载Prompt模板"), open=True):
- with gr.Column():
- with gr.Row():
- with gr.Column(scale=6):
- templateFileSelectDropdown = gr.Dropdown(
- label=i18n("选择Prompt模板集合文件"),
- choices=get_template_names(plain=True),
- multiselect=False,
- value=get_template_names(plain=True)[0],
- container=False,
- )
- with gr.Column(scale=1):
- templateRefreshBtn = gr.Button(i18n("🔄 刷新"))
- with gr.Row():
- with gr.Column():
- templateSelectDropdown = gr.Dropdown(
- label=i18n("从Prompt模板中加载"),
- choices=load_template(
- get_template_names(plain=True)[0], mode=1
- ),
- multiselect=False,
- container=False,
- )
-
- with gr.Tab(label=i18n("保存/加载")):
- with gr.Accordion(label=i18n("保存/加载对话历史记录"), open=True):
- with gr.Column():
- with gr.Row():
- with gr.Column(scale=6):
- historyFileSelectDropdown = gr.Dropdown(
- label=i18n("从列表中加载对话"),
- choices=get_history_names(plain=True),
- multiselect=False,
- container=False,
- )
- with gr.Row():
- with gr.Column(min_width=42, scale=1):
- historyRefreshBtn = gr.Button(i18n("🔄 刷新"))
- with gr.Column(min_width=42, scale=1):
- historyDeleteBtn = gr.Button(i18n("🗑️ 删除"))
- with gr.Row():
- with gr.Column(scale=6):
- saveFileName = gr.Textbox(
- show_label=True,
- placeholder=i18n("设置文件名: 默认为.json,可选为.md"),
- label=i18n("设置保存文件名"),
- value=i18n("对话历史记录"),
- elem_classes="no-container"
- # container=False,
- )
- with gr.Column(scale=1):
- saveHistoryBtn = gr.Button(i18n("💾 保存对话"))
- exportMarkdownBtn = gr.Button(i18n("📝 导出为Markdown"))
- gr.Markdown(i18n("默认保存于history文件夹"))
- with gr.Row():
- with gr.Column():
- downloadFile = gr.File(interactive=True)
-
- with gr.Tab(label=i18n("微调")):
- openai_train_status = gr.Markdown(label=i18n("训练状态"), value=i18n("在这里[查看使用介绍](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/%E4%BD%BF%E7%94%A8%E6%95%99%E7%A8%8B#%E5%BE%AE%E8%B0%83-gpt-35)"))
-
- with gr.Tab(label=i18n("准备数据集")):
- dataset_preview_json = gr.JSON(label=i18n("数据集预览"), readonly=True)
- dataset_selection = gr.Files(label = i18n("选择数据集"), file_types=[".xlsx", ".jsonl"], file_count="single")
- upload_to_openai_btn = gr.Button(i18n("上传到OpenAI"), variant="primary", interactive=False)
-
- with gr.Tab(label=i18n("训练")):
- openai_ft_file_id = gr.Textbox(label=i18n("文件ID"), value="", lines=1, placeholder=i18n("上传到 OpenAI 后自动填充"))
- openai_ft_suffix = gr.Textbox(label=i18n("模型名称后缀"), value="", lines=1, placeholder=i18n("可选,用于区分不同的模型"))
- openai_train_epoch_slider = gr.Slider(label=i18n("训练轮数(Epochs)"), minimum=1, maximum=100, value=3, step=1, interactive=True)
- openai_start_train_btn = gr.Button(i18n("开始训练"), variant="primary", interactive=False)
-
- with gr.Tab(label=i18n("状态")):
- openai_status_refresh_btn = gr.Button(i18n("刷新状态"))
- openai_cancel_all_jobs_btn = gr.Button(i18n("取消所有任务"))
- add_to_models_btn = gr.Button(i18n("添加训练好的模型到模型列表"), interactive=False)
-
- with gr.Tab(label=i18n("高级")):
- gr.HTML(get_html("appearance_switcher.html").format(label=i18n("切换亮暗色主题")), elem_classes="insert-block")
- use_streaming_checkbox = gr.Checkbox(
- label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION, elem_classes="switch-checkbox"
- )
- checkUpdateBtn = gr.Button(i18n("🔄 检查更新..."), visible=check_update)
- gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️"), elem_id="advanced-warning")
- with gr.Accordion(i18n("参数"), open=False):
- temperature_slider = gr.Slider(
- minimum=-0,
- maximum=2.0,
- value=1.0,
- step=0.1,
- interactive=True,
- label="temperature",
- )
- top_p_slider = gr.Slider(
- minimum=-0,
- maximum=1.0,
- value=1.0,
- step=0.05,
- interactive=True,
- label="top-p",
- )
- n_choices_slider = gr.Slider(
- minimum=1,
- maximum=10,
- value=1,
- step=1,
- interactive=True,
- label="n choices",
- )
- stop_sequence_txt = gr.Textbox(
- show_label=True,
- placeholder=i18n("停止符,用英文逗号隔开..."),
- label="stop",
- value="",
- lines=1,
- )
- max_context_length_slider = gr.Slider(
- minimum=1,
- maximum=32768,
- value=2000,
- step=1,
- interactive=True,
- label="max context",
- )
- max_generation_slider = gr.Slider(
- minimum=1,
- maximum=32768,
- value=1000,
- step=1,
- interactive=True,
- label="max generations",
- )
- presence_penalty_slider = gr.Slider(
- minimum=-2.0,
- maximum=2.0,
- value=0.0,
- step=0.01,
- interactive=True,
- label="presence penalty",
- )
- frequency_penalty_slider = gr.Slider(
- minimum=-2.0,
- maximum=2.0,
- value=0.0,
- step=0.01,
- interactive=True,
- label="frequency penalty",
- )
- logit_bias_txt = gr.Textbox(
- show_label=True,
- placeholder=f"word:likelihood",
- label="logit bias",
- value="",
- lines=1,
- )
- user_identifier_txt = gr.Textbox(
- show_label=True,
- placeholder=i18n("用于定位滥用行为"),
- label=i18n("用户名"),
- value=user_name.value,
- lines=1,
- )
-
- with gr.Accordion(i18n("网络参数"), open=False):
- gr.Markdown(i18n("---\n⚠️ 为保证API-Key安全,请在配置文件`config.json`中修改网络设置"), elem_id="netsetting-warning")
- default_btn = gr.Button(i18n("🔙 恢复默认网络设置"))
-                        # network proxy
- proxyTxt = gr.Textbox(
- show_label=True,
- placeholder=i18n("未设置代理..."),
- label=i18n("代理地址"),
- value=config.http_proxy,
- lines=1,
- interactive=False,
- # container=False,
- elem_classes="view-only-textbox no-container",
- )
- # changeProxyBtn = gr.Button(i18n("🔄 设置代理地址"))
-
-                        # show the custom api_host first if one is configured
- apihostTxt = gr.Textbox(
- show_label=True,
- placeholder="api.openai.com",
- label="OpenAI API-Host",
- value=config.api_host or shared.API_HOST,
- lines=1,
- interactive=False,
- # container=False,
- elem_classes="view-only-textbox no-container",
- )
- # changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址"))
- updateChuanhuBtn = gr.Button(visible=False, elem_classes="invisible-btn", elem_id="update-chuanhu-btn")
-
-
- gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description")
- gr.HTML(get_html("footer.html").format(versions=versions_html()), elem_id="footer")
-
- # https://github.com/gradio-app/gradio/pull/3296
- def create_greeting(request: gr.Request):
- if hasattr(request, "username") and request.username: # is not None or is not ""
- logging.info(f"Get User Name: {request.username}")
- user_info, user_name = gr.Markdown.update(value=f"User: {request.username}"), request.username
- else:
- user_info, user_name = gr.Markdown.update(value=f"", visible=False), ""
- current_model = get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0]
- current_model.set_user_identifier(user_name)
- chatbot = gr.Chatbot.update(label=MODELS[DEFAULT_MODEL])
- return user_info, user_name, current_model, toggle_like_btn_visibility(DEFAULT_MODEL), *current_model.auto_load(), get_history_names(False, user_name), chatbot
- demo.load(create_greeting, inputs=None, outputs=[user_info, user_name, current_model, like_dislike_area, systemPromptTxt, chatbot, historyFileSelectDropdown, chatbot], api_name="load")
- chatgpt_predict_args = dict(
- fn=predict,
- inputs=[
- current_model,
- user_question,
- chatbot,
- use_streaming_checkbox,
- use_websearch_checkbox,
- index_files,
- language_select_dropdown,
- ],
- outputs=[chatbot, status_display],
- show_progress=True,
- )
-
- start_outputing_args = dict(
- fn=start_outputing,
- inputs=[],
- outputs=[submitBtn, cancelBtn],
- show_progress=True,
- )
-
- end_outputing_args = dict(
- fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn]
- )
-
- reset_textbox_args = dict(
- fn=reset_textbox, inputs=[], outputs=[user_input]
- )
-
- transfer_input_args = dict(
- fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True
- )
-
- get_usage_args = dict(
- fn=billing_info, inputs=[current_model], outputs=[usageTxt], show_progress=False
- )
-
- load_history_from_file_args = dict(
- fn=load_chat_history,
- inputs=[current_model, historyFileSelectDropdown, user_name],
- outputs=[saveFileName, systemPromptTxt, chatbot]
- )
-
- refresh_history_args = dict(
- fn=get_history_names, inputs=[gr.State(False), user_name], outputs=[historyFileSelectDropdown]
- )
-
-
- # Chatbot
- cancelBtn.click(interrupt, [current_model], [])
-
- user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args)
- user_input.submit(**get_usage_args)
-
- submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args, api_name="predict").then(**end_outputing_args)
- submitBtn.click(**get_usage_args)
-
- index_files.change(handle_file_upload, [current_model, index_files, chatbot, language_select_dropdown], [index_files, chatbot, status_display])
- summarize_btn.click(handle_summarize_index, [current_model, index_files, chatbot, language_select_dropdown], [chatbot, status_display])
-
- emptyBtn.click(
- reset,
- inputs=[current_model],
- outputs=[chatbot, status_display],
- show_progress=True,
- _js='clearChatbot',
- )
-
- retryBtn.click(**start_outputing_args).then(
- retry,
- [
- current_model,
- chatbot,
- use_streaming_checkbox,
- use_websearch_checkbox,
- index_files,
- language_select_dropdown,
- ],
- [chatbot, status_display],
- show_progress=True,
- ).then(**end_outputing_args)
- retryBtn.click(**get_usage_args)
-
- delFirstBtn.click(
- delete_first_conversation,
- [current_model],
- [status_display],
- )
-
- delLastBtn.click(
- delete_last_conversation,
- [current_model, chatbot],
- [chatbot, status_display],
- show_progress=False
- )
-
- likeBtn.click(
- like,
- [current_model],
- [status_display],
- show_progress=False
- )
-
- dislikeBtn.click(
- dislike,
- [current_model],
- [status_display],
- show_progress=False
- )
-
- two_column.change(update_doc_config, [two_column], None)
-
- # LLM Models
- keyTxt.change(set_key, [current_model, keyTxt], [user_api_key, status_display], api_name="set_key").then(**get_usage_args)
- keyTxt.submit(**get_usage_args)
- single_turn_checkbox.change(set_single_turn, [current_model, single_turn_checkbox], None)
- model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt, user_name], [current_model, status_display, chatbot, lora_select_dropdown, user_api_key, keyTxt], show_progress=True, api_name="get_model")
- model_select_dropdown.change(toggle_like_btn_visibility, [model_select_dropdown], [like_dislike_area], show_progress=False)
- lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt, user_name], [current_model, status_display, chatbot], show_progress=True)
-
- # Template
- systemPromptTxt.change(set_system_prompt, [current_model, systemPromptTxt], None)
- templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown])
- templateFileSelectDropdown.change(
- load_template,
- [templateFileSelectDropdown],
- [promptTemplates, templateSelectDropdown],
- show_progress=True,
- )
- templateSelectDropdown.change(
- get_template_content,
- [promptTemplates, templateSelectDropdown, systemPromptTxt],
- [systemPromptTxt],
- show_progress=True,
- )
-
- # S&L
- saveHistoryBtn.click(
- save_chat_history,
- [current_model, saveFileName, chatbot, user_name],
- downloadFile,
- show_progress=True,
- )
- saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown])
- exportMarkdownBtn.click(
- export_markdown,
- [current_model, saveFileName, chatbot, user_name],
- downloadFile,
- show_progress=True,
- )
- historyRefreshBtn.click(**refresh_history_args)
- historyDeleteBtn.click(delete_chat_history, [current_model, historyFileSelectDropdown, user_name], [status_display, historyFileSelectDropdown, chatbot], _js='(a,b,c)=>{return showConfirmationDialog(a, b, c);}')
- historyFileSelectDropdown.change(**load_history_from_file_args)
- downloadFile.change(upload_chat_history, [current_model, downloadFile, user_name], [saveFileName, systemPromptTxt, chatbot])
-
- # Train
- dataset_selection.upload(handle_dataset_selection, dataset_selection, [dataset_preview_json, upload_to_openai_btn, openai_train_status])
- dataset_selection.clear(handle_dataset_clear, [], [dataset_preview_json, upload_to_openai_btn])
- upload_to_openai_btn.click(upload_to_openai, [dataset_selection], [openai_ft_file_id, openai_train_status], show_progress=True)
-
- openai_ft_file_id.change(lambda x: gr.update(interactive=True) if len(x) > 0 else gr.update(interactive=False), [openai_ft_file_id], [openai_start_train_btn])
- openai_start_train_btn.click(start_training, [openai_ft_file_id, openai_ft_suffix, openai_train_epoch_slider], [openai_train_status])
-
- openai_status_refresh_btn.click(get_training_status, [], [openai_train_status, add_to_models_btn])
- add_to_models_btn.click(add_to_models, [], [model_select_dropdown, openai_train_status], show_progress=True)
- openai_cancel_all_jobs_btn.click(cancel_all_jobs, [], [openai_train_status], show_progress=True)
-
- # Advanced
- max_context_length_slider.change(set_token_upper_limit, [current_model, max_context_length_slider], None)
- temperature_slider.change(set_temperature, [current_model, temperature_slider], None)
- top_p_slider.change(set_top_p, [current_model, top_p_slider], None)
- n_choices_slider.change(set_n_choices, [current_model, n_choices_slider], None)
- stop_sequence_txt.change(set_stop_sequence, [current_model, stop_sequence_txt], None)
- max_generation_slider.change(set_max_tokens, [current_model, max_generation_slider], None)
- presence_penalty_slider.change(set_presence_penalty, [current_model, presence_penalty_slider], None)
- frequency_penalty_slider.change(set_frequency_penalty, [current_model, frequency_penalty_slider], None)
- logit_bias_txt.change(set_logit_bias, [current_model, logit_bias_txt], None)
- user_identifier_txt.change(set_user_identifier, [current_model, user_identifier_txt], None)
-
- default_btn.click(
- reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True
- )
- # changeAPIURLBtn.click(
- # change_api_host,
- # [apihostTxt],
- # [status_display],
- # show_progress=True,
- # )
- # changeProxyBtn.click(
- # change_proxy,
- # [proxyTxt],
- # [status_display],
- # show_progress=True,
- # )
- checkUpdateBtn.click(fn=None, _js='manualCheckUpdate')
-
- # Invisible elements
- updateChuanhuBtn.click(
- update_chuanhu,
- [],
- [status_display],
- show_progress=True,
- )
-
-logging.info(
- colorama.Back.GREEN
-    + "\nChuanhu tip: visit http://localhost:7860 to open the UI"
- + colorama.Style.RESET_ALL
-)
-# by default: start a local server, allow direct access via IP, and do not create a public share link
-demo.title = i18n("川虎Chat 🚀")
-
-if __name__ == "__main__":
- reload_javascript()
- demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
- blocked_paths=["config.json"],
- favicon_path="./web_assets/favicon.ico",
- )
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/__init__.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/__init__.py
deleted file mode 100644
index d4182e356427e1b05a79f8da641c70bb732514fa..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-__version__ = "2.0.3"
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/csvutil.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/csvutil.py
deleted file mode 100644
index 79f432b6933f181d9194c50581656f2fd6e66c0c..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/csvutil.py
+++ /dev/null
@@ -1,41 +0,0 @@
-
-import numpy as np
-
-# import praatio
-# import praatio.praat_scripts
-import os
-import sys
-
-import random
-
-import csv
-
-# praatEXE = join('.',os.path.abspath(os.getcwd()) + r"\Praat.exe")
-
-
-def CSVutil(file, rw, type, *args):  # `type` shadows the builtin; kept for existing callers
-    if type == "formanting":
-        if rw == "r":
-            with open(file) as fileCSVread:
-                csv_reader = list(csv.reader(fileCSVread))
-                if not csv_reader or len(csv_reader[0]) < 3:
-                    raise ValueError("No data")
-                return csv_reader[0][0], csv_reader[0][1], csv_reader[0][2]
- else:
- if args:
- doformnt = args[0]
- else:
- doformnt = False
- qfr = args[1] if len(args) > 1 else 1.0
- tmb = args[2] if len(args) > 2 else 1.0
- with open(file, rw, newline="") as fileCSVwrite:
- csv_writer = csv.writer(fileCSVwrite, delimiter=",")
- csv_writer.writerow([doformnt, qfr, tmb])
- elif type == "stop":
- stop = args[0] if args else False
- with open(file, rw, newline="") as fileCSVwrite:
- csv_writer = csv.writer(fileCSVwrite, delimiter=",")
- csv_writer.writerow([stop])
-
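-# Illustrative usage (file paths are assumptions, not from this module):
-#   CSVutil("csvdb/formanting.csv", "w", "formanting", True, 8.0, 1.2)       # store DoFormant, Quefrency, Timbre
-#   doformnt, qfr, tmb = CSVutil("csvdb/formanting.csv", "r", "formanting")  # values come back as strings
-#   CSVutil("csvdb/stop.csv", "w", "stop", True)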
diff --git a/spaces/Kevin676/AutoGPT/tests/integration/memory_tests.py b/spaces/Kevin676/AutoGPT/tests/integration/memory_tests.py
deleted file mode 100644
index eead2da1cfa9b8a99592939623955808fc430068..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/tests/integration/memory_tests.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import random
-import string
-import sys
-import unittest
-from pathlib import Path
-
-from autogpt.config import Config
-from autogpt.memory.local import LocalCache
-
-
-class TestLocalCache(unittest.TestCase):
- def random_string(self, length):
- return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
- def setUp(self):
-        cfg = Config()
- self.cache = LocalCache(cfg)
- self.cache.clear()
-
- # Add example texts to the cache
- self.example_texts = [
- "The quick brown fox jumps over the lazy dog",
- "I love machine learning and natural language processing",
- "The cake is a lie, but the pie is always true",
- "ChatGPT is an advanced AI model for conversation",
- ]
-
- for text in self.example_texts:
- self.cache.add(text)
-
- # Add some random strings to test noise
- for _ in range(5):
- self.cache.add(self.random_string(10))
-
- def test_get_relevant(self):
- query = "I'm interested in artificial intelligence and NLP"
- k = 3
- relevant_texts = self.cache.get_relevant(query, k)
-
- print(f"Top {k} relevant texts for the query '{query}':")
- for i, text in enumerate(relevant_texts, start=1):
- print(f"{i}. {text}")
-
- self.assertEqual(len(relevant_texts), k)
- self.assertIn(self.example_texts[1], relevant_texts)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Kirihasan/rvc-holo/infer_pack/models.py b/spaces/Kirihasan/rvc-holo/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/Kirihasan/rvc-holo/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
-    """Definition of the sine generator
-    SineGen(samp_rate, harmonic_num=0,
-            sine_amp=0.1, noise_std=0.003,
-            voiced_threshold=0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-        segment is always sin(np.pi) or cos(0)
-    """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 here means the per-harmonic products cannot be optimized away later
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # applying % 1 here would keep the cumsum below from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
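-            # a -1 shift at each wrap point keeps the accumulated phase roughly bounded; since
-            # sin(2*pi*x) is invariant to integer shifts, the generated waveform is unaffected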
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
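-
-# Rough usage sketch (an assumption, not from the original file): with f0 of shape (batch, frames)
-# and an integer upsampling factor upp, SineGen(sr).forward(f0, upp) returns sine_waves of shape
-# (batch, frames * upp, harmonic_num + 1), plus the voiced/unvoiced mask uv and the noise tensor.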
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
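-
-# Rough usage sketch (an assumption): sine_merge, _, _ = SourceModuleHnNSF(sr, is_half=False)(f0, upp)
-# with f0 of shape (batch, frames) yields sine_merge of shape (batch, frames * upp, 1).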
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1, pitch.shape)  # [bs, t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        # this variant has no posterior encoder (enc_q), so there is nothing else to strip
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast later
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast later
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
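The 1-D to 2-D folding performed at the top of DiscriminatorP.forward can be illustrated on its own; the waveform length and period below are arbitrary example values:

import torch
import torch.nn.functional as F

def fold_by_period(x: torch.Tensor, period: int) -> torch.Tensor:
    """Reflect-pad a [batch, channels, time] signal so its length is a
    multiple of `period`, then fold time into a [t // period, period] grid."""
    b, c, t = x.shape
    if t % period != 0:
        n_pad = period - (t % period)
        x = F.pad(x, (0, n_pad), "reflect")
        t = t + n_pad
    return x.view(b, c, t // period, period)

x = torch.randn(1, 1, 1000)               # arbitrary example input
print(fold_by_period(x, 3).shape)          # torch.Size([1, 1, 334, 3]) after padding to 1002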
diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/llama_func.py b/spaces/Kreaols/ChuanhuChatGPT/modules/llama_func.py
deleted file mode 100644
index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000
--- a/spaces/Kreaols/ChuanhuChatGPT/modules/llama_func.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import os
-import logging
-import hashlib
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
-                except Exception:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- for elem in text_list:
- documents.append(Document(elem))
- continue
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
-        except Exception as e:
-            logging.error(f"Error loading file {filename}: {e}")
-            continue
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # Because of an awkward design in one of the dependencies, an API key must always be set here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- prompt_helper = PromptHelper(
- max_input_size=max_input_size,
- num_output=num_outputs,
- max_chunk_overlap=max_chunk_overlap,
- embedding_limit=embedding_limit,
- chunk_size_limit=600,
- separator=separator,
- )
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
-        logging.info("Found a cached index file, loading it...")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- if local_embedding:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2"))
- else:
- embed_model = OpenAIEmbedding()
-            logging.info("Building the index...")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper,
- chunk_size_limit=chunk_size_limit,
- embed_model=embed_model,
- )
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
-            logging.debug("Index built.")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
-            logging.debug("Index saved locally.")
- return index
-
- except Exception as e:
-            logging.error(f"Failed to build the index: {e}")
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
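A hedged usage sketch for the helpers above. The SimpleNamespace wrapper stands in for Gradio's uploaded-file objects, the paths and API key are placeholders, and the commented query call assumes the legacy llama_index API this module was written against:

from types import SimpleNamespace

# construct_index expects objects exposing a .name attribute, as produced by
# a Gradio file-upload component, so plain paths are wrapped for illustration.
files = [SimpleNamespace(name="./docs/manual.pdf"), SimpleNamespace(name="./docs/notes.txt")]

index = construct_index(api_key="sk-...", file_src=files)   # placeholder key
if index is not None:
    # With the old GPTSimpleVectorIndex interface the index could then be
    # queried directly, e.g.:
    #   response = index.query("What does the manual say about setup?")
    #   print(response)
    pass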
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/builder.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/builder.py
deleted file mode 100644
index dfa3872fe9931a4946368f07dfc5f5913a3e1f9f..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/builder.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmpretrain.registry import DATASETS
-
-
-def build_dataset(cfg):
- """Build dataset.
-
- Examples:
- >>> from mmpretrain.datasets import build_dataset
- >>> mnist_train = build_dataset(
- ... dict(type='MNIST', data_prefix='data/mnist/', test_mode=False))
- >>> print(mnist_train)
- Dataset MNIST
- Number of samples: 60000
- Number of categories: 10
- Prefix of data: data/mnist/
- >>> mnist_test = build_dataset(
- ... dict(type='MNIST', data_prefix='data/mnist/', test_mode=True))
- >>> print(mnist_test)
- Dataset MNIST
- Number of samples: 10000
- Number of categories: 10
- Prefix of data: data/mnist/
- """
- return DATASETS.build(cfg)
diff --git a/spaces/LanguageBind/LanguageBind/open_clip/constants.py b/spaces/LanguageBind/LanguageBind/open_clip/constants.py
deleted file mode 100644
index a670bb3fab442baeb9af53b91c312e6982af57ee..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/open_clip/constants.py
+++ /dev/null
@@ -1,2 +0,0 @@
-OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
-OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
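These are the CLIP preprocessing statistics. A typical way to apply them is sketched below, assuming torchvision is available; the 224-pixel crop size is illustrative rather than mandated by this file:

import torch
from torchvision import transforms

# illustrative CLIP-style preprocessing pipeline
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                                        # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD),
])

def denormalize(x: torch.Tensor) -> torch.Tensor:
    # Undo the normalization for visualization.
    mean = torch.tensor(OPENAI_DATASET_MEAN).view(3, 1, 1)
    std = torch.tensor(OPENAI_DATASET_STD).view(3, 1, 1)
    return x * std + mean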
diff --git a/spaces/Letheoricien/MLPC2023_MumBot/app.py b/spaces/Letheoricien/MLPC2023_MumBot/app.py
deleted file mode 100644
index 973117ac76b303563c2b8c031bf6742be446fbeb..0000000000000000000000000000000000000000
--- a/spaces/Letheoricien/MLPC2023_MumBot/app.py
+++ /dev/null
@@ -1,260 +0,0 @@
-#Import all packages we will need.
-import json
-import numpy as np
-import random
-import nltk
-#import utils as u
-nltk.download('punkt')
-nltk.download('wordnet')
-nltk.download('omw-1.4')
-from keras.models import Sequential
-from keras.layers import Dense, Activation, Dropout, LSTM, Attention
-#from keras.optimizers import gradient_descent_v2
-from tensorflow.keras.optimizers.legacy import SGD
-
-from keras.models import load_model
-
-#import utils as u
-import pickle
-
-# Create pickle files to store the Python objects we'll need during prediction
-def create_pickle(obj, pkl_url):
-    return pickle.dump(obj, open(pkl_url, 'wb'))
-
-
-def load_pickle(pkl_url):
- return pickle.load(open(pkl_url,'rb'))
-class ChatModel:
-
- def __init__(self):
- #Call tokenizing procedure
- w, words, documents, classes, self._intents = self.tokenizing('Woman.json')
-
- #Call lemmatizing procedure
- w, words, documents, classes, lemmatizer = self.lemmatizing(w, words, documents, classes)
-
- #Call training_data procedure
- self._train_x, self._train_y = self.training_data(w, words, documents, classes, lemmatizer)
-
-        # Call the training procedure
- self._model = self.training(self._train_x, self._train_y)
-
-
- def tokenizing(self,url):
- words=[]
- classes = []
- documents = []
- intents = json.loads(open(url).read())
-
- for intent in intents['intents']:
- for pattern in intent['patterns']:
- #tokenize each word
- w = nltk.word_tokenize(pattern)
- words.extend(w)
- #add documents in the corpus
- documents.append((w, intent['tag']))
- # add to our classes list
- if intent['tag'] not in classes:
- classes.append(intent['tag'])
-
- return w, words, documents, classes, intents
-
- def lemmatizing(self, w, words, documents, classes):
- ignore_words = ['?', '!']
- lemmatizer = nltk.stem.WordNetLemmatizer()
-
- # lemmatize, lower each word and remove duplicates
- words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
-
- # sort classes and words
- classes = sorted(list(set(classes)))
- words = sorted(list(set(words)))
- # documents = combination between patterns and intents
- print (len(documents), "documents")
-
- # classes = intents
- print (len(classes), "classes", classes)
-
- # words = all words, vocabulary
- print (len(words), "unique lemmatized words", words)
-
- create_pickle(words, 'words_woman.pkl')
- create_pickle(classes, 'classes_woman.pkl')
- return w, words, documents, classes, lemmatizer
-
- def training_data(self, w, words, documents, classes, lemmatizer):
- # create our training data
- training = []
- train_x = []
- train_y = []
- # create an empty array for our output
- output_empty = [0] * len(classes)
-
- # training set, bag of words for each sentence
- for doc in documents:
- # initialize our bag of words
- bag = []
- # list of tokenized words for the pattern
- pattern_words = doc[0]
- # lemmatize each word - create base word, in attempt to represent related words
- pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
- # create our bag of words array with 1, if word match found in current pattern
-
- for w in words:
- bag.append(1) if w in pattern_words else bag.append(0)
-
- # output is a '0' for each tag and '1' for current tag (for each pattern)
- output_row = list(output_empty)
- output_row[classes.index(doc[1])] = 1
- training.append([bag, output_row])
-
- # shuffle our features and turn into np.array
- random.shuffle(training)
-        training = np.array(training, dtype=object)
- # create train and test lists. X - patterns, Y - intents
- train_x = list(training[:,0])
- train_y = list(training[:,1])
-
- print("Training data is ready")
- return train_x, train_y
-
- def training(self,train_x, train_y):
- #Sequential from Keras
- # Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
- # equal to number of intents to predict output intent with softmax
- model = Sequential()
- #model.add(LSTM(64, return_sequences=True))
- #model.add(Dropout(0.5))
-
- model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
- model.add(Dropout(0.5))
- model.add(Dense(64, activation='relu'))
- model.add(Dropout(0.5))
- model.add(Dense(len(train_y[0]), activation='softmax'))
- #input_layer = Input(shape=(len(train_x[0]),))
- #embedding_layer = Embedding(len(train_x), 128)(input_layer)
- #lstm_layer = LSTM(128)(embedding_layer)
- #dropout_layer = Dropout(0.5)(lstm_layer)
- #attention_layer = Attention()([dropout_layer, lstm_layer])
- #dense_layer = Dense(64, activation='relu')(attention_layer)
- #output_layer = Dense(len(train_y[0]), activation='softmax')(dense_layer)
- #model = Model(inputs=input_layer, outputs=output_layer)
-
- # Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
- #sgd = gradient_descent_v2.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
- sgd = SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
-        model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
-        # fit and save the model
-        hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
-        model.save('chatbotpo_model_woman.h5')
-        print("model created")
-
- return model
-
- def get_train_x(self):
- return self._train_x
-
- def get_train_y(self):
- return self._train_y
-
- def get_model(self):
- return self._model
-
- def get_intents(self):
- return self._intents
-class ChatApp:
-
- def __init__(self):
- self.cM = ChatModel()
- self._lemmatizer = nltk.stem.WordNetLemmatizer()
- self._model = load_model('chatbotpo_model_woman.h5')
- self._intents = self.cM.get_intents()
- self._words = load_pickle('words_woman.pkl')
- self._classes = load_pickle('classes_woman.pkl')
-
- def clean_up_sentence(self,sentence):
- # tokenize the pattern - split words into array
- sentence_words = nltk.word_tokenize(sentence)
- # stem each word - create short form for word
- sentence_words = [self._lemmatizer.lemmatize(word.lower()) for word in sentence_words]
- return sentence_words
-
- # return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
- def bow(self, sentence, words, show_details=True):
- # tokenize the pattern
- sentence_words = self.clean_up_sentence(sentence)
- # bag of words - matrix of N words, vocabulary matrix
- bag = [0]*len(words)
- for s in sentence_words:
- for i,w in enumerate(words):
- if w == s:
- # assign 1 if current word is in the vocabulary position
- bag[i] = 1
- if show_details:
- print ("found in bag: %s" % w)
- return(np.array(bag))
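A tiny standalone illustration of the bag-of-words encoding produced by bow; the vocabulary and sentence below are invented for the example:

import numpy as np

vocabulary = ["baby", "diet", "hello", "pain", "sleep"]   # sorted, lemmatized vocabulary (made up)
sentence_words = ["hello", "sleep"]                        # already tokenized and lemmatized

bag = np.array([1 if w in sentence_words else 0 for w in vocabulary])
print(bag)   # [0 0 1 0 1]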
-
- def predict_class(self, sentence, model):
- ERROR_THRESHOLD = 0.50
- # filter out predictions below a threshold
- p = self.bow(sentence, self._words, show_details=False)
- res = self._model.predict(np.array([p]))[0]
-
- results = [[i,r] for i,r in enumerate(res) if r>ERROR_THRESHOLD]
- # sort by strength of probability
- results.sort(key=lambda x: x[1], reverse=True)
- return_list = []
- for r in results:
- return_list.append({"intent": self._classes[r[0]], "probability": str(r[1])})
- return return_list
-
-
- def getResponse(self, ints, intents_json):
- tag = ints[0]['intent']
- list_of_intents = intents_json['intents']
- for i in list_of_intents:
- if(i['tag']== tag):
- result = random.choice(i['responses'])
- break
- return result
-
- def chatbot_response(self, text):
- ints = self.predict_class(text, self._model)
- res = self.getResponse(ints, self._intents)
- return res
-
-myChat = ChatApp()
-import os
-
-import gradio as gr
-
-
-prompt = "The following is a conversation with TheoAI. How can I help you today?\nHuman: "
-
-def chatgpt_clone(input, history):
- history = history or []
- s = list(sum(history, ()))
- s.append(input)
- inp = ' '.join(s)
-
- output = myChat.chatbot_response(input)
- history.append((input, output))
- return history, history
-
-
-block = gr.Blocks()
-
-
-with block:
- gr.Markdown("""
-    The intelligent chatbot for pregnant women
- """)
-    css = """Chatbot {background-color: pink}"""
- chatbot = gr.Chatbot()
- message = gr.Textbox(placeholder=prompt)
- state = gr.State()
- submit = gr.Button("SEND")
- submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state])
-
-block.launch(debug = True)
diff --git a/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion_arch.py b/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion_arch.py
deleted file mode 100644
index fc2ae947669c3b93eb44c87baf836e100849da58..0000000000000000000000000000000000000000
--- a/spaces/MKFMIKU/Bi-Noising.Diffusion/diffusion_arch.py
+++ /dev/null
@@ -1,939 +0,0 @@
-from abc import abstractmethod
-
-import math
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-import torchvision
-import torchvision.transforms.functional as TF
-
-
-from guided_diffusion.fp16_util import convert_module_to_f16, convert_module_to_f32
-from guided_diffusion.nn import (
- checkpoint,
- conv_nd,
- linear,
- avg_pool_nd,
- zero_module,
- normalization,
- timestep_embedding,
-)
-
-class AttentionPool2d(nn.Module):
- """
- Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
- """
-
- def __init__(
- self,
- spacial_dim: int,
- embed_dim: int,
- num_heads_channels: int,
- output_dim: int = None,
- ):
- super().__init__()
- self.positional_embedding = nn.Parameter(
- torch.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5
- )
- self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
- self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
- self.num_heads = embed_dim // num_heads_channels
- self.attention = QKVAttention(self.num_heads)
-
- def forward(self, x):
- b, c, *_spatial = x.shape
- x = x.reshape(b, c, -1) # NC(HW)
- x = torch.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1)
- x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1)
- x = self.qkv_proj(x)
- x = self.attention(x)
- x = self.c_proj(x)
- return x[:, :, 0]
-
-
-class TimestepBlock(nn.Module):
- """
- Any module where forward() takes timestep embeddings as a second argument.
- """
-
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- """
- A sequential module that passes timestep embeddings to the children that
- support it as an extra input.
- """
-
- def forward(self, x, emb):
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- else:
- x = layer(x)
- return x
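A minimal sketch of the dispatch rule above: timestep-aware children receive the embedding while plain modules do not. The toy block and tensor sizes are invented for illustration only:

import torch
import torch.nn as nn

class ToyTimestepBlock(TimestepBlock):
    # toy block for illustration only
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, emb):
        # the timestep embedding is added before the projection
        return self.proj(x + emb)

seq = TimestepEmbedSequential(ToyTimestepBlock(8), nn.ReLU())
x, emb = torch.randn(2, 8), torch.randn(2, 8)
out = seq(x, emb)   # nn.ReLU only sees x; ToyTimestepBlock also receives emb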
-
-class Upsample(nn.Module):
- """
- An upsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- if use_conv:
- self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=1)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.dims == 3:
- x = F.interpolate(
- x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
- )
- else:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
- if self.use_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(
- dims, self.channels, self.out_channels, 3, stride=stride, padding=1
- )
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResBlock(TimestepBlock):
- """
- A residual block that can optionally change the number of channels.
-
- :param channels: the number of input channels.
- :param emb_channels: the number of timestep embedding channels.
- :param dropout: the rate of dropout.
- :param out_channels: if specified, the number of out channels.
- :param use_conv: if True and out_channels is specified, use a spatial
- convolution instead of a smaller 1x1 convolution to change the
- channels in the skip connection.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param use_checkpoint: if True, use gradient checkpointing on this module.
- :param up: if True, use this block for upsampling.
- :param down: if True, use this block for downsampling.
- """
-
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- use_conv=False,
- use_scale_shift_norm=False,
- dims=2,
- use_checkpoint=False,
- up=False,
- down=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_checkpoint = use_checkpoint
- self.use_scale_shift_norm = use_scale_shift_norm
-
- self.in_layers = nn.Sequential(
- normalization(channels),
- nn.SiLU(),
- conv_nd(dims, channels, self.out_channels, 3, padding=1),
- )
-
- self.updown = up or down
-
- if up:
- self.h_upd = Upsample(channels, False, dims)
- self.x_upd = Upsample(channels, False, dims)
- elif down:
- self.h_upd = Downsample(channels, False, dims)
- self.x_upd = Downsample(channels, False, dims)
- else:
- self.h_upd = self.x_upd = nn.Identity()
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels),
- nn.SiLU(),
- nn.Dropout(p=dropout),
- zero_module(
- conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
- ),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- elif use_conv:
- self.skip_connection = conv_nd(
- dims, channels, self.out_channels, 3, padding=1
- )
- else:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
- def forward(self, x, emb):
- """
- Apply the block to a Tensor, conditioned on a timestep embedding.
-
- :param x: an [N x C x ...] Tensor of features.
- :param emb: an [N x emb_channels] Tensor of timestep embeddings.
- :return: an [N x C x ...] Tensor of outputs.
- """
- return checkpoint(
- self._forward, (x, emb), self.parameters(), self.use_checkpoint
- )
-
- def _forward(self, x, emb):
- if self.updown:
- in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
- h = in_rest(x)
- h = self.h_upd(h)
- x = self.x_upd(x)
- h = in_conv(h)
- else:
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = torch.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
- """
- An attention block that allows spatial positions to attend to each other.
-
- Originally ported from here, but adapted to the N-d case.
- https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
- """
-
- def __init__(
- self,
- channels,
- num_heads=1,
- num_head_channels=-1,
- use_checkpoint=False,
- use_new_attention_order=False,
- ):
- super().__init__()
- self.channels = channels
- if num_head_channels == -1:
- self.num_heads = num_heads
- else:
- assert (
- channels % num_head_channels == 0
- ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
- self.num_heads = channels // num_head_channels
- self.use_checkpoint = use_checkpoint
- self.norm = normalization(channels)
- self.qkv = conv_nd(1, channels, channels * 3, 1)
- if use_new_attention_order:
- # split qkv before split heads
- self.attention = QKVAttention(self.num_heads)
- else:
- # split heads before split qkv
- self.attention = QKVAttentionLegacy(self.num_heads)
-
- self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
- def forward(self, x):
- return checkpoint(self._forward, (x,), self.parameters(), True)
-
- def _forward(self, x):
- b, c, *spatial = x.shape
- x = x.reshape(b, c, -1)
- qkv = self.qkv(self.norm(x))
- h = self.attention(qkv)
- h = self.proj_out(h)
- return (x + h).reshape(b, c, *spatial)
-
-
-def count_flops_attn(model, _x, y):
- """
- A counter for the `thop` package to count the operations in an
- attention operation.
- Meant to be used like:
- macs, params = thop.profile(
- model,
- inputs=(inputs, timestamps),
- custom_ops={QKVAttention: QKVAttention.count_flops},
- )
- """
- b, c, *spatial = y[0].shape
- num_spatial = int(np.prod(spatial))
- # We perform two matmuls with the same number of ops.
- # The first computes the weight matrix, the second computes
- # the combination of the value vectors.
- matmul_ops = 2 * b * (num_spatial ** 2) * c
- model.total_ops += torch.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
- """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
-
- :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = torch.einsum(
- "bct,bcs->bts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = torch.einsum("bts,bcs->bct", weight, v)
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
- """
- A module which performs QKV attention and splits in a different order.
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
-
- :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.chunk(3, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = torch.einsum(
- "bct,bcs->bts",
- (q * scale).view(bs * self.n_heads, ch, length),
- (k * scale).view(bs * self.n_heads, ch, length),
- ) # More stable with f16 than dividing afterwards
- weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = torch.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length))
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
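A quick shape check for the attention module above; the batch size, head count, channel width and length are arbitrary example values:

import torch

batch, n_heads, ch, length = 2, 4, 16, 32            # arbitrary sizes
qkv = torch.randn(batch, 3 * n_heads * ch, length)    # [N, 3*H*C, T]

attn = QKVAttention(n_heads)
out = attn(qkv)
print(out.shape)   # torch.Size([2, 64, 32]), i.e. [N, H*C, T]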
-
-class UNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
-
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
-    :param num_head_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- :param use_new_attention_order: use a different attention pattern for potentially
- increased efficiency.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- model_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.dtype = torch.float16 if use_fp16 else torch.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.dt_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- ch = input_ch = int(channel_mult[0] * model_channels)
- self.input_blocks = nn.ModuleList(
- [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))]
- )
- self._feature_size = ch
- input_block_chans = [ch]
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=int(mult * model_channels),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(mult * model_channels)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
-
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim,
- dropout,
- out_channels=int(model_channels * mult),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(model_channels * mult)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)),
- )
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, dt, y=None):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
-        :param dt: a 1-D batch of timesteps.
- :param y: an [N] Tensor of labels, if class-conditional.
- :return: an [N x C x ...] Tensor of outputs.
- """
- assert (y is not None) == (
- self.num_classes is not None
- ), "must specify y if and only if the model is class-conditional"
-
- hs = []
- emb = self.dt_embed(timestep_embedding(dt, self.model_channels))
-
- if self.num_classes is not None:
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- hs.append(h)
- h = self.middle_block(h, emb)
- for module in self.output_blocks:
- h = torch.cat([h, hs.pop()], dim=1)
- h = module(h, emb)
- h = h.type(x.dtype)
- return self.out(h)
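A hedged instantiation sketch for the model above; the hyper-parameters and the 64x64 input are illustrative choices, not the settings this Space actually uses:

import torch

model = UNetModel(
    in_channels=3,
    out_channels=3,
    model_channels=64,                # illustrative hyper-parameters
    num_res_blocks=2,
    attention_resolutions=(8,),       # self-attention at 8x downsampling
    channel_mult=(1, 2, 4, 8),
    num_heads=4,
)
x = torch.randn(1, 3, 64, 64)         # a batch of noisy images
dt = torch.tensor([10.0])             # one timestep per batch element
with torch.no_grad():
    out = model(x, dt)
print(out.shape)                       # torch.Size([1, 3, 64, 64])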
-
-
-class ConditionalUNetModel(UNetModel):
- def __init__(self, in_channels, *args, **kwargs):
- super().__init__(in_channels * 2, *args, **kwargs)
-
- def forward(self, x, timesteps, lq=None, **kwargs):
- x = torch.cat([x, lq], dim=1)
- return super().forward(x, timesteps, **kwargs)
-
-
-class ILVRUNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
-
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
-    :param num_head_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- :param use_new_attention_order: use a different attention pattern for potentially
- increased efficiency.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- model_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.dtype = torch.float16 if use_fp16 else torch.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- ch = input_ch = int(channel_mult[0] * model_channels)
- self.input_blocks = nn.ModuleList(
- [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))]
- )
- self._feature_size = ch
- input_block_chans = [ch]
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=int(mult * model_channels),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(mult * model_channels)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
-
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim,
- dropout,
- out_channels=int(model_channels * mult),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(model_channels * mult)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)),
- )
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, dt, y=None):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
-        :param dt: a 1-D batch of timesteps.
- :param y: an [N] Tensor of labels, if class-conditional.
- :return: an [N x C x ...] Tensor of outputs.
- """
- assert (y is not None) == (
- self.num_classes is not None
- ), "must specify y if and only if the model is class-conditional"
-
- hs = []
- emb = self.time_embed(timestep_embedding(dt, self.model_channels))
-
- if self.num_classes is not None:
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- hs.append(h)
- h = self.middle_block(h, emb)
- for module in self.output_blocks:
- h = torch.cat([h, hs.pop()], dim=1)
- h = module(h, emb)
- h = h.type(x.dtype)
- return self.out(h)
diff --git a/spaces/MMMMQZ/MQZGPT/modules/llama_func.py b/spaces/MMMMQZ/MQZGPT/modules/llama_func.py
deleted file mode 100644
index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000
--- a/spaces/MMMMQZ/MQZGPT/modules/llama_func.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import os
-import logging
-import hashlib
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
-                except Exception:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- for elem in text_list:
- documents.append(Document(elem))
- continue
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
-        except Exception as e:
-            logging.error(f"Error loading file {filename}: {e}")
-            continue
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # Because of an awkward design in one of the dependencies, an API key must always be set here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- prompt_helper = PromptHelper(
- max_input_size=max_input_size,
- num_output=num_outputs,
- max_chunk_overlap=max_chunk_overlap,
- embedding_limit=embedding_limit,
- chunk_size_limit=600,
- separator=separator,
- )
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
-        logging.info("Found a cached index file, loading it...")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- if local_embedding:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2"))
- else:
- embed_model = OpenAIEmbedding()
-            logging.info("Building the index...")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper,
- chunk_size_limit=chunk_size_limit,
- embed_model=embed_model,
- )
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
-            logging.debug("Index built.")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
-            logging.debug("Index saved locally.")
- return index
-
- except Exception as e:
-            logging.error(f"Failed to build the index: {e}")
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/utils.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/utils.py
deleted file mode 100644
index d1bec96ae744d68a4d471fbce68717f56296542c..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from datetime import timedelta
-from pathlib import Path
-
-import torch
-import numpy as np
-
-from ..model.is_deeplab_model import get_deeplab_model
-from ..model.is_hrnet_model import get_hrnet_model
-
-
-def get_time_metrics(all_ious, elapsed_time):
- n_images = len(all_ious)
- n_clicks = sum(map(len, all_ious))
-
- mean_spc = elapsed_time / n_clicks
- mean_spi = elapsed_time / n_images
-
- return mean_spc, mean_spi
-
-
-def load_is_model(checkpoint, device, backbone='auto', **kwargs):
- if isinstance(checkpoint, (str, Path)):
- state_dict = torch.load(checkpoint, map_location='cpu')
- else:
- state_dict = checkpoint
-
- if backbone == 'auto':
- for k in state_dict.keys():
- if 'feature_extractor.stage2.0.branches' in k:
- return load_hrnet_is_model(state_dict, device, backbone, **kwargs)
- return load_deeplab_is_model(state_dict, device, backbone, **kwargs)
- elif 'resnet' in backbone:
- return load_deeplab_is_model(state_dict, device, backbone, **kwargs)
- elif 'hrnet' in backbone:
- return load_hrnet_is_model(state_dict, device, backbone, **kwargs)
- else:
- raise NotImplementedError('Unknown backbone')
-
-
-def load_hrnet_is_model(state_dict, device, backbone='auto', width=48, ocr_width=256,
- small=False, cpu_dist_maps=False, norm_radius=260):
- if backbone == 'auto':
- num_fe_weights = len([x for x in state_dict.keys() if 'feature_extractor.' in x])
- small = num_fe_weights < 1800
-
- ocr_f_down = [v for k, v in state_dict.items() if 'object_context_block.f_down.1.0.bias' in k]
- assert len(ocr_f_down) == 1
- ocr_width = ocr_f_down[0].shape[0]
-
- s2_conv1_w = [v for k, v in state_dict.items() if 'stage2.0.branches.0.0.conv1.weight' in k]
- assert len(s2_conv1_w) == 1
- width = s2_conv1_w[0].shape[0]
-
- model = get_hrnet_model(width=width, ocr_width=ocr_width, small=small,
- with_aux_output=False, cpu_dist_maps=cpu_dist_maps,
- norm_radius=norm_radius)
-
- model.load_state_dict(state_dict, strict=False)
- for param in model.parameters():
- param.requires_grad = False
- model.to(device)
- model.eval()
-
- return model
-
-
-def load_deeplab_is_model(state_dict, device, backbone='auto', deeplab_ch=128, aspp_dropout=0.2,
- cpu_dist_maps=False, norm_radius=260):
- if backbone == 'auto':
- num_backbone_params = len([x for x in state_dict.keys()
- if 'feature_extractor.backbone' in x and not('num_batches_tracked' in x)])
-
- if num_backbone_params <= 181:
- backbone = 'resnet34'
- elif num_backbone_params <= 276:
- backbone = 'resnet50'
- elif num_backbone_params <= 531:
- backbone = 'resnet101'
- else:
- raise NotImplementedError('Unknown backbone')
-
- if 'aspp_dropout' in state_dict:
- aspp_dropout = float(state_dict['aspp_dropout'].cpu().numpy())
- else:
- aspp_project_weight = [v for k, v in state_dict.items() if 'aspp.project.0.weight' in k][0]
- deeplab_ch = aspp_project_weight.size(0)
- if deeplab_ch == 256:
- aspp_dropout = 0.5
-
- model = get_deeplab_model(backbone=backbone, deeplab_ch=deeplab_ch,
- aspp_dropout=aspp_dropout, cpu_dist_maps=cpu_dist_maps,
- norm_radius=norm_radius)
-
- model.load_state_dict(state_dict, strict=False)
- for param in model.parameters():
- param.requires_grad = False
- model.to(device)
- model.eval()
-
- return model
-
-
-def get_iou(gt_mask, pred_mask, ignore_label=-1):
- ignore_gt_mask_inv = gt_mask != ignore_label
- obj_gt_mask = gt_mask == 1
-
- intersection = np.logical_and(np.logical_and(pred_mask, obj_gt_mask), ignore_gt_mask_inv).sum()
- union = np.logical_and(np.logical_or(pred_mask, obj_gt_mask), ignore_gt_mask_inv).sum()
-
- return intersection / union
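A small numeric check of get_iou; the masks are toy examples, and the -1 entry marks an ignored pixel:

import numpy as np

gt = np.array([[1, 1, 0],
               [0, -1, 1]])                       # toy ground-truth mask
pred = np.array([[True, False, False],
                 [True, True, False]])            # toy prediction

# intersection = 1 (top-left pixel), union = 4 valid pixels covered by either mask
print(get_iou(gt, pred))   # 0.25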
-
-
-def compute_noc_metric(all_ious, iou_thrs, max_clicks=20):
- def _get_noc(iou_arr, iou_thr):
- vals = iou_arr >= iou_thr
- return np.argmax(vals) + 1 if np.any(vals) else max_clicks
-
- noc_list = []
- over_max_list = []
- for iou_thr in iou_thrs:
- scores_arr = np.array([_get_noc(iou_arr, iou_thr)
- for iou_arr in all_ious], dtype=np.int32)
-
- score = scores_arr.mean()
- over_max = (scores_arr == max_clicks).sum()
-
- noc_list.append(score)
- over_max_list.append(over_max)
-
- return noc_list, over_max_list
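And a quick example of the NoC (number of clicks) metric above: for each image it records the first click whose IoU clears the threshold, falling back to max_clicks when it never does. The IoU sequences below are invented:

import numpy as np

all_ious = [
    np.array([0.55, 0.78, 0.91]),    # clears IoU 0.9 on the 3rd click (invented values)
    np.array([0.40, 0.62, 0.70]),    # never clears it within max_clicks
]
noc_list, over_max_list = compute_noc_metric(all_ious, iou_thrs=[0.9], max_clicks=20)
print(noc_list[0])        # (3 + 20) / 2 = 11.5
print(over_max_list[0])   # 1 image exceeded the click budget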
-
-
-def find_checkpoint(weights_folder, checkpoint_name):
- weights_folder = Path(weights_folder)
- if ':' in checkpoint_name:
- model_name, checkpoint_name = checkpoint_name.split(':')
- models_candidates = [x for x in weights_folder.glob(f'{model_name}*') if x.is_dir()]
- assert len(models_candidates) == 1
- model_folder = models_candidates[0]
- else:
- model_folder = weights_folder
-
- if checkpoint_name.endswith('.pth'):
- if Path(checkpoint_name).exists():
- checkpoint_path = checkpoint_name
- else:
- checkpoint_path = weights_folder / checkpoint_name
- else:
- model_checkpoints = list(model_folder.rglob(f'{checkpoint_name}*.pth'))
- assert len(model_checkpoints) == 1
- checkpoint_path = model_checkpoints[0]
-
- return str(checkpoint_path)
-
-
-def get_results_table(noc_list, over_max_list, brs_type, dataset_name, mean_spc, elapsed_time,
- n_clicks=20, model_name=None):
- table_header = (f'|{"BRS Type":^13}|{"Dataset":^11}|'
- f'{"NoC@80%":^9}|{"NoC@85%":^9}|{"NoC@90%":^9}|'
- f'{">="+str(n_clicks)+"@85%":^9}|{">="+str(n_clicks)+"@90%":^9}|'
- f'{"SPC,s":^7}|{"Time":^9}|')
- row_width = len(table_header)
-
- header = f'Eval results for model: {model_name}\n' if model_name is not None else ''
- header += '-' * row_width + '\n'
- header += table_header + '\n' + '-' * row_width
-
- eval_time = str(timedelta(seconds=int(elapsed_time)))
- table_row = f'|{brs_type:^13}|{dataset_name:^11}|'
- table_row += f'{noc_list[0]:^9.2f}|'
- table_row += f'{noc_list[1]:^9.2f}|' if len(noc_list) > 1 else f'{"?":^9}|'
- table_row += f'{noc_list[2]:^9.2f}|' if len(noc_list) > 2 else f'{"?":^9}|'
- table_row += f'{over_max_list[1]:^9}|' if len(noc_list) > 1 else f'{"?":^9}|'
- table_row += f'{over_max_list[2]:^9}|' if len(noc_list) > 2 else f'{"?":^9}|'
- table_row += f'{mean_spc:^7.3f}|{eval_time:^9}|'
-
- return header, table_row
\ No newline at end of file
diff --git a/spaces/Marshalls/testmtd/models/flowplusplus/transformer_nn.py b/spaces/Marshalls/testmtd/models/flowplusplus/transformer_nn.py
deleted file mode 100644
index 73c876a5132fe344b4327bcaf5323be0fe476390..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/models/flowplusplus/transformer_nn.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from torch.nn.utils import weight_norm
-from models.util import concat_elu #,WNConv2d
-from models.transformer import BasicTransformerModel
-
-class TransformerNN(nn.Module):
- """Neural network used to parametrize the transformations of an MLCoupling.
-
- An `NN` is a stack of blocks, where each block consists of the following
- two layers connected in a residual fashion:
-    1. Conv: input -> nonlinearity -> conv3x3 -> nonlinearity -> gate
- 2. Attn: input -> conv1x1 -> multihead self-attention -> gate,
- where gate refers to a 1×1 convolution that doubles the number of channels,
- followed by a gated linear unit (Dauphin et al., 2016).
- The convolutional layer is identical to the one used by PixelCNN++
- (Salimans et al., 2017), and the multi-head self attention mechanism we
- use is identical to the one in the Transformer (Vaswani et al., 2017).
-
- Args:
- in_channels (int): Number of channels in the input.
- out_channels (int): Number of channels in the output.
-        num_channels (int): Hidden width of the transformer.
-        num_layers (int): Number of transformer layers.
-        num_heads (int): Number of attention heads.
-        num_components (int): Number of components in the mixture.
-        drop_prob (float): Dropout probability.
-        use_pos_emb (bool): Use absolute positional embeddings.
-        use_rel_pos_emb (bool): Use relative positional embeddings.
-        input_length (int): Length of the input sequence.
-        concat_dims (bool): If True, the full output sequence is kept; otherwise
-            only the first `output_length` positions are used.
-        output_length (int): Length of the output sequence when `concat_dims` is False.
- """
- def __init__(self, in_channels, out_channels, num_channels, num_layers, num_heads, num_components, drop_prob, use_pos_emb, use_rel_pos_emb, input_length, concat_dims, output_length):
- #import pdb;pdb.set_trace()
- super(TransformerNN, self).__init__()
- self.k = num_components # k = number of mixture components
- # import pdb;pdb.set_trace()
- self.transformer = BasicTransformerModel(out_channels * (2 + 3 * self.k), in_channels, num_heads, num_channels, num_layers, drop_prob, use_pos_emb=use_pos_emb, use_rel_pos_emb=use_rel_pos_emb, input_length=input_length)
- self.rescale = weight_norm(Rescale(out_channels))
- self.out_channels = out_channels
- self.concat_dims = concat_dims
- self.output_length = output_length
-
- def forward(self, x, aux=None):
- b, c, h, w = x.size()
- # import pdb;pdb.set_trace()
-        x = x.squeeze(-1)  # only squeeze the w dimension (important because otherwise the batch dim would also be squeezed when the minibatch has a single element)
- # import pdb;pdb.set_trace()
- x = x.permute(2,0,1)
- # import pdb;pdb.set_trace()
- if self.concat_dims:
- x = self.transformer(x)
- # x = torch.mean(self.transformer(x), dim=0, keepdim=True)
- # x = 0.5*x + 0.5*torch.mean(x, dim=0, keepdim=True)
- # x = self.transformer(x)[:1]
- else:
- x = self.transformer(x)[:self.output_length]
- # import pdb;pdb.set_trace()
- x = x.permute(1,2,0)
- # Split into components and post-process
- if self.concat_dims:
- x = x.view(b, -1, self.out_channels, h, w)
- # x = x.view(b, -1, self.out_channels, 1, w)
- else:
- x = x.view(b, -1, self.out_channels, self.output_length, w)
- s, t, pi, mu, scales = x.split((1, 1, self.k, self.k, self.k), dim=1)
- s = self.rescale(torch.tanh(s.squeeze(1)))
- t = t.squeeze(1)
- scales = scales.clamp(min=-7) # From the code in original Flow++ paper
-
- return s, t, pi, mu, scales
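The channel bookkeeping of the mixture head above, shown in isolation; all sizes are arbitrary example values:

import torch

b, out_channels, h, w, k = 2, 3, 10, 1, 4            # k mixture components, sizes arbitrary
raw = torch.randn(b, (2 + 3 * k) * out_channels, h, w)

x = raw.view(b, -1, out_channels, h, w)               # [b, 2 + 3k, out_channels, h, w]
s, t, pi, mu, scales = x.split((1, 1, k, k, k), dim=1)
print(s.shape, pi.shape)   # torch.Size([2, 1, 3, 10, 1]) torch.Size([2, 4, 3, 10, 1])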
-
-class Rescale(nn.Module):
- """Per-channel rescaling. Need a proper `nn.Module` so we can wrap it
- with `torch.nn.utils.weight_norm`.
- Args:
- num_channels (int): Number of channels in the input.
- """
-
- def __init__(self, num_channels):
- super(Rescale, self).__init__()
- self.weight = nn.Parameter(torch.ones(num_channels, 1, 1))
-
- def forward(self, x):
- x = self.weight * x
- return x
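The forward pass above packs all mixture parameters into one transformer output of size out_channels * (2 + 3*k) and then splits it into (s, t, pi, mu, scales). A minimal standalone sketch of that split, with hypothetical shapes and no dependency on this repo:

import torch

# Hypothetical sizes: batch b, out_channels c, height h, width w, k mixture components.
b, c, h, w, k = 2, 3, 8, 1, 4

# Tensor shaped like the transformer output after x.view(b, -1, c, h, w):
# the second dim holds the 2 + 3*k parameter groups.
x = torch.randn(b, 2 + 3 * k, c, h, w)

s, t, pi, mu, scales = x.split((1, 1, k, k, k), dim=1)
s = torch.tanh(s.squeeze(1))   # log-scale of the affine part, squashed as above
t = t.squeeze(1)               # translation of the affine part
scales = scales.clamp(min=-7)  # same clamp as the Flow++ code above

print(s.shape, t.shape, pi.shape)  # (2, 3, 8, 1), (2, 3, 8, 1), (2, 4, 3, 8, 1)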
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/register_oid.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/register_oid.py
deleted file mode 100644
index bd281f53f07074740b453838ba32f42f81a28383..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/register_oid.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Xingyi Zhou from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/coco.py
-import copy
-import io
-import logging
-import contextlib
-import os
-import datetime
-import json
-import numpy as np
-
-from PIL import Image
-
-from fvcore.common.timer import Timer
-from fvcore.common.file_io import PathManager, file_lock
-from detectron2.structures import BoxMode, PolygonMasks, Boxes
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-logger = logging.getLogger(__name__)
-
-"""
-This file contains functions to register a COCO-format dataset to the DatasetCatalog.
-"""
-
-__all__ = ["register_coco_instances", "register_coco_panoptic_separated"]
-
-
-
-def register_oid_instances(name, metadata, json_file, image_root):
-    """
-    Register an Open Images dataset stored in COCO json format to the DatasetCatalog.
-    """
- # 1. register a function which returns dicts
- DatasetCatalog.register(name, lambda: load_coco_json_mem_efficient(
- json_file, image_root, name))
-
- # 2. Optionally, add metadata about this dataset,
- # since they might be useful in evaluation, visualization or logging
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root, evaluator_type="oid", **metadata
- )
-
-
-def load_coco_json_mem_efficient(json_file, image_root, dataset_name=None, extra_annotation_keys=None):
- """
- Actually not mem efficient
- """
- from pycocotools.coco import COCO
-
- timer = Timer()
- json_file = PathManager.get_local_path(json_file)
- with contextlib.redirect_stdout(io.StringIO()):
- coco_api = COCO(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds()))
-
- id_map = None
- if dataset_name is not None:
- meta = MetadataCatalog.get(dataset_name)
- cat_ids = sorted(coco_api.getCatIds())
- cats = coco_api.loadCats(cat_ids)
- # The categories in a custom json file may not be sorted.
- thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])]
- meta.thing_classes = thing_classes
-
- if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)):
- if "coco" not in dataset_name:
- logger.warning(
- """
- Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
- """
- )
- id_map = {v: i for i, v in enumerate(cat_ids)}
- meta.thing_dataset_id_to_contiguous_id = id_map
-
- # sort indices for reproducible results
- img_ids = sorted(coco_api.imgs.keys())
- imgs = coco_api.loadImgs(img_ids)
- logger.info("Loaded {} images in COCO format from {}".format(len(imgs), json_file))
-
-    dataset_dicts = []
-    num_instances_without_valid_segmentation = 0
-
- ann_keys = ["iscrowd", "bbox", "category_id"] + (extra_annotation_keys or [])
-
- for img_dict in imgs:
- record = {}
- record["file_name"] = os.path.join(image_root, img_dict["file_name"])
- record["height"] = img_dict["height"]
- record["width"] = img_dict["width"]
- image_id = record["image_id"] = img_dict["id"]
- anno_dict_list = coco_api.imgToAnns[image_id]
- if 'neg_category_ids' in img_dict:
- record['neg_category_ids'] = \
- [id_map[x] for x in img_dict['neg_category_ids']]
-
- objs = []
- for anno in anno_dict_list:
- assert anno["image_id"] == image_id
-
- assert anno.get("ignore", 0) == 0
-
- obj = {key: anno[key] for key in ann_keys if key in anno}
-
- segm = anno.get("segmentation", None)
- if segm: # either list[list[float]] or dict(RLE)
- if not isinstance(segm, dict):
- # filter out invalid polygons (< 3 points)
- segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
- if len(segm) == 0:
- num_instances_without_valid_segmentation += 1
- continue # ignore this instance
- obj["segmentation"] = segm
-
- obj["bbox_mode"] = BoxMode.XYWH_ABS
-
- if id_map:
- obj["category_id"] = id_map[obj["category_id"]]
- objs.append(obj)
- record["annotations"] = objs
- dataset_dicts.append(record)
-
- del coco_api
- return dataset_dicts
\ No newline at end of file
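A hedged usage sketch for register_oid_instances above, assuming the function from this module is in scope; the dataset name and file paths are hypothetical, and loading happens lazily only when the catalog entry is requested:

from detectron2.data import DatasetCatalog, MetadataCatalog

register_oid_instances(
    name="oid_val_hypothetical",
    metadata={},
    json_file="datasets/oid/annotations/val.json",  # hypothetical path
    image_root="datasets/oid/images/val",           # hypothetical path
)

dicts = DatasetCatalog.get("oid_val_hypothetical")  # triggers load_coco_json_mem_efficient
meta = MetadataCatalog.get("oid_val_hypothetical")
print(len(dicts), meta.evaluator_type)              # number of images, "oid"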
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_w32.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_w32.py
deleted file mode 100644
index 3d9e06f029e46c14cb9ddb39319cabe86fef9b44..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_w32.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.25,
- windows=True,
- hybrid=False,
- window_size=32
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
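The config above only overrides the UniFormer backbone, heads, optimizer and schedule on top of the _base_ files. A hedged sketch of inspecting such an mmcv-style config, assuming mmcv 1.x is installed and the relative _base_ paths resolve from the working directory:

from mmcv import Config

cfg = Config.fromfile("exp/upernet_global_small/test_config_w32.py")
print(cfg.model.backbone.type, cfg.model.backbone.window_size)  # 'UniFormer' 32
print(cfg.optimizer.type, cfg.optimizer.lr)                     # 'AdamW' 6e-05
print(cfg.data.samples_per_gpu)                                 # 2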
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/train_util.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/train_util.py
deleted file mode 100644
index 7d48cc7beba640703e744112aa2ec458a195a16b..0000000000000000000000000000000000000000
--- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/train_util.py
+++ /dev/null
@@ -1,204 +0,0 @@
-import torch
-import numpy as np
-from .mesh_util import *
-from .sample_util import *
-from .geometry import *
-import cv2
-from PIL import Image
-from tqdm import tqdm
-
-def reshape_multiview_tensors(image_tensor, calib_tensor):
- # Careful here! Because we put single view and multiview together,
- # the returned tensor.shape is 5-dim: [B, num_views, C, W, H]
- # So we need to convert it back to 4-dim [B*num_views, C, W, H]
-    # Don't worry, the classifier will handle multi-view cases
- image_tensor = image_tensor.view(
- image_tensor.shape[0] * image_tensor.shape[1],
- image_tensor.shape[2],
- image_tensor.shape[3],
- image_tensor.shape[4]
- )
- calib_tensor = calib_tensor.view(
- calib_tensor.shape[0] * calib_tensor.shape[1],
- calib_tensor.shape[2],
- calib_tensor.shape[3]
- )
-
- return image_tensor, calib_tensor
-
-
-def reshape_sample_tensor(sample_tensor, num_views):
- if num_views == 1:
- return sample_tensor
- # Need to repeat sample_tensor along the batch dim num_views times
- sample_tensor = sample_tensor.unsqueeze(dim=1)
- sample_tensor = sample_tensor.repeat(1, num_views, 1, 1)
- sample_tensor = sample_tensor.view(
- sample_tensor.shape[0] * sample_tensor.shape[1],
- sample_tensor.shape[2],
- sample_tensor.shape[3]
- )
- return sample_tensor
-
-
-def gen_mesh(opt, net, cuda, data, save_path, use_octree=True):
- image_tensor = data['img'].to(device=cuda)
- calib_tensor = data['calib'].to(device=cuda)
-
- net.filter(image_tensor)
-
- b_min = data['b_min']
- b_max = data['b_max']
- try:
- save_img_path = save_path[:-4] + '.png'
- save_img_list = []
- for v in range(image_tensor.shape[0]):
- save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0
- save_img_list.append(save_img)
- save_img = np.concatenate(save_img_list, axis=1)
- Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path)
-
- verts, faces, _, _ = reconstruction(
- net, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree)
- verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float()
- xyz_tensor = net.projection(verts_tensor, calib_tensor[:1])
- uv = xyz_tensor[:, :2, :]
- color = index(image_tensor[:1], uv).detach().cpu().numpy()[0].T
- color = color * 0.5 + 0.5
- save_obj_mesh_with_color(save_path, verts, faces, color)
- except Exception as e:
- print(e)
- print('Can not create marching cubes at this time.')
-
-def gen_mesh_color(opt, netG, netC, cuda, data, save_path, use_octree=True):
- image_tensor = data['img'].to(device=cuda)
- calib_tensor = data['calib'].to(device=cuda)
-
- netG.filter(image_tensor)
- netC.filter(image_tensor)
- netC.attach(netG.get_im_feat())
-
- b_min = data['b_min']
- b_max = data['b_max']
- try:
- save_img_path = save_path[:-4] + '.png'
- save_img_list = []
- for v in range(image_tensor.shape[0]):
- save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0
- save_img_list.append(save_img)
- save_img = np.concatenate(save_img_list, axis=1)
- Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path)
-
- verts, faces, _, _ = reconstruction(
- netG, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree)
-
- # Now Getting colors
- verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float()
- verts_tensor = reshape_sample_tensor(verts_tensor, opt.num_views)
- color = np.zeros(verts.shape)
- interval = 10000
- for i in range(len(color) // interval):
- left = i * interval
- right = i * interval + interval
- if i == len(color) // interval - 1:
- right = -1
- netC.query(verts_tensor[:, :, left:right], calib_tensor)
- rgb = netC.get_preds()[0].detach().cpu().numpy() * 0.5 + 0.5
- color[left:right] = rgb.T
-
- save_obj_mesh_with_color(save_path, verts, faces, color)
- except Exception as e:
- print(e)
- print('Can not create marching cubes at this time.')
-
-def adjust_learning_rate(optimizer, epoch, lr, schedule, gamma):
- """Sets the learning rate to the initial LR decayed by schedule"""
- if epoch in schedule:
- lr *= gamma
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
- return lr
-
-
-def compute_acc(pred, gt, thresh=0.5):
- '''
- return:
- IOU, precision, and recall
- '''
- with torch.no_grad():
- vol_pred = pred > thresh
- vol_gt = gt > thresh
-
- union = vol_pred | vol_gt
- inter = vol_pred & vol_gt
-
- true_pos = inter.sum().float()
-
- union = union.sum().float()
- if union == 0:
- union = 1
- vol_pred = vol_pred.sum().float()
- if vol_pred == 0:
- vol_pred = 1
- vol_gt = vol_gt.sum().float()
- if vol_gt == 0:
- vol_gt = 1
- return true_pos / union, true_pos / vol_pred, true_pos / vol_gt
-
-
-def calc_error(opt, net, cuda, dataset, num_tests):
- if num_tests > len(dataset):
- num_tests = len(dataset)
- with torch.no_grad():
-        error_arr, IOU_arr, prec_arr, recall_arr = [], [], [], []
- for idx in tqdm(range(num_tests)):
- data = dataset[idx * len(dataset) // num_tests]
- # retrieve the data
- image_tensor = data['img'].to(device=cuda)
- calib_tensor = data['calib'].to(device=cuda)
- sample_tensor = data['samples'].to(device=cuda).unsqueeze(0)
- if opt.num_views > 1:
- sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views)
- label_tensor = data['labels'].to(device=cuda).unsqueeze(0)
-
- res, error = net.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor)
-
- IOU, prec, recall = compute_acc(res, label_tensor)
-
- # print(
- # '{0}/{1} | Error: {2:06f} IOU: {3:06f} prec: {4:06f} recall: {5:06f}'
- # .format(idx, num_tests, error.item(), IOU.item(), prec.item(), recall.item()))
-            error_arr.append(error.item())
- IOU_arr.append(IOU.item())
- prec_arr.append(prec.item())
- recall_arr.append(recall.item())
-
-        return np.average(error_arr), np.average(IOU_arr), np.average(prec_arr), np.average(recall_arr)
-
-def calc_error_color(opt, netG, netC, cuda, dataset, num_tests):
- if num_tests > len(dataset):
- num_tests = len(dataset)
- with torch.no_grad():
- error_color_arr = []
-
- for idx in tqdm(range(num_tests)):
- data = dataset[idx * len(dataset) // num_tests]
- # retrieve the data
- image_tensor = data['img'].to(device=cuda)
- calib_tensor = data['calib'].to(device=cuda)
- color_sample_tensor = data['color_samples'].to(device=cuda).unsqueeze(0)
-
- if opt.num_views > 1:
- color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views)
-
- rgb_tensor = data['rgbs'].to(device=cuda).unsqueeze(0)
-
- netG.filter(image_tensor)
- _, errorC = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor)
-
- # print('{0}/{1} | Error inout: {2:06f} | Error color: {3:06f}'
- # .format(idx, num_tests, errorG.item(), errorC.item()))
- error_color_arr.append(errorC.item())
-
- return np.average(error_color_arr)
-
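compute_acc above treats predictions and ground truth as thresholded occupancy volumes and reports IoU, precision and recall. A small self-contained check of the same logic on a toy tensor:

import torch

pred = torch.tensor([0.9, 0.8, 0.2, 0.6])  # predicted occupancy
gt   = torch.tensor([1.0, 0.0, 0.0, 1.0])  # ground-truth occupancy

vol_pred, vol_gt = pred > 0.5, gt > 0.5
inter = (vol_pred & vol_gt).sum().float()  # 2 true positives
union = (vol_pred | vol_gt).sum().float()  # 3 voxels in the union
iou, prec, recall = inter / union, inter / vol_pred.sum(), inter / vol_gt.sum()
print(iou.item(), prec.item(), recall.item())  # ~0.667, ~0.667, 1.0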
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/tools/train_pl.py b/spaces/NAACL2022/CLIP-Caption-Reward/tools/train_pl.py
deleted file mode 100644
index 48ac2d0cf68466bd0e39f9c994056063a0529f27..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/tools/train_pl.py
+++ /dev/null
@@ -1,709 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-
-import numpy as np
-
-import time
-import os
-from collections import defaultdict
-
-import captioning.utils.opts as opts
-import captioning.models as models
-from captioning.data.pth_loader import CaptionDataset
-import captioning.utils.eval_utils as eval_utils
-import captioning.utils.misc as utils
-from captioning.utils.rewards import init_scorer, get_self_critical_reward
-from captioning.modules.loss_wrapper import LossWrapper
-
-import pytorch_lightning as pl
-
-import detectron2.utils.comm as d2comm
-from detectron2.utils.env import seed_all_rng
-seed_all_rng(1234)
-
-
-class LitModel(pl.LightningModule):
- def __init__(self, opt):
- super().__init__()
- self.opt = opt
-        # Initialize dataset
- self.dataset = CaptionDataset(opt)
- opt.vocab_size = self.dataset.vocab_size
- opt.seq_length = self.dataset.seq_length
- self.batch_size = opt.batch_size
-
- # Build model
- opt.vocab = self.dataset.get_vocab()
- model = models.setup(opt)
- # print(model)
- del opt.vocab
-
- # wrapper with loss in it.
- lw_model = LossWrapper(model, opt)
-
- self.model = model
- self.lw_model = lw_model
-
- self.struc_flag = None
- self.sc_flag = None
-
- # if self.opt.use_clipscore:
- # if self.opt.use_clipscore or os.getenv('EVALUATE', '0') == '1':
- # if CLIP-S+Grammar is used in reward -> Launch another CLIP-S where parameter is unchanged
- if getattr(self.opt, 'use_grammar', False):
- from captioning.utils.clipscore import CLIPScore
- self.val_clipscore_model = CLIPScore(
- mode=opt.clipscore_mode, use_grammar=False)
- for p in self.val_clipscore_model.parameters():
- p.requires_grad = False
- else:
- if self.lw_model.clipscore_model is not None:
- self.val_clipscore_model = self.lw_model.clipscore_model
- else:
- from captioning.utils.clipscore import CLIPScore
- self.val_clipscore_model = CLIPScore(
- mode=opt.clipscore_mode, use_grammar=False)
- for p in self.val_clipscore_model.parameters():
- p.requires_grad = False
- self.val_clipscore_model.eval()
-
- # BERTSCORE
- from bert_score import BERTScorer
- self.bert_scorer = BERTScorer(
- lang="en",
- # rescale_with_baseline=True,
- rescale_with_baseline=False,
- device='cpu'
- )
-
- def forward(self, *args, **kwargs):
- """
-        I hate this design. Never pretend it is an nn.Module.
- """
- raise NotImplementedError
-
- def train_dataloader(self):
- train_dataset = torch.utils.data.Subset(
- self.dataset,
- self.dataset.split_ix['train']
- )
-
- train_loader = torch.utils.data.DataLoader(
- dataset=train_dataset,
- batch_size=self.batch_size,
- shuffle=True,
- num_workers=4,
- collate_fn=self.dataset.collate_func
- )
- return train_loader
-
- def val_dataloader(self, split='val'):
- val_dataset = torch.utils.data.Subset(
- self.dataset,
- self.dataset.split_ix[split]
- )
- val_loader = torch.utils.data.DataLoader(
- val_dataset,
- batch_size=self.batch_size,
- shuffle=False,
- num_workers=4,
- drop_last=False,
- collate_fn=self.dataset.collate_func
- )
- return val_loader
-
- def test_dataloader(self):
- return self.val_dataloader('test')
-
- def training_step(self, data, batch_idx):
- sc_flag, struc_flag = self.sc_flag, self.struc_flag
-
- tmp = [data['fc_feats'], data['att_feats'],
- data['labels'], data['masks'], data['att_masks']]
- fc_feats, att_feats, labels, masks, att_masks = tmp
- if int(os.getenv('M2_cider', '0')) != 0:
- data['gts'] = data['rawgts']
-
- if self.opt.use_clipscore:
- clip_vis_feats = data['clip_vis_feats']
- model_out = self.lw_model(fc_feats, att_feats, labels, masks, att_masks,
- data['gts'], torch.arange(0, len(data['gts'])), sc_flag, struc_flag,
- clip_vis_feats=clip_vis_feats)
- else:
- model_out = self.lw_model(fc_feats, att_feats, labels, masks, att_masks,
- data['gts'], torch.arange(0, len(data['gts'])), sc_flag, struc_flag)
- loss = model_out['loss']
-
- data_time = self.trainer.profiler.recorded_durations["get_train_batch"][-1]
- data_time = torch.tensor(data_time)
-
- logger_logs = model_out.copy()
- # if struc_flag or sc_flag:
- # logger_logs['reward'] = model_out['reward'].mean()
- # logger_logs['reward_var'] = model_out['reward'].var(1).mean()
- if struc_flag or sc_flag:
- logger_logs['reward'] = model_out['reward'].mean()
- for k in ['CLIP-S', 'RefCLIP-S', 'CIDEr', 'grammar_reward']:
- if k in model_out:
- logger_logs[k] = model_out[k]
- if struc_flag:
- logger_logs['reward_var'] = model_out['reward'].var(1).mean()
-
- logger_logs['scheduled_sampling_prob'] = torch.tensor(
- self.model.ss_prob)
- # logger_logs['training_loss'] = loss
- logger_logs['loss'] = loss
- logger_logs['data_time'] = data_time
-
- # UserWarning: The {progress_bar:dict keyword} was deprecated in 0.9.1 and will be removed in 1.0.0
- # Please use self.log(...) inside the lightningModule instead.
-
- # # log on a step or aggregate epoch metric to the logger and/or progress bar
- # # (inside LightningModule)
- # self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
- # warnings.warn(*args, **kwargs)
- # UserWarning: The {log:dict keyword} was deprecated in 0.9.1 and will be removed in 1.0.0
- # Please use self.log(...) inside the lightningModule instead.
-
- # output = {
- # 'loss': loss,
- # 'log': logger_logs,
- # 'progress_bar': {'data_time': data_time}
- # }
-
- for k, v in logger_logs.items():
- if k in ['reward', 'reward_var', 'data_time', 'CLIP-S', 'RefCLIP-S', 'CIDEr', 'grammar_reward']:
- self.log('train/'+k, v, prog_bar=True)
- else:
- self.log('train/'+k, v)
-
- return loss
-
- def validation_step(self, data, batch_idx):
- model = self.model
- crit = self.lw_model.crit
-
- opt = self.opt
- eval_kwargs = {'dataset': opt.input_json}
- eval_kwargs.update(vars(opt))
-
- # CLIPScore
- use_grammar = getattr(self.opt, 'use_grammar', False)
- joint_out = getattr(self.opt, 'joint_out', False)
-
- verbose = eval_kwargs.get('verbose', True)
- verbose_beam = eval_kwargs.get('verbose_beam', 0)
- verbose_loss = eval_kwargs.get('verbose_loss', 1)
- # num_images = eval_kwargs.get('num_images', eval_kwargs.get('val_images_use', -1))
- # lang_eval = eval_kwargs.get('language_eval', 0)
- dataset = eval_kwargs.get('dataset', 'coco')
- beam_size = eval_kwargs.get('beam_size', 1)
- sample_n = eval_kwargs.get('sample_n', 1)
- remove_bad_endings = eval_kwargs.get('remove_bad_endings', 0)
- # Use this nasty way to make other code clean since it's a global configuration
- os.environ["REMOVE_BAD_ENDINGS"] = str(remove_bad_endings)
-
- predictions = []
- n_predictions = []
-
- loss = torch.tensor(0)
- if data.get('labels', None) is not None and verbose_loss:
- # forward the model to get loss
- tmp = [data['fc_feats'], data['att_feats'],
- data['labels'], data['masks'], data['att_masks']]
- fc_feats, att_feats, labels, masks, att_masks = tmp
-
- loss = crit(model(fc_feats, att_feats,
- labels[..., :-1], att_masks), labels[..., 1:], masks[..., 1:])
-
- # forward the model to also get generated samples for each image
- # Only leave one feature for each image, in case duplicate sample
- tmp_eval_kwargs = eval_kwargs.copy()
- tmp_eval_kwargs.update({'sample_n': 1})
- seq, seq_logprobs = model(
- fc_feats, att_feats, att_masks, opt=tmp_eval_kwargs, mode='sample')
- seq = seq.data
- entropy = - (F.softmax(seq_logprobs, dim=2) *
- seq_logprobs).sum(2).sum(1) / ((seq > 0).to(seq_logprobs).sum(1)+1)
- perplexity = - \
- seq_logprobs.gather(2, seq.unsqueeze(2)).squeeze(
- 2).sum(1) / ((seq > 0).to(seq_logprobs).sum(1)+1)
-
- # Print beam search
- if beam_size > 1 and verbose_beam:
- for i in range(fc_feats.shape[0]):
- print('\n'.join([utils.decode_sequence(model.vocab, _[
- 'seq'].unsqueeze(0))[0] for _ in model.done_beams[i]]))
- print('--' * 10)
- sents = utils.decode_sequence(model.vocab, seq)
-
- # if self.opt.use_clipscore or os.getenv('EVALUATE', '0') == '1':
- # text_feat = self.lw_model.clipscore_model.text_extract(sents)
- text_feat = self.val_clipscore_model.text_extract(sents, proj_norm=False)
-
- text_cont_feat = self.val_clipscore_model.clip_model.text_projection(text_feat)
- text_cont_feat = text_cont_feat / text_cont_feat.norm(dim=-1, keepdim=True)
-
- vis_feat = data['clip_vis_feats']
- # if self.opt.clipscore_mode == 'clip_s':
- # clip_s = self.val_clipscore_model(text_feat=text_cont_feat, img_feat=vis_feat, mode='clip_s')
-
- # elif self.opt.clipscore_mode == 'refclip_s':
- clip_s = self.val_clipscore_model(text_feat=text_cont_feat, img_feat=vis_feat, mode='clip_s')
- # ref_text = utils.decode_sequence(model.vocab, data['gts'])
-
- gt_indices = torch.arange(0, len(data['gts']))
- data_gts = [data['gts'][_] for _ in gt_indices.tolist()]
-
- B = len(data_gts)
-
- gts = []
- gts_valid_mask = []
- max_n_refs = max([len(_gts) for _gts in data_gts])
- for i in range(len(data_gts)):
- _gts = utils.decode_sequence(model.vocab, data_gts[i])
- # pad references
- n_ref = len(_gts)
- _gts.extend([''] * (max_n_refs - n_ref))
- gts.extend(_gts)
- gts_valid_mask.extend([1] * n_ref + [0] * (max_n_refs - n_ref))
- assert len(gts) == B * max_n_refs
- assert len(gts_valid_mask) == B * max_n_refs
-
- ref_text = gts
- ref_text_mask = gts_valid_mask
-
- refclip_s = self.val_clipscore_model(
- text_feat=text_cont_feat, img_feat=vis_feat,
- ref_text=ref_text, ref_text_mask=ref_text_mask, mode='refclip_s')
-
- # use_grammar = getattr(self.opt, 'use_grammar', False)
- # joint_out = getattr(self.opt, 'joint_out', False)
- if use_grammar and not joint_out:
- with torch.no_grad():
- # grammar_logit = self.val_clipscore_model.grammar_score_head(text_feat.view(-1, 512))
- grammar_logit = self.lw_model.clipscore_model.grammar_score_head(text_feat.view(-1, 512))
- grammar_prob = torch.softmax(grammar_logit, dim=-1)[:, 1]
-
-
- # BERTScore
- if next(self.bert_scorer._model.parameters()).device != self.device:
- self.bert_scorer._model.to(self.device)
- self.bert_scorer.device = self.device
-
-
- # [B*K] -> [B, K]
- ref_text_per_example = []
- for i in range(B):
- ref_text_list_example = []
- for k in range(max_n_refs):
- ref = ref_text[i * max_n_refs + k]
- if len(ref) > 0:
- ref_text_list_example.append(ref)
- # assert len(ref_text_list_example) == max_n_refs
- ref_text_per_example.append(ref_text_list_example)
- assert len(ref_text_per_example) == B
-
- P, R, F1 = self.bert_scorer.score(
- sents,
- ref_text_per_example,
- )
- bertscore_f1 = F1
- # print('Example 5:')
- # for i in range(5):
- # print('Generated:', sents[i])
- # print('ref_text:', ref_text_per_example[i])
- # print('BERT-Score:', F1[i].item())
-
-
- for k, sent in enumerate(sents):
- entry = {'image_id': data['infos'][k]['id'], 'caption': sent,
- 'perplexity': perplexity[k].item(), 'entropy': entropy[k].item()}
- if self.opt.use_clipscore or os.getenv('EVALUATE', '0') == '1':
- # if self.opt.clipscore_mode == 'clip_s':
- # entry['clipscore'] = clipscore[k].item()
- # entry['CLIP-S'] = clip_s[k].item()
- # elif self.opt.clipscore_mode == 'refclip_s':
- entry['CLIP-S'] = clip_s[k].item()
- entry['RefCLIP-S'] = refclip_s[k].item()
-
- if use_grammar and not joint_out:
- entry['grammar_prob'] = grammar_prob[k].item()
-
- # BERT-S
- entry['BERT-S'] = bertscore_f1[k].item()
-
- if eval_kwargs.get('dump_path', 0) == 1:
- entry['file_name'] = data['infos'][k]['file_path']
- predictions.append(entry)
- if eval_kwargs.get('dump_images', 0) == 1:
- # dump the raw image to vis/ folder
- cmd = 'cp "' + os.path.join(eval_kwargs['image_root'], data['infos'][k]['file_path']) + \
- '" vis/imgs/img' + \
- str(len(predictions)) + '.jpg' # bit gross
- print(cmd)
- os.system(cmd)
-
- if verbose:
- print('image %s: %s' %
- (entry['image_id'], entry['caption']))
-
- if sample_n > 1:
- eval_utils.eval_split_n(model, n_predictions, [
- fc_feats, att_feats, att_masks, data], eval_kwargs)
-
- output = {
- # 'val_loss': loss,
- 'loss': loss,
- 'predictions': predictions,
- 'n_predictions': n_predictions,
- }
- return output
-
- def test_step(self, *args, **kwargs):
- return self.validation_step(*args, **kwargs)
-
- def validation_epoch_end(self, outputs, split='val'):
- outputs = d2comm.gather(outputs)
- # master node
- if d2comm.is_main_process():
- assert self.trainer.node_rank == 0 and self.trainer.local_rank == 0
- outputs = sum(outputs, [])
-
- opt = self.opt
- # val_loss_mean = sum([_['val_loss']
- # val_loss_mean = sum([_['val_loss'].cpu()
- val_loss_mean = sum([_['loss'].cpu()
- for _ in outputs]) / len(outputs)
-
- predictions = sum([_['predictions'] for _ in outputs], [])
- if len(outputs[0]['n_predictions']) != 0:
- n_predictions = sum([_['n_predictions'] for _ in outputs], [])
- else:
- n_predictions = []
-
- lang_stats = None
- if len(n_predictions) > 0 and 'perplexity' in n_predictions[0]:
- n_predictions = sorted(
- n_predictions, key=lambda x: x['perplexity'])
-
- if not os.path.isdir('eval_results'):
- os.mkdir('eval_results')
- torch.save((predictions, n_predictions), os.path.join(
- 'eval_results/', '.saved_pred_' + opt.id + '_' + split + '.pth'))
-
- if opt.language_eval:
- lang_stats = eval_utils.language_eval(
- opt.input_json, predictions, n_predictions, vars(opt), split)
-
- if opt.reduce_on_plateau:
- optimizer = self.trainer.optimizers[0]
- if 'CIDEr' in lang_stats:
- optimizer.scheduler_step(-lang_stats['CIDEr'])
- else:
- optimizer.scheduler_step(val_loss_mean)
-
- # out = {
- # 'val_loss': val_loss_mean
- # }
- out = {
- 'loss': val_loss_mean
- }
- out.update(lang_stats)
- # out['to_monitor'] = lang_stats['CIDEr'] if lang_stats is not None else -val_loss_mean
- if self.opt.use_clipscore or os.getenv('EVALUATE', '0') == '1':
- # if self.opt.clipscore_mode == 'clip_s':
- # out['clipscore'] = sum([p['clipscore'] for p in predictions]) / len(predictions)
- # print('CLIPScore', out['clipscore'])
- # out['CLIP-S'] = sum([p['CLIP-S'] for p in predictions]) / len(predictions)
- # print('CLIP-S', out['CLIP-S'])
- # elif self.opt.clipscore_mode == 'refclip_s':
- out['CLIP-S'] = sum([p['CLIP-S'] for p in predictions]) / len(predictions)
- print('CLIP-S', out['CLIP-S'])
-
- out['RefCLIP-S'] = sum([p['RefCLIP-S'] for p in predictions]) / len(predictions)
- print('RefCLIP-S', out['RefCLIP-S'])
-
- if getattr(self.opt, 'use_grammar', False) and not getattr(self.opt, 'joint_out', False):
- out['grammar_prob'] = sum([p['grammar_prob'] for p in predictions]) / len(predictions)
- print('grammar_prob', out['grammar_prob'])
-
- out['BERT-S'] = sum([p['BERT-S'] for p in predictions]) / len(predictions)
- print('BERT-S', out['BERT-S'])
- else:
- out = {}
-
- out = d2comm.all_gather(out)[0] # Only the one from master node
- assert len(out) > 0 # make sure the head has index 0
-
- # must all be tensors
- out = {k: torch.tensor(v) if not torch.is_tensor(
- v) else v for k, v in out.items()}
-
- # return {
- # 'progress_bar': {'val_loss': out['val_loss']},
- # 'log': out,
- # }
- for k, v in out.items():
- # if k in ['loss', 'clipscore', 'RefCLIP-S', 'CIDEr']:
- # if split != 'test':
- # self.log(f'{split}/{k}', v, prog_bar=True)
- # elif k == 'to_monitor':
- # if split != 'test':
- # self.log(f'{split}/{k}', v)
- # else:
- self.log(f'{split}/{k}', v)
-
- def test_epoch_end(self, outputs):
- # out = self.validation_epoch_end(outputs, 'test')
- # out['progress_bar'] = {
- # # 'test_loss': out['progress_bar']['val_loss']
- # 'test_loss': out['progress_bar']['loss']
- # }
- # out['log']['test_loss'] = out['log']['val_loss']
- # del out['log']['val_loss']
- # del out['log']['to_monitor']
-
- # out['log'] = {'test_'+k if 'test' not in k else k:v \
- # for k,v in out['log'].items()}
-
- # return out
- self.validation_epoch_end(outputs, 'test')
-
- def configure_optimizers(self):
- opt = self.opt
- model = self.model
-
- parameters = [p for p in model.parameters() if p.requires_grad]
-
- if opt.noamopt:
- # assert opt.caption_model in ['transformer', 'bert', 'm2transformer'], 'noamopt can only work with transformer'
- optimizer = utils.get_std_opt(
- model, optim_func=opt.optim, factor=opt.noamopt_factor, warmup=opt.noamopt_warmup)
- elif opt.reduce_on_plateau:
- # optimizer = utils.build_optimizer(model.parameters(), opt)
- optimizer = utils.build_optimizer(parameters, opt)
- optimizer = utils.ReduceLROnPlateau(optimizer,
- factor=opt.reduce_on_plateau_factor,
- patience=opt.reduce_on_plateau_patience)
- else:
- # optimizer = utils.build_optimizer(model.parameters(), opt)
- optimizer = utils.build_optimizer(parameters, opt)
- return [optimizer], []
-
- def optimizer_step(self, epoch, batch_idx, optimizer,
- optimizer_idx, *args, **kwargs):
- # warm up lr
- opt = self.opt
- iteration = self.trainer.global_step
- if opt.use_warmup and (iteration < opt.noamopt_warmup):
- opt.current_lr = opt.learning_rate * \
- (iteration+1) / opt.noamopt_warmup
- utils.set_lr(optimizer, opt.current_lr)
-
- super().optimizer_step(epoch, batch_idx, optimizer,
- optimizer_idx, *args, **kwargs)
-
- def state_dict(self):
- """
- Save the model state dict as well as opt and vocab
- """
- state_dict = self.model.state_dict()
- device = next(iter(state_dict.values())).device
- assert '_vocab' not in state_dict and '_opt' not in state_dict, 'Just in case'
- state_dict.update({
- '_vocab': utils.serialize_to_tensor(self.model.vocab).to(device),
- '_opt': utils.serialize_to_tensor(self.opt).to(device)
- })
- return state_dict
-
- def load_state_dict(self, state_dict=None, strict=True):
- if '_vocab' in state_dict:
- self.model.vocab = utils.deserialize(state_dict['_vocab'])
- del state_dict['_vocab']
- # elif strict:
- # raise KeyError
- if '_opt' in state_dict:
- saved_model_opt = utils.deserialize(state_dict['_opt'])
- del state_dict['_opt']
- opt = self.opt
-            # Make sure the saved opt is compatible with the current opt
- need_be_same = ["caption_model",
- "rnn_type", "rnn_size", "num_layers"]
- for checkme in need_be_same:
- if getattr(saved_model_opt, checkme) in ['updown', 'topdown'] and \
- getattr(opt, checkme) in ['updown', 'topdown']:
- continue
- assert getattr(saved_model_opt, checkme) == getattr(
- opt, checkme), "Command line argument and saved model disagree on '%s' " % checkme
- # elif strict:
- # raise KeyError
- self.model.load_state_dict(state_dict, strict)
-
-
-class OnEpochStartCallback(pl.Callback):
-
- def on_epoch_start(self, trainer, pl_module):
- # Update lr/training stage/scheduled sampling prob etc.
- opt = pl_module.opt
- model = pl_module.model
- epoch = trainer.current_epoch
- optimizer = trainer.optimizers[0]
-
- if not opt.noamopt and not opt.reduce_on_plateau:
- # Assign the learning rate
- if epoch > opt.learning_rate_decay_start and opt.learning_rate_decay_start >= 0:
- frac = (
- epoch - opt.learning_rate_decay_start) // opt.learning_rate_decay_every
- decay_factor = opt.learning_rate_decay_rate ** frac
- opt.current_lr = opt.learning_rate * decay_factor
- else:
- opt.current_lr = opt.learning_rate
- utils.set_lr(optimizer, opt.current_lr) # set the decayed rate
- # Assign the scheduled sampling prob
- if epoch > opt.scheduled_sampling_start and opt.scheduled_sampling_start >= 0:
- frac = (
- epoch - opt.scheduled_sampling_start) // opt.scheduled_sampling_increase_every
- opt.ss_prob = min(opt.scheduled_sampling_increase_prob *
- frac, opt.scheduled_sampling_max_prob)
- model.ss_prob = opt.ss_prob
-
- # If start self critical training
- if opt.self_critical_after != -1 and epoch >= opt.self_critical_after:
- sc_flag = True
- init_scorer(opt.cached_tokens)
- else:
- sc_flag = False
-
- # If start structure loss training
- if opt.structure_after != -1 and epoch >= opt.structure_after:
- struc_flag = True
- init_scorer(opt.cached_tokens)
- else:
- struc_flag = False
-
- pl_module.struc_flag = struc_flag
- pl_module.sc_flag = sc_flag
-
-
-class ModelCheckpoint(pl.callbacks.ModelCheckpoint):
-
- def on_keyboard_interrupt(self, trainer, pl_module):
- # Save model when keyboard interrupt
- filepath = os.path.join(self.dirpath, self.prefix + 'interrupt.ckpt')
- self._save_model(filepath)
-
-
-opt = opts.parse_opt()
-
-checkpoint_callback = ModelCheckpoint(
- filepath=opt.checkpoint_path,
- # dirpath=opt.checkpoint_path,
- save_last=True,
- save_top_k=1,
- verbose=True,
- # monitor='to_monitor',
- # monitor='val/to_monitor',
- monitor='val/CIDEr',
- mode='max',
- # prefix=opt.id+'_',
- prefix=opt.id,
- # filename=f'{opt.id}_',
-)
-
-verbose = True
-# import torch
-# if torch.cuda.current_device() in [0, -1]:
-if 'LOCAL_RANK' in os.environ and os.environ['LOCAL_RANK'] != '0':
- verbose = False
-
-if verbose:
- print(opt)
- print("""
-            val_images_use,
-            save_checkpoint_every,
-            save_every_epoch,
-            save_history_ckpt will be ignored.
- """)
-
-# Lightning defines batch size as batch size per gpu
-assert opt.batch_size % torch.cuda.device_count() == 0
-opt.batch_size = opt.batch_size // torch.cuda.device_count()
-
-# If resume from last checkpoint
-# if opt.start_from is not None and os.path.isfile(os.path.join(opt.start_from, f'{opt.id}_last.ckpt')):
-# resume_from = os.path.join(opt.start_from, f'{opt.id}_last.ckpt')
-if opt.start_from is not None:
- resume_from = os.path.join(opt.start_from, f'{opt.id}-last.ckpt')
- if os.path.isfile(resume_from):
- if verbose:
- print('Loading checkpoint from', resume_from)
- else:
- print("Checkpoint not found:", resume_from)
- resume_from = None
-else:
- resume_from = None
-
-from pytorch_lightning.loggers import WandbLogger
-wandb_logger = WandbLogger(
- project='CLIP-ViL-COCOCaption',
- name=opt.id,
-)
-
-if verbose:
- wandb_logger.experiment.config.update(opt)
- from pathlib import Path
- import glob
- import wandb
- # src_dir = Path(__file__).resolve().parent.parent
- glob_str = "**/*.py"
- base_path = './'
- wandb.save(glob_str=glob_str, base_path=base_path)
-
- # code = wandb.Artifact('project-source', type='code')
- # for path in glob.glob('**/*.py', recursive=True):
- # code.add_file(path, name='source/'+path)
- # print(path)
- # wandb.run.use_artifact(code)
-
-
-
-
-lit = LitModel(opt)
-# warning grad_clip_mode is ignored.
-trainer = pl.Trainer(
- callbacks=[
- OnEpochStartCallback(),
- # pl.callbacks.lr_logger.LearningRateLogger()
- pl.callbacks.LearningRateMonitor()
- ],
- default_root_dir=opt.checkpoint_path,
- resume_from_checkpoint=resume_from,
- distributed_backend='ddp',
- check_val_every_n_epoch=1,
- max_epochs=opt.max_epochs,
- gradient_clip_val=opt.grad_clip_value,
- gpus=torch.cuda.device_count(),
- checkpoint_callback=checkpoint_callback,
- log_gpu_memory='min_max',
- # log_save_interval=opt.losses_log_every,
- log_every_n_steps=opt.losses_log_every,
- profiler=True,
- # profiler='simple',
- # row_log_interval=10, # what is it?
- flush_logs_every_n_steps=10,
- num_sanity_val_steps=0,
- # val_check_interval=0.01,
- # limit_train_batches=500,
- # progress_bar_refresh_rate=0,
- # fast_dev_run=True,
- precision=opt.precision,
- logger=wandb_logger
-)
-
-if os.getenv('EVALUATE', '0') == '1':
- trainer.test(lit)
-else:
- trainer.fit(lit)
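One detail worth isolating from validation_step above: every image's reference captions are padded to a common max_n_refs and a validity mask is kept so RefCLIP-S can ignore the padding. A minimal sketch of that padding scheme with hypothetical captions:

# Hypothetical references: image 0 has 3 captions, image 1 has 1.
data_gts_text = [
    ["a dog runs", "a dog on grass", "dog playing outside"],
    ["a red car"],
]

max_n_refs = max(len(refs) for refs in data_gts_text)  # 3
gts, gts_valid_mask = [], []
for refs in data_gts_text:
    gts.extend(refs + [""] * (max_n_refs - len(refs)))
    gts_valid_mask.extend([1] * len(refs) + [0] * (max_n_refs - len(refs)))

print(gts)             # flattened B * max_n_refs list, "" used as padding
print(gts_valid_mask)  # [1, 1, 1, 1, 0, 0]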
diff --git a/spaces/NN520/AI/src/lib/hooks/use-enter-submit.tsx b/spaces/NN520/AI/src/lib/hooks/use-enter-submit.tsx
deleted file mode 100644
index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/lib/hooks/use-enter-submit.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import { useRef, type RefObject } from 'react'
-
-export function useEnterSubmit(): {
-  formRef: RefObject<HTMLFormElement>
-  onKeyDown: (event: React.KeyboardEvent<HTMLFormElement>) => void
-} {
-  const formRef = useRef<HTMLFormElement>(null)
-
-  const handleKeyDown = (
-    event: React.KeyboardEvent<HTMLFormElement>
- ): void => {
- if (
- event.key === 'Enter' &&
- !event.shiftKey &&
- !event.nativeEvent.isComposing
- ) {
- formRef.current?.requestSubmit()
- event.preventDefault()
- }
- }
-
- return { formRef, onKeyDown: handleKeyDown }
-}
diff --git a/spaces/Nehal07/text-translator-with-voice/README.md b/spaces/Nehal07/text-translator-with-voice/README.md
deleted file mode 100644
index ee92ba1470f02b4f893cf18dd19ddcd81e1e672f..0000000000000000000000000000000000000000
--- a/spaces/Nehal07/text-translator-with-voice/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Translator
-emoji: 🏢
-colorFrom: yellow
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/__init__.py
deleted file mode 100644
index 69f21684872f72ae8ee26d9ff7d2d2b6e6d526c3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import criterions, models, modules # noqa
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/test_fsdp.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/test_fsdp.sh
deleted file mode 100644
index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/test_fsdp.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-rm -rf fsdp_dummy
-mkdir -p fsdp_dummy
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 5 --log-format json --log-interval 1 \
- --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \
- --restore-file x.pt "$@"
-
-# Now we try to load the checkpoint
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 2 --log-format json --log-interval 1 \
- --save-interval-updates 2 --save-dir fsdp_dummy
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/run_scripts/caption/coco_eval.py b/spaces/OFA-Sys/OFA-Image_Caption/run_scripts/caption/coco_eval.py
deleted file mode 100644
index c46ff0812fa0eecf46748fba9281af01abaee4df..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/run_scripts/caption/coco_eval.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import json
-import sys
-import os.path as op
-
-from pycocotools.coco import COCO
-from pycocoevalcap.eval import COCOEvalCap
-
-
-def evaluate_on_coco_caption(res_file, label_file, outfile=None):
- """
- res_file: txt file, each row is [image_key, json format list of captions].
- Each caption is a dict, with fields "caption", "conf".
- label_file: JSON file of ground truth captions in COCO format.
- """
- coco = COCO(label_file)
- cocoRes = coco.loadRes(res_file)
- cocoEval = COCOEvalCap(coco, cocoRes)
-
- # evaluate on a subset of images by setting
- # cocoEval.params['image_id'] = cocoRes.getImgIds()
- # please remove this line when evaluating the full validation set
- cocoEval.params['image_id'] = cocoRes.getImgIds()
-
- # evaluate results
- # SPICE will take a few minutes the first time, but speeds up due to caching
- cocoEval.evaluate()
- result = cocoEval.eval
- if not outfile:
- print(result)
- else:
- with open(outfile, 'w') as fp:
- json.dump(result, fp, indent=4)
- return result
-
-
-if __name__ == "__main__":
- if len(sys.argv) == 3:
- evaluate_on_coco_caption(sys.argv[1], sys.argv[2])
- elif len(sys.argv) == 4:
- evaluate_on_coco_caption(sys.argv[1], sys.argv[2], sys.argv[3])
- else:
- raise NotImplementedError
\ No newline at end of file
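A hedged usage sketch of evaluate_on_coco_caption above; the paths are hypothetical and must point to a result file accepted by COCO.loadRes and a COCO-format caption annotation file:

result = evaluate_on_coco_caption(
    res_file="results/caption_predictions.json",               # hypothetical path
    label_file="data/coco/annotations/captions_val2014.json",  # hypothetical path
    outfile="results/caption_metrics.json",
)
print(result.get("CIDEr"), result.get("Bleu_4"))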
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_utils.py
deleted file mode 100644
index 30f995b67acd39af5816d2eb412d6b4df7f44f8c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_utils.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import functools
-import sys
-import unittest
-
-import torch
-
-from fairseq.distributed import utils as dist_utils
-
-from .utils import objects_are_equal, spawn_and_init
-
-
-class DistributedTest(unittest.TestCase):
- def setUp(self):
- if not torch.cuda.is_available():
- raise unittest.SkipTest("CUDA not available, skipping test")
- if sys.platform == "win32":
- raise unittest.SkipTest("NCCL doesn't support Windows, skipping test")
- if torch.cuda.device_count() < 2:
- raise unittest.SkipTest("distributed tests require 2+ GPUs, skipping")
-
-
-class TestBroadcastObject(DistributedTest):
- def test_str(self):
- spawn_and_init(
- functools.partial(
- TestBroadcastObject._test_broadcast_object, "hello world"
- ),
- world_size=2,
- )
-
- def test_tensor(self):
- spawn_and_init(
- functools.partial(
- TestBroadcastObject._test_broadcast_object,
- torch.rand(5),
- ),
- world_size=2,
- )
-
- def test_complex(self):
- spawn_and_init(
- functools.partial(
- TestBroadcastObject._test_broadcast_object,
- {
- "a": "1",
- "b": [2, torch.rand(2, 3), 3],
- "c": (torch.rand(2, 3), 4),
- "d": {5, torch.rand(5)},
- "e": torch.rand(5),
- "f": torch.rand(5).int().cuda(),
- },
- ),
- world_size=2,
- )
-
- @staticmethod
- def _test_broadcast_object(ref_obj, rank, group):
- obj = dist_utils.broadcast_object(
- ref_obj if rank == 0 else None, src_rank=0, group=group
- )
- assert objects_are_equal(ref_obj, obj)
-
-
-class TestAllGatherList(DistributedTest):
- def test_str_equality(self):
- spawn_and_init(
- functools.partial(
- TestAllGatherList._test_all_gather_list_equality,
- "hello world",
- ),
- world_size=2,
- )
-
- def test_tensor_equality(self):
- spawn_and_init(
- functools.partial(
- TestAllGatherList._test_all_gather_list_equality,
- torch.rand(5),
- ),
- world_size=2,
- )
-
- def test_complex_equality(self):
- spawn_and_init(
- functools.partial(
- TestAllGatherList._test_all_gather_list_equality,
- {
- "a": "1",
- "b": [2, torch.rand(2, 3), 3],
- "c": (torch.rand(2, 3), 4),
- "d": {5, torch.rand(5)},
- "e": torch.rand(5),
- "f": torch.rand(5).int(),
- },
- ),
- world_size=2,
- )
-
- @staticmethod
- def _test_all_gather_list_equality(ref_obj, rank, group):
- objs = dist_utils.all_gather_list(ref_obj, group)
- for obj in objs:
- assert objects_are_equal(ref_obj, obj)
-
- def test_rank_tensor(self):
- spawn_and_init(
- TestAllGatherList._test_all_gather_list_rank_tensor, world_size=2
- )
-
- @staticmethod
- def _test_all_gather_list_rank_tensor(rank, group):
- obj = torch.tensor([rank])
- objs = dist_utils.all_gather_list(obj, group)
- for i, obj in enumerate(objs):
- assert obj.item() == i
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/fid_score.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/fid_score.py
deleted file mode 100644
index 6ca8e602c21bb6a624d646da3f6479aea033b0ac..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/fid_score.py
+++ /dev/null
@@ -1,328 +0,0 @@
-#!/usr/bin/env python3
-"""Calculates the Frechet Inception Distance (FID) to evaluate GANs
-
-The FID metric calculates the distance between two distributions of images.
-Typically, we have summary statistics (mean & covariance matrix) of one
-of these distributions, while the 2nd distribution is given by a GAN.
-
-When run as a stand-alone program, it compares the distribution of
-images that are stored as PNG/JPEG at a specified location with a
-distribution given by summary statistics (in pickle format).
-
-The FID is calculated by assuming that X_1 and X_2 are the activations of
-the pool_3 layer of the inception net for generated samples and real world
-samples respectively.
-
-See --help to see further details.
-
-Code adapted from https://github.com/bioinf-jku/TTUR to use PyTorch instead
-of Tensorflow
-
-Copyright 2018 Institute of Bioinformatics, JKU Linz
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-"""
-import os
-import pathlib
-from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser
-
-import numpy as np
-import torch
-# from scipy.misc import imread
-from imageio import imread
-from PIL import Image, JpegImagePlugin
-from scipy import linalg
-from torch.nn.functional import adaptive_avg_pool2d
-from torchvision.transforms import CenterCrop, Compose, Resize, ToTensor
-
-try:
- from tqdm import tqdm
-except ImportError:
-    # If tqdm is not available, provide a mock version of it
- def tqdm(x): return x
-
-try:
- from .inception import InceptionV3
-except ModuleNotFoundError:
- from inception import InceptionV3
-
-parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
-parser.add_argument('path', type=str, nargs=2,
- help=('Path to the generated images or '
- 'to .npz statistic files'))
-parser.add_argument('--batch-size', type=int, default=50,
- help='Batch size to use')
-parser.add_argument('--dims', type=int, default=2048,
- choices=list(InceptionV3.BLOCK_INDEX_BY_DIM),
- help=('Dimensionality of Inception features to use. '
- 'By default, uses pool3 features'))
-parser.add_argument('-c', '--gpu', default='', type=str,
- help='GPU to use (leave blank for CPU only)')
-parser.add_argument('--resize', default=256)
-
-transform = Compose([Resize(256), CenterCrop(256), ToTensor()])
-
-
-def get_activations(files, model, batch_size=50, dims=2048,
- cuda=False, verbose=False, keep_size=False):
- """Calculates the activations of the pool_3 layer for all images.
-
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : Batch size of images for the model to process at once.
- Make sure that the number of samples is a multiple of
- the batch size, otherwise some samples are ignored. This
- behavior is retained to match the original FID score
- implementation.
- -- dims : Dimensionality of features returned by Inception
- -- cuda : If set to True, use GPU
- -- verbose : If set to True and parameter out_step is given, the number
- of calculated batches is reported.
- Returns:
- -- A numpy array of dimension (num images, dims) that contains the
- activations of the given tensor when feeding inception with the
- query tensor.
- """
- model.eval()
-
- if len(files) % batch_size != 0:
- print(('Warning: number of images is not a multiple of the '
- 'batch size. Some samples are going to be ignored.'))
- if batch_size > len(files):
- print(('Warning: batch size is bigger than the data size. '
- 'Setting batch size to data size'))
- batch_size = len(files)
-
- n_batches = len(files) // batch_size
- n_used_imgs = n_batches * batch_size
-
- pred_arr = np.empty((n_used_imgs, dims))
-
- for i in tqdm(range(n_batches)):
- if verbose:
- print('\rPropagating batch %d/%d' % (i + 1, n_batches),
- end='', flush=True)
- start = i * batch_size
- end = start + batch_size
-
- # # Official code goes below
- # images = np.array([imread(str(f)).astype(np.float32)
- # for f in files[start:end]])
-
- # # Reshape to (n_images, 3, height, width)
- # images = images.transpose((0, 3, 1, 2))
- # images /= 255
- # batch = torch.from_numpy(images).type(torch.FloatTensor)
- # #
-
- t = transform if not keep_size else ToTensor()
-
- if isinstance(files[0], pathlib.PosixPath):
- images = [t(Image.open(str(f))) for f in files[start:end]]
-
- elif isinstance(files[0], Image.Image):
- images = [t(f) for f in files[start:end]]
-
- else:
- raise ValueError(f"Unknown data type for image: {type(files[0])}")
-
- batch = torch.stack(images)
-
- if cuda:
- batch = batch.cuda()
-
- pred = model(batch)[0]
-
- # If model output is not scalar, apply global spatial average pooling.
- # This happens if you choose a dimensionality not equal 2048.
- if pred.shape[2] != 1 or pred.shape[3] != 1:
- pred = adaptive_avg_pool2d(pred, output_size=(1, 1))
-
- pred_arr[start:end] = pred.cpu().data.numpy().reshape(batch_size, -1)
-
- if verbose:
- print(' done')
-
- return pred_arr
-
-
-def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
- """Numpy implementation of the Frechet Distance.
- The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)
- and X_2 ~ N(mu_2, C_2) is
- d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).
-
- Stable version by Dougal J. Sutherland.
-
- Params:
- -- mu1 : Numpy array containing the activations of a layer of the
- inception net (like returned by the function 'get_predictions')
- for generated samples.
-    -- mu2   : The sample mean over activations, precalculated on a
-               representative data set.
-    -- sigma1: The covariance matrix over activations for generated samples.
-    -- sigma2: The covariance matrix over activations, precalculated on a
-               representative data set.
-
- Returns:
- -- : The Frechet Distance.
- """
-
- mu1 = np.atleast_1d(mu1)
- mu2 = np.atleast_1d(mu2)
-
- sigma1 = np.atleast_2d(sigma1)
- sigma2 = np.atleast_2d(sigma2)
-
- assert mu1.shape == mu2.shape, \
- 'Training and test mean vectors have different lengths'
- assert sigma1.shape == sigma2.shape, \
- 'Training and test covariances have different dimensions'
-
- diff = mu1 - mu2
-
- # Product might be almost singular
- covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
- if not np.isfinite(covmean).all():
- msg = ('fid calculation produces singular product; '
- 'adding %s to diagonal of cov estimates') % eps
- print(msg)
- offset = np.eye(sigma1.shape[0]) * eps
- covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
-
- # Numerical error might give slight imaginary component
- if np.iscomplexobj(covmean):
- # if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
- if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-2):
- m = np.max(np.abs(covmean.imag))
- raise ValueError('Imaginary component {}'.format(m))
- covmean = covmean.real
-
- tr_covmean = np.trace(covmean)
-
- return (diff.dot(diff) + np.trace(sigma1) +
- np.trace(sigma2) - 2 * tr_covmean)
-
-
-def calculate_activation_statistics(files, model, batch_size=50,
- dims=2048, cuda=False, verbose=False, keep_size=False):
- """Calculation of the statistics used by the FID.
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : The images numpy array is split into batches with
- batch size batch_size. A reasonable batch size
- depends on the hardware.
- -- dims : Dimensionality of features returned by Inception
- -- cuda : If set to True, use GPU
- -- verbose : If set to True and parameter out_step is given, the
- number of calculated batches is reported.
- Returns:
- -- mu : The mean over samples of the activations of the pool_3 layer of
- the inception model.
- -- sigma : The covariance matrix of the activations of the pool_3 layer of
- the inception model.
- """
- act = get_activations(files, model, batch_size, dims, cuda, verbose, keep_size=keep_size)
- mu = np.mean(act, axis=0)
- sigma = np.cov(act, rowvar=False)
- return mu, sigma
-
-
-def _compute_statistics_of_path(path, model, batch_size, dims, cuda):
- if path.endswith('.npz'):
- f = np.load(path)
- m, s = f['mu'][:], f['sigma'][:]
- f.close()
- else:
- path = pathlib.Path(path)
- files = list(path.glob('*.jpg')) + list(path.glob('*.png'))
- m, s = calculate_activation_statistics(files, model, batch_size,
- dims, cuda)
-
- return m, s
-
-
-def _compute_statistics_of_images(images, model, batch_size, dims, cuda, keep_size=False):
- if isinstance(images, list): # exact paths to files are provided
- m, s = calculate_activation_statistics(images, model, batch_size,
- dims, cuda, keep_size=keep_size)
-
- return m, s
-
- else:
- raise ValueError
-
-
-def calculate_fid_given_paths(paths, batch_size, cuda, dims):
- """Calculates the FID of two paths"""
- for p in paths:
- if not os.path.exists(p):
- raise RuntimeError('Invalid path: %s' % p)
-
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
-
- model = InceptionV3([block_idx])
- if cuda:
- model.cuda()
-
- m1, s1 = _compute_statistics_of_path(paths[0], model, batch_size,
- dims, cuda)
- m2, s2 = _compute_statistics_of_path(paths[1], model, batch_size,
- dims, cuda)
- fid_value = calculate_frechet_distance(m1, s1, m2, s2)
-
- return fid_value
-
-
-def calculate_fid_given_images(images, batch_size, cuda, dims, use_globals=False, keep_size=False):
- if use_globals:
- global FID_MODEL # for multiprocessing
-
- for imgs in images:
- if isinstance(imgs, list) and isinstance(imgs[0], (Image.Image, JpegImagePlugin.JpegImageFile)):
- pass
- else:
- raise RuntimeError('Invalid images')
-
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
-
- if 'FID_MODEL' not in globals() or not use_globals:
- model = InceptionV3([block_idx])
- if cuda:
- model.cuda()
-
- if use_globals:
- FID_MODEL = model
-
- else:
- model = FID_MODEL
-
- m1, s1 = _compute_statistics_of_images(images[0], model, batch_size,
-                                           dims, cuda, keep_size=keep_size)
-    m2, s2 = _compute_statistics_of_images(images[1], model, batch_size,
-                                           dims, cuda, keep_size=keep_size)
- fid_value = calculate_frechet_distance(m1, s1, m2, s2)
- return fid_value
-
-
-if __name__ == '__main__':
- args = parser.parse_args()
- os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
-
- fid_value = calculate_fid_given_paths(args.path,
- args.batch_size,
- args.gpu != '',
- args.dims)
- print('FID: ', fid_value)
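The docstring of calculate_frechet_distance above gives the closed form d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)). A small self-contained numpy check of that formula on synthetic activations (no Inception network involved):

import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
act1 = rng.normal(0.0, 1.0, size=(500, 8))  # fake activations, distribution 1
act2 = rng.normal(0.5, 1.2, size=(500, 8))  # fake activations, distribution 2

mu1, sigma1 = act1.mean(axis=0), np.cov(act1, rowvar=False)
mu2, sigma2 = act2.mean(axis=0), np.cov(act2, rowvar=False)

diff = mu1 - mu2
covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
fid = diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean.real)
print(fid)  # small positive value; identical statistics would give ~0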
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/losses/mgpt.py b/spaces/OpenMotionLab/MotionGPT/mGPT/losses/mgpt.py
deleted file mode 100644
index 69846b2cfb59b04e8ee9fd7f51f0b6e5c624e18a..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/losses/mgpt.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import torch
-import torch.nn as nn
-from .base import BaseLosses
-
-
-class CommitLoss(nn.Module):
- """
-    Identity wrapper: returns the commitment loss unchanged so it can be called like a loss term.
- """
- def __init__(self, **kwargs):
- super().__init__()
-
- def forward(self, commit, commit2, **kwargs):
- return commit
-
-
-class GPTLosses(BaseLosses):
-
- def __init__(self, cfg, stage, num_joints, **kwargs):
- # Save parameters
- self.stage = stage
- recons_loss = cfg.LOSS.ABLATION.RECONS_LOSS
-
- # Define losses
- losses = []
- params = {}
- if stage == "vae":
- losses.append("recons_feature")
- params['recons_feature'] = cfg.LOSS.LAMBDA_FEATURE
-
- losses.append("recons_velocity")
- params['recons_velocity'] = cfg.LOSS.LAMBDA_VELOCITY
-
- losses.append("vq_commit")
- params['vq_commit'] = cfg.LOSS.LAMBDA_COMMIT
- elif stage in ["lm_pretrain", "lm_instruct"]:
- losses.append("gpt_loss")
- params['gpt_loss'] = cfg.LOSS.LAMBDA_CLS
-
- # Define loss functions & weights
- losses_func = {}
- for loss in losses:
- if loss.split('_')[0] == 'recons':
- if recons_loss == "l1":
- losses_func[loss] = nn.L1Loss
- elif recons_loss == "l2":
- losses_func[loss] = nn.MSELoss
- elif recons_loss == "l1_smooth":
- losses_func[loss] = nn.SmoothL1Loss
- elif loss.split('_')[1] in [
- 'commit', 'loss', 'gpt', 'm2t2m', 't2m2t'
- ]:
- losses_func[loss] = CommitLoss
- elif loss.split('_')[1] in ['cls', 'lm']:
- losses_func[loss] = nn.CrossEntropyLoss
- else:
- raise NotImplementedError(f"Loss {loss} not implemented.")
-
- super().__init__(cfg, losses, params, losses_func, num_joints,
- **kwargs)
-
- def update(self, rs_set):
- '''Update the losses'''
- total: float = 0.0
-
- if self.stage in ["vae"]:
- total += self._update_loss("recons_feature", rs_set['m_rst'],
- rs_set['m_ref'])
- # total += self._update_loss("recons_joints", rs_set['joints_rst'], rs_set['joints_ref'])
- nfeats = rs_set['m_rst'].shape[-1]
- if nfeats in [263, 135 + 263]:
- if nfeats == 135 + 263:
- vel_start = 135 + 4
- elif nfeats == 263:
- vel_start = 4
- total += self._update_loss(
- "recons_velocity",
- rs_set['m_rst'][..., vel_start:(self.num_joints - 1) * 3 +
- vel_start],
- rs_set['m_ref'][..., vel_start:(self.num_joints - 1) * 3 +
- vel_start])
- else:
- if self._params['recons_velocity'] != 0.0:
- raise NotImplementedError(
-                        "Velocity not implemented for nfeats = {}".format(nfeats))
- total += self._update_loss("vq_commit", rs_set['loss_commit'],
- rs_set['loss_commit'])
-
- if self.stage in ["lm_pretrain", "lm_instruct"]:
- total += self._update_loss("gpt_loss", rs_set['outputs'].loss,
- rs_set['outputs'].loss)
-
- # Update the total loss
- self.total += total.detach()
- self.count += 1
-
- return total
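
A hedged sketch of how the VAE-stage loss above might be driven in a training step (not from the original training loop); cfg, the feature tensors and num_joints=22 are placeholder assumptions.

    loss_fn = GPTLosses(cfg, stage="vae", num_joints=22)   # cfg must expose the LOSS.* fields used above
    total = loss_fn.update({
        'm_rst': pred_motion,        # reconstructed motion features, last dim 263
        'm_ref': gt_motion,          # reference motion features, same shape
        'loss_commit': commit_loss,  # commitment loss returned by the VQ quantizer
    })
    total.backward()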
diff --git a/spaces/P1ne4ppl/Text_generator/app.py b/spaces/P1ne4ppl/Text_generator/app.py
deleted file mode 100644
index 5369142d3f1553743d4b2e77a9ef64798c264670..0000000000000000000000000000000000000000
--- a/spaces/P1ne4ppl/Text_generator/app.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="txt generator"
-description="txt"
-
-mod1=gr.Interface.load("huggingface/gpt2")
-mod2=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-mod3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
-
-Parallel(mod1, mod2, mod3, title=title, description=description).launch()
\ No newline at end of file
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/log_buffer.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/log_buffer.py
deleted file mode 100644
index d949e2941c5400088c7cd8a1dc893d8b233ae785..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/log_buffer.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from collections import OrderedDict
-
-import numpy as np
-
-
-class LogBuffer:
-
- def __init__(self):
- self.val_history = OrderedDict()
- self.n_history = OrderedDict()
- self.output = OrderedDict()
- self.ready = False
-
- def clear(self):
- self.val_history.clear()
- self.n_history.clear()
- self.clear_output()
-
- def clear_output(self):
- self.output.clear()
- self.ready = False
-
- def update(self, vars, count=1):
- assert isinstance(vars, dict)
- for key, var in vars.items():
- if key not in self.val_history:
- self.val_history[key] = []
- self.n_history[key] = []
- self.val_history[key].append(var)
- self.n_history[key].append(count)
-
- def average(self, n=0):
- """Average latest n values or all values."""
- assert n >= 0
- for key in self.val_history:
- values = np.array(self.val_history[key][-n:])
- nums = np.array(self.n_history[key][-n:])
- avg = np.sum(values * nums) / np.sum(nums)
- self.output[key] = avg
- self.ready = True
diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/plms.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/plms.py
deleted file mode 100644
index 78eeb1003aa45d27bdbfc6b4a1d7ccbff57cd2e3..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/ldm/models/diffusion/plms.py
+++ /dev/null
@@ -1,236 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from functools import partial
-
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like
-
-
-class PLMSSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- if ddim_eta != 0:
- raise ValueError('ddim_eta must be 0 for PLMS')
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for PLMS sampling is {size}')
-
- samples, intermediates = self.plms_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def plms_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = list(reversed(range(0,timesteps))) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running PLMS Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='PLMS Sampler', total=total_steps)
- old_eps = []
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
- ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- old_eps=old_eps, t_next=ts_next)
- img, pred_x0, e_t = outs
- old_eps.append(e_t)
- if len(old_eps) >= 4:
- old_eps.pop(0)
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None):
- b, *_, device = *x.shape, x.device
-
- def get_model_output(x, t):
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- return e_t
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
-
- def get_x_prev_and_pred_x0(e_t, index):
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- e_t = get_model_output(x, t)
- if len(old_eps) == 0:
- # Pseudo Improved Euler (2nd order)
- x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index)
- e_t_next = get_model_output(x_prev, t_next)
- e_t_prime = (e_t + e_t_next) / 2
- elif len(old_eps) == 1:
- # 2nd order Pseudo Linear Multistep (Adams-Bashforth)
- e_t_prime = (3 * e_t - old_eps[-1]) / 2
- elif len(old_eps) == 2:
-            # 3rd order Pseudo Linear Multistep (Adams-Bashforth)
- e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
- elif len(old_eps) >= 3:
-            # 4th order Pseudo Linear Multistep (Adams-Bashforth)
- e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
-
- x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index)
-
- return x_prev, pred_x0, e_t
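
A hedged sketch of how the sampler is typically wired up for latent diffusion (not from the deleted file); model, the conditioning tensors c/uc and the latent shape are placeholders.

    sampler = PLMSSampler(model)          # model: a trained LatentDiffusion instance
    samples, intermediates = sampler.sample(
        S=50,                             # number of PLMS steps
        batch_size=4,
        shape=(4, 64, 64),                # (C, H, W) of the latent, not the pixel image
        conditioning=c,                   # e.g. text-encoder output
        unconditional_guidance_scale=7.5,
        unconditional_conditioning=uc,
        eta=0.0,                          # PLMS requires eta == 0
    )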
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/time-signature-settings.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/time-signature-settings.go
deleted file mode 100644
index f4a21d485b971ed45edf874593ae12099a479aa7..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/time-signature-settings.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/no_memory.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/no_memory.py
deleted file mode 100644
index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/no_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""A class that does not store any data. This is the default memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class NoMemory(MemoryProviderSingleton):
- """
- A class that does not store any data. This is the default memory provider.
- """
-
- def __init__(self, cfg):
- """
- Initializes the NoMemory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- pass
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory. No action is taken in NoMemory.
-
- Args:
- data: The data to add.
-
- Returns: An empty string.
- """
- return ""
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
-
- Returns: None
- """
- return None
-
- def clear(self) -> str:
- """
- Clears the memory. No action is taken in NoMemory.
-
- Returns: An empty string.
- """
- return ""
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: None
- """
- return None
-
- def get_stats(self):
- """
- Returns: An empty dictionary as there are no stats in NoMemory.
- """
- return {}
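
A short usage sketch (not from the deleted file): every method of this provider is a no-op, so the return values are fixed.

    memory = NoMemory(cfg=None)        # the cfg argument is ignored
    memory.add("some observation")     # returns ""
    memory.get_relevant("query", 5)    # always None
    memory.get_stats()                 # {}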
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/nms.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/nms.py
deleted file mode 100644
index 6d9634281f486ab284091786886854c451368052..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/nms.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import os
-
-import numpy as np
-import torch
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['nms', 'softnms', 'nms_match', 'nms_rotated'])
-
-
-# This function is modified from: https://github.com/pytorch/vision/
-class NMSop(torch.autograd.Function):
-
- @staticmethod
- def forward(ctx, bboxes, scores, iou_threshold, offset, score_threshold,
- max_num):
- is_filtering_by_score = score_threshold > 0
- if is_filtering_by_score:
- valid_mask = scores > score_threshold
- bboxes, scores = bboxes[valid_mask], scores[valid_mask]
- valid_inds = torch.nonzero(
- valid_mask, as_tuple=False).squeeze(dim=1)
-
- inds = ext_module.nms(
- bboxes, scores, iou_threshold=float(iou_threshold), offset=offset)
-
- if max_num > 0:
- inds = inds[:max_num]
- if is_filtering_by_score:
- inds = valid_inds[inds]
- return inds
-
- @staticmethod
- def symbolic(g, bboxes, scores, iou_threshold, offset, score_threshold,
- max_num):
- from ..onnx import is_custom_op_loaded
- has_custom_op = is_custom_op_loaded()
- # TensorRT nms plugin is aligned with original nms in ONNXRuntime
- is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT'
- if has_custom_op and (not is_trt_backend):
- return g.op(
- 'mmcv::NonMaxSuppression',
- bboxes,
- scores,
- iou_threshold_f=float(iou_threshold),
- offset_i=int(offset))
- else:
- from torch.onnx.symbolic_opset9 import select, squeeze, unsqueeze
- from ..onnx.onnx_utils.symbolic_helper import _size_helper
-
- boxes = unsqueeze(g, bboxes, 0)
- scores = unsqueeze(g, unsqueeze(g, scores, 0), 0)
-
- if max_num > 0:
- max_num = g.op(
- 'Constant',
- value_t=torch.tensor(max_num, dtype=torch.long))
- else:
- dim = g.op('Constant', value_t=torch.tensor(0))
- max_num = _size_helper(g, bboxes, dim)
- max_output_per_class = max_num
- iou_threshold = g.op(
- 'Constant',
- value_t=torch.tensor([iou_threshold], dtype=torch.float))
- score_threshold = g.op(
- 'Constant',
- value_t=torch.tensor([score_threshold], dtype=torch.float))
- nms_out = g.op('NonMaxSuppression', boxes, scores,
- max_output_per_class, iou_threshold,
- score_threshold)
- return squeeze(
- g,
- select(
- g, nms_out, 1,
- g.op(
- 'Constant',
- value_t=torch.tensor([2], dtype=torch.long))), 1)
-
-
-class SoftNMSop(torch.autograd.Function):
-
- @staticmethod
- def forward(ctx, boxes, scores, iou_threshold, sigma, min_score, method,
- offset):
- dets = boxes.new_empty((boxes.size(0), 5), device='cpu')
- inds = ext_module.softnms(
- boxes.cpu(),
- scores.cpu(),
- dets.cpu(),
- iou_threshold=float(iou_threshold),
- sigma=float(sigma),
- min_score=float(min_score),
- method=int(method),
- offset=int(offset))
- return dets, inds
-
- @staticmethod
- def symbolic(g, boxes, scores, iou_threshold, sigma, min_score, method,
- offset):
- from packaging import version
- assert version.parse(torch.__version__) >= version.parse('1.7.0')
- nms_out = g.op(
- 'mmcv::SoftNonMaxSuppression',
- boxes,
- scores,
- iou_threshold_f=float(iou_threshold),
- sigma_f=float(sigma),
- min_score_f=float(min_score),
- method_i=int(method),
- offset_i=int(offset),
- outputs=2)
- return nms_out
-
-
-@deprecated_api_warning({'iou_thr': 'iou_threshold'})
-def nms(boxes, scores, iou_threshold, offset=0, score_threshold=0, max_num=-1):
- """Dispatch to either CPU or GPU NMS implementations.
-
- The input can be either torch tensor or numpy array. GPU NMS will be used
- if the input is gpu tensor, otherwise CPU NMS
- will be used. The returned type will always be the same as inputs.
-
- Arguments:
- boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4).
- scores (torch.Tensor or np.ndarray): scores in shape (N, ).
- iou_threshold (float): IoU threshold for NMS.
- offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset).
- score_threshold (float): score threshold for NMS.
- max_num (int): maximum number of boxes after NMS.
-
- Returns:
-        tuple: kept dets (boxes and scores) and indices, which are always the \
- same data type as the input.
-
- Example:
- >>> boxes = np.array([[49.1, 32.4, 51.0, 35.9],
- >>> [49.3, 32.9, 51.0, 35.3],
- >>> [49.2, 31.8, 51.0, 35.4],
- >>> [35.1, 11.5, 39.1, 15.7],
- >>> [35.6, 11.8, 39.3, 14.2],
- >>> [35.3, 11.5, 39.9, 14.5],
- >>> [35.2, 11.7, 39.7, 15.7]], dtype=np.float32)
- >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.5, 0.4, 0.3],\
- dtype=np.float32)
- >>> iou_threshold = 0.6
- >>> dets, inds = nms(boxes, scores, iou_threshold)
- >>> assert len(inds) == len(dets) == 3
- """
- assert isinstance(boxes, (torch.Tensor, np.ndarray))
- assert isinstance(scores, (torch.Tensor, np.ndarray))
- is_numpy = False
- if isinstance(boxes, np.ndarray):
- is_numpy = True
- boxes = torch.from_numpy(boxes)
- if isinstance(scores, np.ndarray):
- scores = torch.from_numpy(scores)
- assert boxes.size(1) == 4
- assert boxes.size(0) == scores.size(0)
- assert offset in (0, 1)
-
- if torch.__version__ == 'parrots':
- indata_list = [boxes, scores]
- indata_dict = {
- 'iou_threshold': float(iou_threshold),
- 'offset': int(offset)
- }
- inds = ext_module.nms(*indata_list, **indata_dict)
- else:
- inds = NMSop.apply(boxes, scores, iou_threshold, offset,
- score_threshold, max_num)
- dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1)
- if is_numpy:
- dets = dets.cpu().numpy()
- inds = inds.cpu().numpy()
- return dets, inds
-
-
-@deprecated_api_warning({'iou_thr': 'iou_threshold'})
-def soft_nms(boxes,
- scores,
- iou_threshold=0.3,
- sigma=0.5,
- min_score=1e-3,
- method='linear',
- offset=0):
- """Dispatch to only CPU Soft NMS implementations.
-
- The input can be either a torch tensor or numpy array.
- The returned type will always be the same as inputs.
-
- Arguments:
- boxes (torch.Tensor or np.ndarray): boxes in shape (N, 4).
- scores (torch.Tensor or np.ndarray): scores in shape (N, ).
- iou_threshold (float): IoU threshold for NMS.
- sigma (float): hyperparameter for gaussian method
- min_score (float): score filter threshold
-        method (str): one of 'naive', 'linear' or 'gaussian'
- offset (int, 0 or 1): boxes' width or height is (x2 - x1 + offset).
-
- Returns:
-        tuple: kept dets (boxes and scores) and indices, which are always the \
- same data type as the input.
-
- Example:
- >>> boxes = np.array([[4., 3., 5., 3.],
- >>> [4., 3., 5., 4.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.],
- >>> [3., 1., 3., 1.]], dtype=np.float32)
- >>> scores = np.array([0.9, 0.9, 0.5, 0.5, 0.4, 0.0], dtype=np.float32)
- >>> iou_threshold = 0.6
- >>> dets, inds = soft_nms(boxes, scores, iou_threshold, sigma=0.5)
- >>> assert len(inds) == len(dets) == 5
- """
-
- assert isinstance(boxes, (torch.Tensor, np.ndarray))
- assert isinstance(scores, (torch.Tensor, np.ndarray))
- is_numpy = False
- if isinstance(boxes, np.ndarray):
- is_numpy = True
- boxes = torch.from_numpy(boxes)
- if isinstance(scores, np.ndarray):
- scores = torch.from_numpy(scores)
- assert boxes.size(1) == 4
- assert boxes.size(0) == scores.size(0)
- assert offset in (0, 1)
- method_dict = {'naive': 0, 'linear': 1, 'gaussian': 2}
- assert method in method_dict.keys()
-
- if torch.__version__ == 'parrots':
- dets = boxes.new_empty((boxes.size(0), 5), device='cpu')
- indata_list = [boxes.cpu(), scores.cpu(), dets.cpu()]
- indata_dict = {
- 'iou_threshold': float(iou_threshold),
- 'sigma': float(sigma),
- 'min_score': min_score,
- 'method': method_dict[method],
- 'offset': int(offset)
- }
- inds = ext_module.softnms(*indata_list, **indata_dict)
- else:
- dets, inds = SoftNMSop.apply(boxes.cpu(), scores.cpu(),
- float(iou_threshold), float(sigma),
- float(min_score), method_dict[method],
- int(offset))
-
- dets = dets[:inds.size(0)]
-
- if is_numpy:
- dets = dets.cpu().numpy()
- inds = inds.cpu().numpy()
- return dets, inds
- else:
- return dets.to(device=boxes.device), inds.to(device=boxes.device)
-
-
-def batched_nms(boxes, scores, idxs, nms_cfg, class_agnostic=False):
- """Performs non-maximum suppression in a batched fashion.
-
- Modified from https://github.com/pytorch/vision/blob
- /505cd6957711af790211896d32b40291bea1bc21/torchvision/ops/boxes.py#L39.
- In order to perform NMS independently per class, we add an offset to all
- the boxes. The offset is dependent only on the class idx, and is large
- enough so that boxes from different classes do not overlap.
-
- Arguments:
- boxes (torch.Tensor): boxes in shape (N, 4).
- scores (torch.Tensor): scores in shape (N, ).
-        idxs (torch.Tensor): each index value corresponds to a bbox cluster,
- and NMS will not be applied between elements of different idxs,
- shape (N, ).
- nms_cfg (dict): specify nms type and other parameters like iou_thr.
- Possible keys includes the following.
-            Possible keys include the following.
- - iou_thr (float): IoU threshold used for NMS.
- - split_thr (float): threshold number of boxes. In some cases the
- number of boxes is large (e.g., 200k). To avoid OOM during
- training, the users could set `split_thr` to a small value.
- If the number of boxes is greater than the threshold, it will
- perform NMS on each group of boxes separately and sequentially.
- Defaults to 10000.
- class_agnostic (bool): if true, nms is class agnostic,
- i.e. IoU thresholding happens over all boxes,
- regardless of the predicted class.
-
- Returns:
- tuple: kept dets and indice.
- """
- nms_cfg_ = nms_cfg.copy()
- class_agnostic = nms_cfg_.pop('class_agnostic', class_agnostic)
- if class_agnostic:
- boxes_for_nms = boxes
- else:
- max_coordinate = boxes.max()
- offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
- boxes_for_nms = boxes + offsets[:, None]
-
- nms_type = nms_cfg_.pop('type', 'nms')
- nms_op = eval(nms_type)
-
- split_thr = nms_cfg_.pop('split_thr', 10000)
- # Won't split to multiple nms nodes when exporting to onnx
- if boxes_for_nms.shape[0] < split_thr or torch.onnx.is_in_onnx_export():
- dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_)
- boxes = boxes[keep]
- # -1 indexing works abnormal in TensorRT
-        # This assumes `dets` has shape (N, 5) where
-        # the last column is the score.
- # TODO: more elegant way to handle the dimension issue.
- # Some type of nms would reweight the score, such as SoftNMS
- scores = dets[:, 4]
- else:
- max_num = nms_cfg_.pop('max_num', -1)
- total_mask = scores.new_zeros(scores.size(), dtype=torch.bool)
- # Some type of nms would reweight the score, such as SoftNMS
- scores_after_nms = scores.new_zeros(scores.size())
- for id in torch.unique(idxs):
- mask = (idxs == id).nonzero(as_tuple=False).view(-1)
- dets, keep = nms_op(boxes_for_nms[mask], scores[mask], **nms_cfg_)
- total_mask[mask[keep]] = True
- scores_after_nms[mask[keep]] = dets[:, -1]
- keep = total_mask.nonzero(as_tuple=False).view(-1)
-
- scores, inds = scores_after_nms[keep].sort(descending=True)
- keep = keep[inds]
- boxes = boxes[keep]
-
- if max_num > 0:
- keep = keep[:max_num]
- boxes = boxes[:max_num]
- scores = scores[:max_num]
-
- return torch.cat([boxes, scores[:, None]], -1), keep
-
-
-def nms_match(dets, iou_threshold):
- """Matched dets into different groups by NMS.
-
-    NMS match is similar to NMS, but when a bbox is suppressed, NMS match
-    records the index of the suppressed bbox and forms a group with the index
-    of the kept bbox. Within each group, indices are sorted in score order.
-
- Arguments:
- dets (torch.Tensor | np.ndarray): Det boxes with scores, shape (N, 5).
-        iou_threshold (float): IoU threshold for NMS.
-
- Returns:
-        List[torch.Tensor | np.ndarray]: The outer list corresponds to the
-            different matched groups, and each inner Tensor holds the indices
-            of one group in score order.
- """
- if dets.shape[0] == 0:
- matched = []
- else:
-        assert dets.shape[-1] == 5, 'input dets.shape should be (N, 5), ' \
-            f'but got {dets.shape}'
- if isinstance(dets, torch.Tensor):
- dets_t = dets.detach().cpu()
- else:
- dets_t = torch.from_numpy(dets)
- indata_list = [dets_t]
- indata_dict = {'iou_threshold': float(iou_threshold)}
- matched = ext_module.nms_match(*indata_list, **indata_dict)
- if torch.__version__ == 'parrots':
- matched = matched.tolist()
-
- if isinstance(dets, torch.Tensor):
- return [dets.new_tensor(m, dtype=torch.long) for m in matched]
- else:
-        return [np.array(m, dtype=int) for m in matched]  # np.int was removed in recent NumPy
-
-
-def nms_rotated(dets, scores, iou_threshold, labels=None):
- """Performs non-maximum suppression (NMS) on the rotated boxes according to
- their intersection-over-union (IoU).
-
- Rotated NMS iteratively removes lower scoring rotated boxes which have an
- IoU greater than iou_threshold with another (higher scoring) rotated box.
-
- Args:
-        dets (Tensor): Rotated boxes in shape (N, 5). They are expected to \
- be in (x_ctr, y_ctr, width, height, angle_radian) format.
- scores (Tensor): scores in shape (N, ).
- iou_threshold (float): IoU thresh for NMS.
- labels (Tensor): boxes' label in shape (N,).
-
- Returns:
-        tuple: kept dets (boxes and scores) and indices, which are always the \
- same data type as the input.
- """
- if dets.shape[0] == 0:
- return dets, None
- multi_label = labels is not None
- if multi_label:
- dets_wl = torch.cat((dets, labels.unsqueeze(1)), 1)
- else:
- dets_wl = dets
- _, order = scores.sort(0, descending=True)
- dets_sorted = dets_wl.index_select(0, order)
-
- if torch.__version__ == 'parrots':
- keep_inds = ext_module.nms_rotated(
- dets_wl,
- scores,
- order,
- dets_sorted,
- iou_threshold=iou_threshold,
- multi_label=multi_label)
- else:
- keep_inds = ext_module.nms_rotated(dets_wl, scores, order, dets_sorted,
- iou_threshold, multi_label)
- dets = torch.cat((dets[keep_inds], scores[keep_inds].reshape(-1, 1)),
- dim=1)
- return dets, keep_inds
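
A usage sketch for batched_nms (not from the deleted file); running it requires the compiled mmcv _ext ops loaded at the top of the module.

    boxes = torch.tensor([[0., 0., 10., 10.],
                          [1., 1., 11., 11.],
                          [20., 20., 30., 30.]])
    scores = torch.tensor([0.9, 0.8, 0.7])
    idxs = torch.tensor([0, 0, 1])  # class index per box
    dets, keep = batched_nms(boxes, scores, idxs,
                             nms_cfg=dict(type='nms', iou_threshold=0.5))
    # the two overlapping class-0 boxes collapse to one and the class-1 box is kept,
    # so keep holds two indices and dets has shape (2, 5) with scores in the last column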
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py
deleted file mode 100644
index f8b0c1084e8fa866ee9b1043bf4bc9fdd4383669..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py
+++ /dev/null
@@ -1,216 +0,0 @@
-# A modified version from the chainercv repository.
-# (See https://github.com/chainer/chainercv/blob/master/chainercv/evaluations/eval_detection_voc.py)
-from __future__ import division
-
-import os
-from collections import defaultdict
-import numpy as np
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou
-
-
-def do_voc_evaluation(dataset, predictions, output_folder, logger):
- # TODO need to make the use_07_metric format available
- # for the user to choose
- pred_boxlists = []
- gt_boxlists = []
- for image_id, prediction in enumerate(predictions):
- img_info = dataset.get_img_info(image_id)
- if len(prediction) == 0:
- continue
- image_width = img_info["width"]
- image_height = img_info["height"]
- prediction = prediction.resize((image_width, image_height))
- pred_boxlists.append(prediction)
-
- gt_boxlist = dataset.get_groundtruth(image_id)
- gt_boxlists.append(gt_boxlist)
- result = eval_detection_voc(
- pred_boxlists=pred_boxlists,
- gt_boxlists=gt_boxlists,
- iou_thresh=0.5,
- use_07_metric=True,
- )
- result_str = "mAP: {:.4f}\n".format(result["map"])
- for i, ap in enumerate(result["ap"]):
- if i == 0: # skip background
- continue
- result_str += "{:<16}: {:.4f}\n".format(
- dataset.map_class_id_to_class_name(i), ap
- )
- logger.info(result_str)
- if output_folder:
- with open(os.path.join(output_folder, "result.txt"), "w") as fid:
- fid.write(result_str)
- return result
-
-
-def eval_detection_voc(pred_boxlists, gt_boxlists, iou_thresh=0.5, use_07_metric=False):
- """Evaluate on voc dataset.
- Args:
- pred_boxlists(list[BoxList]): pred boxlist, has labels and scores fields.
- gt_boxlists(list[BoxList]): ground truth boxlist, has labels field.
- iou_thresh: iou thresh
- use_07_metric: boolean
- Returns:
- dict represents the results
- """
- assert len(gt_boxlists) == len(
- pred_boxlists
-    ), "Length of gt and pred lists need to be the same."
- prec, rec = calc_detection_voc_prec_rec(
- pred_boxlists=pred_boxlists, gt_boxlists=gt_boxlists, iou_thresh=iou_thresh
- )
- ap = calc_detection_voc_ap(prec, rec, use_07_metric=use_07_metric)
- return {"ap": ap, "map": np.nanmean(ap)}
-
-
-def calc_detection_voc_prec_rec(gt_boxlists, pred_boxlists, iou_thresh=0.5):
- """Calculate precision and recall based on evaluation code of PASCAL VOC.
- This function calculates precision and recall of
- predicted bounding boxes obtained from a dataset which has :math:`N`
- images.
- The code is based on the evaluation code used in PASCAL VOC Challenge.
- """
- n_pos = defaultdict(int)
- score = defaultdict(list)
- match = defaultdict(list)
- for gt_boxlist, pred_boxlist in zip(gt_boxlists, pred_boxlists):
- pred_bbox = pred_boxlist.bbox.numpy()
- pred_label = pred_boxlist.get_field("labels").numpy()
- pred_score = pred_boxlist.get_field("scores").numpy()
- gt_bbox = gt_boxlist.bbox.numpy()
- gt_label = gt_boxlist.get_field("labels").numpy()
- gt_difficult = gt_boxlist.get_field("difficult").numpy()
-
- for l in np.unique(np.concatenate((pred_label, gt_label)).astype(int)):
- pred_mask_l = pred_label == l
- pred_bbox_l = pred_bbox[pred_mask_l]
- pred_score_l = pred_score[pred_mask_l]
- # sort by score
- order = pred_score_l.argsort()[::-1]
- pred_bbox_l = pred_bbox_l[order]
- pred_score_l = pred_score_l[order]
-
- gt_mask_l = gt_label == l
- gt_bbox_l = gt_bbox[gt_mask_l]
- gt_difficult_l = gt_difficult[gt_mask_l]
-
- n_pos[l] += np.logical_not(gt_difficult_l).sum()
- score[l].extend(pred_score_l)
-
- if len(pred_bbox_l) == 0:
- continue
- if len(gt_bbox_l) == 0:
- match[l].extend((0,) * pred_bbox_l.shape[0])
- continue
-
- # VOC evaluation follows integer typed bounding boxes.
- pred_bbox_l = pred_bbox_l.copy()
- pred_bbox_l[:, 2:] += 1
- gt_bbox_l = gt_bbox_l.copy()
- gt_bbox_l[:, 2:] += 1
- iou = boxlist_iou(
- BoxList(pred_bbox_l, gt_boxlist.size),
- BoxList(gt_bbox_l, gt_boxlist.size),
- ).numpy()
- gt_index = iou.argmax(axis=1)
- # set -1 if there is no matching ground truth
- gt_index[iou.max(axis=1) < iou_thresh] = -1
- del iou
-
- selec = np.zeros(gt_bbox_l.shape[0], dtype=bool)
- for gt_idx in gt_index:
- if gt_idx >= 0:
- if gt_difficult_l[gt_idx]:
- match[l].append(-1)
- else:
- if not selec[gt_idx]:
- match[l].append(1)
- else:
- match[l].append(0)
- selec[gt_idx] = True
- else:
- match[l].append(0)
-
- n_fg_class = max(n_pos.keys()) + 1
- prec = [None] * n_fg_class
- rec = [None] * n_fg_class
-
- for l in n_pos.keys():
- score_l = np.array(score[l])
- match_l = np.array(match[l], dtype=np.int8)
-
- order = score_l.argsort()[::-1]
- match_l = match_l[order]
-
- tp = np.cumsum(match_l == 1)
- fp = np.cumsum(match_l == 0)
-
- # If an element of fp + tp is 0,
- # the corresponding element of prec[l] is nan.
- prec[l] = tp / (fp + tp)
- # If n_pos[l] is 0, rec[l] is None.
- if n_pos[l] > 0:
- rec[l] = tp / n_pos[l]
-
- return prec, rec
-
-
-def calc_detection_voc_ap(prec, rec, use_07_metric=False):
- """Calculate average precisions based on evaluation code of PASCAL VOC.
- This function calculates average precisions
- from given precisions and recalls.
- The code is based on the evaluation code used in PASCAL VOC Challenge.
- Args:
- prec (list of numpy.array): A list of arrays.
- :obj:`prec[l]` indicates precision for class :math:`l`.
- If :obj:`prec[l]` is :obj:`None`, this function returns
- :obj:`numpy.nan` for class :math:`l`.
- rec (list of numpy.array): A list of arrays.
- :obj:`rec[l]` indicates recall for class :math:`l`.
- If :obj:`rec[l]` is :obj:`None`, this function returns
- :obj:`numpy.nan` for class :math:`l`.
- use_07_metric (bool): Whether to use PASCAL VOC 2007 evaluation metric
- for calculating average precision. The default value is
- :obj:`False`.
- Returns:
- ~numpy.ndarray:
- This function returns an array of average precisions.
- The :math:`l`-th value corresponds to the average precision
- for class :math:`l`. If :obj:`prec[l]` or :obj:`rec[l]` is
- :obj:`None`, the corresponding value is set to :obj:`numpy.nan`.
- """
-
- n_fg_class = len(prec)
- ap = np.empty(n_fg_class)
- for l in range(n_fg_class):
- if prec[l] is None or rec[l] is None:
- ap[l] = np.nan
- continue
-
- if use_07_metric:
- # 11 point metric
- ap[l] = 0
- for t in np.arange(0.0, 1.1, 0.1):
- if np.sum(rec[l] >= t) == 0:
- p = 0
- else:
- p = np.max(np.nan_to_num(prec[l])[rec[l] >= t])
- ap[l] += p / 11
- else:
- # correct AP calculation
- # first append sentinel values at the end
- mpre = np.concatenate(([0], np.nan_to_num(prec[l]), [0]))
- mrec = np.concatenate(([0], rec[l], [1]))
-
- mpre = np.maximum.accumulate(mpre[::-1])[::-1]
-
- # to calculate area under PR curve, look for points
- # where X axis (recall) changes value
- i = np.where(mrec[1:] != mrec[:-1])[0]
-
- # and sum (\Delta recall) * prec
- ap[l] = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
-
- return ap
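
A small worked example for calc_detection_voc_ap (not from the deleted file), using a single class whose precision/recall curve has two points.

    prec = [np.array([1.0, 0.5])]   # precision at each detection, one class
    rec = [np.array([0.5, 1.0])]    # matching recall values
    ap = calc_detection_voc_ap(prec, rec, use_07_metric=True)
    # 11-point metric: interpolated precision is 1.0 for recall thresholds 0.0-0.5
    # and 0.5 for 0.6-1.0, so ap[0] == (6 * 1.0 + 5 * 0.5) / 11 ≈ 0.77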
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/optim/__init__.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/optim/__init__.py
deleted file mode 100644
index f48c17dfafa9a2be46a91ed1fb64f54c5572a730..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/optim/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Optimization stuff. In particular, optimizers (DAdaptAdam), schedulers
-and Exponential Moving Average.
-"""
-
-# flake8: noqa
-from .cosine_lr_scheduler import CosineLRScheduler
-from .dadam import DAdaptAdam
-from .inverse_sqrt_lr_scheduler import InverseSquareRootLRScheduler
-from .linear_warmup_lr_scheduler import LinearWarmupLRScheduler
-from .polynomial_decay_lr_scheduler import PolynomialDecayLRScheduler
-from .ema import ModuleDictEMA
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/test_losses.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/test_losses.py
deleted file mode 100644
index 0e30bc3a6dde00003e13c00f15e977e39425063c..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/adversarial/test_losses.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import random
-
-import torch
-
-from audiocraft.adversarial import (
- AdversarialLoss,
- get_adv_criterion,
- get_real_criterion,
- get_fake_criterion,
- FeatureMatchingLoss,
- MultiScaleDiscriminator,
-)
-
-
-class TestAdversarialLoss:
-
- def test_adversarial_single_multidiscriminator(self):
- adv = MultiScaleDiscriminator()
- optimizer = torch.optim.Adam(
- adv.parameters(),
- lr=1e-4,
- )
- loss, loss_real, loss_fake = get_adv_criterion('mse'), get_real_criterion('mse'), get_fake_criterion('mse')
- adv_loss = AdversarialLoss(adv, optimizer, loss, loss_real, loss_fake)
-
- B, C, T = 4, 1, random.randint(1000, 5000)
- real = torch.randn(B, C, T)
- fake = torch.randn(B, C, T)
-
- disc_loss = adv_loss.train_adv(fake, real)
- assert isinstance(disc_loss, torch.Tensor) and isinstance(disc_loss.item(), float)
-
- loss, loss_feat = adv_loss(fake, real)
- assert isinstance(loss, torch.Tensor) and isinstance(loss.item(), float)
- # we did not specify feature loss
- assert loss_feat.item() == 0.
-
- def test_adversarial_feat_loss(self):
- adv = MultiScaleDiscriminator()
- optimizer = torch.optim.Adam(
- adv.parameters(),
- lr=1e-4,
- )
- loss, loss_real, loss_fake = get_adv_criterion('mse'), get_real_criterion('mse'), get_fake_criterion('mse')
- feat_loss = FeatureMatchingLoss()
- adv_loss = AdversarialLoss(adv, optimizer, loss, loss_real, loss_fake, feat_loss)
-
- B, C, T = 4, 1, random.randint(1000, 5000)
- real = torch.randn(B, C, T)
- fake = torch.randn(B, C, T)
-
- loss, loss_feat = adv_loss(fake, real)
-
- assert isinstance(loss, torch.Tensor) and isinstance(loss.item(), float)
-        assert isinstance(loss_feat, torch.Tensor) and isinstance(loss_feat.item(), float)
-
-
-class TestGeneratorAdversarialLoss:
-
- def test_hinge_generator_adv_loss(self):
- adv_loss = get_adv_criterion(loss_type='hinge')
-
- t0 = torch.randn(1, 2, 0)
- t1 = torch.FloatTensor([1.0, 2.0, 3.0])
-
- assert adv_loss(t0).item() == 0.0
- assert adv_loss(t1).item() == -2.0
-
- def test_mse_generator_adv_loss(self):
- adv_loss = get_adv_criterion(loss_type='mse')
-
- t0 = torch.randn(1, 2, 0)
- t1 = torch.FloatTensor([1.0, 1.0, 1.0])
- t2 = torch.FloatTensor([2.0, 5.0, 5.0])
-
- assert adv_loss(t0).item() == 0.0
- assert adv_loss(t1).item() == 0.0
- assert adv_loss(t2).item() == 11.0
-
-
-class TestDiscriminatorAdversarialLoss:
-
- def _disc_loss(self, loss_type: str, fake: torch.Tensor, real: torch.Tensor):
- disc_loss_real = get_real_criterion(loss_type)
- disc_loss_fake = get_fake_criterion(loss_type)
-
- loss = disc_loss_fake(fake) + disc_loss_real(real)
- return loss
-
- def test_hinge_discriminator_adv_loss(self):
- loss_type = 'hinge'
- t0 = torch.FloatTensor([0.0, 0.0, 0.0])
- t1 = torch.FloatTensor([1.0, 2.0, 3.0])
-
- assert self._disc_loss(loss_type, t0, t0).item() == 2.0
- assert self._disc_loss(loss_type, t1, t1).item() == 3.0
-
- def test_mse_discriminator_adv_loss(self):
- loss_type = 'mse'
-
- t0 = torch.FloatTensor([0.0, 0.0, 0.0])
- t1 = torch.FloatTensor([1.0, 1.0, 1.0])
-
- assert self._disc_loss(loss_type, t0, t0).item() == 1.0
- assert self._disc_loss(loss_type, t1, t0).item() == 2.0
-
-
-class TestFeatureMatchingLoss:
-
- def test_features_matching_loss_base(self):
- ft_matching_loss = FeatureMatchingLoss()
- length = random.randrange(1, 100_000)
- t1 = torch.randn(1, 2, length)
-
- loss = ft_matching_loss([t1], [t1])
- assert isinstance(loss, torch.Tensor)
- assert loss.item() == 0.0
-
- def test_features_matching_loss_raises_exception(self):
- ft_matching_loss = FeatureMatchingLoss()
- length = random.randrange(1, 100_000)
- t1 = torch.randn(1, 2, length)
- t2 = torch.randn(1, 2, length + 1)
-
- with pytest.raises(AssertionError):
- ft_matching_loss([], [])
-
- with pytest.raises(AssertionError):
- ft_matching_loss([t1], [t1, t1])
-
- with pytest.raises(AssertionError):
- ft_matching_loss([t1], [t2])
-
- def test_features_matching_loss_output(self):
- loss_nonorm = FeatureMatchingLoss(normalize=False)
- loss_layer_normed = FeatureMatchingLoss(normalize=True)
-
- length = random.randrange(1, 100_000)
- t1 = torch.randn(1, 2, length)
- t2 = torch.randn(1, 2, length)
-
- assert loss_nonorm([t1, t2], [t1, t2]).item() == 0.0
- assert loss_layer_normed([t1, t2], [t1, t2]).item() == 0.0
-
- t3 = torch.FloatTensor([1.0, 2.0, 3.0])
- t4 = torch.FloatTensor([2.0, 10.0, 3.0])
-
- assert loss_nonorm([t3], [t4]).item() == 3.0
- assert loss_nonorm([t3, t3], [t4, t4]).item() == 6.0
-
- assert loss_layer_normed([t3], [t4]).item() == 3.0
- assert loss_layer_normed([t3, t3], [t4, t4]).item() == 3.0
diff --git a/spaces/RMXK/RVC_HFF/infer/modules/vc/modules.py b/spaces/RMXK/RVC_HFF/infer/modules/vc/modules.py
deleted file mode 100644
index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/modules/vc/modules.py
+++ /dev/null
@@ -1,526 +0,0 @@
-import os, sys
-import traceback
-import logging
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-logger = logging.getLogger(__name__)
-import lib.globals.globals as rvc_globals
-import numpy as np
-import soundfile as sf
-import torch
-from io import BytesIO
-from infer.lib.audio import load_audio
-from infer.lib.audio import wav2
-from infer.lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from infer.modules.vc.pipeline import Pipeline
-from infer.modules.vc.utils import *
-import time
-import scipy.io.wavfile as wavfile
-
-def note_to_hz(note_name):
- SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2}
- pitch_class, octave = note_name[:-1], int(note_name[-1])
- semitone = SEMITONES[pitch_class]
- note_number = 12 * (octave - 4) + semitone
- frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number
- return frequency
-
-class VC:
- def __init__(self, config):
- self.n_spk = None
- self.tgt_sr = None
- self.net_g = None
- self.pipeline = None
- self.cpt = None
-        self.version = None
-        self.if_f0 = None
- self.hubert_model = None
-
- self.config = config
-
- def get_vc(self, sid, *to_return_protect):
- logger.info("Get sid: " + sid)
-
- to_return_protect0 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[0]
- if self.if_f0 != 0 and to_return_protect
- else 0.5,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[1]
- if self.if_f0 != 0 and to_return_protect
- else 0.33,
- "__type__": "update",
- }
-
- if not sid:
-            if self.hubert_model is not None:  # when polling, check whether sid switched from a loaded model to none
- logger.info("Clean model cache")
- del (
- self.net_g,
- self.n_spk,
- self.vc,
- self.hubert_model,
- self.tgt_sr,
- ) # ,cpt
-                self.hubert_model = self.net_g = self.n_spk = self.vc = self.tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-                ### without the juggling above, the cached model memory is not fully released
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"])
- del self.net_g, self.cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return (
- {"visible": False, "__type__": "update"},
- {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- },
- {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- },
- "",
- "",
- )
- #person = f'{os.getenv("weight_root")}/{sid}'
- person = f'{sid}'
- #logger.info(f"Loading: {person}")
- logger.info(f"Loading...")
- self.cpt = torch.load(person, map_location="cpu")
- self.tgt_sr = self.cpt["config"][-1]
- self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
-
- synthesizer_class = {
- ("v1", 1): SynthesizerTrnMs256NSFsid,
- ("v1", 0): SynthesizerTrnMs256NSFsid_nono,
- ("v2", 1): SynthesizerTrnMs768NSFsid,
- ("v2", 0): SynthesizerTrnMs768NSFsid_nono,
- }
-
- self.net_g = synthesizer_class.get(
- (self.version, self.if_f0), SynthesizerTrnMs256NSFsid
- )(*self.cpt["config"], is_half=self.config.is_half)
-
- del self.net_g.enc_q
-
- self.net_g.load_state_dict(self.cpt["weight"], strict=False)
- self.net_g.eval().to(self.config.device)
- if self.config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
-
- self.pipeline = Pipeline(self.tgt_sr, self.config)
- n_spk = self.cpt["config"][-3]
- index = {"value": get_index_path_from_model(sid), "__type__": "update"}
- logger.info("Select index: " + index["value"])
-
- return (
- (
- {"visible": False, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1
- )
- if to_return_protect
- else {"visible": False, "maximum": n_spk, "__type__": "update"}
- )
-
-
- def vc_single(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
-            )  # in case the index path was mistyped, substitute the corrected one automatically
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if self.tgt_sr != resample_sr >= 16000:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- output_folder = "audio-outputs"
- os.makedirs(output_folder, exist_ok=True)
- output_filename = "generated_audio_{}.wav"
- output_count = 1
- while True:
- current_output_path = os.path.join(output_folder, output_filename.format(output_count))
- if not os.path.exists(current_output_path):
- break
- output_count += 1
-
- wavfile.write(current_output_path, self.tgt_sr, audio_opt)
- print(f"Generated audio saved to: {current_output_path}")
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
- def vc_single_dont_save(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
-            )  # in case the index path was mistyped, substitute the corrected one automatically
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if self.tgt_sr != resample_sr >= 16000:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
-
- def vc_multi(
- self,
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-            )  # strip stray spaces, quotes and newlines copied along with the path
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [
- os.path.join(dir_path, name) for name in os.listdir(dir_path)
- ]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
-                info, opt = self.vc_single(
-                    sid,
-                    path,
-                    path,
-                    f0_up_key,
-                    None,
-                    f0_method,
-                    file_index,
-                    file_index2,
-                    # file_big_npy,
-                    index_rate,
-                    filter_radius,
-                    resample_sr,
-                    rms_mix_rate,
-                    protect,
-                    crepe_hop_length,
-                    f0_min,
-                    note_min,
-                    f0_max,
-                    note_max,
-                    f0_autotune,
-                )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s"
- % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1)
- with BytesIO() as wavf:
- sf.write(
- wavf,
- audio_opt,
- tgt_sr,
- format="wav"
- )
- wavf.seek(0, 0)
- with open(path, "wb") as outf:
- wav2(wavf, outf, format1)
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/models.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/models.py
deleted file mode 100644
index b6bb21a8b26680b38c3af8278ed139b6628356c5..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/models.py
+++ /dev/null
@@ -1,39 +0,0 @@
-"""Utilities for defining models
-"""
-
-import operator
-from typing import Any, Callable, Type
-
-
-class KeyBasedCompareMixin:
- """Provides comparison capabilities that is based on a key"""
-
- __slots__ = ["_compare_key", "_defining_class"]
-
- def __init__(self, key: Any, defining_class: Type["KeyBasedCompareMixin"]) -> None:
- self._compare_key = key
- self._defining_class = defining_class
-
- def __hash__(self) -> int:
- return hash(self._compare_key)
-
- def __lt__(self, other: Any) -> bool:
- return self._compare(other, operator.__lt__)
-
- def __le__(self, other: Any) -> bool:
- return self._compare(other, operator.__le__)
-
- def __gt__(self, other: Any) -> bool:
- return self._compare(other, operator.__gt__)
-
- def __ge__(self, other: Any) -> bool:
- return self._compare(other, operator.__ge__)
-
- def __eq__(self, other: Any) -> bool:
- return self._compare(other, operator.__eq__)
-
- def _compare(self, other: Any, method: Callable[[Any, Any], bool]) -> bool:
- if not isinstance(other, self._defining_class):
- return NotImplemented
-
- return method(self._compare_key, other._compare_key)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/__about__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/__about__.py
deleted file mode 100644
index 3551bc2d29846441299cf57b397b02fc164c99b9..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/__about__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-__all__ = [
- "__title__",
- "__summary__",
- "__uri__",
- "__version__",
- "__author__",
- "__email__",
- "__license__",
- "__copyright__",
-]
-
-__title__ = "packaging"
-__summary__ = "Core utilities for Python packages"
-__uri__ = "https://github.com/pypa/packaging"
-
-__version__ = "21.3"
-
-__author__ = "Donald Stufft and individual contributors"
-__email__ = "donald@stufft.io"
-
-__license__ = "BSD-2-Clause or Apache-2.0"
-__copyright__ = "2014-2019 %s" % __author__
diff --git a/spaces/Rbrq/DeticChatGPT/tools/unzip_imagenet_lvis.py b/spaces/Rbrq/DeticChatGPT/tools/unzip_imagenet_lvis.py
deleted file mode 100644
index 56ccad1a9024f425951ae025182fb709d2effcab..0000000000000000000000000000000000000000
--- a/spaces/Rbrq/DeticChatGPT/tools/unzip_imagenet_lvis.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-import argparse
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--src_path', default='datasets/imagenet/ImageNet-21K/')
- parser.add_argument('--dst_path', default='datasets/imagenet/ImageNet-LVIS/')
- parser.add_argument('--data_path', default='datasets/imagenet_lvis_wnid.txt')
- args = parser.parse_args()
-
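- # Each line of data_path is an ImageNet (WordNet) synset id; extract its tar from src_path into a matching sub-directory of dst_path.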
- f = open(args.data_path)
- for i, line in enumerate(f):
- cmd = 'mkdir {x} && tar -xf {src}/{l}.tar -C {x}'.format(
- src=args.src_path,
- l=line.strip(),
- x=args.dst_path + '/' + line.strip())
- print(i, cmd)
- os.system(cmd)
diff --git a/spaces/Red54/convert-sd-ckpt/README.md b/spaces/Red54/convert-sd-ckpt/README.md
deleted file mode 100644
index d1573bfa3ad52f643dfe1810e35fdf8c111df9af..0000000000000000000000000000000000000000
--- a/spaces/Red54/convert-sd-ckpt/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Convert to Diffusers
-emoji: 🤖
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: diffusers/convert-sd-ckpt
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/conv2d_gradfix.py b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/conv2d_gradfix.py
deleted file mode 100644
index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000
--- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/conv2d_gradfix.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import contextlib
-import warnings
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-
-enabled = True
-weight_gradients_disabled = False
-
-
-@contextlib.contextmanager
-def no_weight_gradients():
- global weight_gradients_disabled
-
- old = weight_gradients_disabled
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=False,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=0,
- dilation=dilation,
- groups=groups,
- ).apply(input, weight, bias)
-
- return F.conv2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def conv_transpose2d(
- input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- output_padding=0,
- groups=1,
- dilation=1,
-):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=True,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- groups=groups,
- dilation=dilation,
- ).apply(input, weight, bias)
-
- return F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def could_use_op(input):
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
-
- if input.device.type != "cuda":
- return False
-
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
- return True
-
- warnings.warn(
- f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
- )
-
- return False
-
-
-def ensure_tuple(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
-
- return xs
-
-
-conv2d_gradfix_cache = dict()
-
-
-def conv2d_gradfix(
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
-):
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = ensure_tuple(stride, ndim)
- padding = ensure_tuple(padding, ndim)
- output_padding = ensure_tuple(output_padding, ndim)
- dilation = ensure_tuple(dilation, ndim)
-
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in conv2d_gradfix_cache:
- return conv2d_gradfix_cache[key]
-
- common_kwargs = dict(
- stride=stride, padding=padding, dilation=dilation, groups=groups
- )
-
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
-
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
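- # Conv2d below handles the forward pass and the input gradient; Conv2dGradWeight computes the weight gradient via cuDNN directly,
- # so weight gradients can be skipped with no_weight_gradients() and differentiated a second time (as StyleGAN2-style regularizers require).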
- class Conv2d(autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- if not transpose:
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- else:
- out = F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- output_padding=output_padding,
- **common_kwargs,
- )
-
- ctx.save_for_backward(input, weight)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- grad_input, grad_weight, grad_bias = None, None, None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, weight, None)
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum((0, 2, 3))
-
- return grad_input, grad_weight, grad_bias
-
- class Conv2dGradWeight(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- op = torch._C._jit_get_operation(
- "aten::cudnn_convolution_backward_weight"
- if not transpose
- else "aten::cudnn_convolution_transpose_backward_weight"
- )
- flags = [
- torch.backends.cudnn.benchmark,
- torch.backends.cudnn.deterministic,
- torch.backends.cudnn.allow_tf32,
- ]
- grad_weight = op(
- weight_shape,
- grad_output,
- input,
- padding,
- stride,
- dilation,
- groups,
- *flags,
- )
- ctx.save_for_backward(grad_output, input)
-
- return grad_weight
-
- @staticmethod
- def backward(ctx, grad_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_grad_output, grad_grad_input = None, None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, grad_grad_weight, None)
-
- return grad_grad_output, grad_grad_input
-
- conv2d_gradfix_cache[key] = Conv2d
-
- return Conv2d
diff --git a/spaces/Reself/StableVideo/ldm/modules/midas/midas/dpt_depth.py b/spaces/Reself/StableVideo/ldm/modules/midas/midas/dpt_depth.py
deleted file mode 100644
index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000
--- a/spaces/Reself/StableVideo/ldm/modules/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock,
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
- False, # Set to True if you want to train from scratch; otherwise ImageNet weights are used
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
-
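- # Forward pass: take four intermediate feature maps from the ViT backbone, project them with the scratch layers, fuse them coarse-to-fine through the refinenet blocks, and decode with the output head.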
- def forward(self, x):
- if self.channels_last:
- x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(self, path=None, non_negative=True, **kwargs):
- features = kwargs["features"] if "features" in kwargs else 256
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
-
- def forward(self, x):
- return super().forward(x).squeeze(dim=1)
-
diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/train.py b/spaces/Ritori/TTS_Yui/hifi-gan/train.py
deleted file mode 100644
index 191c9f478ab3f01c049b240d3dd7267fd03cdcb1..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/hifi-gan/train.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import warnings
-warnings.simplefilter(action='ignore', category=FutureWarning)
-import itertools
-import os
-import time
-import argparse
-import json
-import torch
-import torch.nn.functional as F
-from torch.utils.tensorboard import SummaryWriter
-from torch.utils.data import DistributedSampler, DataLoader
-import torch.multiprocessing as mp
-from torch.distributed import init_process_group
-from torch.nn.parallel import DistributedDataParallel
-from env import AttrDict, build_env
-from meldataset import MelDataset, mel_spectrogram, get_dataset_filelist
-from models import Generator, MultiPeriodDiscriminator, MultiScaleDiscriminator, feature_loss, generator_loss,\
- discriminator_loss
-from hifiutils import plot_spectrogram, scan_checkpoint, load_checkpoint, save_checkpoint
-
-torch.backends.cudnn.benchmark = True
-
-
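- # Each training step runs a discriminator update (MPD + MSD) followed by a generator update driven by adversarial, feature-matching and L1 mel-spectrogram losses.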
-def train(rank, a, h, warm_start):
- if h.num_gpus > 1:
- init_process_group(backend=h.dist_config['dist_backend'], init_method=h.dist_config['dist_url'],
- world_size=h.dist_config['world_size'] * h.num_gpus, rank=rank)
-
- torch.cuda.manual_seed(h.seed)
- device = torch.device('cuda:{:d}'.format(rank))
-
- generator = Generator(h).to(device)
- mpd = MultiPeriodDiscriminator().to(device)
- msd = MultiScaleDiscriminator().to(device)
-
- if rank == 0:
- print(generator)
- os.makedirs(a.checkpoint_path, exist_ok=True)
- print("checkpoints directory : ", a.checkpoint_path)
-
- if os.path.isdir(a.checkpoint_path):
- cp_g = scan_checkpoint(a.checkpoint_path, 'g_')
- cp_do = scan_checkpoint(a.checkpoint_path, 'do_')
-
- steps = 0
- if cp_g is None or cp_do is None:
- state_dict_do = None
- last_epoch = -1
- else:
- state_dict_g = load_checkpoint(cp_g, device)
- state_dict_do = load_checkpoint(cp_do, device)
- generator.load_state_dict(state_dict_g['generator'])
- mpd.load_state_dict(state_dict_do['mpd'])
- msd.load_state_dict(state_dict_do['msd'])
- if warm_start:
- steps = 1
- last_epoch = 0
- else:
- steps = state_dict_do['steps'] + 1
- last_epoch = state_dict_do['epoch']
-
- if h.num_gpus > 1:
- generator = DistributedDataParallel(generator, device_ids=[rank]).to(device)
- mpd = DistributedDataParallel(mpd, device_ids=[rank]).to(device)
- msd = DistributedDataParallel(msd, device_ids=[rank]).to(device)
-
- optim_g = torch.optim.AdamW(generator.parameters(), h.learning_rate, betas=[h.adam_b1, h.adam_b2])
- optim_d = torch.optim.AdamW(itertools.chain(msd.parameters(), mpd.parameters()),
- h.learning_rate, betas=[h.adam_b1, h.adam_b2])
-
- if state_dict_do is not None:
- optim_g.load_state_dict(state_dict_do['optim_g'])
- optim_d.load_state_dict(state_dict_do['optim_d'])
- if warm_start:
- optim_g.param_groups[0]["lr"] = h.learning_rate
- optim_d.param_groups[0]["lr"] = h.learning_rate
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=h.lr_decay, last_epoch=last_epoch)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=h.lr_decay, last_epoch=last_epoch)
-
- training_filelist, validation_filelist = get_dataset_filelist(a)
-
- trainset = MelDataset(training_filelist, h.segment_size, h.n_fft, h.num_mels,
- h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, n_cache_reuse=0,
- shuffle=False if h.num_gpus > 1 else True, fmax_loss=h.fmax_for_loss, device=device,
- fine_tuning=a.fine_tuning, base_mels_path=a.input_mels_dir)
-
- train_sampler = DistributedSampler(trainset) if h.num_gpus > 1 else None
-
- train_loader = DataLoader(trainset, num_workers=h.num_workers, shuffle=False,
- sampler=train_sampler,
- batch_size=h.batch_size,
- pin_memory=True,
- drop_last=True)
-
- if rank == 0:
- validset = MelDataset(validation_filelist, h.segment_size, h.n_fft, h.num_mels,
- h.hop_size, h.win_size, h.sampling_rate, h.fmin, h.fmax, False, False, n_cache_reuse=0,
- fmax_loss=h.fmax_for_loss, device=device, fine_tuning=a.fine_tuning,
- base_mels_path=a.input_mels_dir)
- validation_loader = DataLoader(validset, num_workers=1, shuffle=False,
- sampler=None,
- batch_size=1,
- pin_memory=True,
- drop_last=True)
-
- sw = SummaryWriter(os.path.join(a.checkpoint_path, 'logs'))
-
- generator.train()
- mpd.train()
- msd.train()
- for epoch in range(max(0, last_epoch), a.training_epochs):
- for param_group in optim_g.param_groups:
- print("Current learning rate: " + str(param_group["lr"]))
- if rank == 0:
- start = time.time()
- print("Epoch: {}".format(epoch+1))
-
- if h.num_gpus > 1:
- train_sampler.set_epoch(epoch)
-
- for i, batch in enumerate(train_loader):
- if rank == 0:
- start_b = time.time()
- x, y, _, y_mel = batch
- x = torch.autograd.Variable(x.to(device, non_blocking=True))
- y = torch.autograd.Variable(y.to(device, non_blocking=True))
- y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True))
- y = y.unsqueeze(1)
-
- y_g_hat = generator(x)
- y_g_hat_mel = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels, h.sampling_rate, h.hop_size, h.win_size,
- h.fmin, h.fmax_for_loss)
-
- optim_d.zero_grad()
-
- # MPD
- y_df_hat_r, y_df_hat_g, _, _ = mpd(y, y_g_hat.detach())
- loss_disc_f, losses_disc_f_r, losses_disc_f_g = discriminator_loss(y_df_hat_r, y_df_hat_g)
-
- # MSD
- y_ds_hat_r, y_ds_hat_g, _, _ = msd(y, y_g_hat.detach())
- loss_disc_s, losses_disc_s_r, losses_disc_s_g = discriminator_loss(y_ds_hat_r, y_ds_hat_g)
-
- loss_disc_all = loss_disc_s + loss_disc_f
-
- loss_disc_all.backward()
- optim_d.step()
-
- # Generator
- optim_g.zero_grad()
-
- # L1 Mel-Spectrogram Loss
- loss_mel = F.l1_loss(y_mel, y_g_hat_mel) * 45
-
- y_df_hat_r, y_df_hat_g, fmap_f_r, fmap_f_g = mpd(y, y_g_hat)
- y_ds_hat_r, y_ds_hat_g, fmap_s_r, fmap_s_g = msd(y, y_g_hat)
- loss_fm_f = feature_loss(fmap_f_r, fmap_f_g)
- loss_fm_s = feature_loss(fmap_s_r, fmap_s_g)
- loss_gen_f, losses_gen_f = generator_loss(y_df_hat_g)
- loss_gen_s, losses_gen_s = generator_loss(y_ds_hat_g)
- loss_gen_all = loss_gen_s + loss_gen_f + loss_fm_s + loss_fm_f + loss_mel
-
- loss_gen_all.backward()
- optim_g.step()
-
- if rank == 0:
- # STDOUT logging
- if steps % a.stdout_interval == 0:
- with torch.no_grad():
- mel_error = F.l1_loss(y_mel, y_g_hat_mel).item()
-
- print('Steps : {:d}, Gen Loss Total : {:4.3f}, Mel-Spec. Error : {:4.3f}, s/b : {:4.3f}'.
- format(steps, loss_gen_all, mel_error, time.time() - start_b))
-
- # checkpointing
- if steps % a.checkpoint_interval == 0 and steps != 0:
- checkpoint_path = "{}/g_00000000".format(a.checkpoint_path)
- save_checkpoint(checkpoint_path,
- {'generator': (generator.module if h.num_gpus > 1 else generator).state_dict()})
- checkpoint_path = "{}/do_00000000".format(a.checkpoint_path)
- save_checkpoint(checkpoint_path,
- {'mpd': (mpd.module if h.num_gpus > 1
- else mpd).state_dict(),
- 'msd': (msd.module if h.num_gpus > 1
- else msd).state_dict(),
- 'optim_g': optim_g.state_dict(), 'optim_d': optim_d.state_dict(), 'steps': steps,
- 'epoch': epoch})
-
- # Tensorboard summary logging
- if steps % a.summary_interval == 0:
- sw.add_scalar("training/gen_loss_total", loss_gen_all, steps)
- sw.add_scalar("training/mel_spec_error", mel_error, steps)
-
- # Validation
- if steps % a.validation_interval == 0 and not a.fine_tuning: # and steps != 0:
- generator.eval()
- torch.cuda.empty_cache()
- val_err_tot = 0
- with torch.no_grad():
- for j, batch in enumerate(validation_loader):
- x, y, _, y_mel = batch
- y_g_hat = generator(x.to(device))
- y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True))
- y_g_hat_mel = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels, h.sampling_rate,
- h.hop_size, h.win_size,
- h.fmin, h.fmax_for_loss)
- val_err_tot += F.l1_loss(y_mel, y_g_hat_mel).item()
-
- if j <= 4:
- if steps == 0:
- sw.add_audio('gt/y_{}'.format(j), y[0], steps, h.sampling_rate)
- sw.add_figure('gt/y_spec_{}'.format(j), plot_spectrogram(x[0]), steps)
-
- sw.add_audio('generated/y_hat_{}'.format(j), y_g_hat[0], steps, h.sampling_rate)
- y_hat_spec = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels,
- h.sampling_rate, h.hop_size, h.win_size,
- h.fmin, h.fmax)
- sw.add_figure('generated/y_hat_spec_{}'.format(j),
- plot_spectrogram(y_hat_spec.squeeze(0).cpu().numpy()), steps)
-
- val_err = val_err_tot / (j+1)
- sw.add_scalar("validation/mel_spec_error", val_err, steps)
-
- generator.train()
-
- steps += 1
-
- scheduler_g.step()
- scheduler_d.step()
-
- if rank == 0:
- print('Time taken for epoch {} is {} sec\n'.format(epoch + 1, int(time.time() - start)))
-
-
-def main():
- print('Initializing Training Process..')
-
- parser = argparse.ArgumentParser()
-
- parser.add_argument('--group_name', default=None)
- parser.add_argument('--input_wavs_dir', default='LJSpeech-1.1/wavs')
- parser.add_argument('--input_mels_dir', default='ft_dataset')
- parser.add_argument('--input_training_file', default='LJSpeech-1.1/training.txt')
- parser.add_argument('--input_validation_file', default='LJSpeech-1.1/validation.txt')
- parser.add_argument('--checkpoint_path', default='cp_hifigan')
- parser.add_argument('--config', default='')
- parser.add_argument('--training_epochs', default=3100, type=int)
- parser.add_argument('--stdout_interval', default=5, type=int)
- parser.add_argument('--checkpoint_interval', default=5000, type=int)
- parser.add_argument('--summary_interval', default=100, type=int)
- parser.add_argument('--validation_interval', default=1000, type=int)
- parser.add_argument('--fine_tuning', default=False, type=bool)
- parser.add_argument('--warm_start', default=False, type=bool)
-
- a = parser.parse_args()
-
- with open(a.config) as f:
- data = f.read()
-
- json_config = json.loads(data)
- h = AttrDict(json_config)
- build_env(a.config, 'config.json', a.checkpoint_path)
-
- torch.manual_seed(h.seed)
- if torch.cuda.is_available():
- torch.cuda.manual_seed(h.seed)
- h.num_gpus = torch.cuda.device_count()
- h.batch_size = int(h.batch_size / h.num_gpus)
- print('Batch size per GPU :', h.batch_size)
- else:
- pass
-
- if h.num_gpus > 1:
- mp.spawn(train, nprocs=h.num_gpus, args=(a, h, a.warm_start,))
- else:
- train(0, a, h, a.warm_start)
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Rongjiehuang/GenerSpeech/vocoders/base_vocoder.py b/spaces/Rongjiehuang/GenerSpeech/vocoders/base_vocoder.py
deleted file mode 100644
index fe49a9e4f790ecdc5e76d60a23f96602b59fc48d..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/vocoders/base_vocoder.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import importlib
-VOCODERS = {}
-
-
-def register_vocoder(cls):
- VOCODERS[cls.__name__.lower()] = cls
- VOCODERS[cls.__name__] = cls
- return cls
-
-
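- # Resolve the vocoder class either from the VOCODERS registry (by name) or by importing a fully qualified "package.module.Class" path given in hparams['vocoder'].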
-def get_vocoder_cls(hparams):
- if hparams['vocoder'] in VOCODERS:
- return VOCODERS[hparams['vocoder']]
- else:
- vocoder_cls = hparams['vocoder']
- pkg = ".".join(vocoder_cls.split(".")[:-1])
- cls_name = vocoder_cls.split(".")[-1]
- vocoder_cls = getattr(importlib.import_module(pkg), cls_name)
- return vocoder_cls
-
-
-class BaseVocoder:
- def spec2wav(self, mel):
- """
-
- :param mel: [T, 80]
- :return: wav: [T']
- """
-
- raise NotImplementedError
-
- @staticmethod
- def wav2spec(wav_fn):
- """
-
- :param wav_fn: str
- :return: wav, mel: [T, 80]
- """
- raise NotImplementedError
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py
deleted file mode 100644
index d447829a091d94e56b2984e801de74b4c9ec5d19..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import re
-import contextlib
-import numpy as np
-import torch
-import warnings
-import dnnlib
-
-#----------------------------------------------------------------------------
-# Cached construction of constant tensors. Avoids CPU=>GPU copy when the
-# same constant is used multiple times.
-
-_constant_cache = dict()
-
-def constant(value, shape=None, dtype=None, device=None, memory_format=None):
- value = np.asarray(value)
- if shape is not None:
- shape = tuple(shape)
- if dtype is None:
- dtype = torch.get_default_dtype()
- if device is None:
- device = torch.device('cpu')
- if memory_format is None:
- memory_format = torch.contiguous_format
-
- key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format)
- tensor = _constant_cache.get(key, None)
- if tensor is None:
- tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device)
- if shape is not None:
- tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape))
- tensor = tensor.contiguous(memory_format=memory_format)
- _constant_cache[key] = tensor
- return tensor
-
-#----------------------------------------------------------------------------
-# Replace NaN/Inf with specified numerical values.
-
-try:
- nan_to_num = torch.nan_to_num # 1.8.0a0
-except AttributeError:
- def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin
- assert isinstance(input, torch.Tensor)
- if posinf is None:
- posinf = torch.finfo(input.dtype).max
- if neginf is None:
- neginf = torch.finfo(input.dtype).min
- assert nan == 0
- return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out)
-
-#----------------------------------------------------------------------------
-# Symbolic assert.
-
-try:
- symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access
-except AttributeError:
- symbolic_assert = torch.Assert # 1.7.0
-
-#----------------------------------------------------------------------------
-# Context manager to suppress known warnings in torch.jit.trace().
-
-class suppress_tracer_warnings(warnings.catch_warnings):
- def __enter__(self):
- super().__enter__()
- warnings.simplefilter('ignore', category=torch.jit.TracerWarning)
- return self
-
-#----------------------------------------------------------------------------
-# Assert that the shape of a tensor matches the given list of integers.
-# None indicates that the size of a dimension is allowed to vary.
-# Performs symbolic assertion when used in torch.jit.trace().
-
-def assert_shape(tensor, ref_shape):
- if tensor.ndim != len(ref_shape):
- raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}')
- for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)):
- if ref_size is None:
- pass
- elif isinstance(ref_size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}')
- elif isinstance(size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}')
- elif size != ref_size:
- raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}')
-
-#----------------------------------------------------------------------------
-# Function decorator that calls torch.autograd.profiler.record_function().
-
-def profiled_function(fn):
- def decorator(*args, **kwargs):
- with torch.autograd.profiler.record_function(fn.__name__):
- return fn(*args, **kwargs)
- decorator.__name__ = fn.__name__
- return decorator
-
-#----------------------------------------------------------------------------
-# Sampler for torch.utils.data.DataLoader that loops over the dataset
-# indefinitely, shuffling items as it goes.
-
-class InfiniteSampler(torch.utils.data.Sampler):
- def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5):
- assert len(dataset) > 0
- assert num_replicas > 0
- assert 0 <= rank < num_replicas
- assert 0 <= window_size <= 1
- super().__init__(dataset)
- self.dataset = dataset
- self.rank = rank
- self.num_replicas = num_replicas
- self.shuffle = shuffle
- self.seed = seed
- self.window_size = window_size
-
- def __iter__(self):
- order = np.arange(len(self.dataset))
- rnd = None
- window = 0
- if self.shuffle:
- rnd = np.random.RandomState(self.seed)
- rnd.shuffle(order)
- window = int(np.rint(order.size * self.window_size))
-
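- # Yield indices forever: each replica takes every num_replicas-th index, and visited positions are swapped with a random neighbour inside the window so the order keeps reshuffling over time.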
- idx = 0
- while True:
- i = idx % order.size
- if idx % self.num_replicas == self.rank:
- yield order[i]
- if window >= 2:
- j = (i - rnd.randint(window)) % order.size
- order[i], order[j] = order[j], order[i]
- idx += 1
-
-#----------------------------------------------------------------------------
-# Utilities for operating with torch.nn.Module parameters and buffers.
-
-def params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.parameters()) + list(module.buffers())
-
-def named_params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.named_parameters()) + list(module.named_buffers())
-
-def copy_params_and_buffers(src_module, dst_module, require_all=False):
- assert isinstance(src_module, torch.nn.Module)
- assert isinstance(dst_module, torch.nn.Module)
- src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)}
- for name, tensor in named_params_and_buffers(dst_module):
- assert (name in src_tensors) or (not require_all)
- if name in src_tensors:
- tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad)
-
-#----------------------------------------------------------------------------
-# Context manager for easily enabling/disabling DistributedDataParallel
-# synchronization.
-
-@contextlib.contextmanager
-def ddp_sync(module, sync):
- assert isinstance(module, torch.nn.Module)
- if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel):
- yield
- else:
- with module.no_sync():
- yield
-
-#----------------------------------------------------------------------------
-# Check DistributedDataParallel consistency across processes.
-
-def check_ddp_consistency(module, ignore_regex=None):
- assert isinstance(module, torch.nn.Module)
- for name, tensor in named_params_and_buffers(module):
- fullname = type(module).__name__ + '.' + name
- flag = False
- if ignore_regex is not None:
- for regex in ignore_regex:
- if re.fullmatch(regex, fullname):
- flag = True
- break
- if flag:
- continue
- tensor = tensor.detach()
- other = tensor.clone()
- torch.distributed.broadcast(tensor=other, src=0)
- assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname
-
-#----------------------------------------------------------------------------
-# Print summary table of module hierarchy.
-
-def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True):
- assert isinstance(module, torch.nn.Module)
- assert not isinstance(module, torch.jit.ScriptModule)
- assert isinstance(inputs, (tuple, list))
-
- # Register hooks.
- entries = []
- nesting = [0]
- def pre_hook(_mod, _inputs):
- nesting[0] += 1
- def post_hook(mod, _inputs, outputs):
- nesting[0] -= 1
- if nesting[0] <= max_nesting:
- outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs]
- outputs = [t for t in outputs if isinstance(t, torch.Tensor)]
- entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs))
- hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()]
- hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()]
-
- # Run module.
- outputs = module(*inputs)
- for hook in hooks:
- hook.remove()
-
- # Identify unique outputs, parameters, and buffers.
- tensors_seen = set()
- for e in entries:
- e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen]
- e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen]
- e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen]
- tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs}
-
- # Filter out redundant entries.
- if skip_redundant:
- entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)]
-
- # Construct table.
- rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']]
- rows += [['---'] * len(rows[0])]
- param_total = 0
- buffer_total = 0
- submodule_names = {mod: name for name, mod in module.named_modules()}
- for e in entries:
- name = '' if e.mod is module else submodule_names[e.mod]
- param_size = sum(t.numel() for t in e.unique_params)
- buffer_size = sum(t.numel() for t in e.unique_buffers)
- output_shapes = [str(list(t.shape)) for t in e.outputs]
- output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs]
- rows += [[
- name + (':0' if len(e.outputs) >= 2 else ''),
- str(param_size) if param_size else '-',
- str(buffer_size) if buffer_size else '-',
- (output_shapes + ['-'])[0],
- (output_dtypes + ['-'])[0],
- ]]
- for idx in range(1, len(e.outputs)):
- rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]]
- param_total += param_size
- buffer_total += buffer_size
- rows += [['---'] * len(rows[0])]
- rows += [['Total', str(param_total), str(buffer_total), '-', '-']]
-
- # Print table.
- widths = [max(len(cell) for cell in column) for column in zip(*rows)]
- print()
- for row in rows:
- print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths)))
- print()
- return outputs
-
-#----------------------------------------------------------------------------
diff --git a/spaces/SIGGRAPH2022/sketch2pose/src/spin/__init__.py b/spaces/SIGGRAPH2022/sketch2pose/src/spin/__init__.py
deleted file mode 100644
index 1e3ee87d6b2fd310370cfdb2c75ba7549c5a2982..0000000000000000000000000000000000000000
--- a/spaces/SIGGRAPH2022/sketch2pose/src/spin/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from .constants import JOINT_NAMES
-from .hmr import hmr
-from .smpl import SMPLX
-from .utils import process_image
-
-__all__ = [
- "hmr",
- "SMPLX",
- "process_image",
- "JOINT_NAMES",
-]
diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/__init__.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/__init__.py
deleted file mode 100644
index ac93faf469484dd6e7c5523555b181f3681c5e9a..0000000000000000000000000000000000000000
--- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from streamlit_langchain_chat.customized_langchain.llms.openai import AzureOpenAI, OpenAI, OpenAIChat, AzureOpenAIChat
diff --git a/spaces/Sammy03/neuralserach/app.py b/spaces/Sammy03/neuralserach/app.py
deleted file mode 100644
index 8267768d02d0ecee433b2a273aacad23b84b5ac0..0000000000000000000000000000000000000000
--- a/spaces/Sammy03/neuralserach/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from sentence_transformers import SentenceTransformer, CrossEncoder, util
-import torch
-import pickle
-import pandas as pd
-import gradio as gr
-
-bi_encoder = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")
-cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
-corpus_embeddings=pd.read_pickle("corpus_embeddings_cpu.pkl")
-corpus=pd.read_pickle("corpus.pkl")
-
-def search(query,top_k=100):
- print("Top 5 Answer by the NSE:")
- print()
- ans=[]
- ##### Sematic Search #####
- # Encode the query using the bi-encoder and find potentially relevant passages
- question_embedding = bi_encoder.encode(query, convert_to_tensor=True)
- hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)
- hits = hits[0] # Get the hits for the first query
-
- ##### Re-Ranking #####
- # Now, score all retrieved passages with the cross_encoder
- cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits]
- cross_scores = cross_encoder.predict(cross_inp)
-
- # Sort results by the cross-encoder scores
- for idx in range(len(cross_scores)):
- hits[idx]['cross-score'] = cross_scores[idx]
-
- hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)
-
- for idx, hit in enumerate(hits[0:5]):
- ans.append(corpus[hit['corpus_id']])
- return ans[0],ans[1],ans[2],ans[3],ans[4]
-
-inp=gr.inputs.Textbox(lines=1, placeholder=None, default="", label="search you query here")
-out=gr.outputs.Textbox(type="auto",label="search results")
-
-iface = gr.Interface(fn=search, inputs=inp, outputs=[out,out,out,out,out],title="Neural Search Engine",theme="huggingface",layout='vertical')
-iface.launch()
\ No newline at end of file
diff --git a/spaces/SeViLA/SeViLA/lavis/processors/randaugment.py b/spaces/SeViLA/SeViLA/lavis/processors/randaugment.py
deleted file mode 100644
index 5c6a9e6d62f74358f490d19546c9829b3ac6aaef..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/processors/randaugment.py
+++ /dev/null
@@ -1,398 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import cv2
-import numpy as np
-
-import torch
-
-
-## aug functions
-def identity_func(img):
- return img
-
-
-def autocontrast_func(img, cutoff=0):
- """
- same output as PIL.ImageOps.autocontrast
- """
- n_bins = 256
-
- def tune_channel(ch):
- n = ch.size
- cut = cutoff * n // 100
- if cut == 0:
- high, low = ch.max(), ch.min()
- else:
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- low = np.argwhere(np.cumsum(hist) > cut)
- low = 0 if low.shape[0] == 0 else low[0]
- high = np.argwhere(np.cumsum(hist[::-1]) > cut)
- high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0]
- if high <= low:
- table = np.arange(n_bins)
- else:
- scale = (n_bins - 1) / (high - low)
- offset = -low * scale
- table = np.arange(n_bins) * scale + offset
- table[table < 0] = 0
- table[table > n_bins - 1] = n_bins - 1
- table = table.clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def equalize_func(img):
- """
- same output as PIL.ImageOps.equalize
- PIL's implementation is different from cv2.equalize
- """
- n_bins = 256
-
- def tune_channel(ch):
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- non_zero_hist = hist[hist != 0].reshape(-1)
- step = np.sum(non_zero_hist[:-1]) // (n_bins - 1)
- if step == 0:
- return ch
- n = np.empty_like(hist)
- n[0] = step // 2
- n[1:] = hist[:-1]
- table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def rotate_func(img, degree, fill=(0, 0, 0)):
- """
- like PIL, rotate by degree, not radians
- """
- H, W = img.shape[0], img.shape[1]
- center = W / 2, H / 2
- M = cv2.getRotationMatrix2D(center, degree, 1)
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill)
- return out
-
-
-def solarize_func(img, thresh=128):
- """
- same output as PIL.ImageOps.solarize
- """
- table = np.array([el if el < thresh else 255 - el for el in range(256)])
- table = table.clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def color_func(img, factor):
- """
- same output as PIL.ImageEnhance.Color
- """
- ## implementation according to PIL definition, quite slow
- # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis]
- # out = blend(degenerate, img, factor)
- # M = (
- # np.eye(3) * factor
- # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor)
- # )[np.newaxis, np.newaxis, :]
- M = np.float32(
- [[0.886, -0.114, -0.114], [-0.587, 0.413, -0.587], [-0.299, -0.299, 0.701]]
- ) * factor + np.float32([[0.114], [0.587], [0.299]])
- out = np.matmul(img, M).clip(0, 255).astype(np.uint8)
- return out
-
-
-def contrast_func(img, factor):
- """
- same output as PIL.ImageEnhance.Contrast
- """
- mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299]))
- table = (
- np.array([(el - mean) * factor + mean for el in range(256)])
- .clip(0, 255)
- .astype(np.uint8)
- )
- out = table[img]
- return out
-
-
-def brightness_func(img, factor):
- """
- same output as PIL.ImageEnhance.Brightness
- """
- table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def sharpness_func(img, factor):
- """
- The differences between this result and PIL are all on the 4 boundaries; the center
- areas are the same
- """
- kernel = np.ones((3, 3), dtype=np.float32)
- kernel[1][1] = 5
- kernel /= 13
- degenerate = cv2.filter2D(img, -1, kernel)
- if factor == 0.0:
- out = degenerate
- elif factor == 1.0:
- out = img
- else:
- out = img.astype(np.float32)
- degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :]
- out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate)
- out = out.astype(np.uint8)
- return out
-
-
-def shear_x_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, factor, 0], [0, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def translate_x_func(img, offset, fill=(0, 0, 0)):
- """
- same output as PIL.Image.transform
- """
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, -offset], [0, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def translate_y_func(img, offset, fill=(0, 0, 0)):
- """
- same output as PIL.Image.transform
- """
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [0, 1, -offset]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def posterize_func(img, bits):
- """
- same output as PIL.ImageOps.posterize
- """
- out = np.bitwise_and(img, np.uint8(255 << (8 - bits)))
- return out
-
-
-def shear_y_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [factor, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def cutout_func(img, pad_size, replace=(0, 0, 0)):
- replace = np.array(replace, dtype=np.uint8)
- H, W = img.shape[0], img.shape[1]
- rh, rw = np.random.random(2)
- pad_size = pad_size // 2
- ch, cw = int(rh * H), int(rw * W)
- x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H)
- y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W)
- out = img.copy()
- out[x1:x2, y1:y2, :] = replace
- return out
-
-
-### level to args
-def enhance_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- return ((level / MAX_LEVEL) * 1.8 + 0.1,)
-
- return level_to_args
-
-
-def shear_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 0.3
- if np.random.random() > 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def translate_level_to_args(translate_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * float(translate_const)
- if np.random.random() > 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * cutout_const)
- return (level, replace_value)
-
- return level_to_args
-
-
-def solarize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 256)
- return (level,)
-
- return level_to_args
-
-
-def none_level_to_args(level):
- return ()
-
-
-def posterize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 4)
- return (level,)
-
- return level_to_args
-
-
-def rotate_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 30
- if np.random.random() < 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-func_dict = {
- "Identity": identity_func,
- "AutoContrast": autocontrast_func,
- "Equalize": equalize_func,
- "Rotate": rotate_func,
- "Solarize": solarize_func,
- "Color": color_func,
- "Contrast": contrast_func,
- "Brightness": brightness_func,
- "Sharpness": sharpness_func,
- "ShearX": shear_x_func,
- "TranslateX": translate_x_func,
- "TranslateY": translate_y_func,
- "Posterize": posterize_func,
- "ShearY": shear_y_func,
-}
-
-translate_const = 10
-MAX_LEVEL = 10
-replace_value = (128, 128, 128)
-arg_dict = {
- "Identity": none_level_to_args,
- "AutoContrast": none_level_to_args,
- "Equalize": none_level_to_args,
- "Rotate": rotate_level_to_args(MAX_LEVEL, replace_value),
- "Solarize": solarize_level_to_args(MAX_LEVEL),
- "Color": enhance_level_to_args(MAX_LEVEL),
- "Contrast": enhance_level_to_args(MAX_LEVEL),
- "Brightness": enhance_level_to_args(MAX_LEVEL),
- "Sharpness": enhance_level_to_args(MAX_LEVEL),
- "ShearX": shear_level_to_args(MAX_LEVEL, replace_value),
- "TranslateX": translate_level_to_args(translate_const, MAX_LEVEL, replace_value),
- "TranslateY": translate_level_to_args(translate_const, MAX_LEVEL, replace_value),
- "Posterize": posterize_level_to_args(MAX_LEVEL),
- "ShearY": shear_level_to_args(MAX_LEVEL, replace_value),
-}
-
-
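- # RandAugment-style policy: sample N ops per image and apply each with probability 0.5 at magnitude M (mapped to each op's own range via arg_dict).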
-class RandomAugment(object):
- def __init__(self, N=2, M=10, isPIL=False, augs=[]):
- self.N = N
- self.M = M
- self.isPIL = isPIL
- if augs:
- self.augs = augs
- else:
- self.augs = list(arg_dict.keys())
-
- def get_random_ops(self):
- sampled_ops = np.random.choice(self.augs, self.N)
- return [(op, 0.5, self.M) for op in sampled_ops]
-
- def __call__(self, img):
- if self.isPIL:
- img = np.array(img)
- ops = self.get_random_ops()
- for name, prob, level in ops:
- if np.random.random() > prob:
- continue
- args = arg_dict[name](level)
- img = func_dict[name](img, *args)
- return img
-
-
-class VideoRandomAugment(object):
- def __init__(self, N=2, M=10, p=0.0, tensor_in_tensor_out=True, augs=[]):
- self.N = N
- self.M = M
- self.p = p
- self.tensor_in_tensor_out = tensor_in_tensor_out
- if augs:
- self.augs = augs
- else:
- self.augs = list(arg_dict.keys())
-
- def get_random_ops(self):
- sampled_ops = np.random.choice(self.augs, self.N, replace=False)
- return [(op, self.M) for op in sampled_ops]
-
- def __call__(self, frames):
- assert (
- frames.shape[-1] == 3
- ), "Expecting last dimension for 3-channels RGB (b, h, w, c)."
-
- if self.tensor_in_tensor_out:
- frames = frames.numpy().astype(np.uint8)
-
- num_frames = frames.shape[0]
-
- ops = num_frames * [self.get_random_ops()]
- apply_or_not = num_frames * [np.random.random(size=self.N) > self.p]
-
- frames = torch.stack(
- list(map(self._aug, frames, ops, apply_or_not)), dim=0
- ).float()
-
- return frames
-
- def _aug(self, img, ops, apply_or_not):
- for i, (name, level) in enumerate(ops):
- if not apply_or_not[i]:
- continue
- args = arg_dict[name](level)
- img = func_dict[name](img, *args)
- return torch.from_numpy(img)
-
-
-if __name__ == "__main__":
- a = RandomAugment()
- img = np.random.randn(32, 32, 3)
- a(img)
diff --git a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/backups_test.py b/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/backups_test.py
deleted file mode 100644
index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/backups_test.py
+++ /dev/null
@@ -1,138 +0,0 @@
-
-import os
-import shutil
-import hashlib
-import time
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path
- LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
- WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
- weights_exist = False
- files_to_copy = []
- weights_to_copy = []
-
- def handle_files(root, files, is_weight_files=False):
- nonlocal weights_exist # update the flag defined in the enclosing function rather than shadowing it
- for filename in files:
- filepath = os.path.join(root, filename)
- if filename.endswith('.pth') and is_weight_files:
- weights_exist = True
- backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- else:
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created folder: {backup_folderpath}', flush=True)
- if is_weight_files:
- weights_to_copy.append((filepath, backup_filepath))
- else:
- files_to_copy.append((filepath, backup_filepath))
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')):
- handle_files(root, files)
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- handle_files(root, files, True)
-
- # Copy files in batches
- total_files = len(files_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(files_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever comes first
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="")
- start_time = time.time()
- print(f'\nImported {len(files_to_copy)} files from Google Drive backup')
-
- # Copy weights in batches
- total_weights = len(weights_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(weights_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever comes first
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="")
- start_time = time.time()
- if weights_exist:
- print(f'\nImported {len(weights_to_copy)} weight files')
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("\nNo weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
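- # Incremental backup loop: every 15 seconds, compare file mtimes against last_backup_timestamps.txt, copy new or changed log files to Google Drive, and remove backups whose local files were deleted.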
-def backup_files():
- print("\n Starting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except:
- last_backup_timestamps = {}
-
- while True:
- updated = False
- files_to_copy = []
- files_to_delete = []
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
-
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
-
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- files_to_delete.append(backup_filepath) # add to list of files to delete
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- # Copy files in batches
- if files_to_copy:
- for source, dest in files_to_copy:
- shutil.copy2(source, dest)
- print(f'Copied or updated {len(files_to_copy)} files')
-
- # Delete files in batches
- if files_to_delete:
- for file in files_to_delete:
- os.remove(file)
- print(f'Deleted {len(files_to_delete)} files')
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
- time.sleep(15) # wait for 15 seconds before checking again
diff --git a/spaces/SeyedAli/Persian-Speech-Transcription/app.py b/spaces/SeyedAli/Persian-Speech-Transcription/app.py
deleted file mode 100644
index b37295c15b1aeb0c5431b4e64eb4f2b27c65c969..0000000000000000000000000000000000000000
--- a/spaces/SeyedAli/Persian-Speech-Transcription/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import tempfile ,os
-import gradio as gr
-from transformers import AutoProcessor, AutoModelForCTC,pipeline
-import torch
-import numpy as np
-import torchaudio
-import numpy as np
-import re
-import string
-
-audio_input = gr.Audio(label="صوت گفتار فارسی",type="filepath")
-text_output = gr.TextArea(label="متن فارسی",text_align="right",rtl=True,type="text")
-
-processor = AutoProcessor.from_pretrained("SeyedAli/Persian-Speech-Transcription-Wav2Vec2-V1")
-model = AutoModelForCTC.from_pretrained("SeyedAli/Persian-Speech-Transcription-Wav2Vec2-V1")
-
-def ASR(audio):
-    with tempfile.NamedTemporaryFile(suffix=".wav") as temp_audio_file:
-        # Copy the contents of the uploaded audio file to the temporary file
-        with open(audio, "rb") as uploaded:
-            temp_audio_file.write(uploaded.read())
-        temp_audio_file.flush()
-        # Load the audio file using torchaudio
-        waveform, sample_rate = torchaudio.load(temp_audio_file.name)
-        # Resample the audio to the 16 kHz rate the model expects
-        resampler = torchaudio.transforms.Resample(sample_rate, 16000)
-        waveform = resampler(waveform)
-        # Convert the tensor to a NumPy ndarray and preprocess the audio in one pass
-        inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
-        # Transcribe the audio file
-        with torch.no_grad():
-            logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
-        # Decode the CTC output into text
-        result = processor.decode(torch.argmax(logits[0], dim=-1))
-        return result
-iface = gr.Interface(fn=ASR, inputs=audio_input, outputs=text_output)
-iface.launch(share=False)
\ No newline at end of file
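The deleted app above runs Wav2Vec2 inference by hand (temporary file, resample to 16 kHz, processor, CTC decode). If plain transcription is all that is needed, the higher-level transformers pipeline API wraps those same steps, including loading and resampling the audio. A minimal sketch, assuming the same checkpoint and a hypothetical local file sample.wav:

from transformers import pipeline

# Load the ASR pipeline once at startup instead of on every request
asr = pipeline(
    "automatic-speech-recognition",
    model="SeyedAli/Persian-Speech-Transcription-Wav2Vec2-V1",
)

# The pipeline reads and resamples the audio file internally
result = asr("sample.wav")  # hypothetical input path
print(result["text"])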
diff --git a/spaces/Shadow344/ogkalu-Comic-Diffusion/README.md b/spaces/Shadow344/ogkalu-Comic-Diffusion/README.md
deleted file mode 100644
index f429ad5b3164d3d6f0e9101c46b2992f652f8f6c..0000000000000000000000000000000000000000
--- a/spaces/Shadow344/ogkalu-Comic-Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ogkalu Comic Diffusion
-emoji: 🔥
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Slammed96/Monero-WizardLM-Uncensored-SuperCOT-StoryTelling-30bb/app.py b/spaces/Slammed96/Monero-WizardLM-Uncensored-SuperCOT-StoryTelling-30bb/app.py
deleted file mode 100644
index 3e105c5d32ddba1fe3c38a567c9e2042cc984d78..0000000000000000000000000000000000000000
--- a/spaces/Slammed96/Monero-WizardLM-Uncensored-SuperCOT-StoryTelling-30bb/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b").launch()
\ No newline at end of file
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/train.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/train.py
deleted file mode 100644
index 22dd117830bb403829d0a60b1b95e120d1e6978b..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/train.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Entry point for dora to launch solvers for running training loops.
-See more info on how to use dora: https://github.com/facebookresearch/dora
-"""
-
-import logging
-import multiprocessing
-import os
-import sys
-import typing as tp
-
-from dora import git_save, hydra_main, XP
-import flashy
-import hydra
-import omegaconf
-
-from .environment import AudioCraftEnvironment
-from .utils.cluster import get_slurm_parameters
-
-logger = logging.getLogger(__name__)
-
-
-def resolve_config_dset_paths(cfg):
- """Enable Dora to load manifest from git clone repository."""
- # manifest files for the different splits
- for key, value in cfg.datasource.items():
- if isinstance(value, str):
- cfg.datasource[key] = git_save.to_absolute_path(value)
-
-
-def get_solver(cfg):
- from . import solvers
- # Convert batch size to batch size for each GPU
- assert cfg.dataset.batch_size % flashy.distrib.world_size() == 0
- cfg.dataset.batch_size //= flashy.distrib.world_size()
- for split in ['train', 'valid', 'evaluate', 'generate']:
- if hasattr(cfg.dataset, split) and hasattr(cfg.dataset[split], 'batch_size'):
- assert cfg.dataset[split].batch_size % flashy.distrib.world_size() == 0
- cfg.dataset[split].batch_size //= flashy.distrib.world_size()
- resolve_config_dset_paths(cfg)
- solver = solvers.get_solver(cfg)
- return solver
-
-
-def get_solver_from_xp(xp: XP, override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None,
- restore: bool = True, load_best: bool = True,
- ignore_state_keys: tp.List[str] = [], disable_fsdp: bool = True):
- """Given a XP, return the Solver object.
-
- Args:
- xp (XP): Dora experiment for which to retrieve the solver.
- override_cfg (dict or None): If not None, should be a dict used to
- override some values in the config of `xp`. This will not impact
- the XP signature or folder. The format is different
- than the one used in Dora grids, nested keys should actually be nested dicts,
- not flattened, e.g. `{'optim': {'batch_size': 32}}`.
- restore (bool): If `True` (the default), restore state from the last checkpoint.
- load_best (bool): If `True` (the default), load the best state from the checkpoint.
- ignore_state_keys (list[str]): List of sources to ignore when loading the state, e.g. `optimizer`.
- disable_fsdp (bool): if True, disables FSDP entirely. This will
- also automatically skip loading the EMA. For solver specific
- state sources, like the optimizer, you might want to
- use along `ignore_state_keys=['optimizer']`. Must be used with `load_best=True`.
- """
- logger.info(f"Loading solver from XP {xp.sig}. "
- f"Overrides used: {xp.argv}")
- cfg = xp.cfg
- if override_cfg is not None:
- cfg = omegaconf.OmegaConf.merge(cfg, omegaconf.DictConfig(override_cfg))
- if disable_fsdp and cfg.fsdp.use:
- cfg.fsdp.use = False
- assert load_best is True
- # ignoring some keys that were FSDP sharded like model, ema, and best_state.
- # fsdp_best_state will be used in that case. When using a specific solver,
- # one is responsible for adding the relevant keys, e.g. 'optimizer'.
- # We could make something to automatically register those inside the solver, but that
-        # seems overkill at this point.
- ignore_state_keys = ignore_state_keys + ['model', 'ema', 'best_state']
-
- try:
- with xp.enter():
- solver = get_solver(cfg)
- if restore:
- solver.restore(load_best=load_best, ignore_state_keys=ignore_state_keys)
- return solver
- finally:
- hydra.core.global_hydra.GlobalHydra.instance().clear()
-
-
-def get_solver_from_sig(sig: str, *args, **kwargs):
- """Return Solver object from Dora signature, i.e. to play with it from a notebook.
- See `get_solver_from_xp` for more information.
- """
- xp = main.get_xp_from_sig(sig)
- return get_solver_from_xp(xp, *args, **kwargs)
-
-
-def init_seed_and_system(cfg):
- import numpy as np
- import torch
- import random
- from audiocraft.modules.transformer import set_efficient_attention_backend
-
- multiprocessing.set_start_method(cfg.mp_start_method)
- logger.debug('Setting mp start method to %s', cfg.mp_start_method)
- random.seed(cfg.seed)
- np.random.seed(cfg.seed)
-    # torch also initializes the cuda seed if available
- torch.manual_seed(cfg.seed)
- torch.set_num_threads(cfg.num_threads)
- os.environ['MKL_NUM_THREADS'] = str(cfg.num_threads)
- os.environ['OMP_NUM_THREADS'] = str(cfg.num_threads)
- logger.debug('Setting num threads to %d', cfg.num_threads)
- set_efficient_attention_backend(cfg.efficient_attention_backend)
- logger.debug('Setting efficient attention backend to %s', cfg.efficient_attention_backend)
-
-
-@hydra_main(config_path='../config', config_name='config', version_base='1.1')
-def main(cfg):
- init_seed_and_system(cfg)
-
- # Setup logging both to XP specific folder, and to stderr.
- log_name = '%s.log.{rank}' % cfg.execute_only if cfg.execute_only else 'solver.log.{rank}'
- flashy.setup_logging(level=str(cfg.logging.level).upper(), log_name=log_name)
- # Initialize distributed training, no need to specify anything when using Dora.
- flashy.distrib.init()
- solver = get_solver(cfg)
- if cfg.show:
- solver.show()
- return
-
- if cfg.execute_only:
- assert cfg.execute_inplace or cfg.continue_from is not None, \
- "Please explicitly specify the checkpoint to continue from with continue_from= " + \
- "when running with execute_only or set execute_inplace to True."
- solver.restore(replay_metrics=False) # load checkpoint
- solver.run_one_stage(cfg.execute_only)
- return
-
- return solver.run()
-
-
-main.dora.dir = AudioCraftEnvironment.get_dora_dir()
-main._base_cfg.slurm = get_slurm_parameters(main._base_cfg.slurm)
-
-if main.dora.shared is not None and not os.access(main.dora.shared, os.R_OK):
- print("No read permission on dora.shared folder, ignoring it.", file=sys.stderr)
- main.dora.shared = None
-
-if __name__ == '__main__':
- main()
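get_solver_from_sig and get_solver_from_xp above are intended for interactive use, with override_cfg passed as nested dicts rather than flattened Dora keys. A minimal sketch of that notebook workflow, assuming the audiocraft package is installed, a hypothetical experiment signature "0a1b2c3d", and an existing checkpoint:

from audiocraft import train

# The signature is hypothetical; overrides are nested dicts, not flattened 'a.b=c' keys.
solver = train.get_solver_from_sig(
    "0a1b2c3d",
    override_cfg={"dataset": {"batch_size": 8}},
    restore=True,                     # reload state from the last checkpoint
    load_best=True,                   # keep True when FSDP is disabled, per the docstring above
    ignore_state_keys=["optimizer"],  # skip solver-specific state such as the optimizer
)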
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/__init__.py
deleted file mode 100644
index a6c5f474c15d29032a23a50d9c884e00235e56ac..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/__init__.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""Implementation of all the magic functions built into IPython.
-"""
-#-----------------------------------------------------------------------------
-# Copyright (c) 2012 The IPython Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-#-----------------------------------------------------------------------------
-# Imports
-#-----------------------------------------------------------------------------
-
-from ..magic import Magics, magics_class
-from .auto import AutoMagics
-from .basic import BasicMagics, AsyncMagics
-from .code import CodeMagics, MacroToEdit
-from .config import ConfigMagics
-from .display import DisplayMagics
-from .execution import ExecutionMagics
-from .extension import ExtensionMagics
-from .history import HistoryMagics
-from .logging import LoggingMagics
-from .namespace import NamespaceMagics
-from .osm import OSMagics
-from .packaging import PackagingMagics
-from .pylab import PylabMagics
-from .script import ScriptMagics
-
-#-----------------------------------------------------------------------------
-# Magic implementation classes
-#-----------------------------------------------------------------------------
-
-@magics_class
-class UserMagics(Magics):
- """Placeholder for user-defined magics to be added at runtime.
-
- All magics are eventually merged into a single namespace at runtime, but we
- use this class to isolate the magics defined dynamically by the user into
- their own class.
- """
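UserMagics above is only a placeholder; custom magics are written against the same Magics / @magics_class / @line_magic API imported at the top of this file and then registered with the running shell. A minimal sketch (the class and magic names are hypothetical):

from IPython.core.magic import Magics, magics_class, line_magic

@magics_class
class GreetingMagics(Magics):
    """Hypothetical user-defined magics class."""

    @line_magic
    def greet(self, line):
        """Usage: %greet NAME"""
        return f"Hello, {line or 'world'}!"

# Inside an IPython session, register it with:
#   get_ipython().register_magics(GreetingMagics)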
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/namespace.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/namespace.py
deleted file mode 100644
index 5da8f7161a0a9e8883e9b492588613d193063513..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/namespace.py
+++ /dev/null
@@ -1,711 +0,0 @@
-"""Implementation of namespace-related magic functions.
-"""
-#-----------------------------------------------------------------------------
-# Copyright (c) 2012 The IPython Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-#-----------------------------------------------------------------------------
-# Imports
-#-----------------------------------------------------------------------------
-
-# Stdlib
-import gc
-import re
-import sys
-
-# Our own packages
-from IPython.core import page
-from IPython.core.error import StdinNotImplementedError, UsageError
-from IPython.core.magic import Magics, magics_class, line_magic
-from IPython.testing.skipdoctest import skip_doctest
-from IPython.utils.encoding import DEFAULT_ENCODING
-from IPython.utils.openpy import read_py_file
-from IPython.utils.path import get_py_filename
-
-#-----------------------------------------------------------------------------
-# Magic implementation classes
-#-----------------------------------------------------------------------------
-
-@magics_class
-class NamespaceMagics(Magics):
- """Magics to manage various aspects of the user's namespace.
-
- These include listing variables, introspecting into them, etc.
- """
-
- @line_magic
- def pinfo(self, parameter_s='', namespaces=None):
- """Provide detailed information about an object.
-
- '%pinfo object' is just a synonym for object? or ?object."""
-
- #print 'pinfo par: <%s>' % parameter_s # dbg
- # detail_level: 0 -> obj? , 1 -> obj??
- detail_level = 0
- # We need to detect if we got called as 'pinfo pinfo foo', which can
- # happen if the user types 'pinfo foo?' at the cmd line.
- pinfo,qmark1,oname,qmark2 = \
- re.match(r'(pinfo )?(\?*)(.*?)(\??$)',parameter_s).groups()
- if pinfo or qmark1 or qmark2:
- detail_level = 1
- if "*" in oname:
- self.psearch(oname)
- else:
- self.shell._inspect('pinfo', oname, detail_level=detail_level,
- namespaces=namespaces)
-
- @line_magic
- def pinfo2(self, parameter_s='', namespaces=None):
- """Provide extra detailed information about an object.
-
- '%pinfo2 object' is just a synonym for object?? or ??object."""
- self.shell._inspect('pinfo', parameter_s, detail_level=1,
- namespaces=namespaces)
-
- @skip_doctest
- @line_magic
- def pdef(self, parameter_s='', namespaces=None):
- """Print the call signature for any callable object.
-
- If the object is a class, print the constructor information.
-
- Examples
- --------
- ::
-
- In [3]: %pdef urllib.urlopen
- urllib.urlopen(url, data=None, proxies=None)
- """
- self.shell._inspect('pdef',parameter_s, namespaces)
-
- @line_magic
- def pdoc(self, parameter_s='', namespaces=None):
- """Print the docstring for an object.
-
- If the given object is a class, it will print both the class and the
- constructor docstrings."""
- self.shell._inspect('pdoc',parameter_s, namespaces)
-
- @line_magic
- def psource(self, parameter_s='', namespaces=None):
- """Print (or run through pager) the source code for an object."""
- if not parameter_s:
- raise UsageError('Missing object name.')
- self.shell._inspect('psource',parameter_s, namespaces)
-
- @line_magic
- def pfile(self, parameter_s='', namespaces=None):
- """Print (or run through pager) the file where an object is defined.
-
- The file opens at the line where the object definition begins. IPython
- will honor the environment variable PAGER if set, and otherwise will
- do its best to print the file in a convenient form.
-
- If the given argument is not an object currently defined, IPython will
- try to interpret it as a filename (automatically adding a .py extension
- if needed). You can thus use %pfile as a syntax highlighting code
- viewer."""
-
- # first interpret argument as an object name
- out = self.shell._inspect('pfile',parameter_s, namespaces)
- # if not, try the input as a filename
- if out == 'not found':
- try:
- filename = get_py_filename(parameter_s)
- except IOError as msg:
- print(msg)
- return
- page.page(self.shell.pycolorize(read_py_file(filename, skip_encoding_cookie=False)))
-
- @line_magic
- def psearch(self, parameter_s=''):
- """Search for object in namespaces by wildcard.
-
- %psearch [options] PATTERN [OBJECT TYPE]
-
- Note: ? can be used as a synonym for %psearch, at the beginning or at
- the end: both a*? and ?a* are equivalent to '%psearch a*'. Still, the
- rest of the command line must be unchanged (options come first), so
- for example the following forms are equivalent
-
- %psearch -i a* function
- -i a* function?
- ?-i a* function
-
- Arguments:
-
- PATTERN
-
- where PATTERN is a string containing * as a wildcard similar to its
- use in a shell. The pattern is matched in all namespaces on the
- search path. By default objects starting with a single _ are not
- matched, many IPython generated objects have a single
- underscore. The default is case insensitive matching. Matching is
- also done on the attributes of objects and not only on the objects
- in a module.
-
- [OBJECT TYPE]
-
- Is the name of a python type from the types module. The name is
- given in lowercase without the ending type, ex. StringType is
- written string. By adding a type here only objects matching the
- given type are matched. Using all here makes the pattern match all
- types (this is the default).
-
- Options:
-
- -a: makes the pattern match even objects whose names start with a
- single underscore. These names are normally omitted from the
- search.
-
- -i/-c: make the pattern case insensitive/sensitive. If neither of
- these options are given, the default is read from your configuration
- file, with the option ``InteractiveShell.wildcards_case_sensitive``.
- If this option is not specified in your configuration file, IPython's
- internal default is to do a case sensitive search.
-
- -e/-s NAMESPACE: exclude/search a given namespace. The pattern you
- specify can be searched in any of the following namespaces:
- 'builtin', 'user', 'user_global','internal', 'alias', where
- 'builtin' and 'user' are the search defaults. Note that you should
- not use quotes when specifying namespaces.
-
- -l: List all available object types for object matching. This function
- can be used without arguments.
-
- 'Builtin' contains the python module builtin, 'user' contains all
-        user data, 'alias' only contains the shell aliases and no python
- objects, 'internal' contains objects used by IPython. The
- 'user_global' namespace is only used by embedded IPython instances,
- and it contains module-level globals. You can add namespaces to the
- search with -s or exclude them with -e (these options can be given
- more than once).
-
- Examples
- --------
- ::
-
- %psearch a* -> objects beginning with an a
- %psearch -e builtin a* -> objects NOT in the builtin space starting in a
- %psearch a* function -> all functions beginning with an a
- %psearch re.e* -> objects beginning with an e in module re
- %psearch r*.e* -> objects that start with e in modules starting in r
- %psearch r*.* string -> all strings in modules beginning with r
-
- Case sensitive search::
-
-            %psearch -c a* list all objects beginning with lower case a
-
- Show objects beginning with a single _::
-
- %psearch -a _* list objects beginning with a single underscore
-
- List available objects::
-
- %psearch -l list all available object types
- """
- # default namespaces to be searched
- def_search = ['user_local', 'user_global', 'builtin']
-
- # Process options/args
- opts,args = self.parse_options(parameter_s,'cias:e:l',list_all=True)
- opt = opts.get
- shell = self.shell
- psearch = shell.inspector.psearch
-
- # select list object types
- list_types = False
- if 'l' in opts:
- list_types = True
-
- # select case options
- if 'i' in opts:
- ignore_case = True
- elif 'c' in opts:
- ignore_case = False
- else:
- ignore_case = not shell.wildcards_case_sensitive
-
- # Build list of namespaces to search from user options
- def_search.extend(opt('s',[]))
-        ns_exclude = opt('e',[])
- ns_search = [nm for nm in def_search if nm not in ns_exclude]
-
- # Call the actual search
- try:
- psearch(args,shell.ns_table,ns_search,
- show_all=opt('a'),ignore_case=ignore_case, list_types=list_types)
- except:
- shell.showtraceback()
-
- @skip_doctest
- @line_magic
- def who_ls(self, parameter_s=''):
- """Return a sorted list of all interactive variables.
-
- If arguments are given, only variables of types matching these
- arguments are returned.
-
- Examples
- --------
- Define two variables and list them with who_ls::
-
- In [1]: alpha = 123
-
- In [2]: beta = 'test'
-
- In [3]: %who_ls
- Out[3]: ['alpha', 'beta']
-
- In [4]: %who_ls int
- Out[4]: ['alpha']
-
- In [5]: %who_ls str
- Out[5]: ['beta']
- """
-
- user_ns = self.shell.user_ns
- user_ns_hidden = self.shell.user_ns_hidden
- nonmatching = object() # This can never be in user_ns
- out = [ i for i in user_ns
- if not i.startswith('_') \
- and (user_ns[i] is not user_ns_hidden.get(i, nonmatching)) ]
-
- typelist = parameter_s.split()
- if typelist:
- typeset = set(typelist)
- out = [i for i in out if type(user_ns[i]).__name__ in typeset]
-
- out.sort()
- return out
-
- @skip_doctest
- @line_magic
- def who(self, parameter_s=''):
- """Print all interactive variables, with some minimal formatting.
-
- If any arguments are given, only variables whose type matches one of
- these are printed. For example::
-
- %who function str
-
- will only list functions and strings, excluding all other types of
- variables. To find the proper type names, simply use type(var) at a
- command line to see how python prints type names. For example:
-
- ::
-
- In [1]: type('hello')\\
-            Out[1]: <type 'str'>
-
- indicates that the type name for strings is 'str'.
-
- ``%who`` always excludes executed names loaded through your configuration
- file and things which are internal to IPython.
-
- This is deliberate, as typically you may load many modules and the
- purpose of %who is to show you only what you've manually defined.
-
- Examples
- --------
-
- Define two variables and list them with who::
-
- In [1]: alpha = 123
-
- In [2]: beta = 'test'
-
- In [3]: %who
- alpha beta
-
- In [4]: %who int
- alpha
-
- In [5]: %who str
- beta
- """
-
- varlist = self.who_ls(parameter_s)
- if not varlist:
- if parameter_s:
- print('No variables match your requested type.')
- else:
- print('Interactive namespace is empty.')
- return
-
- # if we have variables, move on...
- count = 0
- for i in varlist:
- print(i+'\t', end=' ')
- count += 1
- if count > 8:
- count = 0
- print()
- print()
-
- @skip_doctest
- @line_magic
- def whos(self, parameter_s=''):
- """Like %who, but gives some extra information about each variable.
-
- The same type filtering of %who can be applied here.
-
- For all variables, the type is printed. Additionally it prints:
-
- - For {},[],(): their length.
-
- - For numpy arrays, a summary with shape, number of
- elements, typecode and size in memory.
-
- - Everything else: a string representation, snipping their middle if
- too long.
-
- Examples
- --------
- Define two variables and list them with whos::
-
- In [1]: alpha = 123
-
- In [2]: beta = 'test'
-
- In [3]: %whos
- Variable Type Data/Info
- --------------------------------
- alpha int 123
- beta str test
- """
-
- varnames = self.who_ls(parameter_s)
- if not varnames:
- if parameter_s:
- print('No variables match your requested type.')
- else:
- print('Interactive namespace is empty.')
- return
-
- # if we have variables, move on...
-
- # for these types, show len() instead of data:
- seq_types = ['dict', 'list', 'tuple']
-
- # for numpy arrays, display summary info
- ndarray_type = None
- if 'numpy' in sys.modules:
- try:
- from numpy import ndarray
- except ImportError:
- pass
- else:
- ndarray_type = ndarray.__name__
-
- # Find all variable names and types so we can figure out column sizes
-
- # some types are well known and can be shorter
- abbrevs = {'IPython.core.macro.Macro' : 'Macro'}
- def type_name(v):
- tn = type(v).__name__
- return abbrevs.get(tn,tn)
-
- varlist = [self.shell.user_ns[n] for n in varnames]
-
- typelist = []
- for vv in varlist:
- tt = type_name(vv)
-
- if tt=='instance':
- typelist.append( abbrevs.get(str(vv.__class__),
- str(vv.__class__)))
- else:
- typelist.append(tt)
-
- # column labels and # of spaces as separator
- varlabel = 'Variable'
- typelabel = 'Type'
- datalabel = 'Data/Info'
- colsep = 3
- # variable format strings
- vformat = "{0:<{varwidth}}{1:<{typewidth}}"
- aformat = "%s: %s elems, type `%s`, %s bytes"
- # find the size of the columns to format the output nicely
- varwidth = max(max(map(len,varnames)), len(varlabel)) + colsep
- typewidth = max(max(map(len,typelist)), len(typelabel)) + colsep
- # table header
- print(varlabel.ljust(varwidth) + typelabel.ljust(typewidth) + \
- ' '+datalabel+'\n' + '-'*(varwidth+typewidth+len(datalabel)+1))
- # and the table itself
- kb = 1024
- Mb = 1048576 # kb**2
- for vname,var,vtype in zip(varnames,varlist,typelist):
- print(vformat.format(vname, vtype, varwidth=varwidth, typewidth=typewidth), end=' ')
- if vtype in seq_types:
- print("n="+str(len(var)))
- elif vtype == ndarray_type:
- vshape = str(var.shape).replace(',','').replace(' ','x')[1:-1]
- if vtype==ndarray_type:
- # numpy
- vsize = var.size
- vbytes = vsize*var.itemsize
- vdtype = var.dtype
-
- if vbytes < 100000:
- print(aformat % (vshape, vsize, vdtype, vbytes))
- else:
- print(aformat % (vshape, vsize, vdtype, vbytes), end=' ')
- if vbytes < Mb:
- print('(%s kb)' % (vbytes/kb,))
- else:
- print('(%s Mb)' % (vbytes/Mb,))
- else:
- try:
- vstr = str(var)
- except UnicodeEncodeError:
- vstr = var.encode(DEFAULT_ENCODING,
- 'backslashreplace')
- except:
- vstr = "