diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Demonii Iubirii Anne K Joy Pdf 198 [HOT].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Demonii Iubirii Anne K Joy Pdf 198 [HOT].md
deleted file mode 100644
index 58357f2cf4b550f592447dc97d54b5a4f80e7cf5..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Demonii Iubirii Anne K Joy Pdf 198 [HOT].md
+++ /dev/null
@@ -1,68 +0,0 @@
-## Demonii Iubirii Anne K Joy Pdf 198
-
-**Click Here 🗸 [https://jinyurl.com/2tA010](https://jinyurl.com/2tA010)**
-
-
-# Demonii Iubirii by Anne K Joy: A Captivating Romance Novel
-
-
-
-If you are looking for a romance novel that will keep you hooked from the first page to the last, you might want to check out Demonii Iubirii by Anne K Joy. This is a story about Jessica Thomas, a young woman who has had a hard life since childhood, marked by the inappropriate behavior of a stepfather and a superficial mother who pretended to have a perfect family. Jessica is married in name only, with no children or pets to welcome her home after a long and exhausting day of work. She feels lonely and unhappy, until she meets Ian Peterson, a mysterious and handsome man who seems to have a dark past and a hidden agenda.
-
-
-
-Demonii Iubirii is the first book in a series by Anne K Joy, a Romanian writer who has published several novels in different genres, such as fantasy, paranormal, historical, and contemporary romance. She has a talent for creating engaging characters, intriguing plots, and emotional scenes that will make you laugh, cry, and swoon. Her books have been praised by readers and critics alike for their originality, humor, and passion.
-
-
-
-You can read Demonii Iubirii online for free on Scribd[^1^], where you can also find other books by Anne K Joy. You can also download the PDF version of the book for free from various websites[^2^] [^3^]. However, if you want to support the author and enjoy her work in high quality, you can buy the book from online or offline bookstores. You will not regret it!
-
-
-
-The series that begins with Demonii Iubirii follows the lives and loves of different characters who are connected by a common theme: they all carry demons from their past that haunt them and strain their relationships. The series has four books so far: Demonii Iubirii, Demonul Ucis, Demonii Trecutului, and Demonul Răzbunării. Each book can be read as a standalone, but characters from earlier books reappear and are referenced in later ones.
-
-
-
-Anne K Joy is one of the most prolific and popular romance writers in Romania. She has written 34 books in different genres and series, such as Anotimpuri, Identități False, Insuficient, Promisiuni, Magia Crăciunului, Destine Frânte, Triumful Iubirii, and Dragoste în 30 de zile. She has a loyal fan base who love her stories for their originality, humor, passion, and emotion. She also interacts with her readers on social media and on her website[^4^], where she posts news, updates, excerpts, and giveaways.
-
-
-
-If you are a fan of romance novels that will make you feel all kinds of emotions and keep you entertained until the end, you should definitely give Demonii Iubirii by Anne K Joy a try. You will not be disappointed by this captivating story of love, secrets, betrayal, and redemption. You will also discover a new author who will make you fall in love with her characters and her writing style. Don't miss this opportunity to read one of the best romance novels of the year!
-
-
-
-But why should you read romance novels in the first place? What are the benefits of immersing yourself in stories of love, passion, and adventure? Well, there are many reasons why romance novels are good for you, and not just for your entertainment. Romance novels can also have positive effects on your health, your mind, and your relationships. Here are some of the benefits of reading romance novels that you might not have known about.
-
-
-
-One of the benefits of reading romance novels is that they can help reduce stress. Reading is a form of escapism that can take you away from your worries and problems and transport you to a different world. Romance novels, in particular, can offer you a sense of optimism, hope, and happiness that can counteract the negative emotions that stress can cause. Reading romance novels can also lower your heart rate and blood pressure, as well as release endorphins, the feel-good hormones that can boost your mood and well-being[^2^] [^3^].
-
-
-
-Another benefit of reading romance novels is that they can improve your language skills. Romance novels are often rich in vocabulary, expressions, metaphors, and descriptions that can enhance your communication abilities. Reading romance novels can also expose you to different cultures, histories, and perspectives that can broaden your horizons and increase your knowledge[^1^]. Romance novels can also inspire you to write your own stories or express your own feelings in words.
-
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Free Download A Complete Guide for Windows 7 (32-bit) Users.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Free Download A Complete Guide for Windows 7 (32-bit) Users.md
deleted file mode 100644
index f386352df323733107914c4747aee5e2f11eefa1..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Free Download A Complete Guide for Windows 7 (32-bit) Users.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
How to Download Corel Draw X7 for Free on Windows 7 (32-bit)
-
-
If you are looking for a powerful and versatile graphic design software, you might want to try Corel Draw X7. This software offers a wide range of features and tools to create stunning logos, illustrations, brochures, flyers, web graphics and more. Corel Draw X7 is compatible with Windows 7 (32-bit) and can be downloaded for free from the official website. In this article, we will show you how to download and install Corel Draw X7 on your Windows 7 (32-bit) computer.
-
Step 1: Visit the official website of Corel Draw
-
The first step is to visit the official website of Corel Draw at https://www.coreldraw.com/en/. Here you will find information about the software, its features, pricing and system requirements. You can also browse through the gallery of user-created artworks and get inspired by the possibilities of Corel Draw.
-
-
Step 2: Click on the "Free Trial" button
-
-
The next step is to click on the "Free Trial" button at the top right corner of the website. This will take you to a page where you can choose between different versions of Corel Draw. For this tutorial, we will select Corel Draw Graphics Suite X7, which includes Corel Draw X7 and other applications such as Corel Photo-Paint, Corel PowerTrace and Corel Connect.
-
-
Step 3: Fill in your details and download the setup file
-
-
After selecting Corel Draw Graphics Suite X7, you will need to fill in your name, email address and country. You will also need to agree to the terms and conditions and privacy policy of Corel. Then, click on the "Download Now" button to start downloading the setup file. The file is about 465 MB, so the download may take some time depending on your internet speed.
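As a rough sanity check before you start, you can estimate how long the download will take from the file size and your connection speed. Here is a minimal Python sketch; the connection speeds are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope download-time estimate for the ~465 MB setup file.
size_mb = 465
for mbps in (10, 50, 100):              # assumed connection speeds in megabits/s
    seconds = size_mb * 8 / mbps        # megabytes -> megabits, then divide by rate
    print(f"{mbps} Mbps: about {seconds / 60:.1f} minutes")
```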
-
-
Step 4: Run the setup file and follow the instructions
-
-
Once the download is complete, locate the setup file in your downloads folder and double-click on it to run it. You will see a welcome screen with the option to install or customize your installation. We recommend choosing the default installation option, which will install all the components of Corel Draw Graphics Suite X7. Click on the "Install Now" button and follow the instructions on the screen. The installation process may take several minutes depending on your system configuration.
-
-
Step 5: Launch Corel Draw X7 and enjoy your free trial
-
-
After the installation is complete, you can launch Corel Draw X7 from your desktop or start menu. You will see a splash screen with the option to activate your product or start your free trial. You can choose to start your free trial, which will give you access to all the features of Corel Draw X7 for 15 days. You can also activate your product if you have purchased a license key or if you are eligible for an upgrade.
-
-
-
Congratulations! You have successfully downloaded and installed Corel Draw X7 on your Windows 7 (32-bit) computer. Now you can explore the software and create amazing graphics for your personal or professional projects. If you need any help or support, you can visit the official website of Corel Draw or check out their online tutorials and community forums.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PicBasic Pro 3.0.7 Full Crack A Comprehensive Review of Features and Benefits.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PicBasic Pro 3.0.7 Full Crack A Comprehensive Review of Features and Benefits.md
deleted file mode 100644
index 58c646e8ae401382a0997f3ce470f409105c2815..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PicBasic Pro 3.0.7 Full Crack A Comprehensive Review of Features and Benefits.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Download PICBASIC PRO 3.0.7 Full Crack: A Guide for Beginners
-
If you are looking for a fast and easy way to develop microcontroller-based projects using Microchip's PIC microcontrollers, you might want to try PICBASIC PRO. PICBASIC PRO is a powerful BASIC compiler that generates optimized, machine-ready code for PIC MCUs. It has been used by professionals and hobbyists alike for over 15 years, thanks to its simplicity, stability, and speed.
-
In this article, we will show you how to download and install PICBASIC PRO 3.0.7, the latest version of this amazing tool. We will also show you how to get a full crack for it, so you can enjoy all its features without any limitations.
What is PICBASIC PRO?
-
PICBASIC PRO (PBP) is a BASIC compiler that converts your source code into hex files that can be programmed into PIC microcontrollers. It supports a wide range of PIC devices, from 8-bit to 32-bit, with various peripherals and features.
-
PBP is not a slow BASIC interpreter like some of the old ones. It is a modern development tool that produces code in the same manner as a C compiler, but with the ease and readability of BASIC. PBP allows you to write high-level code that is close to natural language, without worrying about low-level details such as registers, bits, and ports.
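The compiler's end product is an Intel HEX file, the plain-text format that PIC device programmers consume. To make that concrete, here is a minimal Python sketch that decodes a single Intel HEX record; the sample record is fabricated for the example, not actual PBP output:

```python
# Decode one Intel HEX record -- the plain-text format of the .hex
# files a compiler like PBP emits for device programmers.
def parse_record(line: str):
    assert line.startswith(":")
    raw = bytes.fromhex(line[1:])
    count = raw[0]                                  # number of data bytes
    addr = int.from_bytes(raw[1:3], "big")          # 16-bit load address
    rtype = raw[3]                                  # 0x00 = data record
    assert sum(raw) & 0xFF == 0, "bad checksum"     # all bytes sum to 0 mod 256
    return addr, rtype, raw[4:4 + count]

# Fabricated sample record: 2 data bytes (01 02) at address 0x0010.
print(parse_record(":02001000000102EB"))            # -> (16, 0, b'\x01\x02')
```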
-
-
Features and benefits of PICBASIC PRO
-
Some of the features and benefits of PBP are:
-
-
It is easy to learn and use, even for beginners.
-
It generates fast and efficient code that can run on small and cheap PICs.
-
It has built-in libraries and routines that make complex tasks easy, such as LCDs, serial communication, PWM, ADC, timers, interrupts, etc.
-
It supports inline assembly code, so you can mix BASIC and assembly for maximum performance and flexibility.
-
It has a friendly and helpful user community that provides support and resources.
-
It has a free trial version that you can download and use for 15 days.
-
-
How to download and install PICBASIC PRO 3.0.7
-
To download and install PBP 3.0.7, you need to follow these steps:
-
Step 1: Download the trial version from the official website
-
The first step is to download the trial version of PBP 3.0.7 from the official website of microEngineering Labs. You can find it under the "Downloads" section.
-
The trial version is a zip file that contains the setup file and some documentation files. The size of the file is about 148 MB.
-
Step 2: Extract the zip file and run the setup file
-
The next step is to extract the zip file using a tool like WinRAR or 7-Zip. You will get a folder called "PBP-3_1_6". Inside this folder, you will find the setup file called "PBP-316-Setup.exe". Double-click on this file to run it.
-
Step 3: Follow the installation wizard and accept the license agreement
-
The setup file will launch an installation wizard that will guide you through the process of installing PBP on your computer. You need to accept the license agreement, choose your language, select your components, and agree to create shortcuts.
-
Step 4: Choose the destination folder and click install
-
The next step is to choose where you want to install PBP on your computer. The default location is "C:\Program Files (x86)\PBP". You can change it if you want, but make sure you have enough space on your drive.
-
After choosing your destination folder, click on "Install" to start copying files.
-
Step 5: Launch the program and activate it with a valid key
-
The final step is to launch PBP from your desktop or start menu shortcut. You will see a splash screen that shows your version number (3.1.6) and your activation status (Trial).
-
To activate PBP with a valid key, you need to click on "Help" > "Activate" > "Enter Key". You will see a dialog box where you need to enter your name, company name (optional), email address (optional), serial number (16 digits), activation key (16 digits), and password (8 digits).
-
You can get this information from microEngineering Labs if you have purchased PBP, or from other sources if you have obtained it illegally (not recommended).
-
After entering your information, click on "Activate" to complete the activation process.
-
How to get a full crack for PICBASIC PRO 3.0.7
-
If you want to use PBP without any restrictions or expiration date, you need to get a full crack for it. A crack is a program or file that modifies or bypasses the original protection mechanism of another program or file.
-
To get a full crack for PBP 3.0.7, you need to follow these steps:
-
Step 1: Search for a reliable crack source online
-
The first step is to search for a reliable crack source online using a search engine like Google or Bing. You need to use keywords like "picbasic pro 3.0.7 full crack", "picbasic pro 3.0.7 keygen", "picbasic pro 3.0.7 patch", etc.
-
You will get many results that claim to offer cracks for PBP 3.0.7, but not all of them are trustworthy or working. Some of them may contain viruses, malware, spyware, adware, or other unwanted programs that can harm your computer or steal your personal information.
-
You need to be careful when choosing a crack source online and check its reputation, reviews, comments, ratings, etc., before downloading anything from it.
-
Step 2: Download the crack file and scan it for viruses
-
The next step is to download the crack file from your chosen source online and scan it for viruses using an antivirus program like Avast or Malwarebytes.
-
The crack file may be in different formats such as .exe, .rar, .zip, .iso, .dll, etc., depending on how it was created or packaged by its author.
-
You need to extract or mount the crack file if it is compressed or archived using a tool like WinRAR or Daemon Tools Lite.
-
You need to scan the crack file before running it or copying it into any folder on your computer because it may contain malicious code that can infect your system or damage your files.
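Besides an antivirus scan, a common integrity check for any downloaded file is to compare its SHA-256 hash against a digest published by a source you trust. Here is a minimal Python sketch; the file name and expected digest are placeholders:

```python
import hashlib

# Compute a file's SHA-256 and compare it with a digest published by a
# source you trust. File name and expected digest are placeholders.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("downloaded-file.zip")
print("OK" if actual == EXPECTED else f"MISMATCH: {actual}")
```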
-
Step 3: Copy and paste the crack file into the installation folder
-
The next step is to copy and paste the crack file into the installation folder of PBP on your computer.
-
The installation folder of PBP is usually located at "C:\Program Files (x86)\PBP", unless you have changed it during installation.
-
Step 4: Run the crack file and wait for it to patch the program
-
The next step is to run the crack file that you have copied into the installation folder of PBP. You may need to right-click on it and choose "Run as administrator" if you have Windows Vista or later.
-
The crack file will open a command prompt window that will show some messages and instructions. You need to follow them carefully and wait for it to patch the program.
-
The patching process may take a few seconds or minutes, depending on your computer speed and the size of the program. You will see a message that says "Done" when it is finished.
-
Step 5: Enjoy the full version of PICBASIC PRO 3.0.7
-
The final step is to enjoy the full version of PBP 3.0.7 that you have cracked successfully. You can now launch PBP from your desktop or start menu shortcut and use all its features without any limitations.
-
You can also check your activation status by clicking on "Help" > "About". You will see a dialog box that shows your version number (3.1.6) and your activation status (Activated).
-
Congratulations! You have downloaded and installed PICBASIC PRO 3.0.7 full crack on your computer.
-
Conclusion
-
In this article, we have shown you how to download and install PICBASIC PRO 3.0.7, a powerful BASIC compiler for Microchip's PIC microcontrollers. We have also shown you how to get a full crack for it, so you can use it without any restrictions or expiration date.
-
PICBASIC PRO is a great tool for developing microcontroller-based projects, whether you are a beginner or a pro. It is easy to learn and use, yet fast and efficient. It has built-in libraries and routines that make complex tasks easy, and it supports inline assembly code for maximum performance and flexibility.
-
If you want to try PICBASIC PRO 3.0.7 for yourself, you can download it from the official website of microEngineering Labs or from other sources online. However, we do not recommend using cracks or illegal keys, as they may contain viruses or malware that can harm your computer or steal your personal information.
-
Instead, we suggest you purchase a license from microEngineering Labs or from their authorized distributors. By doing so, you will support the development of this amazing product and get access to updates, support, and resources.
-
We hope you have enjoyed this article and learned something useful from it. If you have any questions or comments, please feel free to share them below.
-
FAQs
-
Here are some frequently asked questions about PICBASIC PRO 3.0.7:
-
-
What are the system requirements for PICBASIC PRO 3.0.7?
-
PICBASIC PRO 3.0.7 requires Windows XP or later (32-bit or 64-bit), at least 512 MB of RAM, at least 200 MB of hard disk space, and a serial or USB port for programming PIC microcontrollers.
-
What are the differences between PICBASIC PRO 3.0.7 and previous versions?
-
PICBASIC PRO 3.0.7 has many improvements and new features compared to previous versions, such as:
-
-
Support for more than 800 PIC devices, including PIC10F/12F/16F/18F/24F/dsPIC30F/dsPIC33F/PIC32 families.
-
New commands and functions, such as COS, DIV32, MAX, MIN, REV, SQR, etc.
-
New libraries and routines for LCDs, serial communication, PWM, ADC, timers, interrupts, etc.
-
New configuration settings for oscillator type, watchdog timer, brown-out reset, etc.
-
New options for compiler output format (hex or cof), optimization level (speed or size), error handling (ignore or abort), etc.
-
New tools for debugging (DEBUG), simulation (SIM), in-circuit programming (ICD), etc.
-
-
How can I learn more about PICBASIC PRO 3.0.7?
-
You can learn more about PBP 3.0.7 by reading the manual, watching the tutorials, browsing the forums, or visiting the website of microEngineering Labs.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Paper Jamz Pro Drums For Windows 10 64-bit Free [HOT].md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Paper Jamz Pro Drums For Windows 10 64-bit Free [HOT].md
deleted file mode 100644
index 363cc3b05f29f2326a7d5a47cb3de1f84df6ae5b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Paper Jamz Pro Drums For Windows 10 64-bit Free [HOT].md
+++ /dev/null
@@ -1,10 +0,0 @@
-
Driver Paper Jamz Pro Drums for Windows 10 64-bit free
-
-2015-09-16 19:22. Paper Jamz Pro Drums for Windows 10 x64: free driver download. Also listed is a laser print on glossy paper of the cover of the fourth edition of the Paper Jamz Pro Drums For Windows 10 book, in good condition. This new edition of the Paper Jamz Pro book, celebrating a decade of drumming, offers the details and pricing of the first solo book for the Paper Jamz Pro drums. All the features and functions of the original Paper Jamz drum machine are included, with more features in this computerized version.
-
-
-
-Download the free driver for Paper Jamz Pro Drums for Windows 10 (64-bit).
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Illustrator CC 2021 Full Crack - Phn mm thit k ha chuyn nghip.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Illustrator CC 2021 Full Crack - Phn mm thit k ha chuyn nghip.md
deleted file mode 100644
index 95f739d53bba061f54481e75bcefa2b942ee0368..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Illustrator CC 2021 Full Crack - Phn mm thit k ha chuyn nghip.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
Download AI 2021 Full Crack: Is It Worth It?
AI 2021 is the latest version of Adobe Illustrator, one of the most popular and powerful vector graphics software in the world. With AI 2021, you can create stunning illustrations, logos, icons, graphics, billboards, and more. AI 2021 also comes with new features and improvements that make it faster, easier, and more versatile than ever.
But AI 2021 is not cheap. The official price for a single app subscription is $20.99 per month or $239.88 per year. That's why some people might be tempted to download AI 2021 for free using a crack.
A crack is a program that modifies or bypasses the software protection measures that prevent unauthorized copying or distribution of software. By using a crack, you can access the full version of AI 2021 without paying anything.
But is it worth it? In this article, we'll explain what are the risks of using cracked software and what are the alternatives to cracked software.
What Is a Crack and How Does It Work?
A crack is a program that modifies or bypasses the software protection measures that prevent unauthorized copying or distribution of software. These measures can include license keys, encryption keys, digital rights management (DRM), online activation, or other mechanisms.
There are different types of cracks depending on how they work:
A keygen crack is a program that generates valid license keys for software.
A patch crack is a program that modifies the original software files to remove or alter the protection measures.
-
A loader crack is a program that runs before the software and tricks it into thinking that it is activated or licensed.
-
A cracked version is a software that has been already cracked by someone else and distributed as a ready-to-use file.
-
-
Cracks are usually created by hackers or crackers who reverse engineer the software code and find ways to bypass the protection measures. Cracks are often shared on cracking sites, forums, or peer-to-peer networks.
-
What Are the Risks of Using Cracked Software?
-
Using cracked software may seem like a good way to save money and get access to premium features, but it comes with many risks and disadvantages. Here are some of the most common ones:
-
-
Malware Infections and Other Security Threats
-
One of the biggest risks of using cracked software is that it can contain malware, which is any type of malicious software that can harm your computer, steal your data, or download more malware. Malware can include viruses, worms, trojans, ransomware, spyware, adware, rootkits, and more.
-
Malware can infect your computer in various ways:
-
-
It can be hidden inside the crack file or the cracked software file.
-
It can be downloaded along with the crack or the cracked software from untrusted sources.
-
It can be activated when you run the crack or the cracked software.
-
-
Malware can cause various problems for your computer and your data, such as:
-
-
Slowing down your system performance or crashing your programs.
-
Displaying unwanted ads, popups, or redirects on your browser.
-
Stealing your personal information, such as passwords, credit card numbers, or bank accounts.
-
Encrypting your files and demanding a ransom to unlock them.
-
Installing more malware or allowing remote access to hackers.
-
-
Some examples of malware found in cracked software are:
-
-
The Shlayer Trojan, which was found in cracked versions of Adobe Photoshop and Microsoft Office for Mac. It infected over 10% of all Macs in 2019.
-
The Crackonosh Malware, which was found in cracked versions of popular games like GTA V and NBA 2K19. It infected over 220,000 computers and earned over $2 million for its creators by mining cryptocurrency.
-
The STOP Ransomware, which was found in cracked versions of Adobe Illustrator and Photoshop. It encrypted over 460,000 files and demanded $980 to unlock them.
-
Deactivation of Antivirus Software During Installation
-
Another risk of using cracked software is that some cracks require you to deactivate your antivirus software during installation. This is because the antivirus software may detect the crack as a threat and block it or delete it. However, by disabling your antivirus software, you are also exposing your computer to other threats that may be lurking in the background or online.
-
Some cracking sites or instructions may claim that the crack is safe and that the antivirus software is just giving a false positive. However, this is not always true and you should not trust them blindly. Even if the crack itself is not malicious, it may still compromise your system security by altering your settings, registry, or firewall.
-
Downloading from Untrusted Sources
-
A third risk of using cracked software is that you have to download it from untrusted sources, such as cracking sites, forums, or peer-to-peer networks. These sources are often unreliable, unsafe, and full of popups or redirects that can lead you to more dangerous sites or downloads.
-
Some of the problems that you may encounter when downloading from untrusted sources are:
-
-
The crack or the cracked software may not work as advertised or may not work at all.
-
The crack or the cracked software may be outdated, incomplete, or corrupted.
-
The crack or the cracked software may contain malware or unwanted programs.
-
The download link may be broken, expired, or fake.
-
The download speed may be slow, unstable, or limited.
-
The download may be illegal, infringing on the intellectual property rights of the software developer.
-
-
Lack of Updates, Leaving the Software Exposed to Security Threats
-
A fourth risk of using cracked software is that it cannot receive updates from the official source, which can leave it vulnerable to bugs, errors, or exploits. Updates are important for software because they can fix issues, improve performance, add features, or enhance security. However, cracked software cannot connect to the official server or verify its license, so it cannot download or install updates.
-
This means that cracked software may have:
-
-
Unresolved bugs or errors that can affect its functionality or stability.
-
Outdated features or performance that can make it less efficient or compatible with other software or hardware.
-
Unpatched vulnerabilities that can expose it to hackers or malware attacks.
-
Poorly Functioning Functions
-
A fifth risk of using cracked software is that it may not work as intended, have missing or corrupted features, or cause compatibility issues with other software or hardware. Cracked software may have been modified or damaged by the cracking process, which can affect its quality and performance.
-
Some of the problems that you may encounter when using cracked software are:
-
-
The software may crash, freeze, or display error messages frequently.
-
The software may have reduced functionality, such as disabled features, limited options, or lower quality output.
-
The software may have corrupted files, such as missing icons, fonts, or images.
-
The software may have compatibility issues with other software or hardware, such as conflicts, errors, or crashes.
-
-
Loss of Revenue for Software Developers
-
A sixth risk of using cracked software is that it harms the software industry, reduces the incentive for innovation, and deprives developers of their rightful income. Software development is a costly and time-consuming process that requires a lot of resources, skills, and creativity. Software developers deserve to be compensated for their work and to protect their intellectual property rights.
-
When you use cracked software, you are:
-
-
Violating the terms and conditions of the software license agreement.
-
Infringing on the copyright and trademark of the software developer.
-
Stealing the revenue that the software developer would have earned from your purchase or subscription.
-
Discouraging the software developer from investing in further development, improvement, or support of the software.
-
-
What Are the Alternatives to Cracked Software?
-
Now that you know the risks of using cracked software, you may be wondering what are the alternatives to cracked software. Fortunately, there are many legal and ethical ways to get access to AI 2021 or similar software without breaking the law or compromising your security. Here are some of them:
-
Free Trial Version
-
One of the easiest and safest ways to try AI 2021 for free is to download a free trial version from Adobe's website. The free trial version allows you to use AI 2021 for 7 days without any limitations. You can access all the features and functions of AI 2021 and create as many projects as you want. You can also save and export your work in various formats.
-
To download the free trial version of AI 2021, you need to create a free Adobe account and install the Creative Cloud desktop app on your computer. Then you can download and install AI 2021 from the app. You can also cancel the trial anytime before it expires without any charges.
-
Student or Teacher Discount
-
If you are a student or a teacher, you can get a 60% discount on AI 2021 and other Creative Cloud apps. This means that you can get access to AI 2021 and over 20 other apps for only $19.99 per month or $239.88 per year. You can also get access to 100 GB of cloud storage, Adobe Fonts, Adobe Portfolio, and Adobe Spark.
-
To get the student or teacher discount, you need to verify your eligibility by providing proof of enrollment or employment at an accredited educational institution. You can do this online or by contacting Adobe customer service. Once you are verified, you can purchase the discounted plan from Adobe's website.
-
Open Source AI Software
-
Another alternative to cracked software is to use open source AI software. Open source software is software that is free, legal, and often comparable to AI 2021 in terms of features and performance. Open source software is developed and maintained by a community of volunteers who share their code and contributions with the public.
-
Some of the benefits of using open source software are:
-
-
You can use it for any purpose, personal or commercial, without any restrictions or fees.
-
You can modify it, customize it, or improve it according to your needs or preferences.
-
You can access the latest updates, bug fixes, or new features from the developers or the community.
-
You can support the open source movement and contribute to its development or promotion.
-
-
Some examples of open source AI software are:
-
-
Inkscape: A vector graphics editor that supports SVG, PNG, PDF, EPS, and other formats. It has many tools and features similar to AI 2021, such as paths, shapes, gradients, filters, text, clones, markers, transformations, and more. You can download Inkscape from this link: https://inkscape.org/
-
GIMP: A raster graphics editor that supports JPEG, PNG, GIF, TIFF, PSD, and other formats. It has many tools and features similar to AI 2021, such as layers, masks, brushes, gradients, filters, text, vectors, transformations, and more. You can download GIMP from this link: https://www.gimp.org/
-
Krita: A digital painting and illustration software that supports PSD, PNG, JPEG, BMP, TIFF, and other formats. It has many tools and features similar to AI 2021, such as brushes, vectors, shapes, gradients, filters, text, layers, masks, transformations, and more. You can download Krita from this link: https://krita.org/en/
-
Conclusion
-
In conclusion, downloading AI 2021 full crack is not worth it. It is illegal, unethical, and risky. You may end up with malware infections, security threats, poor performance, compatibility issues, or legal troubles. You may also harm the software industry and discourage innovation.
-
Instead of using cracked software, you should consider the alternatives that are legal and ethical. You can try AI 2021 for free for 7 days, get a student or teacher discount, or use open source software. These options will give you access to quality software without breaking the law or compromising your security.
-
We hope this article has helped you understand the risks of using cracked software and the alternatives to cracked software. If you have any questions or comments, please feel free to leave them below.
-
FAQs
-
Here are some frequently asked questions related to the topic of this article:
-
-
Q: How can I tell if a software is cracked or not?
-
A: There are some signs that can indicate if a software is cracked or not, such as:
-
-
The software is downloaded from an untrusted source, such as a cracking site, a forum, or a peer-to-peer network.
-
The software requires you to disable your antivirus software during installation or run a crack file before using it.
-
The software does not ask for a license key or activation code when you install or use it.
-
The software does not receive updates from the official source or shows an error message when you try to update it.
-
The software has missing or corrupted features, files, or icons.
-
-
Q: Is it legal to use cracked software for personal use?
-
A: No, it is not legal to use cracked software for personal use. Cracked software is considered a form of piracy, which is the unauthorized copying or distribution of software. Piracy is illegal in most countries and can result in fines, lawsuits, or criminal charges.
-
Q: What are the penalties for using cracked software?
-
A: The penalties for using cracked software vary depending on the country, the type of software, and the extent of the infringement. However, some of the possible penalties are:
-
-
Fines ranging from hundreds to thousands of dollars per infringement.
-
Lawsuits from the software developer or the owner of the intellectual property rights.
-
Criminal charges that can lead to imprisonment or community service.
-
-
Q: How can I protect myself from malware infections when downloading software?
-
A: There are some steps that you can take to protect yourself from malware infections when downloading software, such as:
-
-
Download software only from trusted sources, such as the official website of the developer or a reputable online store.
-
Use a reliable antivirus software and keep it updated regularly.
-
Scan any file that you download before opening or running it.
-
Avoid clicking on suspicious links, popups, or redirects that may lead you to malicious sites or downloads.
-
-
Q: How can I support the software industry and encourage innovation?
-
A: There are some ways that you can support the software industry and encourage innovation, such as:
-
-
Purchase or subscribe to software from the official source and pay the fair price for it.
-
Respect the terms and conditions of the software license agreement and do not share or distribute it without permission.
-
Provide feedback, reviews, or suggestions to the software developer or the community.
-
Donate to open source projects or contribute to their development or promotion.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Facebook Old Version APK for Android from Uptodown.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Facebook Old Version APK for Android from Uptodown.md
deleted file mode 100644
index cbc945dbdbc2fd9c0e3cc3acf6f6c6f17c523662..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Facebook Old Version APK for Android from Uptodown.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
How to Download Facebook Old Version APK from Uptodown
-
Introduction
-
Facebook is one of the most popular social media platforms in the world, with over 2.8 billion monthly active users as of 2021. However, not everyone is happy with the latest version of the Facebook app, which can be slow, buggy, and full of ads. If you are one of those people who prefer the old version of Facebook, you might be wondering how to download it on your Android device. Fortunately, there is a way to do that, thanks to a website called Uptodown.
APK stands for Android Package Kit, which is a file format that contains the code and resources of an Android app. An APK file can be installed on an Android device manually, without using the Google Play Store. This is also known as sideloading. A Facebook Old Version APK is an APK file that contains an older version of the Facebook app, which may have different features, design, and performance than the current version.
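Under the hood, an APK is an ordinary ZIP archive with a defined layout. A small Python sketch that lists its first few entries illustrates this; the file name is a placeholder:

```python
import zipfile

# An APK is an ordinary ZIP archive: the manifest, compiled code, and
# resources are just entries inside it. The file name is a placeholder.
with zipfile.ZipFile("facebook-old-version.apk") as apk:
    for name in apk.namelist()[:5]:
        print(name)   # e.g. AndroidManifest.xml, classes.dex, res/...
```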
-
Why would you want to download Facebook Old Version APK?
-
There are many reasons why you might want to download Facebook Old Version APK on your Android device. Some of them are:
-
-
You like the old interface and layout of Facebook better than the new one.
-
You want to use some features that are no longer available in the latest version, such as chat heads, stickers, or games.
-
You have an older device that cannot run the latest version smoothly or at all.
-
You want to save data and battery by using a lighter and simpler version of Facebook.
-
You want to avoid ads and permissions that are annoying or intrusive.
-
-
How to download Facebook Old Version APK from Uptodown
-
If you are interested in downloading Facebook Old Version APK on your Android device, you can follow these simple steps:
-
-
Step 1: Visit the Uptodown website
-
Uptodown is a website that offers free downloads of apps and games for various platforms, including Android, Windows, Mac, iOS, and more. It also has a large collection of older versions of popular apps, such as Facebook Messenger. To visit the Uptodown website, you can use any browser on your device and go to https://en.uptodown.com/.
-
Step 2: Search for Facebook Messenger
-
Once you are on the Uptodown website, you can use the search bar at the top to look for Facebook Messenger. You can also browse through the categories or use the filters to narrow down your search. You should see a list of results that match your query. Click on the one that says "Facebook Messenger" with the blue logo.
-
Step 3: Choose an older version of Facebook Messenger
-
After clicking on Facebook Messenger, you will be taken to a page that shows information about the app, such as its description, screenshots, ratings, and more. You will also see a button that says "Download Latest Version". However, you don't want the latest version here; you want an older one. To find it, scroll down to the bottom of the page to the section that says "Previous versions". There you will see a table listing the different versions of Facebook Messenger available on Uptodown, along with their release date, size, and download link. You can choose any version that you like, but make sure that it is compatible with your device and Android version. For example, if you have Android 4.0 or higher, you can choose version 229.0.0.8.118, which was released on June 11, 2020.
-
Step 4: Download and install the APK file
-
Once you have chosen an older version of Facebook Messenger, you can click on the green button that says "Download" next to it. This will start the download process of the APK file on your device. Depending on your browser settings, you may need to confirm the download or choose a location to save the file. After the download is complete, you need to install the APK file on your device. To do that, you need to enable the option to install apps from unknown sources in your device settings. This may vary depending on your device model and Android version, but generally, you can find it under Security or Applications. Once you have enabled this option, you can locate the APK file in your device storage and tap on it to install it. You may need to grant some permissions or accept some warnings before the installation is complete. After the installation is complete, you can open Facebook Messenger and enjoy the old version.
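If you prefer the command line, you can also sideload the file with Android Debug Bridge (adb) instead of tapping it in a file manager. A minimal sketch, assuming USB debugging is enabled and adb is on your PATH; the APK file name is a placeholder:

```python
import subprocess

# Sideload the downloaded APK over adb; -r reinstalls while keeping app data.
apk_path = "messenger-229.0.0.8.118.apk"            # placeholder file name
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```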
-
Benefits of using Facebook Old Version APK from Uptodown
-
By downloading Facebook Old Version APK from Uptodown, you can enjoy some benefits that are not available in the latest version of Facebook Messenger. Some of them are:
-
Access to features that are no longer available in the latest version
-
Some features that were popular and useful in the old version of Facebook Messenger have been removed or changed in the latest version. For example, chat heads, which allowed you to chat with your friends without leaving other apps, are no longer supported by some devices or Android versions. Stickers, which added fun and expression to your messages, are now limited and require a separate app to use them. Games, which let you play with your friends within Messenger, are now gone and replaced by a new gaming platform called Facebook Gaming. By using an older version of Facebook Messenger, you can still access these features and enjoy them as before.
-
Faster and smoother performance on older devices
-
The latest version of Facebook Messenger may be too heavy and demanding for some older devices or Android versions. It may cause lagging, crashing, freezing, or draining of battery and data. By using an older version of Facebook Messenger, you can avoid these problems and have a faster and smoother performance on your device. The older version may also have a simpler and cleaner interface that is easier to use and navigate.
-
Less intrusive ads and permissions
-
The latest version of Facebook Messenger may have more ads and permissions than the older version. Ads may pop up or appear in your chats or stories, which can be annoying or distracting. Permissions may ask for access to your contacts, location, camera, microphone, or other sensitive information, which can be invasive or risky. By using an older version of Facebook Messenger, you can reduce the amount of ads and permissions that you have to deal with and have more control over your privacy and security.
-
Risks of using Facebook Old Version APK from Uptodown
-
While there are some benefits of using Facebook Old Version APK from Uptodown, there are also some risks that you should be aware of before downloading it. Some of them are:
-
Security and privacy issues
-
By downloading an APK file from an unknown source like Uptodown, you may expose your device and data to potential threats such as malware, viruses, spyware, or hackers. These threats may harm your device or steal your personal information such as passwords, credit card numbers, or photos. You may also lose access to some security features or updates that are available in the latest version of Facebook Messenger, such as encryption, verification codes, or bug fixes. These features or updates may protect your account and messages from unauthorized access or interception.
-
Compatibility and stability issues
-
By using an older version of Facebook Messenger that is not compatible with your device or Android version, you may encounter some compatibility and stability issues that may affect your user experience. For example, you may not be able to chat with some of your friends who are using the latest version of Facebook Messenger, or you may not be able to send or receive some types of messages, such as voice notes, videos, or GIFs. You may also experience some glitches, errors, or crashes that may interrupt your chats or cause data loss.
-
Missing out on new updates and features
-
By using an older version of Facebook Messenger, you may miss out on some new updates and features that are available in the latest version. These updates and features may improve the functionality, design, or performance of the app, or add some new options or capabilities that may enhance your communication or entertainment. For example, you may not be able to use some of the new features that are available in the latest version of Facebook Messenger, such as dark mode, vanish mode, watch together, or rooms.
-
Conclusion
-
Summary of the main points
-
In conclusion, downloading Facebook Old Version APK from Uptodown is a way to use an older version of Facebook Messenger on your Android device. This may have some benefits, such as accessing features that are no longer available in the latest version, having faster and smoother performance on older devices, or avoiding ads and permissions that are intrusive. However, this may also have some risks, such as security and privacy issues, compatibility and stability issues, or missing out on new updates and features. Therefore, you should weigh the pros and cons carefully before deciding to download Facebook Old Version APK from Uptodown.
-
Call to action
-
If you are still interested in downloading Facebook Old Version APK from Uptodown, you can follow the steps that we have outlined in this article. However, if you want to avoid the risks and enjoy the latest features and updates of Facebook Messenger, you can always download the latest version from the Google Play Store. Either way, we hope that this article has been helpful and informative for you. Thank you for reading!
-
FAQs
-
-
Is it legal to download Facebook Old Version APK from Uptodown?
-
It is not illegal to download Facebook Old Version APK from Uptodown, as long as you do not use it for any malicious or fraudulent purposes. However, it may violate the terms of service of Facebook or Google Play Store, which may result in some consequences such as account suspension or termination.
-
Is it safe to download Facebook Old Version APK from Uptodown?
-
It is not completely safe to download Facebook Old Version APK from Uptodown, as there may be some security and privacy risks involved. You may expose your device and data to potential threats such as malware, viruses, spyware, or hackers. You may also lose access to some security features or updates that are available in the latest version of Facebook Messenger. Therefore, you should be careful and cautious when downloading Facebook Old Version APK from Uptodown.
-
How can I update Facebook Old Version APK from Uptodown?
-
You cannot update Facebook Old Version APK from Uptodown directly, as Uptodown does not provide automatic updates for its apps. You will have to manually check for new versions on the Uptodown website and download them if you want to update your app. However, this may not be advisable, as you may encounter some compatibility and stability issues with newer versions. Alternatively, you can uninstall Facebook Old Version APK from Uptodown and install the latest version from the Google Play Store.
-
How can I uninstall Facebook Old Version APK from Uptodown?
-
You can uninstall Facebook Old Version APK from Uptodown by following these steps:
-
-
Go to your device settings and tap on Apps or Applications.
-
Find and tap on Facebook Messenger.
-
Tap on Uninstall and confirm your action.
-
-
This will remove Facebook Old Version APK from Uptodown from your device completely.
-
What are some alternatives to downloading Facebook Old Version APK from Uptodown?
-
If you are not satisfied with downloading Facebook Old Version APK from Uptodown, you can try some alternatives such as:
-
-
Downloading the latest version of Facebook Messenger from the Google Play Store.
-
Using a different messaging app that suits your needs and preferences better.
-
Using the web version of Facebook Messenger on your browser.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free 3D House Models in Various Formats and Styles.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free 3D House Models in Various Formats and Styles.md
deleted file mode 100644
index 49faf5d4a72c37761625c07fa35b499fa8bc0434..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Free 3D House Models in Various Formats and Styles.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Download 3D House Model Free: A Guide for Beginners
-
If you are interested in creating or exploring virtual houses in three dimensions, you might want to download a 3D house model for free. A 3D house model is a digital representation of a real or imaginary house that can be viewed and manipulated in three dimensions. You can use it for various purposes, such as architecture, design, gaming, education, entertainment, etc.
In this article, we will explain what is a 3D house model, why it is useful, and how to download it for free. We will also provide some tips on how to choose the right 3D house model for your needs and how to open and edit it with different software. By the end of this article, you will have a better understanding of how to download free 3D house models and use them for your projects.
-
What is a 3D House Model?
-
A 3D house model is a digital representation of a real or imaginary house that can be viewed and manipulated in three dimensions. It consists of vertices (points), edges (lines), faces (surfaces), and textures (images) that define the shape and appearance of the house.
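To make the vertex/face idea concrete, here is a minimal Python sketch of a mesh in that form: four corner points and one quad face that indexes into them. The coordinates are arbitrary illustration values:

```python
# Four vertices (x, y, z) and one quad face that indexes into them.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2, 3)]                      # a single flat "wall" panel

# Renderers triangulate faces, so this model's poly count is:
triangles = sum(len(f) - 2 for f in faces)  # a quad splits into 2 triangles
print(triangles)                            # -> 2
```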
-
Types of 3D House Models
-
There are different types of 3D house models depending on various factors, such as:
-
-
Poly count: This refers to the number of polygons (triangles or quadrilaterals) that make up the model. Low-poly models have fewer polygons and are simpler and faster to render but less detailed. High-poly models have more polygons and are more complex and realistic but slower to render. (A quick way to check a model's poly count programmatically is sketched after this list.)
-
Interior or exterior: This refers to whether the model shows only the outside or the inside of the house, or both. Interior models show the rooms, furniture, and decorations of the house. Exterior models show the walls, roof, windows, doors, and landscape of the house.
-
Realistic or stylized: This refers to whether the model tries to mimic the appearance of a real house or adopts a more artistic or abstract style. Realistic models use realistic textures, colors, lighting, and shadows to create a believable impression of the house. Stylized models use exaggerated or simplified features, such as cartoonish shapes, bright colors, or flat shading to create a distinctive or expressive look of the house.
-
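If poly count matters for your project, you can check it before committing to a model. Below is a minimal sketch using the open-source trimesh library (one option among several); the filename house.obj is a placeholder for whatever model you downloaded.
-
```python
import trimesh  # pip install trimesh

# force="mesh" flattens multi-part scenes into a single mesh.
mesh = trimesh.load("house.obj", force="mesh")

print(f"Vertices: {len(mesh.vertices)}")
print(f"Faces: {len(mesh.faces)}")          # trimesh triangulates, so these are triangles
print(f"Watertight: {mesh.is_watertight}")  # relevant if you plan to 3D print the model
```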
-
Uses of 3D House Models
-
3D house models can be used for various purposes, such as:
-
-
-
Architecture: 3D house models can help architects and engineers design and visualize new houses or renovate existing ones. They can also help clients and contractors understand and communicate the plans and specifications of the houses.
-
Design: 3D house models can help designers and artists create and showcase different styles and themes of houses. They can also help homeowners and decorators plan and arrange the interior and exterior of their houses.
-
Gaming: 3D house models can help game developers and players create and explore immersive and interactive environments in various genres of games, such as simulation, adventure, horror, etc.
-
Education: 3D house models can help teachers and students learn and teach various subjects related to houses, such as history, culture, geography, math, physics, etc.
-
Entertainment: 3D house models can help filmmakers and animators create and animate realistic or fantastical scenes involving houses. They can also help hobbyists and enthusiasts enjoy and share their passion for houses.
-
-
Why Download 3D House Model Free?
-
If you want to use a 3D house model for any of the purposes mentioned above, you might want to download it for free. There are many benefits of downloading free 3D house models, such as:
-
-
Saving money: You don't have to pay anything to download a free 3D house model. You can save your money for other expenses or investments.
-
Saving time: You don't have to spend hours or days creating your own 3D house model from scratch. You can save your time for other tasks or activities.
-
Saving resources: You don't have to use a lot of computer power or storage space to create or store your own 3D house model. You can save your resources for other purposes or applications.
-
Accessing a wide variety of models: You can choose from thousands of free 3D house models available online. You can find different types, styles, sizes, and qualities of models to suit your needs and preferences.
-
Learning new skills: You can learn from the free 3D house models created by other people. You can study how they modeled, textured, lit, and rendered their models. You can also improve your skills by editing or modifying their models.
-
-
Sources of Free 3D House Models
-
There are many websites that offer free 3D house models for download. Some of the best ones are:
-
-
Free3D: This is one of the largest online repositories of free 3D models. You can find over 10,000 free 3D house models in various formats, such as OBJ, FBX, STL, etc. You can also filter by category, rating, license, etc.
-
TurboSquid: This is one of the most popular online marketplaces of 3D models. You can find over 4,000 free 3D house models in various formats, such as OBJ, FBX, STL, etc. You can also filter by category, rating, license, etc.
-
Sketchfab: This is one of the most innovative online platforms of 3D models. You can find over 3,000 free 3D house models in various formats, such as OBJ, FBX, STL, etc. You can also view and interact with the models in 3D and VR.
-
-
Tips for Choosing the Right 3D House Model
-
Before you download a free 3D house model, you should consider some factors that can affect your choice, such as:
-
-
Format: You should check the format of the 3D house model and make sure it is compatible with the software you want to use. Some of the most common formats are OBJ, FBX, STL, etc. You can also use online converters to change the format of the model if needed.
-
Quality: You should check the quality of the 3D house model and make sure it meets your expectations. Some of the indicators of quality are poly count, texture resolution, lighting, shading, etc. You can also use online tools to optimize or enhance the quality of the model if needed.
-
License: You should check the license of the 3D house model and make sure it allows you to use it for your intended purpose. Some of the common licenses are CC0 (public domain), CC BY (attribution), CC BY-SA (attribution-share alike), etc. You should also respect the rights and credits of the original creators of the model.
-
Compatibility: You should check the compatibility of the 3D house model and make sure it works well with your hardware and software. Some of the factors that can affect compatibility are file size, memory usage, rendering speed, etc. You should also test the model before using it for your project.
-
-
How to Download 3D House Model Free?
-
Once you have chosen the right 3D house model for your needs, you can download it for free from the website that offers it. The steps involved in downloading a free 3D house model are:
-
-
Search: You can use the search bar or the categories to find the 3D house model you want.
-
Browse: You can browse through the results and see the thumbnails and details of each model.
-
Filter: You can use the filters to narrow down your results and sort them by relevance, popularity, date, etc.
-
Preview: You can preview the 3D house model in 3D and see how it looks from different angles and perspectives.
-
Download: You can click on the download button and choose the format and quality of the model. You may also need to agree to the terms and conditions of the license (a scripted version of this step is sketched after these steps).
-
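These steps are meant to be done by hand in a browser, but if a site gives you a direct file URL, the final download step can be scripted. A minimal sketch with the requests library; the URL and filename are placeholders, and you should still respect each site's terms of use.
-
```python
import requests  # pip install requests

# Placeholder URL: substitute the direct download link the website gives you.
url = "https://example.com/models/house.obj"

response = requests.get(url, timeout=30)
response.raise_for_status()  # stop early on 404/403 instead of saving an error page

with open("house.obj", "wb") as f:
    f.write(response.content)
print(f"Saved {len(response.content)} bytes to house.obj")
```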
-
How to Open and Edit a 3D House Model?
-
After you have downloaded a free 3D house model, you can open and edit it with different software. Some of the best programs for opening and editing a 3D house model are:
-
-
Blender: This is one of the most powerful and versatile open-source software for 3D modeling, animation, rendering, etc. You can import and export various formats of 3D models, such as OBJ, FBX, STL, etc. You can also edit and modify the 3D house model with various tools, such as sculpting, painting, rigging, etc.
-
SketchUp: This is one of the most user-friendly and intuitive software for 3D modeling, design, and visualization. You can import and export various formats of 3D models, such as OBJ, FBX, STL, etc. You can also edit and modify the 3D house model with various tools, such as drawing, extruding, scaling, etc.
-
Maya: This is one of the most professional and advanced software for 3D modeling, animation, rendering, etc. You can import and export various formats of 3D models, such as OBJ, FBX, STL, etc. You can also edit and modify the 3D house model with various tools, such as modeling, texturing, lighting, etc.
-
-
How to Export and Share a 3D House Model?
-
After you have opened and edited a free 3D house model, you can export and share it in different formats (a minimal conversion sketch follows the list below). Some of the common formats that can be used to export and share a 3D house model are:
-
-
OBJ: This is one of the most widely used and supported formats for 3D models. It can store the geometry and texture of the model in a simple and readable way. It can be opened by almost any software that can handle 3D models.
-
FBX: This is one of the most versatile and flexible formats for 3D models. It can store the geometry, texture, animation, and other attributes of the model in a compact and efficient way. It can be opened by many software that can handle 3D models.
-
STL: This is one of the most popular formats for 3D printing. It can store the geometry of the model in a binary or ASCII format. It can be opened by many software that can handle 3D printing.
-
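Converting between these formats can also be scripted instead of relying on an online converter. A minimal sketch, again assuming the trimesh library and placeholder filenames:
-
```python
import trimesh  # pip install trimesh

# Load an OBJ and re-export it in other formats; the format is inferred
# from the file extension.
mesh = trimesh.load("house.obj", force="mesh")
mesh.export("house.stl")  # common choice for 3D printing
mesh.export("house.glb")  # binary glTF, handy for web viewers
print("Exported house.stl and house.glb")
```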
-
Conclusion
-
In this article, we have explained what a 3D house model is, why it is useful, and how to download one for free. We have also provided some tips on how to choose the right 3D house model for your needs and how to open and edit it with different software. We hope you have learned something new and useful from this article.
-
If you want to download free 3D house models for your projects, you can visit some of the websites we mentioned above, such as Free3D, TurboSquid, Sketchfab, etc. You can also use some of the software we recommended above, such as Blender, SketchUp, Maya, etc.
-
Thank you for reading this article. We hope you enjoyed it and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Happy downloading!
-
FAQs
-
Here are some of the frequently asked questions about downloading free 3D house models:
-
-
Q: How do I know if a 3D house model is free or not?
-
A: You can check the license of the 3D house model on the website that offers it. Usually, there will be a label or a link that indicates whether the model is free or not. You can also read the terms and conditions of the license to understand what you can and cannot do with the model.
-
Q: How do I know if a 3D house model is good or not?
-
A: You can check the quality of the 3D house model by previewing it in 3D on the website that offers it. You can also read the reviews and ratings of other users who have downloaded or used the model. You can also compare different models and see which one suits your needs and preferences better.
-
Q: How do I know if a 3D house model is compatible with my software or not?
-
A: You can check the format of the 3D house model on the website that offers it. Usually, there will be a list or a dropdown menu that shows which formats are available for download. You can also check the specifications or documentation of your software to see which formats it can import or export.
-
Q: How do I download multiple 3D house models at once?
-
A: Some websites may allow you to download multiple 3D house models at once by selecting them and adding them to your cart or download folder. However, some websites may limit the number of downloads per day or per user. You may have to register or sign in to access more downloads. You may also have to pay a fee or subscribe to a plan to download more models.
-
Q: How do I share my 3D house model with others?
-
A: You can share your 3D house model with others by exporting it to a common format, such as OBJ, FBX, STL, etc. and sending it via email, cloud storage, social media, etc. You can also upload your 3D house model to a website that allows you to showcase and share your 3D models, such as Sketchfab, ArtStation, etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Autocad Error Unable To Load The Modeler Dlls.md b/spaces/1phancelerku/anime-remove-background/Autocad Error Unable To Load The Modeler Dlls.md
deleted file mode 100644
index b5386b8e92fcde80b53d43330bfa2ea0ec59b0e2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Autocad Error Unable To Load The Modeler Dlls.md
+++ /dev/null
@@ -1,154 +0,0 @@
-## Autocad Error Unable To Load The Modeler Dlls
-
-
-
-
-
- 
-
-
-
-
-
-**Download ===> [https://bracadfofor.blogspot.com/?l=2txiza](https://bracadfofor.blogspot.com/?l=2txiza)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Fix the AutoCAD Error "Unable to Load the Modeler DLLs"
-
-
-
If you are using AutoCAD and encounter a fatal error that says "unable to load the modeler DLLs", you may be wondering what causes this problem and how to solve it. In this article, we will explain what the modeler DLLs are, why they are needed, and some possible solutions to this error.
-
-
-
-## What are the modeler DLLs?
-
-
-
The modeler DLLs are dynamic link libraries that AutoCAD uses to create and manipulate 3D objects and solids. They are essential for working with 3D modeling and rendering in AutoCAD. The modeler DLLs are located in the Acgex19 folder under the AutoCAD installation directory.
-
-
-
-## Why do they fail to load?
-
-
-
There are several possible reasons why the modeler DLLs fail to load when you start AutoCAD or open a drawing that contains 3D blocks. Some of the common causes are:
-
-
-
- The modeler DLLs are missing, corrupted, or outdated.
-
- The Acgex19 folder is missing or has incorrect permissions.
-
- The Acgex19 DLLs are not registered in the Windows registry.
-
- The drawing file is corrupted or contains invalid 3D blocks.
-
- The system resources are insufficient or conflicting.
-
-
-
-## How to fix the error?
-
-
-
-Depending on the cause of the error, there are different solutions that you can try to fix it. Here are some of the possible methods:
-
-
-
1. Run Disk Cleanup to delete temporary files that may interfere with the loading of the modeler DLLs[^1^].
-
2. Repair or reinstall AutoCAD to restore the missing or corrupted modeler DLLs.
-
3. Copy the Acgex19 folder from another working computer or from the installation media to your AutoCAD installation directory.
-
4. Re-register the modeler DLL in the Windows registry by running the command `regsvr32 "C:\Program Files\Autodesk\AutoCAD 2019\Acgex19\acgex19.dll"` as administrator (a scripted check is sketched after this list).
-
5. Purge and audit the drawing file to remove any invalid 3D blocks or data.
-
6. Update your graphics card driver and adjust your graphics settings in AutoCAD.
-
7. Increase your system memory and disk space to improve performance.
-
-
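For step 4, it helps to confirm that the DLL actually exists before trying to register it. The sketch below is a hedged example: the path matches the AutoCAD 2019 command above and will differ for other releases, and regsvr32 still needs to run from an elevated (administrator) console.
-
```python
import subprocess
from pathlib import Path

# Path from the command above; adjust for your AutoCAD version and install location.
dll = Path(r"C:\Program Files\Autodesk\AutoCAD 2019\Acgex19\acgex19.dll")

if not dll.exists():
    print(f"Missing: {dll} - repair/reinstall AutoCAD or restore the Acgex19 folder first.")
else:
    # /s runs silently; this only succeeds in a console started as administrator.
    subprocess.run(["regsvr32", "/s", str(dll)], check=True)
    print("regsvr32 reported success.")
```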
-
We hope this article has helped you understand and fix the AutoCAD error "unable to load the modeler DLLs". If you need further assistance, please contact Autodesk support or visit their community forums.
-
-
-
-## How to improve your skills and productivity in AutoCAD?
-
-
-
AutoCAD is powerful and versatile software for design and drafting, but it can also be challenging and complex to master. If you want to improve your skills and productivity in AutoCAD, you need to learn some tips and tricks that can help you work faster, smarter, and more efficiently. Here are some of the best tips and tricks that every AutoCAD user should know:
-
-
-
-### Use keyboard shortcuts
-
-
-
Keyboard shortcuts are one of the easiest ways to save time and reduce mouse clicks in AutoCAD. You can use the default shortcuts or create your own custom ones. To access the keyboard shortcuts menu, go to the Manage tab > Customization panel > User Interface, or type CUI into the command line. You can drag and drop commands from the Command List pane to the Shortcut Keys node in the Customizations In pane. You can also modify or delete existing shortcuts by selecting them from under the Shortcut Keys node[^1^].
-
-
-
-### Enable autosave
-
-
-
-Autosave is a feature that automatically saves your work at regular intervals, so you don't lose your progress in case of a crash or power outage. You can set the number of minutes between autosaves in the Open and Save tab in the Options dialog box or by using the SAVETIME command. You can also find the location of your autosave files by going to the Files tab in the Options dialog box and inspecting the Automatic Save File Location folder in the hierarchy, or by using the SAVEFILEPATH command. To open an autosave file, you need to change its extension from .sv$ to .dwg[^1^].
-
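Renaming .sv$ files by hand is tedious if there are several of them, so here is a small sketch that copies every autosave file in a folder to a .dwg next to it. The folder path is a placeholder; look up the real one with the SAVEFILEPATH command first.
-
```python
import shutil
from pathlib import Path

# Placeholder: use the folder reported by SAVEFILEPATH or the Options dialog.
autosave_dir = Path(r"C:\Users\you\AppData\Local\Temp")

for sv in autosave_dir.glob("*.sv$"):
    target = sv.with_suffix(".dwg")
    if not target.exists():       # never overwrite an existing drawing
        shutil.copy2(sv, target)  # copy rather than rename, keeping the original
        print(f"Recovered {target.name}")
```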
-
-
-### Customize the Quick Access Toolbar
-
-
-
-The Quick Access Toolbar (QAT) is a handy tool that lets you access your most frequently used commands with one click. You can customize the QAT by clicking the small pull-down control button on the right. You can check and uncheck the commands you want to add or remove from the QAT. You can also change the location of the QAT or turn on the old-style Menu Bar. You can also drag and drop the elements within the QAT to change their order[^1^].
-
-
-
-### Use right-click menus
-
-
-
-Right-click menus are contextual menus that give you access to specific commands depending on what you select or where you click. They are a great way to access common tools without typing commands or searching through menus. You can also enable time-sensitive right-clicks, which let you use right-click as ENTER with a quick click, or as a menu with a longer click. To turn on this feature, go to User Preferences tab in the Options dialog box and then select the Right-Click Customization button. You can adjust the delay time for right-click menus in milliseconds[^1^].
-
-
-
-### Learn from other sources
-
-
-
One of the best ways to improve your skills and productivity in AutoCAD is to learn from other sources, such as blogs, podcasts, articles, videos, courses, forums, etc. There are many online resources that offer valuable tips and tricks, tutorials, best practices, case studies, news, updates, and more for AutoCAD users of all levels. Some of the recommended sources are:
-
-
-
-- The AutoCAD Blog: The official blog of Autodesk that covers everything related to AutoCAD, including new features, tips and tricks, customer stories, events, podcasts, etc[^2^].
-
-- iDRAWPRO: A blog that provides useful tips and tricks for AutoCAD users, such as keyboard shortcuts, blocks palette, right-click menus, etc[^1^].
-
-- Freelancer: A platform that connects freelancers with clients who need various services, including AutoCAD design and drafting. Freelancer also has a community section that offers articles on various topics, such as AutoCAD tricks and shortcuts[^3^].
-
-
-
We hope these tips and tricks will help you improve your skills and productivity in AutoCAD. If you have any questions or feedback, please feel free to leave a comment below.
-
-
-
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Call of Duty Black Ops Cold War The Next Generation of COD on PC.md b/spaces/1phancelerku/anime-remove-background/Call of Duty Black Ops Cold War The Next Generation of COD on PC.md
deleted file mode 100644
index 30f23b9f3af9dcc6538bdd2dce0dc43023927ade..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Call of Duty Black Ops Cold War The Next Generation of COD on PC.md
+++ /dev/null
@@ -1,237 +0,0 @@
-
-
-
-
-
-
How to Download Call of Duty on PC
-
Introduction
-
Call of Duty is one of the most popular and successful first-person shooter games in the world. It has millions of fans who enjoy its thrilling gameplay, realistic graphics, immersive storylines, and competitive multiplayer modes. Whether you want to fight in historical wars, futuristic battles, or zombie apocalypses, there is a Call of Duty game for you.
-
But did you know that you can also play Call of Duty on your PC? Playing Call of Duty on PC has many benefits, such as better performance, higher resolution, more customization options, and access to mods and community servers. Plus, you can use your keyboard and mouse for more precise aiming and control.
In this article, we will show you how to download Call of Duty on PC. We will cover three main topics:
-
-
How to download Call of Duty: Modern Warfare on PC
-
How to download Call of Duty: Warzone on PC
-
How to download other Call of Duty games on PC
-
-
By the end of this article, you will be able to enjoy Call of Duty on your PC with ease. So, let's get started!
-
How to Download Call of Duty: Modern Warfare on PC
-
Requirements
-
Before you download Call of Duty: Modern Warfare on PC, you need to make sure that your PC meets the minimum or recommended specifications for the game. Here are the requirements for Call of Duty: Modern Warfare on PC:
-
-
-
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows 7 64-Bit (SP1) or Windows 10 64-Bit (1709 or later) | Windows 10 64-Bit (latest update) |
| CPU | Intel Core i3-4340 or AMD FX-6300 | Intel Core i5-2500K or AMD Ryzen R5 1600X |
| RAM | 8 GB | 12 GB |
| GPU | NVIDIA GeForce GTX 670 / NVIDIA GeForce GTX 1650 or AMD Radeon HD 7950 | |
-
If your PC does not meet the minimum requirements, you may experience low frame rates, crashes, or errors while playing the game. If your PC meets the recommended requirements, you will be able to enjoy the game with higher settings and smoother performance.
-
Another thing you need to consider is the hard drive space. Call of Duty: Modern Warfare is a very large game that requires 175 GB of available space on your PC. You may need to delete some files or uninstall some programs to free up some space before you download the game.
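Before you start a download this large, it is worth confirming that the target drive actually has room. A minimal check using only Python's standard library; the drive letter is an assumption, so point it at wherever you plan to install.
-
```python
import shutil

REQUIRED_GB = 175  # install size quoted above
drive = "C:\\"     # assumption: change this to your install drive

free_gb = shutil.disk_usage(drive).free / 1024**3
if free_gb >= REQUIRED_GB:
    print(f"{free_gb:.0f} GB free on {drive} - enough for the install.")
else:
    print(f"Only {free_gb:.0f} GB free on {drive} - free up about "
          f"{REQUIRED_GB - free_gb:.0f} GB more first.")
```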
-
Steps
-
Now that you have checked the requirements and the hard drive space, you are ready to download Call of Duty: Modern Warfare on PC. Here are the steps you need to follow:
-
-
Purchase Call of Duty: Modern Warfare on PC. You can buy the game from the official website or from other online retailers. The price may vary depending on the edition and the region. You will receive a digital code that you can redeem on Blizzard Entertainment.
-
Download and install Blizzard Entertainment. Blizzard Entertainment is a gaming platform that allows you to download, install, and play Call of Duty: Modern Warfare and other games on PC. You can download it from the official Battle.net website (the launcher itself is officially called the Battle.net app). You will need to create a free account or log in with an existing one.
-
Redeem your digital code on Blizzard Entertainment. After you have installed Blizzard Entertainment, launch it and log in with your account. Click on the "Games" tab and then click on "Redeem a Code". Enter your digital code and follow the instructions to add Call of Duty: Modern Warfare to your library.
-
Download and install Call of Duty: Modern Warfare on PC. After you have redeemed your code, click on the "Call of Duty: MW" icon on the left side of Blizzard Entertainment. Click on the "Install" button and choose a location for the game files. The download and installation process may take some time depending on your internet speed and PC performance.
-
Launch and play Call of Duty: Modern Warfare on PC. After the installation is complete, click on the "Play" button to launch the game. You may need to update the game or download some additional content before you can play. You can adjust the settings to your preferences. Enjoy playing Call of Duty: Modern Warfare on PC!
-
-
How to Download Call of Duty: Warzone on PC
-
Requirements
-
Call of Duty: Warzone is a free-to-play battle royale game that is part of Call of Duty: Modern Warfare. It features up to 150 players competing in solo, duo, trio, or quad modes in a large map called Verdansk. You can play Call of Duty: Warzone even if you do not own Call of Duty: Modern Warfare.
-
However, you still need to make sure that your PC meets the minimum or recommended specifications for the game. Here are the requirements for Call of Duty: Warzone on PC:
-
-
-
-
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| OS | Windows 7 64-Bit (SP1) or Windows 10 64-Bit (1709 or later) | Windows 10 64-Bit (latest update) |
| CPU | Intel Core i3-4340 or AMD FX-6300 | Intel Core i5-2500K or AMD Ryzen R5 1600X |
| RAM | 8 GB | 12 GB |
| GPU | NVIDIA GeForce GTX 670 / NVIDIA GeForce GTX 1650 or AMD Radeon HD 7950 | |
-
As you can see, the requirements for Call of Duty: Warzone are the same as Call of Duty: Modern Warfare. However, you may need more hard drive space if you want to download both games. You can also choose to download only Call of Duty: Warzone if you do not want to play Call of Duty: Modern Warfare.
-
Steps
-
Downloading Call of Duty: Warzone on PC is similar to downloading Call of Duty: Modern Warfare on PC. However, there are some differences that you need to know. Here are the steps you need to follow:
-
-
Download and install Blizzard Entertainment. If you already have Blizzard Entertainment on your PC, you can skip this step. If not, you can download it from the official Battle.net website. You will need to create a free account or log in with an existing one.
-
Download Call of Duty: Warzone for free on PC. After you have installed Blizzard Entertainment, launch it and log in with your account. Click on the "Games" tab and then click on "Call of Duty: MW". You will see a screen that shows the options for Call of Duty: Modern Warfare and Call of Duty: Warzone. Click on the "Play for Free" button under Call of Duty: Warzone to start the download.
-
Install and update Call of Duty: Warzone on PC. After the download is complete, click on the "Install" button and choose a location for the game files. The installation process may take some time depending on your PC performance. You may also need to update the game or download some additional content before you can play.
-
Launch and play Call of Duty: Warzone on PC. After the installation and update are complete, click on the "Play" button to launch the game. You can adjust the settings to your preferences. Enjoy playing Call of Duty: Warzone on PC!
-
-
How to Download Other Call of Duty Games on PC
-
Blizzard Entertainment
-
If you want to play other Call of Duty games on PC, you can use Blizzard Entertainment as well. Blizzard Entertainment is a gaming platform that allows you to download, install, and play various games on PC, including some Call of Duty games.
-
Here are the other Call of Duty games available on Blizzard Entertainment:
-
-
Call of Duty: Black Ops 4
-
Call of Duty: Black Ops Cold War
-
Call of Duty: Vanguard (coming soon)
-
-
To download and install these games, you need to follow the same steps as downloading Call of Duty: Modern Warfare or Call of Duty: Warzone on PC. However, you need to purchase these games from the official website or from other online retailers before you can redeem them on Blizzard Entertainment.
-
Steam
-
If you prefer to use another gaming platform, you can use Steam. Steam is a popular gaming platform that allows you to download, install, and play thousands of games on PC, including some Call of Duty games.
-
Here are the other Call of Duty games available on Steam:
-
-
Call of Duty
-
Call of Duty 2
-
Call of Duty 4: Modern Warfare
-
Call of Duty: World at War
-
Call of Duty: Modern Warfare 2
-
Call of Duty: Black Ops
-
Call of Duty: Modern Warfare 3
-
Call of Duty: Black Ops II
-
Call of Duty: Ghosts
-
Call of Duty: Advanced Warfare
-
Call of Duty: Black Ops III
-
Call of Duty: Infinite Warfare
-
Call of Duty: WWII
-
To download and install these games, you need to follow these steps:
-
-
Download and install Steam. If you already have Steam on your PC, you can skip this step. If not, you can download it from the official Steam website (store.steampowered.com). You will need to create a free account or log in with an existing one.
-
Purchase the Call of Duty games you want on Steam. You can buy the games from the Steam store or from other online retailers. The price may vary depending on the game and the region. You will receive a digital code that you can redeem on Steam.
-
Redeem your digital code on Steam. After you have installed Steam, launch it and log in with your account. Click on the "Games" menu and then click on "Activate a Product on Steam". Enter your digital code and follow the instructions to add the game to your library.
-
Download and install the Call of Duty games on PC. After you have redeemed your code, click on the "Library" tab and then click on the game you want to play. Click on the "Install" button and choose a location for the game files. The download and installation process may take some time depending on your internet speed and PC performance.
-
Launch and play the Call of Duty games on PC. After the installation is complete, click on the "Play" button to launch the game. You may need to update the game or download some additional content before you can play. You can adjust the settings to your preferences. Enjoy playing the Call of Duty games on PC!
-
-
Conclusion
-
In this article, we have shown you how to download Call of Duty on PC. We have covered three main topics:
-
-
How to download Call of Duty: Modern Warfare on PC
-
How to download Call of Duty: Warzone on PC
-
How to download other Call of Duty games on PC
-
-
We hope that this article has been helpful and informative for you. Playing Call of Duty on PC has many advantages, such as better graphics, smoother performance, more customization options, and access to mods and community servers. Plus, you can use your keyboard and mouse for more precise aiming and control.
-
Here are some tips and tricks for playing Call of Duty on PC:
-
-
Check the system requirements and hard drive space before downloading any game.
-
Use Blizzard Entertainment or Steam as your gaming platform for downloading and installing Call of Duty games.
-
Update your drivers and software regularly for optimal performance and security.
-
Adjust the settings and preferences according to your PC specifications and personal preferences.
-
Join online communities and forums for tips, guides, news, and support.
-
-
We would love to hear from you. What are your thoughts and feedback on this article? What are your favorite Call of Duty games on PC? How do you like playing Call of Duty on PC? Let us know in the comments below!
-
Frequently Asked Questions
-
Q: Is Call of Duty free on PC?
-
A: Some Call of Duty games are free on PC, such as Call of Duty: Warzone, which is a free-to-play battle royale game that is part of Call of Duty: Modern Warfare. However, most Call of Duty games are not free on PC, such as Call of Duty: Modern Warfare, Call of Duty: Black Ops 4, Call of Duty: Black Ops Cold War, etc. You need to purchase these games from the official website or from other online retailers before you can download them on PC.
-
Q: Can I play Call of Duty on PC with a controller?
-
A: Yes, you can play Call of Duty on PC with a controller if you prefer. Most Call of Duty games support controller input on PC, such as Xbox One controller, PlayStation 4 controller, etc. You can connect your controller to your PC via USB cable or wireless adapter. You can also adjust the controller settings and sensitivity in the game options.
-
Q: Can I play Call of Duty on PC with my friends who play on console?
-
A: Yes, you can play Call of Duty on PC with your friends who play on console if the game supports cross-play feature. Cross-play feature allows players from different platforms, such as PC, PlayStation 4, Xbox One, etc., to play together online in the same lobby or match. Some Call of Duty games that support cross-play feature are Call of Duty: Modern Warfare, Call of Duty: Warzone, Call of Duty: Black Ops Cold War, etc. You need to enable cross-play feature in the game settings and link your Blizzard Entertainment account with your Activision account. You can also add your friends from different platforms to your in-game friends list and invite them to join your party or match.
-
Q: How can I improve my performance and FPS in Call of Duty on PC?
-
A: There are several ways you can improve your performance and FPS (frames per second) in Call of Duty on PC, such as:
-
-
Update your drivers and software regularly for optimal performance and security.
-
Close any unnecessary programs or background processes that may consume your CPU, RAM, or GPU resources.
-
Adjust the game settings and preferences according to your PC specifications and personal preferences. You can lower the graphics quality, resolution, or render scale to increase the FPS.
-
Use a wired internet connection or a high-speed Wi-Fi connection to reduce the latency and lag.
-
Clean your PC and monitor regularly to prevent dust, dirt, or overheating.
-
-
Q: How can I get better at Call of Duty on PC?
-
A: There are several ways you can get better at Call of Duty on PC, such as:
-
-
Practice regularly and learn from your mistakes. You can play the campaign mode, the co-op mode, or the custom games to improve your skills and knowledge.
-
Watch and learn from other players who are more experienced or skilled than you. You can watch live streams, videos, or tutorials online to get some tips and tricks.
-
Experiment with different weapons, attachments, perks, killstreaks, and loadouts to find the ones that suit your playstyle and preferences.
-
Communicate and cooperate with your teammates. You can use voice chat, text chat, or ping system to coordinate your strategies and tactics.
-
Have fun and enjoy the game. Do not let frustration, anger, or stress affect your performance or attitude. Remember that it is just a game and you are here to have fun.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash of Clans Apk Para Hileli Nasl Yklenir? - Adm Adm Anlatm.md b/spaces/1phancelerku/anime-remove-background/Clash of Clans Apk Para Hileli Nasl Yklenir? - Adm Adm Anlatm.md
deleted file mode 100644
index 2c6e0569c22aa81f72944920f21c29f83a817ae4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash of Clans Apk Para Hileli Nasl Yklenir? - Adm Adm Anlatm.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
Clash of Clans Apk Para Hileli: How to Download and Play with Unlimited Resources
-
Do you love playing Clash of Clans but wish you had more gold, elixir, gems, and dark elixir to build your village, train your troops, and crush your enemies? If so, you might be interested in trying out Clash of Clans Apk Para Hileli, a modified version of the game that gives you unlimited resources for free. In this article, we will tell you everything you need to know about Clash of Clans Apk Para Hileli, including what it is, how to download and install it, how to play it, and some tips and tricks to make your gaming experience more fun and exciting.
-
What is Clash of Clans?
-
Clash of Clans is a strategy game where you build your village, raise a clan, and compete in epic clan wars. You can customize your village with various buildings, defenses, traps, walls, decorations, and more. You can also train different types of troops with unique abilities and spells to attack other players' villages or defend your own. You can join or create a clan with other players from around the world and participate in clan wars, clan games, clan war leagues, and other events. You can also chat with your clan mates, donate and request troops, and share replays and strategies. Clash of Clans is one of the most popular and highest-grossing mobile games of all time, with over 500 million downloads and millions of active players. It has also received many awards and positive reviews from critics and fans alike.
What is an Apk?
-
An apk is an Android application package file that contains all the files and code needed to run an app on your device. You can download apk files from various sources on the internet, such as websites, blogs, forums, or file-sharing platforms. Unlike official app store downloads, apk files are not verified or regulated by Google or the app developers. This means that they can offer features or functions that are not available in the original version of the app, such as unlocked content, premium access, an ad-free experience, or unlimited resources. However, it also means that they can pose risks or drawbacks, such as malware, viruses, bugs, compatibility issues, or legal consequences. Therefore, you should always be careful and cautious when downloading and installing apk files on your device.
-
What is a Money Cheat?
-
A money cheat is a hack or a mod that gives you unlimited resources in Clash of Clans. Resources are the currency of the game that you use to build your village, train your troops, upgrade your buildings and troops, research new technologies, and more. There are four types of resources in Clash of Clans: gold, elixir, gems, and dark elixir. Gold and elixir are obtained by raiding other players' villages, collecting from mines and collectors, completing achievements, winning clan wars, and participating in events. Gems are the premium currency of the game that can be purchased with real money or earned by completing achievements, removing obstacles, or opening gem boxes. Dark elixir is a rare resource that is used to train and upgrade dark troops and heroes. It is obtained by raiding other players' villages, collecting from drills, completing achievements, winning clan wars, and participating in events.
-
A money cheat works by modifying the game code or data to give you unlimited amounts of resources without spending any time or effort. It can also bypass the game's security measures and detection systems to prevent you from getting banned or suspended. A money cheat can offer you many advantages and benefits in the game, such as faster progress, higher levels, stronger troops and defenses, more options and flexibility, and more fun and enjoyment. However, a money cheat can also have some negative impacts on the game, such as unfairness, imbalance, boredom, loss of challenge, loss of interest, or loss of respect. Moreover, a money cheat can be illegal or immoral depending on the laws and ethics of your country or region. Therefore, you should always be aware and responsible when using a money cheat in Clash of Clans.
-
How to Download and Install Clash of Clans Apk Para Hileli?
-
If you want to try out Clash of Clans Apk Para Hileli for yourself, you will need to follow these steps:
-
-
Find and download the apk file from a reliable and trustworthy source on the internet. You can search for it online, but make sure that the apk file is compatible with your device model and Android version (a checksum-verification sketch follows these steps).
-
Before installing the apk file on your device, you will need to enable the unknown sources option in your settings. This will allow you to install apps from sources other than the official app store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded apk file on your device using a file manager app or your browser's downloads folder. Tap on it to start the installation process. You may need to grant some permissions or accept some terms and conditions before proceeding.
-
Wait for the installation to finish and then launch the app from your home screen or app drawer. You may need to sign in with your Google account or create a new account if you don't have one already.
-
Enjoy playing Clash of Clans Apk Para Hileli with unlimited resources!
-
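Given the malware risks discussed later in this article, it is worth verifying any downloaded apk before installing it. If the download source publishes a SHA-256 checksum, the minimal sketch below computes your copy's hash so you can compare the two; the filename is a placeholder.
-
```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file so large apks do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder filename; compare the output with the checksum the site publishes.
print(sha256_of("downloaded-mod.apk"))
```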
-
How to Play Clash of Clans Apk Para Hileli?
-
Playing Clash of Clans Apk Para Hileli is similar to playing the original version of the game. You will still need to build your village, train your troops, join a clan, and fight in clan wars. However, it comes with some differences and advantages that make the gameplay easier and more enjoyable. Here are some of the features and differences of Clash of Clans Apk Para Hileli:
-
-
-
| Feature | Original Version | Apk Para Hileli Version |
| --- | --- | --- |
| Gold | Limited by storage capacity and production rate | Unlimited and instant |
| Elixir | Limited by storage capacity and production rate | Unlimited and instant |
| Gems | Limited by purchase or achievement | Unlimited and free |
| Dark Elixir | Limited by storage capacity and production rate | Unlimited and instant |
| Building Time | Depends on the level and type of the building | Zero or reduced |
| Troop Training Time | Depends on the level and type of the troop | Zero or reduced |
| Research Time | Depends on the level and type of the research | Zero or reduced |
| Town Hall Level | Capped at 14 (as of June 2021) | Capped at 16 (with additional buildings and troops) |
-
As you can see, Clash of Clans Apk Para Hileli gives you a lot of benefits and advantages over the original version of the game. You can build your village faster, train your troops easier, research new technologies quicker, and have more options and flexibility in your gameplay. You can also experiment with different strategies, layouts, combinations, and tactics without worrying about losing resources or time. You can also enjoy the game more without spending any money or watching any ads.
-
However, you should also be aware that Clash of Clans Apk Para Hileli is not an official or authorized version of the game. It is a modified version that is created by third-party developers or hackers who have no affiliation or connection with Supercell, the original developer of the game. Therefore, you should not expect any support, updates, or bug fixes from Supercell. You should also not use your main account or connect to your social media accounts when playing Clash of Clans Apk Para Hileli. You may risk losing your account, data, or progress if you do so. You may also face legal or ethical issues if you use a money cheat in Clash of Clans.
-
-
Conclusion
-
In conclusion, Clash of Clans Apk Para Hileli is a modified version of the game that gives you unlimited resources for free. It can make your gameplay easier and more fun by letting you build your village faster, train troops more quickly, research new technologies sooner, and play with more flexibility. However, be careful and responsible when using it: it is not an official or authorized version of the game, and it can bring risks such as malware, viruses, bugs, compatibility issues, or legal consequences. As noted above, avoid using your main account or linking your social media accounts while playing. Finally, respect the game rules and the other players; do not abuse the money cheat to gain an unfair advantage or ruin the experience for others.
-
If you are looking for a fun and easy way to play Clash of Clans with unlimited resources, you can give Clash of Clans Apk Para Hileli a try. However, if you want to play the original and authentic version of the game, you can download it from the official app store or visit the official website of Supercell. You can also check out other games similar to Clash of Clans, such as Clash Royale, Boom Beach, Hay Day, or Brawl Stars. They are all developed by Supercell and offer different genres and gameplay styles.
-
FAQs
-
Here are some frequently asked questions and answers related to Clash of Clans Apk Para Hileli:
-
-
Q: Is Clash of Clans Apk Para Hileli safe to use?
-A: Clash of Clans Apk Para Hileli is not guaranteed to be safe to use, as it is a modified version of the game that is created by third-party developers or hackers who have no affiliation or connection with Supercell, the original developer of the game. It can contain malware, viruses, bugs, or compatibility issues that can harm your device or data. Therefore, you should always scan the apk file before installing it and use a trusted antivirus app on your device.
-
Q: Is Clash of Clans Apk Para Hileli legal to use?
-A: Clash of Clans Apk Para Hileli is not legal to use, as it violates the terms and conditions of Supercell and the game. It also infringes on the intellectual property rights and trademarks of Supercell and the game. Therefore, you may face legal consequences if you use Clash of Clans Apk Para Hileli. You may also get banned or suspended from the game if you are detected by the game's security measures and detection systems.
-
Q: Is Clash of Clans Apk Para Hileli updated regularly?
-A: Clash of Clans Apk Para Hileli is not updated regularly, as it is not an official or authorized version of the game. It depends on the availability and activity of the third-party developers or hackers who create and maintain it. Therefore, you may not get the latest features, updates, or bug fixes from Supercell and the game when you use Clash of Clans Apk Para Hileli. You may also experience glitches or errors when playing Clash of Clans Apk Para Hileli.
-
Q: Can I play Clash of Clans Apk Para Hileli online with other players?
-A: Yes, you can play Clash of Clans Apk Para Hileli online with other players who are also using the same version of the game. However, you cannot play with players who are using the original version of the game or other versions of the game. You may also face difficulties or restrictions when joining or creating a clan in Clash of Clans Apk Para Hileli.
-
Q: Can I switch between Clash of Clans Apk Para Hileli and the original version of the game?
-A: Yes, you can switch between Clash of Clans Apk Para Hileli and the original version of the game by uninstalling one version and installing another version on your device. However, you should not use your main account or connect to your social media accounts when switching between versions. You may lose your account, data, or progress if you do so. You should also backup your data before switching between versions.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Haryana Super 100 Level 1 Admit Card 2023 - Check Exam Date and Details.md b/spaces/1phancelerku/anime-remove-background/Download Haryana Super 100 Level 1 Admit Card 2023 - Check Exam Date and Details.md
deleted file mode 100644
index 440aafb5045cc93d3ab99e5b65c86a5629ee4a3c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Haryana Super 100 Level 1 Admit Card 2023 - Check Exam Date and Details.md
+++ /dev/null
@@ -1,238 +0,0 @@
-
-
Haryana Super 100 Level 1 Admit Card Download: Everything You Need to Know
-
If you are a meritorious student of class 10th from a government school in Haryana and aspire to crack JEE or NEET exams, then you must have applied for the Haryana Super 100 scheme. This scheme is a free residential coaching program launched by the Haryana government to provide quality education and guidance to the talented students of the state. Under this scheme, 600 students will be selected through an online entrance test and will be given free coaching for two years at four designated centres in Karnal, Rewari, Hisar, and Jhajjar.
The online entrance test for the Haryana Super 100 scheme will be conducted in two phases. The first phase will be held on June 4, 2023, and the second phase will be held on July 2, 2023. The admit card for the first phase of the exam was released on February 7, 2023, on the official website of the scheme: www.haryanasuper100.com. In this article, we will tell you everything you need to know about how to download the Haryana Super 100 level 1 admit card, the details mentioned on it, how to verify them, and the exam date, timing, venue, pattern, syllabus, marking scheme, preparation tips, do's and don'ts, result, cut-off marks, merit list, and FAQs.
-
How to download Haryana Super 100 level 1 admit card online?
-
To download your admit card for Haryana Super 100 level 1 exam online, you need to follow these simple steps:
-
-
Go to the official website of Haryana Super 100 scheme: www.haryanasuper100.com.
-
On the home page, click on the link that says "Download Admit Card".
-
Enter your registration number, SRN, WhatsApp number, or Aadhaar number, along with your date of birth, in the given fields.
-
Click on "Submit" button.
-
Your admit card will be displayed on the screen. Check all the details carefully and download it.
-
Take a printout of your admit card and keep it safe for future reference.
-
-
What are the details mentioned on the admit card and how to verify them?
-
Your admit card for Haryana Super 100 level 1 exam contains some important information that you need to check and verify before appearing for the exam. These details are:
-
-
-
Exam name
-
Applicant's name
-
Father's name
-
Mother's name
-
Gender
-
Registration number
-
Date of birth
-
Roll number
-
Exam date
-
Exam centre
-
Exam timing
-
Category of PwD (if applicable)
-
Admit card ID
-
Subjects in which you are appearing, with the date of examination
-
Important instructions for the exam
-
-
You should verify all these details against your identity proof and report any discrepancy to the exam authorities before the exam day.
-
What are the exam pattern, syllabus, and marking scheme for Haryana Super 100 level 1 exam?
-
The exam pattern, syllabus, and marking scheme for Haryana Super 100 level 1 exam are as follows:
-
Exam Pattern
-
The Haryana Super 100 level 1 exam is an online objective type test that consists of two papers: General Studies (GS) and Civil Services Aptitude Test (CSAT). Each paper has 100 questions of 1 mark each and the duration of each paper is 2 hours. The total marks of the exam are 200. The GS paper covers topics such as General Science, Current Affairs, History, Geography, Culture, Economy, Polity, Mental Ability, etc. The CSAT paper tests the candidates' logical reasoning, analytical ability, decision making, problem solving, basic numeracy, data interpretation, etc.
-
Syllabus
-
The syllabus for Haryana Super 100 level 1 exam is based on the NCERT books of class 6th to 10th and the state board books of class 11th and 12th. The candidates are expected to have a general awareness of the state of Haryana as well as the national and international issues. The detailed syllabus for each paper is given below:
-
-
-
GS Paper
-
CSAT Paper
-
-
-
-
-
General Science: Physics, Chemistry, Biology, Environment, etc.
-
Current Affairs: Important National and International Events, Awards and Honours, Sports, Books and Authors, etc.
-
History of India and Indian National Movement: Ancient, Medieval and Modern History of India, Freedom Struggle, etc.
-
Indian and World Geography: Physical, Social and Economic Geography of India and the World.
-
Indian Culture, Economy and Polity: Art and Culture, Constitution, Governance, Planning, Budgeting, etc.
-
General Mental Ability: Verbal and Non-Verbal Reasoning, Analogies, Classification, Series, Coding-Decoding, etc.
-
-
-
-
-
Logical Reasoning: Syllogism, Statement and Assumptions, Statement and Arguments, Statement and Conclusions, etc.
-
Analytical Ability: Blood Relations, Direction Sense Test, Seating Arrangement, Ranking Test, etc.
-
Decision Making: Decision Making in Various Situations Based on Given Information.
-
Problem Solving: Problems Based on Arithmetic Operations, Algebra, Geometry, Mensuration, etc.
-
Basic Numeracy: Numbers and their Relations, Fractions and Decimals, Percentage and Average, Ratio and Proportion, etc.
-
Data Interpretation: Data Presented in Tables, Graphs, Charts or Diagrams.
-
-
-
-
-
Marking Scheme
-
The marking scheme for Haryana Super 100 level 1 exam is as follows, with a worked example after the list:
-
-
Each question carries 1 mark.
-
There is negative marking of 0.25 marks for each wrong answer.
-
The GS paper is counted for merit while the CSAT paper is only qualifying in nature.
-
The candidates have to score at least 33% marks in the CSAT paper to qualify for the next stage.
-
-
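For example, a candidate who attempts all 100 GS questions and answers 80 correctly would score 80 - (20 x 0.25) = 75 marks in that paper, while 33 marks out of 100 are enough to clear the qualifying CSAT paper.
-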
I hope this information helps you to prepare well for the Haryana Super 100 level 1 exam. All the best!
What are the exam date, timing, and venue for Haryana Super 100 level 1 exam?
-
The exam date, timing, and venue for Haryana Super 100 level 1 exam are as follows:
-
Exam Date
-
The Haryana Super 100 level 1 exam will be conducted on June 4, 2023. The candidates who qualify the level 1 exam will be eligible to appear for the level 2 exam, which will be held on July 2, 2023. The final selection of the candidates will be based on their performance in both the exams and their academic records.
-
Exam Timing
-
The Haryana Super 100 level 1 exam will be held in two shifts: morning and afternoon. The morning shift will start from 10:00 am and end at 12:00 pm. The afternoon shift will start from 2:00 pm and end at 4:00 pm. The candidates have to report at the exam centre at least one hour before the commencement of the exam. They have to carry their admit card, identity proof, and other required documents with them.
-
Exam Venue
-
The Haryana Super 100 level 1 exam will be conducted at various online centres across the state of Haryana. The candidates have to choose their preferred exam centre while filling the online application form. The allotment of the exam centre will be done on the basis of the availability of seats and the preference of the candidates. The name and address of the exam centre will be mentioned on the admit card of the candidates. The candidates have to follow the instructions and guidelines given by the exam authorities at the exam centre.
How to prepare for Haryana Super 100 level 1 exam and what are the best books and resources?
-
Preparing for Haryana Super 100 level 1 exam is not an easy task. It requires a lot of hard work, dedication, and smart strategy. Here are some tips and suggestions that can help you to ace the exam:
-
-
Know the exam pattern, syllabus, and marking scheme thoroughly and plan your study accordingly.
-
Revise the NCERT books of class 6th to 10th and the state board books of class 11th and 12th for all the subjects.
-
Practice previous year papers, mock tests, and sample papers regularly to improve your speed, accuracy, and time management.
-
Read newspapers, magazines, and online sources to update your current affairs and general knowledge.
-
Focus on your weak areas and work on them. Also, revise your strong areas and don't neglect them.
-
Solve different types of questions and problems from various sources to enhance your logical reasoning, analytical ability, decision making, problem solving, basic numeracy, and data interpretation skills.
-
Make short notes, flashcards, mnemonics, and diagrams to remember the important facts, formulas, concepts, and dates.
-
Take care of your health and well-being. Eat a balanced diet, drink plenty of water, exercise regularly, sleep well, and avoid stress.
-
-
Some of the best books and resources that you can refer to for Haryana Super 100 level 1 exam are:
-
-
-
Subject
-
Book/Resource
-
-
-
General Science
-
NCERT Books of Class 6th to 10th; Lucent's General Science
-
-
-
Current Affairs
-
The Hindu Newspaper; Monthly Current Affairs Magazine; Haryana Current Affairs by Arihant Publications
-
-
-
History of India and Indian National Movement
-
NCERT Books of Class 6th to 12th; A Brief History of Modern India by Rajiv Ahir; India's Struggle for Independence by Bipan Chandra
-
-
-
Indian and World Geography
-
NCERT Books of Class 6th to 12th; Certificate Physical and Human Geography by G.C. Leong; Oxford School Atlas
-
-
-
Indian Culture, Economy and Polity
-
NCERT Books of Class 6th to 12th; Indian Art and Culture by Nitin Singhania; Indian Economy by Ramesh Singh; Indian Polity by M. Laxmikanth
-
-
-
General Mental Ability
-
A Modern Approach to Verbal and Non-Verbal Reasoning by R.S. Aggarwal; Analytical Reasoning by M.K. Pandey; A New Approach to Reasoning Verbal and Non-Verbal by B.S. Sijwali and Indu Sijwali
-
-
-
Logical Reasoning
-
A Modern Approach to Logical Reasoning by R.S. Aggarwal; Analytical Reasoning by M.K. Pandey; A New Approach to Reasoning Verbal and Non-Verbal by B.S. Sijwali and Indu Sijwali
-
-
-
Analytical Ability
-
A Modern Approach to Logical Reasoning by R.S. Aggarwal; Analytical Reasoning by M.K. Pandey; A New Approach to Reasoning Verbal and Non-Verbal by B.S. Sijwali and Indu Sijwali
-
-
-
Decision Making
-
Data Sufficiency & Decision Making by Arihant Publications; Critical Thinking: A Beginner's Guide by Sharon M. Kaye; The Power of Logical Thinking by Marilyn Vos Savant
-
-
-
Problem Solving
-
A Modern Approach to Logical Reasoning by R.S. Aggarwal; Analytical Reasoning by M.K. Pandey; A New Approach to Reasoning Verbal and Non-Verbal by B.S. Sijwali and Indu Sijwali
-
-
-
Basic Numeracy
-
Quantitative Aptitude for Competitive Examinations by R.S. Aggarwal; Fundamentals of Mathematics for Competitive Exams by Sanjeev Kumar Jha; Numerical Ability & Mathematical Aptitude by Abhijit Banerjee
-
-
-
Data Interpretation
-
Data Interpretation and Data Sufficiency by Arihant Publications; Data Analysis and Interpretation by Disha Experts; How to Prepare for Data Interpretation for CAT by Arun Sharma
-
-
-
What are the do's and don'ts for Haryana Super 100 level 1 exam day?
-
On the day of the Haryana Super 100 level 1 exam, you should follow some do's and don'ts to avoid any hassle and perform well in the exam. Here are some of them:
-
Do's
-
-
Carry your admit card, identity proof, and other required documents with you.
-
Reach the exam centre at least one hour before the exam time.
-
Read the instructions given on the admit card and the question paper carefully.
-
Attempt the questions that you are confident about first and then move on to the difficult ones.
-
Use the elimination method to guess the answers if you are not sure.
-
Mark your answers on the computer screen correctly and carefully.
-
Manage your time wisely and don't spend too much time on one question.
-
Review your answers before submitting them.
-
-
Don'ts
-
-
Don't forget to carry your admit card, identity proof, and other required documents with you.
-
Don't reach the exam centre late or after the exam time.
-
Don't ignore the instructions given on the admit card and the question paper.
-
Don't start with the questions you are unsure about and waste your time on them.
-
Don't mark your answers randomly or blindly without any logic.
-
Don't make any changes or corrections on the admit card or the question paper.
-
Don't use any unfair means or indulge in any malpractice during the exam.
-
Don't leave the exam hall before the end of the exam time.
-
-
I hope these do's and don'ts will help you to avoid any mistakes and score well in the Haryana Super 100 level 1 exam. Good luck!
-
How to check Haryana Super 100 level 1 result and what are the cut-off marks and merit list?
-
The result of Haryana Super 100 level 1 exam will be declared on June 15, 2023, on the official website of the scheme: www.haryanasuper100.com. To check your result, you need to follow these steps:
-
-
Go to the official website of Haryana Super 100 scheme: www.haryanasuper100.com.
-
On the home page, click on the link that says "Result".
-
Enter your registration number, SRN, WhatsApp number, or Aadhaar number and your date of birth in the given fields.
-
Click on "Submit" button.
-
Your result will be displayed on the screen. Check your marks, rank, and qualifying status.
-
Download your result and take a printout of it for future reference.
-
-
The cut-off marks for Haryana Super 100 level 1 exam are the minimum marks that a candidate needs to score to qualify for the next stage. They are decided by the exam authorities based on factors such as the number of candidates who appeared, the difficulty level of the exam, and the number of seats available. The cut-off marks will be released along with the result on June 15, 2023, and the candidates who score equal to or more than them will be eligible to appear for the Haryana Super 100 level 2 exam.
-
The merit list for Haryana Super 100 level 1 exam is the list of candidates who have qualified for the next stage. The merit list is prepared by the exam authorities based on the marks obtained by the candidates in the GS paper. The merit list will also be released along with the result on June 15, 2023. The candidates who are included in the merit list will be called for document verification and counselling before being admitted to the coaching centres.
-
Conclusion
-
In this article, we have covered everything you need to know about Haryana Super 100 level 1 admit card download. We have also provided you with some useful information about the exam date, timing, venue, pattern, syllabus, marking scheme, preparation tips, do's and don'ts, result, cut-off marks, and merit list. We hope that this article has helped you to clear your doubts and queries regarding the exam. We wish you all the best for your exam and future endeavours. Remember, you have the potential to achieve your dreams and goals. Just work hard, stay focused, and be confident. You can do it!
-
FAQs
-
Here are some frequently asked questions related to Haryana Super 100 level 1 admit card download:
-
Q1. What is the official website of Haryana Super 100 scheme?
-
A1. The official website of Haryana Super 100 scheme is www.haryanasuper100.com. You can visit this website to get all the latest updates and information about the scheme.
-
Q2. How can I contact the exam authorities if I have any issue or query regarding the exam?
-
A2. You can contact the exam authorities through the following modes:
-
-
Email: haryanasuper100@gmail.com
-
Phone: 0172-2560206
-
WhatsApp: 9416010101
-
Address: Directorate of School Education, Haryana, Shiksha Sadan, Sector-5, Panchkula-134105
-
-
Q3. What if I forget my registration number, SRN, WhatsApp number, or Aadhaar number?
-
A3. If you forget any of these details, you can retrieve them by using the "Forgot Details" option on the admit card download page. Enter your name and date of birth and click on the "Submit" button; your details will be sent to your registered email ID or mobile number.
-
Q4. What if I find any discrepancy or error in my admit card?
-
A4. If you find any discrepancy or error in your admit card, such as a spelling mistake, a wrong photograph, or incorrect details, you should immediately report it to the exam authorities by email, phone, WhatsApp, or in person, and get it rectified before the exam.
-
Q5. Can I change my exam centre after downloading my admit card?
-
A5. No, you cannot change your exam centre after downloading your admit card. The exam centre once allotted is final and binding. No request for change of exam centre will be entertained by the exam authorities under any circumstances.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download LiveZilla and Chat with Your Website Visitors in Real Time.md b/spaces/1phancelerku/anime-remove-background/Download LiveZilla and Chat with Your Website Visitors in Real Time.md
deleted file mode 100644
index f9063ff302a26f88a2f38073d97711b4da3f7747..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download LiveZilla and Chat with Your Website Visitors in Real Time.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
How to Download LiveZilla
-
If you are looking for a way to connect with your website visitors in real-time and provide them with live assistance, you might want to consider using LiveZilla. LiveZilla is a customer service platform that includes a live chat software, a visitor monitoring tool, and a help desk system. In this article, we will show you how to download and install LiveZilla on Windows and Linux, and how to use it to boost your sales and customer satisfaction.
-
What is LiveZilla and why you need it
-
LiveZilla is software that allows you to chat with your website visitors, monitor their behavior, and integrate their messages from email, Twitter, and Facebook into your ticket system. With LiveZilla, you can:
-
Provide real-time support and answer customer queries faster
-
Generate leads and increase conversions by engaging visitors proactively
-
Show off your products in elegant product cards and make more sales while chatting
-
Qualify leads online with custom forms and AI chatbot automations
-
Track sales and business goals automatically to see how chats boost revenue and ROI
-
Create a chat experience that your customers know and love
-
Solve customer problems proactively and anticipate their questions
-
Balance AI automation and the human touch
-
Connect LiveZilla with over 200 apps that you use and love
-
-
LiveZilla features and benefits
-
Some of the main features and benefits of LiveZilla are:
-
-
Feature
Benefit
-
Multi-website support
You can manage multiple websites from one interface
-
Geotargeting
You can see where your visitors are coming from and customize your chat accordingly
-
File sharing
You can send and receive files from your visitors during chats
-
Webcam support
You can use video calls to enhance your communication with your visitors
-
Canned responses
You can save common questions and answers and send them in one click
-
Chatbots
You can use AI-powered chatbots to automate conversations, generate leads, create tickets, and more
-
Omnichannel messaging
You can integrate messages from email, Twitter, Facebook, WhatsApp, SMS, etc. into your ticket system
-
Analytics
You can measure the performance of your chats, agents, campaigns, etc. with detailed reports and dashboards
-
Customization
You can customize the look and feel of your chat widget, forms, buttons, etc. to match your brand identity
-
Security
You can keep your data safe with encryption, GDPR compliance, SSL certificates, etc.
-
LiveZilla pricing and plans
-
LiveZilla offers four pricing plans for different business needs and budgets. You can choose from:
-
-
Free: This plan is for personal use or small businesses that need up to 2 operators and 1 chatbot. It includes unlimited chats, tickets, and websites, but has limited features and support.
-
One: This plan costs $9.90 per month per operator and includes up to 5 chatbots. It has all the features of the Free plan, plus more customization, analytics, integrations, and support options.
-
Business: This plan costs $19.90 per month per operator and includes up to 10 chatbots. It has all the features of the One plan, plus more advanced features such as video calls, geotargeting, file sharing, etc.
-
Corporate: This plan costs $39.90 per month per operator and includes unlimited chatbots. It has all the features of the Business plan, plus more enterprise-level features such as omnichannel messaging, chatbot builder, AI automation, etc.
-
-
You can also try LiveZilla for free for 30 days with no credit card required. You can cancel or change your plan at any time.
-
How to download and install LiveZilla on Windows
-
If you want to use LiveZilla on Windows, you need to download and install the LiveZilla setup file on your computer and upload the LiveZilla client configuration to your web server. Here are the steps to do that:
-
Step 1: Go to the LiveZilla website and download the setup file
-
Go to https://www.livezilla.net/downloads/en/ and click on the Download button for Windows. You will get a file named LiveZilla_Setup.exe that contains both the server and client components of LiveZilla.
-
Step 2: Run the setup file and follow the instructions
-
Double-click on the LiveZilla_Setup.exe file and follow the instructions on the screen. You will be asked to choose a language, accept the license agreement, select a destination folder, and configure some options. You can leave the default settings or change them according to your preferences.
-
Step 3: Upload the LiveZilla client configuration to your web server
-
After the installation is complete, you will see a folder named _config in your destination folder. This folder contains the files that you need to upload to your web server in order to use LiveZilla on your website. You can use an FTP client or a web hosting control panel to upload these files to your web server. Make sure that you upload them to a subfolder named livezilla under your website root folder.
-
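If you prefer a scripted upload, here is a minimal sketch using Python's built-in ftplib; the host name, credentials, and remote path are placeholders for your own hosting details, not LiveZilla defaults:
-
import os
from ftplib import FTP

LOCAL_DIR = "_config"                    # folder produced by the LiveZilla installer
REMOTE_DIR = "/public_html/livezilla"    # subfolder under your website root

with FTP("ftp.yourdomain.com") as ftp:   # connect to your hosting provider's FTP server
    ftp.login(user="your_ftp_user", passwd="your_ftp_password")
    try:
        ftp.mkd(REMOTE_DIR)              # create the target folder if it is missing
    except Exception:
        pass                             # folder already exists
    for name in os.listdir(LOCAL_DIR):   # upload each configuration file
        path = os.path.join(LOCAL_DIR, name)
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                ftp.storbinary(f"STOR {REMOTE_DIR}/{name}", fh)
-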
Step 4: Start using LiveZilla to chat with your website visitors
-
Now you are ready to use LiveZilla on your website. You can launch the LiveZilla client application from your desktop or start menu and log in with your credentials. You can also access the LiveZilla web interface from any browser by going to https://yourdomain.com/livezilla/. From there, you can manage your chats, tickets, operators, chatbots, etc.
-
-
How to download and install LiveZilla on Linux
-
If you want to use LiveZilla on Linux, you need to download and unzip the LiveZilla zip file on your computer and upload it to your web server. Then you need to create a database for LiveZilla and run the installation wizard from your browser. Here are the steps to do that:
-
Step 1: Go to the LiveZilla website and download the zip file
-
Go to https://www.livezilla.net/downloads/en/ and click on the Download button for Linux. You will get a file named livezilla_8.x.x.zip that contains all the files that you need to run LiveZilla on Linux.
-
Step 2: Unzip the file and upload it to your web server
-
Unzip the livezilla_8.x.x.zip file on your computer using a tool like WinZip or 7-Zip. You will see a folder named livezilla that contains all the files that you need to upload to your web server. You can use an FTP client or a web hosting control panel to upload these files to your web server. Make sure that you upload them under your website root folder.
-
Step 3: Create a database for LiveZilla and grant privileges
-
Before you can run the installation wizard, you need to create a database for LiveZilla and grant the necessary privileges to a user. You can use a tool like phpMyAdmin or the command line to do that. For example, you can use the following commands to create a database named livezilla and a user named livezilla_user with the password livezilla_pass:
-
CREATE DATABASE livezilla;
CREATE USER 'livezilla_user'@'localhost' IDENTIFIED BY 'livezilla_pass';
GRANT ALL PRIVILEGES ON livezilla.* TO 'livezilla_user'@'localhost';
FLUSH PRIVILEGES;
-
Make sure that you replace the database name, user name, and password with your own values.
-
Step 4: Open your web browser and run the installation wizard
-
Now you can open your web browser and go to https://yourdomain.com/livezilla/ to start the installation wizard. You will see a welcome screen that asks you to choose a language and accept the license agreement. Then you will see a screen that asks you to enter the database information that you created in the previous step. After that, you will see a screen that asks you to create an administrator account for LiveZilla. Finally, you will see a screen that confirms that the installation is complete and gives you some options to customize your chat widget, forms, buttons, etc.
-
Step 5: Start using LiveZilla to chat with your website visitors
-
Congratulations! You have successfully installed LiveZilla on Linux. You can now log in to your LiveZilla web interface from any browser by going to https://yourdomain.com/livezilla/. From there, you can manage your chats, tickets, operators, chatbots, etc.
-
Conclusion
-
In this article, we have shown you how to download and install LiveZilla on Windows and Linux, and how to use it to chat with your website visitors and provide customer support. LiveZilla is a powerful and versatile customer service platform that includes a live chat software, a visitor monitoring tool, and a help desk system. It has many features and benefits that can help you increase your sales and customer satisfaction. You can try LiveZilla for free for 30 days or choose from one of its affordable pricing plans. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us or leave a comment below.
-
FAQs
-
Q: How do I update LiveZilla?
-
A: You can update LiveZilla by downloading the latest version from the LiveZilla website and running the setup file on Windows or unzipping the zip file on Linux. The update process will preserve your settings and data.
-
Q: How do I uninstall LiveZilla?
-
A: You can uninstall LiveZilla by deleting the LiveZilla folder from your computer and web server. You can also delete the database that you created for LiveZilla if you don't need it anymore.
-
Q: How do I add more operators or chatbots to LiveZilla?
-
A: You can add more operators or chatbots to LiveZilla by logging in to your LiveZilla web interface and going to User Management. There you can create new operator or chatbot accounts and assign them roles and permissions.
-
Q: How do I customize my chat widget, forms, buttons, etc.?
-
A: You can customize your chat widget, forms, buttons, etc. by logging in to your LiveZilla web interface and going to Link Generator. There you can choose from different templates, colors, sizes, positions, etc. and generate HTML code that you can embed on your website.
-
Q: How do I integrate LiveZilla with other apps?
-
A: You can integrate LiveZilla with other apps by using its API or its Zapier integration. You can find more information about these options on the LiveZilla website or documentation.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Into the Dead Mod APK with Unlimited Ammo and Money.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Into the Dead Mod APK with Unlimited Ammo and Money.md
deleted file mode 100644
index 92e47a70ed60332aa56e6896f61fce0d26c87083..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Into the Dead Mod APK with Unlimited Ammo and Money.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Into the Dead Mod APK: A Zombie Survival Game with Unlimited Ammo and Money
-
If you are looking for a thrilling and immersive zombie game for your Android device, you might want to check out Into the Dead mod APK. This is a modified version of the popular endless runner game by Pikpok, where you have to run for your life in a post-apocalyptic world infested by the undead. In this article, we will tell you how to download and install Into the Dead mod APK on your Android device, what features it offers, and some tips and tricks for playing it.
-
How to Download and Install Into the Dead Mod APK on Android
-
Before you can enjoy Into the Dead mod APK, you need to follow these steps to install it on your device:
-
Allow unknown apps on your device. To do this, go to your device settings and tap on Security. Then, scroll down until you see Unknown sources and check the box. This will let you install apps from sources other than the Google Play Store.
-
Download the mod APK file from a reputable source. You can find many websites that offer Into the Dead mod APK, but be careful as some of them may contain malware or viruses. One of the trusted sources we recommend is Andropalace, where you can download Into the Dead mod APK v2.7.1 with unlimited ammo and money.
-
Install the mod APK using a file manager app. After downloading the mod APK file, you need to locate it on your device using a file manager app. You can use any file manager app you like, such as Cx File Explorer or File Manager. Once you find the mod APK file, tap on it and follow the instructions to install it.
-
-
Congratulations! You have successfully installed Into the Dead mod APK on your Android device. Now you can launch the game and enjoy its features.
-
Features of Into the Dead Mod APK
-
Into the Dead mod APK offers many features that make it more fun and exciting than the original game. Here are some of them:
-
-
-
Unlimited ammo and money. With this feature, you don't have to worry about running out of bullets or coins in the game. You can shoot as many zombies as you want and buy any weapon or perk you like.
-
All weapons unlocked. With this feature, you can access all the weapons in the game without having to complete missions or challenges. You can choose from pistols, shotguns, chainsaws, grenades, miniguns, and more.
-
No ads. With this feature, you can enjoy the game without any annoying interruptions or distractions. You can focus on running and killing zombies without having to watch ads or buy premium items.
-
-
Tips and Tricks for Playing Into the Dead Mod APK
-
Into the Dead mod APK is a simple but challenging game that requires quick reflexes and strategy. Here are some tips and tricks that can help you survive longer and score higher in the game:
-
-
Use headphones for better immersion. The game has excellent sound effects that create a scary and realistic atmosphere. You can hear the zombies growling, the bullets whizzing, and the wind howling. Using headphones can enhance your immersion and help you react faster to the sounds around you.
-
Avoid obstacles and fences. The game is full of obstacles and fences that can slow you down or stop you completely. You need to avoid them as much as possible, as they can make you vulnerable to zombie attacks. You can use the tilt or touch controls to steer left or right, or jump over low obstacles by tapping the screen.
-
Save your weapons for emergencies. The game gives you a weapon at the start of each run, but you have limited ammo and it can run out quickly. You should save your weapons for situations where you are surrounded by zombies or facing a large horde. You can also find ammo crates along the way, but they are rare and random.
-
Complete missions and challenges. The game has various missions and challenges that you can complete to earn coins and unlock new weapons and perks. Some of the missions include killing a certain number of zombies, running a certain distance, or using a specific weapon. Some of the challenges include running in fog, running at night, or running with no weapons.
-
-
Pros and Cons of Into the Dead Mod APK
-
Into the Dead mod APK is a fun and addictive game that can keep you entertained for hours. However, it also has some drawbacks that you should be aware of. Here are some of the pros and cons of Into the Dead mod APK:
-
-
-
Pros
-
Cons
-
-
-
Simple controls. The game has easy and intuitive controls that anyone can master. You can choose between tilt or touch controls, depending on your preference.
-
Repetitive levels. The game has only one mode and one map, which can get boring after a while. The levels are randomly generated, but they have little variation in terms of scenery and layout.
-
-
-
Atmospheric graphics. The game has stunning graphics that create a dark and eerie mood. The game uses realistic lighting and shadows, fog effects, and blood splatters to enhance the horror theme.
-
Lack of story. The game has no story or plot, which makes it feel shallow and meaningless. You don't know why you are running, where you are going, or what happened to the world.
-
-
-
Addictive gameplay. The game has a simple but addictive gameplay that keeps you hooked. You always want to run farther, kill more zombies, and unlock more weapons and perks.
-
Potential security risks. The game is a modded version of the original game, which means it may contain malware or viruses that can harm your device or steal your data. You should always download mod APKs from trusted sources and scan them before installing them.
-
-
-
Conclusion
-
Into the Dead mod APK is a zombie survival game that offers unlimited ammo and money, all weapons unlocked, and no ads. It is a modified version of the original game by Pikpok, where you have to run for your life in a post-apocalyptic world infested by the undead. It has simple controls, atmospheric graphics, and addictive gameplay that make it a great choice for zombie fans. However, it also has some drawbacks, such as repetitive levels, lack of story, and potential security risks. You should always be careful when downloading and installing mod APKs on your device.
-
If you are interested in playing Into the Dead mod APK, you can download it from Andropalace and follow the steps we mentioned above to install it on your Android device. We hope you enjoy this game and have fun killing zombies!
-
Frequently Asked Questions
-
Here are some of the common questions that people ask about Into the Dead mod APK:
-
Q: Is Into the Dead mod APK safe to use?
-
A: Into the Dead mod APK is generally safe to use if you download it from a reputable source like Andropalace. However, there is always a risk of malware or viruses when downloading mod APKs from unknown sources. You should always scan the mod APK file before installing it on your device.
-
Q: How do I update Into the Dead mod APK?
-
A: To update Into the Dead mod APK, you need to download the latest version of the mod APK file from Andropalace or another trusted source. Then, you need to uninstall the previous version of the mod APK from your device and install the new version using a file manager app.
-
Q: Can I play Into the Dead mod APK online with other players?
-
A: No, Into the Dead mod APK is an offline game that does not require an internet connection to play. You can only play it solo on your device.
-
Q: What are the perks in Into the Dead mod APK?
-
A: Perks are special abilities that you can use in the game to enhance your performance. You can buy perks with coins or unlock them by completing missions or challenges. Some of the perks include faster reload, longer sprint, more health, and more ammo.
-
Q: What are the best weapons in Into the Dead mod APK?
-
A: The best weapons in Into the Dead mod APK depend on your preference and playstyle. However, some of the most powerful and popular weapons are the chainsaw, the minigun, the grenade launcher, and the crossbow.
-
-
\ No newline at end of file
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py
deleted file mode 100644
index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import json
-import logging
-import math
-import os
-import time
-from contextlib import suppress
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-from open_clip import LPLoss, LPMetrics, lp_gather_features
-from open_clip.utils import do_mixup, get_mix_lambda
-from .distributed import is_master
-from .zero_shot import zero_shot_eval
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-def unwrap_model(model):
- if hasattr(model, "module"):
- return model.module
- else:
- return model
-
-
-def train_one_epoch(
- model,
- data,
- epoch,
- optimizer,
- scaler,
- scheduler,
- args,
- tb_writer=None,
- extra_suffix="",
-):
- device = torch.device(args.device)
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- model.train()
- loss = LPLoss(args.lp_loss)
-
- dataloader, sampler = data["train"].dataloader, data["train"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- num_batches_per_epoch = dataloader.num_batches
- sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
- # for toy dataset
- if args.dataset_type == "toy":
- dataloader.dataset.generate_queue()
-
- loss_m = AverageMeter()
- batch_time_m = AverageMeter()
- data_time_m = AverageMeter()
- end = time.time()
-
- for i, batch in enumerate(dataloader):
- step = num_batches_per_epoch * epoch + i
-
- if isinstance(scheduler, dict):
- for s in scheduler.values():
- s(step)
- else:
- scheduler(step)
-
-        audio = batch  # contains mel_spec, waveform, and longer list
- class_label = batch["class_label"]
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- if args.mixup:
- # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146
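-            # class labels are mixed here with per-sample weights; the waveforms are
-            # mixed inside the model, which receives the same mix_lambda below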
- mix_lambda = torch.from_numpy(
- get_mix_lambda(0.5, len(audio["waveform"]))
- ).to(device)
- class_label = do_mixup(class_label, mix_lambda)
- else:
- mix_lambda = None
-
- data_time_m.update(time.time() - end)
- if isinstance(optimizer, dict):
- for o_ in optimizer.values():
- o_.zero_grad()
- else:
- optimizer.zero_grad()
-
- with autocast():
- pred = model(audio, mix_lambda=mix_lambda, device=device)
- total_loss = loss(pred, class_label)
-
- if isinstance(optimizer, dict):
- if scaler is not None:
- scaler.scale(total_loss).backward()
- for o_ in optimizer.values():
- if args.horovod:
- o_.synchronize()
- scaler.unscale_(o_)
- with o_.skip_synchronize():
- scaler.step(o_)
- else:
- scaler.step(o_)
- scaler.update()
- else:
- total_loss.backward()
- for o_ in optimizer.values():
- o_.step()
- else:
- if scaler is not None:
- scaler.scale(total_loss).backward()
- if args.horovod:
- optimizer.synchronize()
- scaler.unscale_(optimizer)
- with optimizer.skip_synchronize():
- scaler.step(optimizer)
- else:
- scaler.step(optimizer)
- scaler.update()
- else:
- total_loss.backward()
- optimizer.step()
-
- # Note: we clamp to 4.6052 = ln(100), as in the original paper.
- with torch.no_grad():
- unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100))
- unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100))
-
- batch_time_m.update(time.time() - end)
- end = time.time()
- batch_count = i + 1
-
- if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
- if isinstance(audio, dict):
- batch_size = len(audio["waveform"])
- else:
- batch_size = len(audio)
- num_samples = batch_count * batch_size * args.world_size
- samples_per_epoch = dataloader.num_samples
- percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
- # NOTE loss is coarsely sampled, just master node and per log update
- loss_m.update(total_loss.item(), batch_size)
- if isinstance(optimizer, dict):
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": optimizer.param_groups[0]["lr"],
- }
- for name, val in log_data.items():
- name = f"train{extra_suffix}/{name}"
- if tb_writer is not None:
- tb_writer.add_scalar(name, val, step)
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- wandb.log({name: val, "step": step})
-
- # resetting batch / data time meters per log window
- batch_time_m.reset()
- data_time_m.reset()
- # end for
-
-
-def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""):
- metrics = {}
- if not args.parallel_eval:
- if not is_master(args):
- return metrics
- device = torch.device(args.device)
- model.eval()
-
- # CHANGE
- # zero_shot_metrics = zero_shot_eval(model, data, epoch, args)
- # metrics.update(zero_shot_metrics)
- if is_master(args):
- print("Evaluating...")
- metric_names = args.lp_metrics.split(",")
- eval_tool = LPMetrics(metric_names=metric_names)
-
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- if "val" in data and (
- args.val_frequency
- and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)
- ):
- if args.parallel_eval:
- dataloader, sampler = data["val"].dataloader, data["val"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- samples_per_val = dataloader.num_samples
- else:
- dataloader = data["val"].dataloader
- num_samples = 0
- samples_per_val = dataloader.num_samples
-
- eval_info = {"pred": [], "target": []}
- with torch.no_grad():
- for i, batch in enumerate(dataloader):
-                audio = batch  # contains mel_spec, waveform, and longer list
- class_label = batch["class_label"]
-
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- with autocast():
- pred = model(audio, device=device)
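-                    # under parallel evaluation, gather predictions and targets from
-                    # all ranks so the master node can score the full validation set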
- if args.parallel_eval:
- pred, class_label = lp_gather_features(
- pred, class_label, args.world_size, args.horovod
- )
- eval_info["pred"].append(pred)
- eval_info["target"].append(class_label)
-
- num_samples += class_label.shape[0]
-
- if (i % 100) == 0: # and i != 0:
- logging.info(
- f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]"
- )
-
- if is_master(args):
- eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu()
- eval_info["target"] = torch.cat(eval_info["target"], 0).cpu()
- metric_dict = eval_tool.evaluate_mertics(
- eval_info["pred"], eval_info["target"]
- )
- metrics.update(metric_dict)
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
-
- if is_master(args):
- if not metrics:
- return metrics
-
- logging.info(
- f"Eval Epoch: {epoch} "
- + "\n".join(
- ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics]
- )
- )
- if args.save_logs:
- for name, val in metrics.items():
- if tb_writer is not None:
- tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch)
-
- with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f:
- f.write(json.dumps(metrics))
- f.write("\n")
-
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- for name, val in metrics.items():
- wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch})
-
- return metrics
- else:
- return metrics
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/inference.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/inference.py
deleted file mode 100644
index 49dc75f740aec7be287eab70bae1f7677ccc4662..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/pytorch/inference.py
+++ /dev/null
@@ -1,206 +0,0 @@
-import os
-import sys
-sys.path.insert(1, os.path.join(sys.path[0], '../utils'))
-import numpy as np
-import argparse
-import librosa
-import matplotlib.pyplot as plt
-import torch
-
-from utilities import create_folder, get_filename
-from models import *
-from pytorch_utils import move_data_to_device
-import config
-
-def audio_tagging(args):
-    """Infer the audio tagging result of an audio clip.
- """
-
-    # Arguments & parameters
- sample_rate = args.sample_rate
- window_size = args.window_size
- hop_size = args.hop_size
- mel_bins = args.mel_bins
- fmin = args.fmin
- fmax = args.fmax
- model_type = args.model_type
- checkpoint_path = args.checkpoint_path
- audio_path = args.audio_path
- device = torch.device('cuda') if args.cuda and torch.cuda.is_available() else torch.device('cpu')
-
- classes_num = config.classes_num
- labels = config.labels
-
- # Model
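-    # eval() maps the model_type string (e.g. "Cnn14") to the class of that name
-    # imported from models; pass only trusted values, since eval executes arbitrary code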
- Model = eval(model_type)
- model = Model(sample_rate=sample_rate, window_size=window_size,
- hop_size=hop_size, mel_bins=mel_bins, fmin=fmin, fmax=fmax,
- classes_num=classes_num)
-
- checkpoint = torch.load(checkpoint_path, map_location=device)
- model.load_state_dict(checkpoint['model'])
-
- # Parallel
- if 'cuda' in str(device):
- model.to(device)
- print('GPU number: {}'.format(torch.cuda.device_count()))
- model = torch.nn.DataParallel(model)
- else:
- print('Using CPU.')
-
- # Load audio
- (waveform, _) = librosa.core.load(audio_path, sr=sample_rate, mono=True)
-
- waveform = waveform[None, :] # (1, audio_length)
- waveform = move_data_to_device(waveform, device)
-
- # Forward
- with torch.no_grad():
- model.eval()
- batch_output_dict = model(waveform, None)
-
- clipwise_output = batch_output_dict['clipwise_output'].data.cpu().numpy()[0]
- """(classes_num,)"""
-
- sorted_indexes = np.argsort(clipwise_output)[::-1]
-
- # Print audio tagging top probabilities
- for k in range(10):
- print('{}: {:.3f}'.format(np.array(labels)[sorted_indexes[k]],
- clipwise_output[sorted_indexes[k]]))
-
- # Print embedding
- if 'embedding' in batch_output_dict.keys():
- embedding = batch_output_dict['embedding'].data.cpu().numpy()[0]
- print('embedding: {}'.format(embedding.shape))
-
- return clipwise_output, labels
-
-
-def sound_event_detection(args):
-    """Infer the sound event detection result of an audio clip.
- """
-
-    # Arguments & parameters
- sample_rate = args.sample_rate
- window_size = args.window_size
- hop_size = args.hop_size
- mel_bins = args.mel_bins
- fmin = args.fmin
- fmax = args.fmax
- model_type = args.model_type
- checkpoint_path = args.checkpoint_path
- audio_path = args.audio_path
- device = torch.device('cuda') if args.cuda and torch.cuda.is_available() else torch.device('cpu')
-
- classes_num = config.classes_num
- labels = config.labels
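-    # with the default arguments below (sample_rate=32000, hop_size=320) this is 100 frames per second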
- frames_per_second = sample_rate // hop_size
-
- # Paths
- fig_path = os.path.join('results', '{}.png'.format(get_filename(audio_path)))
- create_folder(os.path.dirname(fig_path))
-
- # Model
- Model = eval(model_type)
- model = Model(sample_rate=sample_rate, window_size=window_size,
- hop_size=hop_size, mel_bins=mel_bins, fmin=fmin, fmax=fmax,
- classes_num=classes_num)
-
- checkpoint = torch.load(checkpoint_path, map_location=device)
- model.load_state_dict(checkpoint['model'])
-
- # Parallel
- print('GPU number: {}'.format(torch.cuda.device_count()))
- model = torch.nn.DataParallel(model)
-
- if 'cuda' in str(device):
- model.to(device)
-
- # Load audio
- (waveform, _) = librosa.core.load(audio_path, sr=sample_rate, mono=True)
-
- waveform = waveform[None, :] # (1, audio_length)
- waveform = move_data_to_device(waveform, device)
-
- # Forward
- with torch.no_grad():
- model.eval()
- batch_output_dict = model(waveform, None)
-
- framewise_output = batch_output_dict['framewise_output'].data.cpu().numpy()[0]
- """(time_steps, classes_num)"""
-
- print('Sound event detection result (time_steps x classes_num): {}'.format(
- framewise_output.shape))
-
- sorted_indexes = np.argsort(np.max(framewise_output, axis=0))[::-1]
-
- top_k = 10 # Show top results
- top_result_mat = framewise_output[:, sorted_indexes[0 : top_k]]
- """(time_steps, top_k)"""
-
- # Plot result
- stft = librosa.core.stft(y=waveform[0].data.cpu().numpy(), n_fft=window_size,
- hop_length=hop_size, window='hann', center=True)
- frames_num = stft.shape[-1]
-
- fig, axs = plt.subplots(2, 1, sharex=True, figsize=(10, 4))
- axs[0].matshow(np.log(np.abs(stft)), origin='lower', aspect='auto', cmap='jet')
- axs[0].set_ylabel('Frequency bins')
- axs[0].set_title('Log spectrogram')
- axs[1].matshow(top_result_mat.T, origin='upper', aspect='auto', cmap='jet', vmin=0, vmax=1)
- axs[1].xaxis.set_ticks(np.arange(0, frames_num, frames_per_second))
- axs[1].xaxis.set_ticklabels(np.arange(0, frames_num / frames_per_second))
- axs[1].yaxis.set_ticks(np.arange(0, top_k))
- axs[1].yaxis.set_ticklabels(np.array(labels)[sorted_indexes[0 : top_k]])
- axs[1].yaxis.grid(color='k', linestyle='solid', linewidth=0.3, alpha=0.3)
- axs[1].set_xlabel('Seconds')
- axs[1].xaxis.set_ticks_position('bottom')
-
- plt.tight_layout()
- plt.savefig(fig_path)
- print('Save sound event detection visualization to {}'.format(fig_path))
-
- return framewise_output, labels
-
-
-if __name__ == '__main__':
-
- parser = argparse.ArgumentParser(description='Example of parser. ')
- subparsers = parser.add_subparsers(dest='mode')
-
- parser_at = subparsers.add_parser('audio_tagging')
- parser_at.add_argument('--sample_rate', type=int, default=32000)
- parser_at.add_argument('--window_size', type=int, default=1024)
- parser_at.add_argument('--hop_size', type=int, default=320)
- parser_at.add_argument('--mel_bins', type=int, default=64)
- parser_at.add_argument('--fmin', type=int, default=50)
- parser_at.add_argument('--fmax', type=int, default=14000)
- parser_at.add_argument('--model_type', type=str, required=True)
- parser_at.add_argument('--checkpoint_path', type=str, required=True)
- parser_at.add_argument('--audio_path', type=str, required=True)
- parser_at.add_argument('--cuda', action='store_true', default=False)
-
- parser_sed = subparsers.add_parser('sound_event_detection')
- parser_sed.add_argument('--sample_rate', type=int, default=32000)
- parser_sed.add_argument('--window_size', type=int, default=1024)
- parser_sed.add_argument('--hop_size', type=int, default=320)
- parser_sed.add_argument('--mel_bins', type=int, default=64)
- parser_sed.add_argument('--fmin', type=int, default=50)
- parser_sed.add_argument('--fmax', type=int, default=14000)
- parser_sed.add_argument('--model_type', type=str, required=True)
- parser_sed.add_argument('--checkpoint_path', type=str, required=True)
- parser_sed.add_argument('--audio_path', type=str, required=True)
- parser_sed.add_argument('--cuda', action='store_true', default=False)
-
- args = parser.parse_args()
-
- if args.mode == 'audio_tagging':
- audio_tagging(args)
-
- elif args.mode == 'sound_event_detection':
- sound_event_detection(args)
-
- else:
-        raise ValueError("Unknown mode: expected 'audio_tagging' or 'sound_event_detection'.")
\ No newline at end of file
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/text_encoders.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/text_encoders.py
deleted file mode 100644
index b49c5603afa2d41ad6e0145b719443f0f4ce9301..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/text_encoders.py
+++ /dev/null
@@ -1,160 +0,0 @@
-from . import text_cleaners
-from typing import Dict, List, Optional
-from .constants import ALL_POSSIBLE_HARAQAT
-import sentencepiece as spm
-
-
-class TextEncoder:
- pad = "P"
-
- def __init__(
- self,
- input_chars: List[str],
- target_charts: List[str],
- cleaner_fn: Optional[str] = None,
- reverse_input: bool = False,
- reverse_target: bool = False,
- sp_model_path=None,
- ):
- if cleaner_fn:
- self.cleaner_fn = getattr(text_cleaners, cleaner_fn)
- else:
- self.cleaner_fn = None
-
- self.input_symbols: List[str] = [TextEncoder.pad] + input_chars
- self.target_symbols: List[str] = [TextEncoder.pad] + target_charts
-
- if sp_model_path is None:
- self.input_symbol_to_id: Dict[str, int] = {
- s: i for i, s in enumerate(self.input_symbols)
- }
- self.input_id_to_symbol: Dict[int, str] = {
- i: s for i, s in enumerate(self.input_symbols)
- }
- else:
- sp_model = spm.SentencePieceProcessor()
- sp_model.load(sp_model_path + "/sp.model")
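-            # '▁' is sentencepiece's word-boundary meta symbol; it is appended so each
-            # character matches the piece form this sp model was trained with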
- self.input_symbol_to_id: Dict[str, int] = {
- s: sp_model.PieceToId(s+'▁') for s in self.input_symbols
- }
- self.input_symbol_to_id[" "] = sp_model.PieceToId("|") # encode space
- self.input_symbol_to_id[TextEncoder.pad] = 0 # encode padding
-
- self.input_space_id = sp_model.PieceToId("|")
- self.input_id_to_symbol: Dict[int, str] = {
- i: s for s, i in self.input_symbol_to_id.items()
- }
-
- self.target_symbol_to_id: Dict[str, int] = {
- s: i for i, s in enumerate(self.target_symbols)
- }
- self.target_id_to_symbol: Dict[int, str] = {
- i: s for i, s in enumerate(self.target_symbols)
- }
-
- self.reverse_input = reverse_input
- self.reverse_target = reverse_target
- self.input_pad_id = self.input_symbol_to_id[self.pad]
- self.target_pad_id = self.target_symbol_to_id[self.pad]
- self.start_symbol_id = None
-
- def input_to_sequence(self, text: str) -> List[int]:
- if self.reverse_input:
- text = "".join(list(reversed(text)))
- sequence = [self.input_symbol_to_id[s] for s in text if s not in [self.pad]]
-
- return sequence
-
- def target_to_sequence(self, text: str) -> List[int]:
- if self.reverse_target:
- text = "".join(list(reversed(text)))
- sequence = [self.target_symbol_to_id[s] for s in text if s not in [self.pad]]
-
- return sequence
-
- def sequence_to_input(self, sequence: List[int]):
- return [
- self.input_id_to_symbol[symbol]
- for symbol in sequence
- if symbol in self.input_id_to_symbol and symbol not in [self.input_pad_id]
- ]
-
- def sequence_to_target(self, sequence: List[int]):
- return [
- self.target_id_to_symbol[symbol]
- for symbol in sequence
- if symbol in self.target_id_to_symbol and symbol not in [self.target_pad_id]
- ]
-
- def clean(self, text):
- if self.cleaner_fn:
- return self.cleaner_fn(text)
- return text
-
- def combine_text_and_haraqat(self, input_ids: List[int], output_ids: List[int]):
- """
- Combines the input text with its corresponding haraqat
- Args:
-            input_ids: a list of ids representing the input text
-            output_ids: a list of ids representing the output text
- Returns:
- text: the text after merging the inputs text representation with the output
- representation
- """
- output = ""
- for i, input_id in enumerate(input_ids):
- if input_id == self.input_pad_id:
- break
- output += self.input_id_to_symbol[input_id]
- # if input_id == self.input_space_id:
- # continue
- output += self.target_id_to_symbol[output_ids[i]]
- return output
-
- def __str__(self):
- return type(self).__name__
-
-
-class BasicArabicEncoder(TextEncoder):
- def __init__(
- self,
- cleaner_fn="basic_cleaners",
- reverse_input: bool = False,
- reverse_target: bool = False,
- sp_model_path=None,
- ):
- input_chars: List[str] = list("بض.غىهظخة؟:طس،؛فندؤلوئآك-يذاصشحزءمأجإ ترقعث")
- target_chars: List[str] = list(ALL_POSSIBLE_HARAQAT.keys())
-
- super().__init__(
- input_chars,
- target_chars,
- cleaner_fn=cleaner_fn,
- reverse_input=reverse_input,
- reverse_target=reverse_target,
- sp_model_path=sp_model_path,
- )
-
-
-class ArabicEncoderWithStartSymbol(TextEncoder):
- def __init__(
- self,
- cleaner_fn="basic_cleaners",
- reverse_input: bool = False,
- reverse_target: bool = False,
- sp_model_path=None,
- ):
- input_chars: List[str] = list("بض.غىهظخة؟:طس،؛فندؤلوئآك-يذاصشحزءمأجإ ترقعث")
- # the only difference from the basic encoder is adding the start symbol
- target_chars: List[str] = list(ALL_POSSIBLE_HARAQAT.keys()) + ["s"]
-
- super().__init__(
- input_chars,
- target_chars,
- cleaner_fn=cleaner_fn,
- reverse_input=reverse_input,
- reverse_target=reverse_target,
- sp_model_path=sp_model_path,
- )
-
- self.start_symbol_id = self.target_symbol_to_id["s"]
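-
-# Usage sketch (illustrative; assumes "basic_cleaners" is defined in
-# text_cleaners and `model_predict` is a hypothetical diacritizer call):
-#
-#   encoder = BasicArabicEncoder()
-#   input_ids = encoder.input_to_sequence(encoder.clean("كتب"))
-#   output_ids = model_predict(input_ids)
-#   print(encoder.combine_text_and_haraqat(input_ids, output_ids))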
diff --git a/spaces/AchyuthGamer/ImMagician/README.md b/spaces/AchyuthGamer/ImMagician/README.md
deleted file mode 100644
index e77f75cb04ab098ec1c3b977a2394ba6ca873e27..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/ImMagician/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ImMagician
-emoji: 🪄
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: true
----
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Move.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Move.js
deleted file mode 100644
index cbc03dbb80c62209945f899c1d906a9f198d36cf..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Move.js
+++ /dev/null
@@ -1,156 +0,0 @@
-import IndexOf from '../../../../plugins/utils/object/IndexOf.js';
-import { GetDisplayWidth, GetDisplayHeight } from '../../../../plugins/utils/size/GetDisplaySize.js';
-import { WaitComplete } from '../../utils/WaitEvent.js';
-
-export default {
- moveChild(child, duration, ease, distance) {
- var key;
- if (typeof (child) === 'string') {
- key = child;
- child = this.sizerChildren[key];
- } else {
- key = IndexOf(this.sizerChildren, child);
- }
-
- if (duration === undefined) {
- duration = 500;
- }
-
- var isShownChild = (this.currentChildKey === key);
-
- if (distance === undefined) {
- switch (key) {
- case 'leftSide':
- case 'rightSide':
- distance = GetDisplayWidth(child);
- break;
- case 'topSide':
- case 'bottomSide':
- distance = GetDisplayHeight(child);
- break;
- default: // 'panel'
- if (isShownChild) { // Show panel
- switch (this.previousChildKey) {
- case 'leftSide':
- case 'rightSide':
- distance = GetDisplayWidth(this.sizerChildren[this.previousChildKey]);
- break;
- case 'topSide':
- case 'bottomSide':
- distance = GetDisplayHeight(this.sizerChildren[this.previousChildKey]);
- break;
- default:
- distance = 0;
- break;
- }
- } else { // Hide panel
- switch (this.currentChildKey) {
- case 'leftSide':
- case 'rightSide':
- distance = GetDisplayWidth(this.sizerChildren[this.currentChildKey]);
- break;
- case 'topSide':
- case 'bottomSide':
- distance = GetDisplayHeight(this.sizerChildren[this.currentChildKey]);
- break;
- default:
- distance = 0;
- break;
- }
- }
- break;
- }
- }
-
- var moveLeft, moveRight, moveUp, moveDown;
- if (isShownChild) {
- switch (key) {
- case 'panel':
- switch (this.previousChildKey) {
- case 'leftSide':
- moveLeft = true;
- break;
- case 'rightSide':
- moveRight = true;
- break;
- case 'topSide':
- moveUp = true;
- break;
- case 'bottomSide':
- moveDown = true;
- break;
- }
- break;
- case 'leftSide':
- moveRight = true;
- break;
- case 'rightSide':
- moveLeft = true;
- break;
- case 'topSide':
- moveDown = true;
- break;
- case 'bottomSide':
- moveUp = true;
- break;
- }
- } else { // Hide
- switch (key) {
- case 'panel':
- switch (this.currentChildKey) {
- case 'leftSide':
- moveRight = true;
- break;
- case 'rightSide':
- moveLeft = true;
- break;
- case 'topSide':
- moveDown = true;
- break;
- case 'bottomSide':
- moveUp = true;
- break;
- }
- break;
- case 'leftSide':
- moveLeft = true;
- break;
- case 'rightSide':
- moveRight = true;
- break;
- case 'topSide':
- moveUp = true;
- break;
- case 'bottomSide':
- moveDown = true;
- break;
- }
- }
-
- if (moveLeft) {
- child.moveTo(duration, `-=${distance}`, undefined, ease);
- } else if (moveRight) {
- child.moveTo(duration, `+=${distance}`, undefined, ease);
- } else if (moveUp) {
- child.moveTo(duration, undefined, `-=${distance}`, ease);
- } else if (moveDown) {
- child.moveTo(duration, undefined, `+=${distance}`, ease);
- } else {
- child.moveTo(0);
- }
- return this;
- },
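-
- // Usage sketch (assuming a sides sizer instance `sides`):
- //   sides.moveChild('leftSide', 300, 'Cubic'); // slide the left panel over 300 ms
- //   sides.moveChildPromise('panel').then(function () { /* move finished */ });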
-
- moveChildPromise(child, duration, ease, distance) {
- if (typeof (child) === 'string') {
- child = this.sizerChildren[key];
- }
- this.moveChild(child, duration, ease, distance);
-
- if (child._easeMove) {
- return WaitComplete(child._easeMove);
- } else {
- return Promise.resolve();
- }
- }
-}
\ No newline at end of file
diff --git a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/app-legacy.8305dfab.js b/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/app-legacy.8305dfab.js
deleted file mode 100644
index 323bf0c7f628dc02c843e9f959082890fc17e855..0000000000000000000000000000000000000000
--- a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/js/app-legacy.8305dfab.js
+++ /dev/null
@@ -1,21 +0,0 @@
-/*!
-
-=========================================================
-* Vue Notus - v1.1.0 based on Tailwind Starter Kit by Creative Tim
-=========================================================
-
-* Product Page: https://www.creative-tim.com/product/vue-notus
-* Copyright 2021 Creative Tim (https://www.creative-tim.com)
-* Licensed under MIT (https://github.com/creativetimofficial/vue-notus/blob/main/LICENSE.md)
-
-* Tailwind Starter Kit Page: https://www.creative-tim.com/learning-lab/tailwind-starter-kit/presentation
-
-* Coded by Creative Tim
-
-=========================================================
-
-* The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-
-*/
-(function(){"use strict";var e={80601:function(e,t,a){a(77726),a(33473),a(92151),a(1286);var l=a(70821),o=a(22201),n={id:"app"};function r(e,t,a,o,r,s){var i=(0,l.resolveComponent)("alert"),c=(0,l.resolveComponent)("router-view");return(0,l.openBlock)(),(0,l.createElementBlock)("div",n,[(0,l.createVNode)(i,{display:r.alertDisplay,text:r.alertText,color:r.alertColor},null,8,["display","text","color"]),(0,l.createVNode)(c)])}var s={key:0,class:"fixed w-full z-50 w-10/12 justify-center items-center flex"},i=(0,l.createElementVNode)("span",{class:"text-xl inline-block mr-5 align-middle"},[(0,l.createElementVNode)("i",{class:"fas fa-bell"})],-1),c={class:"inline-block ml-2 align-middle mr-8"};function d(e,t,a,o,n,r){return a.display?((0,l.openBlock)(),(0,l.createElementBlock)("div",s,[(0,l.createElementVNode)("div",{class:(0,l.normalizeClass)([a.color,"text-white px-6 py-4 border-0 rounded"])},[i,(0,l.createElementVNode)("span",c,(0,l.toDisplayString)(a.text),1)],2)])):(0,l.createCommentVNode)("",!0)}var u={props:{display:Boolean,text:String,color:String}},p=a(83744);const m=(0,p.Z)(u,[["render",d]]);var f=m,b={name:"admin-layout",data:function(){return{alertText:"",alertColor:"",alertDisplay:!1}},components:{Alert:f},provide:function(){return{AlertMethod:this.alertMethod}},methods:{alertMethod:function(e){var t=this,a=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"bg-lightBlue-400",l=arguments.length>2&&void 0!==arguments[2]?arguments[2]:1500;this.alertText=e,this.alertColor=a,this.alertDisplay=!0,setInterval((function(){t.alertDisplay=!1}),l)}}};const h=(0,p.Z)(b,[["render",r]]);var v=h,g={class:"relative bg-blueGray-100"},x={class:"px-4 md:px-10 mx-auto w-full -m-24"};function w(e,t,a,o,n,r){var s=(0,l.resolveComponent)("admin-navbar"),i=(0,l.resolveComponent)("header-stats"),c=(0,l.resolveComponent)("router-view"),d=(0,l.resolveComponent)("footer-admin");return(0,l.openBlock)(),(0,l.createElementBlock)("div",null,[(0,l.createElementVNode)("div",g,[(0,l.createVNode)(s),(0,l.createVNode)(i),(0,l.createElementVNode)("div",x,[(0,l.createVNode)(c),(0,l.createVNode)(d)])])])}var y={class:"absolute top-0 left-0 w-full z-10 bg-transparent md:flex-row md:flex-nowrap md:justify-start flex items-center p-4"},N=(0,l.createElementVNode)("div",{class:"w-full mx-autp items-center flex justify-between md:flex-nowrap flex-wrap md:px-10 px-4"},[(0,l.createElementVNode)("a",{class:"text-white text-sm uppercase hidden lg:inline-block font-semibold",href:"javascript:void(0)"}," ChatGPT-Plugin ")],-1),V=[N];function C(e,t,a,o,n,r){return(0,l.openBlock)(),(0,l.createElementBlock)("nav",y,V)}var k={components:{}};const E=(0,p.Z)(k,[["render",C]]);var T=E,S={class:"relative bg-emerald-600 pb-32 pt-12"},D={class:"px-4 md:px-10 mx-auto w-full"},G={class:"flex flex-wrap"},B={class:"w-full lg:w-6/12 xl:w-3/12 px-4"},U={class:"w-full lg:w-6/12 xl:w-3/12 px-4"},A={class:"w-full lg:w-6/12 xl:w-3/12 px-4"},P={class:"w-full lg:w-6/12 xl:w-3/12 px-4"};function z(e,t,a,o,n,r){var s=(0,l.resolveComponent)("card-stats");return(0,l.openBlock)(),(0,l.createElementBlock)("div",S,[(0,l.createElementVNode)("div",D,[(0,l.createElementVNode)("div",null,[(0,l.createElementVNode)("div",G,[(0,l.createElementVNode)("div",B,[(0,l.createVNode)(s,{statSubtitle:"系统访问量",statTitle:n.SystemAccess.count,statArrow:n.SystemAccess.statArrow,statPercent:n.SystemAccess.statPercent,statPercentColor:"text-emerald-500",statDescripiron:"相比昨日",statIconName:"far 
fa-chart-bar",statIconColor:"bg-red-500"},null,8,["statTitle","statArrow","statPercent"])]),(0,l.createElementVNode)("div",U,[(0,l.createVNode)(s,{statSubtitle:"缓存文件数",statTitle:n.CacheFile.count,statArrow:n.CacheFile.statArrow,statPercent:n.CacheFile.statPercent,statPercentColor:"text-red-500",statDescripiron:"相比昨日",statIconName:"fas fa-chart-pie",statIconColor:"bg-orange-500"},null,8,["statTitle","statArrow","statPercent"])]),(0,l.createElementVNode)("div",A,[(0,l.createVNode)(s,{statSubtitle:"外网访问量",statTitle:n.WebAccess.count,statArrow:n.WebAccess.statArrow,statPercent:n.WebAccess.statPercent,statPercentColor:"text-orange-500",statDescripiron:"相比昨日",statIconName:"fas fa-users",statIconColor:"bg-pink-500"},null,8,["statTitle","statArrow","statPercent"])]),(0,l.createElementVNode)("div",P,[(0,l.createVNode)(s,{statSubtitle:"系统负载",statTitle:n.SystemLoad.count+"%",statArrow:n.SystemLoad.statArrow,statPercent:n.SystemLoad.statPercent,statPercentColor:"text-emerald-500",statDescripiron:"相比一小时前",statIconName:"fas fa-percent",statIconColor:"bg-emerald-500"},null,8,["statTitle","statArrow","statPercent"])])])])])])}a(56977);var M={class:"relative flex flex-col min-w-0 break-words bg-white rounded mb-6 xl:mb-0 shadow-lg"},R={class:"flex-auto p-4"},I={class:"flex flex-wrap"},F={class:"relative w-full pr-4 max-w-full flex-grow flex-1"},L={class:"text-blueGray-400 uppercase font-bold text-xs"},j={class:"font-semibold text-xl text-blueGray-700"},O={class:"relative w-auto pl-4 flex-initial"},Z={class:"text-sm text-blueGray-400 mt-4"},$={class:"whitespace-nowrap"};function q(e,t,a,o,n,r){return(0,l.openBlock)(),(0,l.createElementBlock)("div",M,[(0,l.createElementVNode)("div",R,[(0,l.createElementVNode)("div",I,[(0,l.createElementVNode)("div",F,[(0,l.createElementVNode)("h5",L,(0,l.toDisplayString)(a.statSubtitle),1),(0,l.createElementVNode)("span",j,(0,l.toDisplayString)(a.statTitle),1)]),(0,l.createElementVNode)("div",O,[(0,l.createElementVNode)("div",{class:(0,l.normalizeClass)(["text-white p-3 text-center inline-flex items-center justify-center w-12 h-12 shadow-lg rounded-full",[a.statIconColor]])},[(0,l.createElementVNode)("i",{class:(0,l.normalizeClass)([a.statIconName])},null,2)],2)])]),(0,l.createElementVNode)("p",Z,[(0,l.createElementVNode)("span",{class:(0,l.normalizeClass)(["mr-2",[a.statPercentColor]])},[(0,l.createElementVNode)("i",{class:(0,l.normalizeClass)(["up"===a.statArrow?"fas fa-arrow-up":"fas fa-arrow-down"])},null,2),(0,l.createTextVNode)(" "+(0,l.toDisplayString)(a.statPercent)+"% ",1)],2),(0,l.createElementVNode)("span",$,(0,l.toDisplayString)(a.statDescripiron),1)])])])}var W={name:"card-stats",props:{statSubtitle:{type:String,default:"Traffic"},statTitle:{type:String,default:"350,897"},statArrow:{default:"up",validator:function(e){return-1!==["up","down"].indexOf(e)}},statPercent:{type:String,default:"3.48"},statPercentColor:{type:String,default:"text-emerald-500"},statDescripiron:{type:String,default:"Since last month"},statIconName:{type:String,default:"far fa-chart-bar"},statIconColor:{type:String,default:"bg-red-500"}}};const _=(0,p.Z)(W,[["render",q]]);var Y=_,X=a(6154),H={data:function(){return{SystemAccess:{count:0,statArrow:"up",statPercent:0},CacheFile:{count:0,statArrow:"up",statPercent:0},WebAccess:{count:0,statArrow:"up",statPercent:0},SystemLoad:{count:0,statArrow:"up",statPercent:0}}},components:{CardStats:Y},created:function(){this.getData()},methods:{getData:function(){var 
e=this;X.Z.post("".concat(window.location.origin,"/system-statistics")).then((function(t){e.SystemAccess={count:t.data.SystemAccess.count,statArrow:t.data.SystemAccess.count>t.data.SystemAccess.oldCount?"up":"down",statPercent:Math.abs((t.data.SystemAccess.count-t.data.SystemAccess.oldCount)/t.data.SystemAccess.oldCount>0?t.data.SystemAccess.oldCount:1)},e.CacheFile={count:t.data.CacheFile.count,statArrow:t.data.CacheFile.count>t.data.CacheFile.oldCount?"up":"down",statPercent:Math.abs((t.data.CacheFile.count-t.data.CacheFile.oldCount)/t.data.CacheFile.oldCount>0?t.data.CacheFile.oldCount:1)},e.WebAccess={count:t.data.WebAccess.count,statArrow:t.data.WebAccess.count>t.data.WebAccess.oldCount?"up":"down",statPercent:Math.abs((t.data.WebAccess.count-t.data.WebAccess.oldCount)/t.data.WebAccess.oldCount>0?t.data.WebAccess.oldCount:1)},e.SystemLoad={count:t.data.SystemLoad.count.toFixed(2),statArrow:t.data.SystemLoad.count>t.data.SystemLoad.oldCount?"up":"down",statPercent:Math.abs((t.data.SystemLoad.count-t.data.SystemLoad.oldCount)/t.data.SystemLoad.oldCount>0?t.data.SystemLoad.oldCount:1)}})).catch((function(e){console.log(e)}))}}};const K=(0,p.Z)(H,[["render",z]]);var Q=K,J={class:"block py-4"},ee={class:"container mx-auto px-4"},te=(0,l.createElementVNode)("hr",{class:"mb-4 border-b-1 border-blueGray-200"},null,-1),ae={class:"flex flex-wrap items-center md:justify-between justify-center"},le={class:"w-full md:w-4/12 px-4"},oe={class:"text-sm text-blueGray-500 font-semibold py-1 text-center md:text-left"},ne=(0,l.createElementVNode)("a",{href:"https://github.com/ikechan8370/chatgpt-plugin",class:"text-blueGray-500 hover:text-blueGray-700 text-sm font-semibold py-1"}," chatgpt-plugin ",-1),re=(0,l.createStaticVNode)('
',2),hi=[bi];function vi(e,t){return(0,l.openBlock)(),(0,l.createElementBlock)("div",fi,hi)}const gi={},xi=(0,p.Z)(gi,[["render",vi]]);var wi=xi,yi={name:"statistics-page",components:{AdminNavbar:T,HeaderStats:Q,FooterAdmin:de,CardLineChart:Je,CardPageVisits:Tt,CardSocialTraffic:wi}};const Ni=(0,p.Z)(yi,[["render",mi]]);var Vi=Ni,Ci=a(42104),ki=a.n(Ci),Ei=a(31986),Ti=a.n(Ei),Si=a(58043),Di=a(27543),Gi=a(35245),Bi=a(23375),Ui=a(28325),Ai=a.n(Ui);a(24335),a(15251),a(35433),a(49299),a(39980),a(86405),a(68758),a(35249),a(85795),a(47231),a(42273),a(44852),a(77533),a(35266),a(72594),a(18508),a(31093),a(25691),a(4279),a(2731),a(51849),a(73253),a(24029),a(57874),a(73358),a(24064),a(2481),a(10856),a(79016),a(54019),a(36972),a(36430),a(92776),a(24940),a(58060),a(639),a(84126),a(94446),a(53292),a(46428),a(27308),a(86043),a(69104),a(97861),a(24115),a(50331),a(15827),a(21275),a(76609),a(61354),a(86902),a(64681),a(4677),a(99114),a(5798),a(52812),a(44225),a(57649),a(46213),a(29467),a(4412),a(25867),a(74307),a(59385),a(18980),a(80871),a(97899),a(2946),a(30258),a(58149),a(57065),a(73162),a(90827),a(24370),a(40728),a(96854),a(54409),a(68483),a(77158),a(60397),a(68232),a(22456),a(59979),a(70060),a(68805),a(75041),a(66841),a(79958),a(66512),a(8956),a(51039),a(75045),a(50171),a(10427),a(6634),a(9220),a(27915),a(72778),a(71828),a(91709),a(28407),a(65276),a(66857),a(51315),a(49472),a(79787),a(79812),a(1415),a(47362),a(27046),a(77346),a(31565),a(17117),a(40485),a(37802),a(92447),a(60075),a(39181),a(70110),a(81295),a(14324),a(24677),a(5578),a(88161),a(26203),a(17786),a(74277),a(65503),a(50057),a(77460),a(54263),a(90175),a(16150),a(10880),a(56521),a(29525),a(48942),a(18848),a(52503),a(99945),a(54884),a(12886),a(52008),a(81454),a(55314),a(68874),a(96342),a(38885),a(96836),a(68915),a(88651),a(46690),a(22444),a(64488),a(81917),a(56543),a(71643),a(82821),a(32334),a(69486),a(31634),a(90319),a(87442),a(51412),a(61719),a(150),a(45520),a(76347),a(85153),a(93335),a(26555),a(6004),a(48443),a(86268),a(61169),a(33965),a(16185),a(23099),a(16554),a(15101),a(89134),a(80676),a(61899),a(55949),a(80454),a(17898),a(52353),a(77661),a(677),a(33436),a(35743),a(58704),a(74876),a(11426),a(24371),a(35577),a(13144),a(85513),a(903),a(47511),a(40780),a(13210),a(54332),a(70942),a(52892),a(74984),a(20288),a(26280),a(89425),a(79457),a(92927),a(63887),a(86862),a(97353),a(43932),a(17929),a(45820),a(37345),a(24906),a(71429),a(93381),a(24319),a(9753),a(92168),a(89485),a(80366),a(26896),a(82939),a(84891),a(94933),a(54803),a(24540),a(63326),a(62356),a(21029),a(28439),a(2040),a(38512),a(50096),a(76577),a(40998),a(94840),a(23449),a(70767),a(71384),a(89865),a(42963),a(10509),a(22738),a(89281),a(9983),a(30893),a(37485),a(84435),a(68092),a(71327),a(612),a(83113),a(34229),a(65683),a(12788),a(55689),a(8571),a(90874),a(48598),a(89239),a(20601),a(65398),a(16241),a(46193),a(1607),a(37838),a(9930),a(84315),a(14032),a(10196),a(52467),a(14641),a(30035),a(70981),a(47251),a(38564),a(34438),a(83082),a(10008),a(5774),a(64040),a(10230),a(31693),a(99729),a(45682),a(10504),a(62349),a(22449),a(19938),a(2982),a(857);ki().use(Ti(),{Prism:Ai()}),ki().use((0,Si.Z)()),ki().use((0,Di.Z)()),ki().use((0,Gi.Z)()),ki().use((0,Bi.Z)());var 
Pi=[{path:"/admin",redirect:"/admin/dashboard",component:me,children:[{path:"/admin/dashboard",component:Ht},{path:"/admin/settings",component:So}]},{path:"/auth",redirect:"/auth/login",component:xe,children:[{path:"/auth/login",component:_o}]},{path:"/page/",component:ps},{path:"/page/:code",component:ur},{path:"/help/",component:Qr},{path:"/help/:use",component:Qr},{path:"/statistics/",component:Vi},{path:"/version",component:Fs},{path:"/",component:ni}],zi=(0,o.p7)({history:(0,o.PO)(),routes:Pi});(0,l.createApp)(v).use(zi).use(ki()).mount("#app")}},t={};function a(l){var o=t[l];if(void 0!==o)return o.exports;var n=t[l]={id:l,loaded:!1,exports:{}};return e[l].call(n.exports,n,n.exports,a),n.loaded=!0,n.exports}a.m=e,function(){a.amdO={}}(),function(){var e=[];a.O=function(t,l,o,n){if(!l){var r=1/0;for(d=0;d=n)&&Object.keys(a.O).every((function(e){return a.O[e](l[i])}))?l.splice(i--,1):(s=!1,n0&&e[d-1][2]>n;d--)e[d]=e[d-1];e[d]=[l,o,n]}}(),function(){a.n=function(e){var t=e&&e.__esModule?function(){return e["default"]}:function(){return e};return a.d(t,{a:t}),t}}(),function(){a.d=function(e,t){for(var l in t)a.o(t,l)&&!a.o(e,l)&&Object.defineProperty(e,l,{enumerable:!0,get:t[l]})}}(),function(){a.g=function(){if("object"===typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(e){if("object"===typeof window)return window}}()}(),function(){a.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)}}(),function(){a.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})}}(),function(){a.nmd=function(e){return e.paths=[],e.children||(e.children=[]),e}}(),function(){a.p="/"}(),function(){var e={143:0};a.O.j=function(t){return 0===e[t]};var t=function(t,l){var o,n,r=l[0],s=l[1],i=l[2],c=0;if(r.some((function(t){return 0!==e[t]}))){for(o in s)a.o(s,o)&&(a.m[o]=s[o]);if(i)var d=i(a)}for(t&&t(l);c \
-# indir= \
-# outdir=
-
-import logging
-import os
-import sys
-import traceback
-
-from saicinpainting.evaluation.utils import move_to_device
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-import cv2
-import hydra
-import numpy as np
-import torch
-import tqdm
-import yaml
-from omegaconf import OmegaConf
-from torch.utils.data._utils.collate import default_collate
-
-from saicinpainting.training.data.datasets import make_default_val_dataset
-from saicinpainting.training.trainers import load_checkpoint
-from saicinpainting.utils import register_debug_signal_handlers
-
-LOGGER = logging.getLogger(__name__)
-
-
-@hydra.main(config_path='configs/prediction', config_name='default.yaml')
-def main(predict_config: OmegaConf):
- try:
- register_debug_signal_handlers() # kill -10 will result in traceback dumped into log
-
- device = torch.device(predict_config.device)
-
- train_config_path = os.path.join(predict_config.model.path, 'config.yaml')
- with open(train_config_path, 'r') as f:
- train_config = OmegaConf.create(yaml.safe_load(f))
-
- train_config.training_model.predict_only = True
-
- out_ext = predict_config.get('out_ext', '.png')
-
- checkpoint_path = os.path.join(predict_config.model.path,
- 'models',
- predict_config.model.checkpoint)
- model = load_checkpoint(train_config, checkpoint_path, strict=False, map_location='cpu')
- model.freeze()
- model.to(device)
-
- if not predict_config.indir.endswith('/'):
- predict_config.indir += '/'
-
- dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset)
- with torch.no_grad():
- for img_i in tqdm.trange(len(dataset)):
- mask_fname = dataset.mask_filenames[img_i]
- cur_out_fname = os.path.join(
- predict_config.outdir,
- os.path.splitext(mask_fname[len(predict_config.indir):])[0] + out_ext
- )
- os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
-
- batch = move_to_device(default_collate([dataset[img_i]]), device)
- batch['mask'] = (batch['mask'] > 0) * 1
- batch = model(batch)
- cur_res = batch[predict_config.out_key][0].permute(1, 2, 0).detach().cpu().numpy()
-
- cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
- cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR)
- cv2.imwrite(cur_out_fname, cur_res)
- except KeyboardInterrupt:
- LOGGER.warning('Interrupted by user')
- except Exception as ex:
- LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
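-
-# Example invocation (a sketch; exact paths depend on your setup):
-#   python3 bin/predict.py model.path=<checkpoint dir> indir=<input images> outdir=<output dir>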
diff --git a/spaces/AlexZou/Deploy_Restoration/net/Ushape_Trans.py b/spaces/AlexZou/Deploy_Restoration/net/Ushape_Trans.py
deleted file mode 100644
index 867e234e877b6bd5917535f9f256639b5a2d093b..0000000000000000000000000000000000000000
--- a/spaces/AlexZou/Deploy_Restoration/net/Ushape_Trans.py
+++ /dev/null
@@ -1,378 +0,0 @@
-# -*- coding: utf-8 -*-
-# @Author : Lintao Peng
-# @File : Ushape_Trans.py
-# coding=utf-8
-# Design based on the pix2pix
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch
-import datetime
-import os
-import time
-import timeit
-import copy
-import numpy as np
-from torch.nn import ModuleList
-from torch.nn import Conv2d
-from torch.nn import LeakyReLU
-from net.block import *
-from net.block import _equalized_conv2d
-from net.SGFMT import TransformerModel
-from net.PositionalEncoding import FixedPositionalEncoding,LearnedPositionalEncoding
-from net.CMSFFT import ChannelTransformer
-
-
-
-
-
-
-
-## weight initialization
-def weights_init_normal(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
- elif classname.find("BatchNorm2d") != -1:
- torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
- torch.nn.init.constant_(m.bias.data, 0.0)
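-
-# Usage sketch: nn.Module.apply walks all submodules recursively, e.g.
-#   gen = Generator()
-#   gen.apply(weights_init_normal)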
-
-
-
-
-
-
-class Generator(nn.Module):
- """
- Generator part of the MSG-Unet-GAN
- """
- def __init__(self,
- img_dim=256,
- patch_dim=16,
- embedding_dim=512,
- num_channels=3,
- num_heads=8,
- num_layers=4,
- hidden_dim=256,
- dropout_rate=0.0,
- attn_dropout_rate=0.0,
- in_ch=3,
- out_ch=3,
- conv_patch_representation=True,
- positional_encoding_type="learned",
- use_eql=True):
- super(Generator, self).__init__()
- assert embedding_dim % num_heads == 0
- assert img_dim % patch_dim == 0
-
- self.out_ch=out_ch #number of output channels
- self.in_ch=in_ch #number of input channels
- self.img_dim = img_dim #input image size
- self.embedding_dim = embedding_dim #512
- self.num_heads = num_heads #number of heads in multi-head attention
- self.patch_dim = patch_dim #size of each patch
- self.num_channels = num_channels #number of image channels?
- self.dropout_rate = dropout_rate #dropout rate
- self.attn_dropout_rate = attn_dropout_rate #dropout rate of the attention module
- self.conv_patch_representation = conv_patch_representation #True
-
- self.num_patches = int((img_dim // patch_dim) ** 2) #number of patches the image is split into
- self.seq_length = self.num_patches #sequence length equals the number of patches
- self.flatten_dim = 128 * num_channels #128*3=384
-
- #linear encoding
- self.linear_encoding = nn.Linear(self.flatten_dim, self.embedding_dim)
- #positional encoding
- if positional_encoding_type == "learned":
- self.position_encoding = LearnedPositionalEncoding(
- self.seq_length, self.embedding_dim, self.seq_length
- )
- elif positional_encoding_type == "fixed":
- self.position_encoding = FixedPositionalEncoding(
- self.embedding_dim,
- )
-
- self.pe_dropout = nn.Dropout(p=self.dropout_rate)
-
- self.transformer = TransformerModel(
- embedding_dim, #512
- num_layers, #4
- num_heads, #8
- hidden_dim, #feed-forward hidden dim
- self.dropout_rate,
- self.attn_dropout_rate,
- )
-
- #layer Norm
- self.pre_head_ln = nn.LayerNorm(embedding_dim)
-
- if self.conv_patch_representation:
-
- self.Conv_x = nn.Conv2d(
- 256,
- self.embedding_dim, #512
- kernel_size=3,
- stride=1,
- padding=1
- )
-
- self.bn = nn.BatchNorm2d(256)
- self.relu = nn.ReLU(inplace=True)
-
-
-
- #modulelist
- self.rgb_to_feature=ModuleList([from_rgb(32),from_rgb(64),from_rgb(128)])
- self.feature_to_rgb=ModuleList([to_rgb(32),to_rgb(64),to_rgb(128),to_rgb(256)])
-
- self.Maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
- self.Maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
- self.Maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
- self.Maxpool3 = nn.MaxPool2d(kernel_size=2, stride=2)
- self.Maxpool4 = nn.MaxPool2d(kernel_size=2, stride=2)
-
- self.Conv1=conv_block(self.in_ch, 16)
- self.Conv1_1 = conv_block(16, 32)
- self.Conv2 = conv_block(32, 32)
- self.Conv2_1 = conv_block(32, 64)
- self.Conv3 = conv_block(64,64)
- self.Conv3_1 = conv_block(64,128)
- self.Conv4 = conv_block(128,128)
- self.Conv4_1 = conv_block(128,256)
-
- self.Conv5 = conv_block(512,256)
-
- #self.Conv_x = conv_block(256,512)
- self.mtc = ChannelTransformer(channel_num=[32,64,128,256],
- patchSize=[32, 16, 8, 4])
-
-
- self.Up5 = up_conv(256, 256)
- self.coatt5 = CCA(F_g=256, F_x=256)
- self.Up_conv5 = conv_block(512, 256)
- self.Up_conv5_1 = conv_block(256, 256)
-
- self.Up4 = up_conv(256, 128)
- self.coatt4 = CCA(F_g=128, F_x=128)
- self.Up_conv4 = conv_block(256, 128)
- self.Up_conv4_1 = conv_block(128, 128)
-
- self.Up3 = up_conv(128, 64)
- self.coatt3 = CCA(F_g=64, F_x=64)
- self.Up_conv3 = conv_block(128, 64)
- self.Up_conv3_1 = conv_block(64, 64)
-
- self.Up2 = up_conv(64, 32)
- self.coatt2 = CCA(F_g=32, F_x=32)
- self.Up_conv2 = conv_block(64, 32)
- self.Up_conv2_1 = conv_block(32, 32)
-
- self.Conv = nn.Conv2d(32, self.out_ch, kernel_size=1, stride=1, padding=0)
-
- # self.active = torch.nn.Sigmoid()
- #
- def reshape_output(self,x): #reshape the transformer output back to the original feature-map size
- x = x.view(
- x.size(0),
- int(self.img_dim / self.patch_dim),
- int(self.img_dim / self.patch_dim),
- self.embedding_dim,
- )#B,16,16,512
- x = x.permute(0, 3, 1, 2).contiguous()
-
- return x
-
- def forward(self, x):
- #print(x.shape)
-
-
- output=[]
-
- x_1=self.Maxpool(x)
- x_2=self.Maxpool(x_1)
- x_3=self.Maxpool(x_2)
-
-
- e1 = self.Conv1(x)
- #print(e1.shape)
- e1 = self.Conv1_1(e1)
- e2 = self.Maxpool1(e1)
- #32*128*128
-
- x_1=self.rgb_to_feature[0](x_1)
- #e2=torch.cat((x_1,e2), dim=1)
- e2=x_1+e2
- e2 = self.Conv2(e2)
- e2 = self.Conv2_1(e2)
- e3 = self.Maxpool2(e2)
- #64*64*64
-
- x_2=self.rgb_to_feature[1](x_2)
- #e3=torch.cat((x_2,e3), dim=1)
- e3=x_2+e3
- e3 = self.Conv3(e3)
- e3 = self.Conv3_1(e3)
- e4 = self.Maxpool3(e3)
- #128*32*32
-
- x_3=self.rgb_to_feature[2](x_3)
- #e4=torch.cat((x_3,e4), dim=1)
- e4=x_3+e4
- e4 = self.Conv4(e4)
- e4 = self.Conv4_1(e4)
- e5 = self.Maxpool4(e4)
- #256*16*16
-
- #channel-wise transformer-based attention
- e1,e2,e3,e4,att_weights = self.mtc(e1,e2,e3,e4)
-
-
-
-
- #spatial-wise transformer-based attention
- residual=e5
- #intermediate latent variables
- #Conv_x takes 256 channels and outputs a 512-channel latent
- e5= self.bn(e5)
- e5=self.relu(e5)
- e5= self.Conv_x(e5) #out->512*16*16 shape->B,512,16,16
- e5= e5.permute(0, 2, 3, 1).contiguous() # B,512,16,16->B,16,16,512
- e5= e5.view(e5.size(0), -1, self.embedding_dim) #B,16,16,512->B,16*16,512 linear projection
- e5= self.position_encoding(e5) #positional encoding
- e5= self.pe_dropout(e5) #pre-dropout layer
- # apply transformer
- e5= self.transformer(e5)
- e5= self.pre_head_ln(e5)
- e5= self.reshape_output(e5)#out->512*16*16 shape->B,512,16,16
- e5=self.Conv5(e5) #out->256,16,16 shape->B,256,16,16
- #should the residual path also get BN and ReLU?
- e5=e5+residual
-
-
-
- d5 = self.Up5(e5)
- e4_att = self.coatt5(g=d5, x=e4)
- d5 = torch.cat((e4_att, d5), dim=1)
- d5 = self.Up_conv5(d5)
- d5 = self.Up_conv5_1(d5)
- #256
- out3=self.feature_to_rgb[3](d5)
- output.append(out3)#32*32 or H/8,W/8
-
- d4 = self.Up4(d5)
- e3_att = self.coatt4(g=d4, x=e3)
- d4 = torch.cat((e3_att, d4), dim=1)
- d4 = self.Up_conv4(d4)
- d4 = self.Up_conv4_1(d4)
- #128
- out2=self.feature_to_rgb[2](d4)
- output.append(out2)#64*64 or H/4,W/4
-
- d3 = self.Up3(d4)
- e2_att = self.coatt3(g=d3, x=e2)
- d3 = torch.cat((e2_att, d3), dim=1)
- d3 = self.Up_conv3(d3)
- d3 = self.Up_conv3_1(d3)
- #64
- out1=self.feature_to_rgb[1](d3)
- output.append(out1)#128*128 or H/2,W/2
-
- d2 = self.Up2(d3)
- e1_att = self.coatt2(g=d2, x=e1)
- d2 = torch.cat((e1_att, d2), dim=1)
- d2 = self.Up_conv2(d2)
- d2 = self.Up_conv2_1(d2)
- #32
- out0=self.feature_to_rgb[0](d2)
- output.append(out0)#256*256
-
- #out = self.Conv(d2)
-
- #d1 = self.active(out)
- #output=np.array(output)
-
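- # output collects four RGB maps from coarse (H/8) to fine (H);
- # only the full-resolution prediction output[3] is returned here.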
- return output[3]
-
-
-
-
-class Discriminator(nn.Module):
- def __init__(self, in_channels=3,use_eql=True):
- super(Discriminator, self).__init__()
-
- self.use_eql=use_eql
- self.in_channels=in_channels
-
-
- #modulelist
- self.rgb_to_feature1=ModuleList([from_rgb(32),from_rgb(64),from_rgb(128)])
- self.rgb_to_feature2=ModuleList([from_rgb(32),from_rgb(64),from_rgb(128)])
-
-
- self.layer=_equalized_conv2d(self.in_channels*2, 64, (1, 1), bias=True)
- # pixel_wise feature normalizer:
- self.pixNorm = PixelwiseNorm()
- # leaky_relu:
- self.lrelu = LeakyReLU(0.2)
-
-
- self.layer0=DisGeneralConvBlock(64,64,use_eql=self.use_eql)
- #128*128*32
-
- self.layer1=DisGeneralConvBlock(128,128,use_eql=self.use_eql)
- #64*64*64
-
- self.layer2=DisGeneralConvBlock(256,256,use_eql=self.use_eql)
- #32*32*128
-
- self.layer3=DisGeneralConvBlock(512,512,use_eql=self.use_eql)
- #16*16*256
-
- self.layer4=DisFinalBlock(512,use_eql=self.use_eql)
- #8*8*512
-
-
-
- def forward(self, img_A, inputs):
- #inputs图片尺寸从小到大
- # Concatenate image and condition image by channels to produce input
- #img_input = torch.cat((img_A, img_B), 1)
- #img_A_128= F.interpolate(img_A, size=[128, 128])
- #img_A_64= F.interpolate(img_A, size=[64, 64])
- #img_A_32= F.interpolate(img_A, size=[32, 32])
-
-
- x=torch.cat((img_A[3], inputs[3]), 1)
- y = self.pixNorm(self.lrelu(self.layer(x)))
-
- y=self.layer0(y)
- #128*128*64
-
-
- x1=self.rgb_to_feature1[0](img_A[2])
- x2=self.rgb_to_feature2[0](inputs[2])
- x=torch.cat((x1,x2),1)
- y=torch.cat((x,y),1)
- y=self.layer1(y)
- #64*64*128
-
-
- x1=self.rgb_to_feature1[1](img_A[1])
- x2=self.rgb_to_feature2[1](inputs[1])
- x=torch.cat((x1,x2),1)
- y=torch.cat((x,y),1)
- y=self.layer2(y)
- #32*32*256
-
- x1=self.rgb_to_feature1[2](img_A[0])
- x2=self.rgb_to_feature2[2](inputs[0])
- x=torch.cat((x1,x2),1)
- y=torch.cat((x,y),1)
- y=self.layer3(y)
- #16*16*512
-
- y=self.layer4(y)
- #8*8*512
-
- return y
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py
deleted file mode 100644
index 6cf676ce37c1eaf8428c4094e749f862182cb0c3..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/ONNXVITS_modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from ONNXVITS_transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
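- # WaveNet-style gated activation: tanh over one half of the channels
- # times sigmoid over the other half, fused into a single kernel.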
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
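- # For y = log(x), dy/dx = 1/x, so log|det J| = sum(-log x) = sum(-y),
- # which is exactly the `logdet` computed below.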
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
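- # Affine coupling step: split x into (x0, x1) and transform
- # x1 -> m(x0) + x1 * exp(logs(x0)), giving log|det J| = sum(logs);
- # the reverse pass inverts this exactly, as implemented below.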
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
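-
- # Each half-channel carries 3 * num_bins - 1 spline parameters:
- # num_bins widths, num_bins heights, and num_bins - 1 interior knot
- # derivatives (boundary derivatives come from the 'linear' tails).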
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
deleted file mode 100644
index c94575903bdf2eef71ecbe66382375552446e510..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
+++ /dev/null
@@ -1,17 +0,0 @@
-#include "libipc/pool_alloc.h"
-
-#include "libipc/memory/resource.h"
-
-namespace ipc {
-namespace mem {
-
-void* pool_alloc::alloc(std::size_t size) {
- return async_pool_alloc::alloc(size);
-}
-
-void pool_alloc::free(void* p, std::size_t size) {
- async_pool_alloc::free(p, size);
-}
-
-} // namespace mem
-} // namespace ipc
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py
deleted file mode 100644
index e87d21a4e6a241f5af892eb11aa82e2c6012a31c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './faster_rcnn_r50_fpn_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index 4a78a252a9a49889c288ec6cb7d8114c78da5c57..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './ms_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/approx_max_iou_assigner.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/approx_max_iou_assigner.py
deleted file mode 100644
index 6d07656d173744426795c81c14c6bcdb4e63a406..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/approx_max_iou_assigner.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .max_iou_assigner import MaxIoUAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class ApproxMaxIoUAssigner(MaxIoUAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
- Each proposal will be assigned an integer indicating the ground truth
- index. (semi-positive index: gt label (0-based), -1: background)
-
- - -1: negative sample, no assigned gt
- - semi-positive integer: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_iou_thr (float): IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
- min_pos_iou (float): Minimum iou for a bbox to be considered as a
- positive bbox. Positive samples can have smaller IoU than
- pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool): Whether to assign all bboxes with the same
- highest overlap with some gt to that gt.
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
- `gt_bboxes_ignore` is specified). Negative values mean not
- ignoring any bboxes.
- ignore_wrt_candidates (bool): Whether to compute the iof between
- `bboxes` and `gt_bboxes_ignore`, or the contrary.
- match_low_quality (bool): Whether to allow low-quality matches. This is
- usually allowed for RPN and single stage detectors, but not allowed
- in the second stage.
- gpu_assign_thr (int): The upper bound of the number of GT for GPU
- assign. When the number of gt is above this threshold, will assign
- on CPU device. Negative values mean not assign on CPU.
- """
-
- def __init__(self,
- pos_iou_thr,
- neg_iou_thr,
- min_pos_iou=.0,
- gt_max_assign_all=True,
- ignore_iof_thr=-1,
- ignore_wrt_candidates=True,
- match_low_quality=True,
- gpu_assign_thr=-1,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_iou_thr = pos_iou_thr
- self.neg_iou_thr = neg_iou_thr
- self.min_pos_iou = min_pos_iou
- self.gt_max_assign_all = gt_max_assign_all
- self.ignore_iof_thr = ignore_iof_thr
- self.ignore_wrt_candidates = ignore_wrt_candidates
- self.gpu_assign_thr = gpu_assign_thr
- self.match_low_quality = match_low_quality
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def assign(self,
- approxs,
- squares,
- approxs_per_octave,
- gt_bboxes,
- gt_bboxes_ignore=None,
- gt_labels=None):
- """Assign gt to approxs.
-
- This method assigns a gt bbox to each group of approxs (bboxes);
- each group of approxs is represented by a base approx (bbox) and
- will be assigned -1 or a semi-positive number.
- background_label (-1) means negative sample,
- a semi-positive number is the index (0-based) of the assigned gt.
- The assignment is done in the following steps; the order matters.
-
- 1. assign every bbox to background_label (-1)
- 2. use the max IoU of each group of approxs for assignment
- 3. assign proposals whose iou with all gts < neg_iou_thr to background
- 4. for each bbox, if the iou with its nearest gt >= pos_iou_thr,
- assign it to that bbox
- 5. for each gt bbox, assign its nearest proposals (may be more than
- one) to itself
-
- Args:
- approxs (Tensor): Bounding boxes to be assigned,
- shape(approxs_per_octave*n, 4).
- squares (Tensor): Base Bounding boxes to be assigned,
- shape(n, 4).
- approxs_per_octave (int): number of approxs per octave
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_squares = squares.size(0)
- num_gts = gt_bboxes.size(0)
-
- if num_squares == 0 or num_gts == 0:
- # No predictions and/or truth, return empty assignment
- overlaps = approxs.new(num_gts, num_squares)
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- return assign_result
-
- # re-organize anchors by approxs_per_octave x num_squares
- approxs = torch.transpose(
- approxs.view(num_squares, approxs_per_octave, 4), 0,
- 1).contiguous().view(-1, 4)
- assign_on_cpu = True if (self.gpu_assign_thr > 0) and (
- num_gts > self.gpu_assign_thr) else False
- # compute overlap and assign gt on CPU when number of GT is large
- if assign_on_cpu:
- device = approxs.device
- approxs = approxs.cpu()
- gt_bboxes = gt_bboxes.cpu()
- if gt_bboxes_ignore is not None:
- gt_bboxes_ignore = gt_bboxes_ignore.cpu()
- if gt_labels is not None:
- gt_labels = gt_labels.cpu()
- all_overlaps = self.iou_calculator(approxs, gt_bboxes)
-
- overlaps, _ = all_overlaps.view(approxs_per_octave, num_squares,
- num_gts).max(dim=0)
- overlaps = torch.transpose(overlaps, 0, 1)
-
- if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
- and gt_bboxes_ignore.numel() > 0 and squares.numel() > 0):
- if self.ignore_wrt_candidates:
- ignore_overlaps = self.iou_calculator(
- squares, gt_bboxes_ignore, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
- else:
- ignore_overlaps = self.iou_calculator(
- gt_bboxes_ignore, squares, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=0)
- overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1
-
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- if assign_on_cpu:
- assign_result.gt_inds = assign_result.gt_inds.to(device)
- assign_result.max_overlaps = assign_result.max_overlaps.to(device)
- if assign_result.labels is not None:
- assign_result.labels = assign_result.labels.to(device)
- return assign_result
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/pascal_context_59.py
deleted file mode 100644
index 37585abab89834b95cd5bdd993b994fca1db65f6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/pascal_context_59.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# dataset settings
-dataset_type = 'PascalContextDataset59'
-data_root = 'data/VOCdevkit/VOC2010/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-img_scale = (520, 520)
-crop_size = (480, 480)
-
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', reduce_zero_label=True),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/train.txt',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/val.txt',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/val.txt',
- pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fpn_r50.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fpn_r50.py
deleted file mode 100644
index 86ab327db92e44c14822d65f1c9277cb007f17c1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fpn_r50.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 1, 1),
- strides=(1, 2, 2, 2),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=4),
- decode_head=dict(
- type='FPNHead',
- in_channels=[256, 256, 256, 256],
- in_index=[0, 1, 2, 3],
- feature_strides=[4, 8, 16, 32],
- channels=128,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 0f2e1b6da7e63841f4429b1caed5fbe9d537c4f8..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dnl_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_160k_ade20k.py
deleted file mode 100644
index dff4fea85ced568c38d39408d459697e88ca0faa..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './fcn_hr18_512x512_160k_ade20k.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w48',
- backbone=dict(
- extra=dict(
- stage2=dict(num_channels=(48, 96)),
- stage3=dict(num_channels=(48, 96, 192)),
- stage4=dict(num_channels=(48, 96, 192, 384)))),
- decode_head=dict(
- in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/train_gmflow.sh b/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/train_gmflow.sh
deleted file mode 100644
index 048c7c2ace97b9769bc040e3f5ce51f528eab02e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/gmflow_module/scripts/train_gmflow.sh
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env bash
-
-# GMFlow without refinement
-
-# number of gpus for training, please set according to your hardware
-# (NUM_GPUS defaults to 4 below; the model can be trained on 4x 16GB V100,
-# 2x 32GB V100 or 2x 40GB A100 gpus)
-NUM_GPUS=4
-
-# chairs
-CHECKPOINT_DIR=checkpoints/chairs-gmflow && \
-mkdir -p ${CHECKPOINT_DIR} && \
-python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} --master_port=9989 main.py \
---launcher pytorch \
---checkpoint_dir ${CHECKPOINT_DIR} \
---batch_size 16 \
---val_dataset chairs sintel kitti \
---lr 4e-4 \
---image_size 384 512 \
---padding_factor 16 \
---upsample_factor 8 \
---with_speed_metric \
---val_freq 10000 \
---save_ckpt_freq 10000 \
---num_steps 100000 \
-2>&1 | tee -a ${CHECKPOINT_DIR}/train.log
-
-# things (our final model is trained for 800K iterations; for an ablation study you can train for 200K)
-CHECKPOINT_DIR=checkpoints/things-gmflow && \
-mkdir -p ${CHECKPOINT_DIR} && \
-python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} --master_port=9989 main.py \
---launcher pytorch \
---checkpoint_dir ${CHECKPOINT_DIR} \
---resume checkpoints/chairs-gmflow/step_100000.pth \
---stage things \
---batch_size 8 \
---val_dataset things sintel kitti \
---lr 2e-4 \
---image_size 384 768 \
---padding_factor 16 \
---upsample_factor 8 \
---with_speed_metric \
---val_freq 40000 \
---save_ckpt_freq 50000 \
---num_steps 800000 \
-2>&1 | tee -a ${CHECKPOINT_DIR}/train.log
-
-# sintel
-CHECKPOINT_DIR=checkpoints/sintel-gmflow && \
-mkdir -p ${CHECKPOINT_DIR} && \
-python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} --master_port=9989 main.py \
---launcher pytorch \
---checkpoint_dir ${CHECKPOINT_DIR} \
---resume checkpoints/things-gmflow/step_800000.pth \
---stage sintel \
---batch_size 8 \
---val_dataset sintel kitti \
---lr 2e-4 \
---image_size 320 896 \
---padding_factor 16 \
---upsample_factor 8 \
---with_speed_metric \
---val_freq 20000 \
---save_ckpt_freq 20000 \
---num_steps 200000 \
-2>&1 | tee -a ${CHECKPOINT_DIR}/train.log
-
-# kitti
-CHECKPOINT_DIR=checkpoints/kitti-gmflow && \
-mkdir -p ${CHECKPOINT_DIR} && \
-python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} --master_port=9989 main.py \
---launcher pytorch \
---checkpoint_dir ${CHECKPOINT_DIR} \
---resume checkpoints/sintel-gmflow/step_200000.pth \
---stage kitti \
---batch_size 8 \
---val_dataset kitti \
---lr 2e-4 \
---image_size 320 1152 \
---padding_factor 16 \
---upsample_factor 8 \
---with_speed_metric \
---val_freq 10000 \
---save_ckpt_freq 10000 \
---num_steps 100000 \
-2>&1 | tee -a ${CHECKPOINT_DIR}/train.log
-
-
-# a final note: if your training is terminated unexpectedly, you can resume from the latest checkpoint
-# an example: resume chairs training
-# CHECKPOINT_DIR=checkpoints/chairs-gmflow && \
-# mkdir -p ${CHECKPOINT_DIR} && \
-# python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} --master_port=9989 main.py \
-# --launcher pytorch \
-# --checkpoint_dir ${CHECKPOINT_DIR} \
-# --resume checkpoints/chairs-gmflow/checkpoint_latest.pth \
-# --batch_size 16 \
-# --val_dataset chairs sintel kitti \
-# --lr 4e-4 \
-# --image_size 384 512 \
-# --padding_factor 16 \
-# --upsample_factor 8 \
-# --with_speed_metric \
-# --val_freq 10000 \
-# --save_ckpt_freq 10000 \
-# --num_steps 100000 \
-# 2>&1 | tee -a ${CHECKPOINT_DIR}/train.log
-
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/transforms.py b/spaces/Artrajz/vits-simple-api/bert_vits2/transforms.py
deleted file mode 100644
index 12dd72776a6b788932329da169155a289eb645bc..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/transforms.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
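-# Vectorized bin lookup: counts how many bin edges each input meets or
-# exceeds (the eps nudge keeps the final right edge inclusive), matching
-# torch.searchsorted semantics for these monotone bin locations.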
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
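-        # Invert the spline analytically: solve a*root**2 + b*root + c = 0
-        # for root in [0, 1] using the numerically stable form
-        # 2c / (-b - sqrt(b^2 - 4ac)), which avoids catastrophic cancellation.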
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/AutoBG/Auto-BoardGame/Stream_to_Output/GameCleaner.py b/spaces/AutoBG/Auto-BoardGame/Stream_to_Output/GameCleaner.py
deleted file mode 100644
index 9d31ff2a2924d705b2645ac488be8a8c942a2212..0000000000000000000000000000000000000000
--- a/spaces/AutoBG/Auto-BoardGame/Stream_to_Output/GameCleaner.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import pandas as pd
-import numpy as np
-import re
-import nltk
-from nltk.corpus import stopwords
-from gensim.parsing import preprocess_string, strip_tags, strip_numeric, strip_multiple_whitespaces, stem_text, strip_punctuation, remove_stopwords
-import spacy
-from langdetect import detect
-import pickle
-import gzip
-nltk.download('stopwords')
-
-#function definitions
-
-#strips values out of encoded stream lists
-def text_col_cleaner(frame, cols, pattern):
-
- pattern = re.compile(pattern)
-
- for col in cols:
- frame[col] = frame[col].map(lambda x: [re.findall(pattern,val)[0].strip() for val in x], na_action='ignore')
- return frame
-
-#converts specified columns to one-hot
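-# e.g. a row with mechanic == ['Dice Rolling', 'Set Collection'] gains the
-# one-hot columns mechanic_Dice Rolling = 1 and mechanic_Set Collection = 1
-# (illustrative category values)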
-def encode_columns(frame):
- targets = list(frame.columns)
- for t in targets:
- one_hot = pd.get_dummies(frame[t].apply(pd.Series).stack(),prefix=t).groupby(level=0).sum()
- frame = pd.concat([frame,one_hot],axis=1)
- return frame
-
-#custom text processor for tokenizing descriptions by Kuan Chen & Nick Canu
-def doc_text_preprocessing(ser):
-    """Text processing steps: lemmatize, strip noise, and tokenize descriptions."""
-    nlp = spacy.load("en_core_web_sm", exclude=['parser','ner','textcat'])
-
- stop_words=set(stopwords.words('english'))
- stop_words.update(['game','player','players','games', 'also',
- 'description','publisher'])
-
-    single_letter_replace=lambda c: re.sub(r"\s+\w{1}\s+|\n|-|—", '', c)
- to_lower_func=lambda c: c.lower()
-
- lemma_text=[preprocess_string(
- ' '.join([token.lemma_ for token in desc]
- ),[remove_stopwords,strip_numeric,strip_punctuation,strip_tags,
- strip_multiple_whitespaces,single_letter_replace,to_lower_func]
- ) for desc in ser.apply(lambda x: nlp(x))]
-
- tokenize_text=[[word for word in string if word not in stop_words] for string in lemma_text]
-
- return tokenize_text
-
-# performs English-language detection on the descriptions with langdetect, then drops games whose names contain non-English characters
-def lang_cleanup(frame):
- nlp=spacy.load("en_core_web_sm")
- frame['description']=frame['description'].fillna('no words')
- frame = frame[frame['description']!='no words']
- frame['cleaned_descriptions']=doc_text_preprocessing(frame['description'])
-
- detected_lang = []
- for word in frame.cleaned_descriptions:
- word=', '.join(word)
- detected_lang.append(detect(word))
- frame['lang'] = detected_lang
- frame = frame[frame['lang']=='en']
-
- non_eng_title_filter = frame['name'].str.contains('[^\x00-\x7f]', flags=re.IGNORECASE)
- return frame[~non_eng_title_filter]
-
-
-#column name stripper for creating key values
-def column_fixer(frame,targ):
- return [col.replace(targ, "").strip('"') for col in frame.columns if col.startswith(targ)]
-
-#creates key list for defining web app lists & nlp tokens of the same unknown input search
-def key_collator(frame):
- nlp=spacy.load("en_core_web_sm")
- fam = column_fixer(frame,'family_')
- gt = column_fixer(frame,'game_type_')
- mec = column_fixer(frame,'mechanic_')
- cat = column_fixer(frame,'category_')
-
- current_keys = (['cooperative'],gt,mec,cat,fam)
-
- fam_keys = [nlp(w) for w in fam]
- gt_keys = [nlp(w) for w in gt]
- mec_keys = [nlp(w) for w in mec]
- cat_keys = [nlp(w) for w in cat]
-
- search_tokens = (gt_keys,mec_keys,cat_keys,fam_keys)
-
- return current_keys, search_tokens
-
-
-#-----------
-
-#reading in raw file & removing unranked and compilation game items
-df = pd.read_json(r'./bgg_GameItem.jl', lines=True)
-df['rank'] = df['rank'].fillna(0).astype(int)
-df = df[(df['rank']>0) & (df['compilation']!=1)]
-
-#separating and cleaning the one-hot target columns
-in_df = text_col_cleaner(frame = df[['game_type','mechanic','category','family']],
-                         cols = ['game_type','mechanic','category','family'],
-                         pattern = re.compile(r"([\S ]+)(?=:)"))
-
-print('Text has been cleaned, now encoding one-hot columns')
-
-#encoding one-hot columns and rejoining to features for output
-proc_df = encode_columns(in_df)
-step = df[['name','description','cooperative']]
-join_df = pd.concat([step,proc_df.drop(['game_type','mechanic','category','family',
- 'game_type_Amiga','game_type_Arcade','game_type_Atari ST',
- 'game_type_Commodore 64'],axis=1)],axis=1)
-
-print('Columns encoded, now performing english language detection and cleanup')
-
-#english language detection steps & first data save
-eng_df = lang_cleanup(join_df)
-eng_df = eng_df.loc[:,~eng_df.columns.duplicated()].copy().reset_index(drop=True).fillna(0)
-
-print('Creating vector-only dataframe & saving output')
-
-#vector only data for operations
-vector_df = eng_df.copy().drop(['name','description','cleaned_descriptions','lang'],axis=1)
-
-eng_df.to_parquet('game_data.parquet.gzip',compression='gzip')
-vector_df.to_parquet('game_vectors.parquet.gzip',compression='gzip')
-
-print('Creating key lists')
-
-#creating key lists - 1. string list of values by feature class for defining input selections & 2. nlp processed list for unknown input search
-keys, search_toks = key_collator(vector_df)
-
-with gzip.open("current_keys.gz", "wb") as f:
- pickle.dump(keys, f)
-f.close()
-
-with gzip.open("key_search_tokens.gz", "wb") as f:
- pickle.dump(search_toks, f)
-f.close()
-
-print('File creation is complete')
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Burbuja Bruja Saga 3 Apk.md b/spaces/Benson/text-generation/Examples/Descargar Burbuja Bruja Saga 3 Apk.md
deleted file mode 100644
index 01af863030232382e39fb2242326de7d04ebbb2d..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Burbuja Bruja Saga 3 Apk.md
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-# Download MU Origin 3 APK: A Guide for Android Users
-
-If you are a fan of MMORPGs, you may have heard of MU Origin 3, a popular mobile game that offers a rich, immersive fantasy world. In this article, we will tell you everything you need to know about MU Origin 3 and how to download its APK file for your Android device.
-
-MU Origin 3 is a mobile MMORPG based on the classic PC game MU Online. It is developed by Fingerfun and published by Webzen, the same company that created the original MU franchise. MU Origin 3 is the third installment in the MU Origin series and features improved graphics, gameplay, and content.
-
-## Features of MU Origin 3
-
-MU Origin 3 has many features that make it stand out from other mobile MMORPGs. Here are some of them:
-
-### Stunning graphics and effects
-
-MU Origin 3 boasts next-generation graphics and effects that will make you feel as though you are in a real fantasy world. You can enjoy realistic lighting, shadows, reflections, and animations, as well as dynamic day-and-night cycles. You can also customize your character's appearance with various costumes, wings, mounts, and pets.
-
-### Open-world exploration
-
-MU Origin 3 offers a vast and diverse open world that you can explore freely. You can travel by land, sea, or air and discover different regions, such as the Frozen Canyon, Atlantis, and the Dark Forest. You can also interact with NPCs, complete quests, gather resources, and find hidden secrets.
-
-### Diverse classes and skills
-
-MU Origin 3 has six classes to choose from: Dark Knight, Dark Wizard, Elf, Magic Gladiator, Dark Lord, and Summoner. Each class has its own unique skills and abilities that you can upgrade and customize. You can also switch between different skill sets depending on the situation.
-
-### Epic dungeons and raids
-
-### Competitive PvP modes
-
-MU Origin 3 has several PvP modes that you can enjoy with other players. You can compete in arena battles, duels, team matches, cross-server wars, and more. You can also climb the leaderboards and earn fame and glory.
-
-## Why download the MU Origin 3 APK?
-
-If you are interested in playing MU Origin 3 on your Android device, you may want to download its APK file instead of installing the game from the Google Play Store. Here are some reasons why:
-
-### Benefits of downloading the MU Origin 3 APK
-
-Downloading the MU Origin 3 APK has several benefits that the Google Play Store version may not offer. Here are some of them:
-
-#### Free to play
-
-MU Origin 3 is a free-to-play game that does not require you to spend money to enjoy it. You can access all features and content without paying anything. However, if you want to support the developers and get some extra perks, you can also buy optional items with real money.
-
-#### No ads or in-app purchases
-
-MU Origin 3 does not show annoying ads or pop-ups that interrupt your gaming experience, so you can play without distractions. In addition, MU Origin 3 does not sell in-app purchases that give some players unfair advantages. You progress through the game by your own skill and effort.
-
-#### Latest updates and patches
-
-MU Origin 3 is constantly updated and patched by the developers to fix bugs, improve performance, and add new content. By downloading the APK file, you can get the latest version of the game as soon as it is released, without waiting for the Google Play Store to approve and distribute the update.
-
-#### Compatible with most Android devices
-
-## How to download the MU Origin 3 APK
-
-If you are convinced that downloading the MU Origin 3 APK is a good idea, you may be wondering how to do it. Don't worry, it is quick and simple. Just follow these steps:
-
-### Steps to download the MU Origin 3 APK
-
-Here are the steps to download the MU Origin 3 APK for your Android device:
-
-#### Visit the official website or the Google Play Store
-
-The first step is to visit the official MU Origin 3 website or the game's Google Play Store page. You can use any browser on your device to do this. You can also scan the QR code below to go directly to the download page.
-
-#### Tap the download button
-
-The second step is to tap the download button so the APK file is saved to your device.
-
-#### Allow unknown sources in your settings
-
-The third step is to allow unknown sources in your settings. This is necessary because Android devices normally block the installation of apps from sources other than the Google Play Store. To allow unknown sources, go to your device settings, then security, then unknown sources. Turn on the switch or tick the box that says "Allow installation of apps from unknown sources". You may see a warning saying "Your phone and personal data are more vulnerable to attacks from apps from unknown sources". Just tap "OK" or "Allow" to continue.
-
-#### Install the APK file and launch the game
-
-The final step is to open the downloaded APK file, tap "Install", and launch the game once the installation finishes.
-
-## Conclusion
-
-MU Origin 3 is a mobile MMORPG that offers a rich, immersive fantasy world with stunning graphics, open-world exploration, diverse classes and skills, epic dungeons and raids, and competitive PvP modes. It is free to play, has no ads or in-app purchases, receives the latest updates and patches, and is compatible with most Android devices. To download the MU Origin 3 APK for your Android device, simply visit the official website or the game's Google Play Store page, tap the download button, allow unknown sources in your settings, install the APK file, and launch the game.
-
-We hope this article has helped you learn more about MU Origin 3 and how to download its APK file for your Android device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
-As a bonus, here are some frequently asked questions and answers about MU Origin 3 and its APK file:
-
-## Frequently asked questions
-
-### Is MU Origin 3 safe to download and play?
-
-Yes, MU Origin 3 is safe to download and play. It is developed and published by reputable companies and does not contain viruses, malware, or spyware. However, you should always download the APK file from the official website or the game's Google Play Store page, not from third-party sources, which may be unreliable or malicious.
-
-### How much storage space does MU Origin 3 require?
-
-MU Origin 3 requires about 2 GB of storage space on your device. You should also keep some extra space free for updates and patches that may be released in the future. You can check your device's storage capacity and free up space by deleting unwanted files or apps.
-
-### Can I play MU Origin 3 offline?
-
-### Can I play MU Origin 3 with other players?
-
-Yes, MU Origin 3 is a multiplayer game that lets you play with people from all over the world. You can chat, trade, team up, fight, and compete with other players in various modes and events. You can also join guilds and make friends with players who share your interests and goals.
-
-### Can I transfer my progress from MU Origin or MU Origin 2 to MU Origin 3?
-
-No, MU Origin 3 is a separate game from MU Origin and MU Origin 2 and does not support data transfer or synchronization. You will have to start from scratch and create a new character in MU Origin 3. However, you can keep your old characters in MU Origin and MU Origin 2 and play them separately if you wish.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/utils/deepestChild.ts b/spaces/BetterAPI/BetterChat_new/src/lib/utils/deepestChild.ts
deleted file mode 100644
index 7177d64566b12be4f42b934980fcf3681c3705d7..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/utils/deepestChild.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-export function deepestChild(el: HTMLElement): HTMLElement {
-	let newEl = el;
-	// Follow lastElementChild only: hasChildNodes() also counts text nodes,
-	// which could make lastElementChild null and crash the next iteration.
-	while (newEl.lastElementChild) {
-		newEl = newEl.lastElementChild as HTMLElement;
-	}
-	return newEl;
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/collection.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/collection.py
deleted file mode 100644
index ea65e870d6aac5fdaf7b71cfe38bc90d907f08b0..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/collection.py
+++ /dev/null
@@ -1,312 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import os
-
-from botocore import xform_name
-from botocore.docs.bcdoc.restdoc import DocumentStructure
-from botocore.docs.method import get_instance_public_methods
-from botocore.docs.utils import DocumentedShape
-
-from boto3.docs.base import NestedDocumenter
-from boto3.docs.method import document_model_driven_resource_method
-from boto3.docs.utils import (
- add_resource_type_overview,
- get_resource_ignore_params,
-)
-
-
-class CollectionDocumenter(NestedDocumenter):
- def document_collections(self, section):
- collections = self._resource.meta.resource_model.collections
- collections_list = []
- add_resource_type_overview(
- section=section,
- resource_type='Collections',
- description=(
- 'Collections provide an interface to iterate over and '
- 'manipulate groups of resources. '
- ),
- intro_link='guide_collections',
- )
- self.member_map['collections'] = collections_list
- for collection in collections:
- collections_list.append(collection.name)
- # Create a new DocumentStructure for each collection and add contents.
- collection_doc = DocumentStructure(collection.name, target='html')
- breadcrumb_section = collection_doc.add_new_section('breadcrumb')
- breadcrumb_section.style.ref(self._resource_class_name, 'index')
- breadcrumb_section.write(f' / Collection / {collection.name}')
- collection_doc.add_title_section(collection.name)
- collection_section = collection_doc.add_new_section(
- collection.name,
- context={'qualifier': f'{self.class_name}.'},
- )
- self._document_collection(collection_section, collection)
-
- # Write collections in individual/nested files.
-        # Path: <root>/reference/services/<service>/<resource>/<collection>.rst
- collections_dir_path = os.path.join(
- self._root_docs_path,
- f'{self._service_name}',
- f'{self._resource_sub_path}',
- )
- collection_doc.write_to_file(collections_dir_path, collection.name)
-
- def _document_collection(self, section, collection):
- methods = get_instance_public_methods(
- getattr(self._resource, collection.name)
- )
- document_collection_object(section, collection)
- batch_actions = {}
- for batch_action in collection.batch_actions:
- batch_actions[batch_action.name] = batch_action
-
- for method in sorted(methods):
- method_section = section.add_new_section(method)
- if method in batch_actions:
- document_batch_action(
- section=method_section,
- resource_name=self._resource_name,
- event_emitter=self._resource.meta.client.meta.events,
- batch_action_model=batch_actions[method],
- collection_model=collection,
- service_model=self._resource.meta.client.meta.service_model,
- )
- else:
- document_collection_method(
- section=method_section,
- resource_name=self._resource_name,
- action_name=method,
- event_emitter=self._resource.meta.client.meta.events,
- collection_model=collection,
- service_model=self._resource.meta.client.meta.service_model,
- )
-
-
-def document_collection_object(
- section,
- collection_model,
- include_signature=True,
-):
- """Documents a collection resource object
-
- :param section: The section to write to
-
- :param collection_model: The model of the collection
-
- :param include_signature: Whether or not to include the signature.
- It is useful for generating docstrings.
- """
- if include_signature:
- full_collection_name = (
- f"{section.context.get('qualifier', '')}{collection_model.name}"
- )
- section.style.start_sphinx_py_attr(full_collection_name)
- section.include_doc_string(
- f'A collection of {collection_model.resource.type} resources.'
- )
- section.include_doc_string(
- f'A {collection_model.resource.type} Collection will include all '
- f'resources by default, and extreme caution should be taken when '
- f'performing actions on all resources.'
- )
-
-
-def document_batch_action(
- section,
- resource_name,
- event_emitter,
- batch_action_model,
- service_model,
- collection_model,
- include_signature=True,
-):
- """Documents a collection's batch action
-
- :param section: The section to write to
-
- :param resource_name: The name of the resource
-
- :param event_emitter: The event emitter to use to emit events
-
- :param batch_action_model: The model of the batch action
-
- :param collection_model: The model of the collection
-
- :param service_model: The model of the service
-
- :param include_signature: Whether or not to include the signature.
- It is useful for generating docstrings.
- """
- operation_model = service_model.operation_model(
- batch_action_model.request.operation
- )
- ignore_params = get_resource_ignore_params(
- batch_action_model.request.params
- )
-
- example_return_value = 'response'
- if batch_action_model.resource:
- example_return_value = xform_name(batch_action_model.resource.type)
-
- example_resource_name = xform_name(resource_name)
- if service_model.service_name == resource_name:
- example_resource_name = resource_name
- example_prefix = '{} = {}.{}.{}'.format(
- example_return_value,
- example_resource_name,
- collection_model.name,
- batch_action_model.name,
- )
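-    # e.g. for the S3 Bucket.objects collection's delete batch action this
-    # builds the example prefix "response = bucket.objects.delete"
-    # (an illustrative, real-world shaped instance).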
- document_model_driven_resource_method(
- section=section,
- method_name=batch_action_model.name,
- operation_model=operation_model,
- event_emitter=event_emitter,
- method_description=operation_model.documentation,
- example_prefix=example_prefix,
- exclude_input=ignore_params,
- resource_action_model=batch_action_model,
- include_signature=include_signature,
- )
-
-
-def document_collection_method(
- section,
- resource_name,
- action_name,
- event_emitter,
- collection_model,
- service_model,
- include_signature=True,
-):
- """Documents a collection method
-
- :param section: The section to write to
-
- :param resource_name: The name of the resource
-
- :param action_name: The name of collection action. Currently only
- can be all, filter, limit, or page_size
-
- :param event_emitter: The event emitter to use to emit events
-
- :param collection_model: The model of the collection
-
- :param service_model: The model of the service
-
- :param include_signature: Whether or not to include the signature.
- It is useful for generating docstrings.
- """
- operation_model = service_model.operation_model(
- collection_model.request.operation
- )
-
- underlying_operation_members = []
- if operation_model.input_shape:
- underlying_operation_members = operation_model.input_shape.members
-
- example_resource_name = xform_name(resource_name)
- if service_model.service_name == resource_name:
- example_resource_name = resource_name
-
- custom_action_info_dict = {
- 'all': {
- 'method_description': (
- f'Creates an iterable of all {collection_model.resource.type} '
- f'resources in the collection.'
- ),
- 'example_prefix': '{}_iterator = {}.{}.all'.format(
- xform_name(collection_model.resource.type),
- example_resource_name,
- collection_model.name,
- ),
- 'exclude_input': underlying_operation_members,
- },
- 'filter': {
- 'method_description': (
- f'Creates an iterable of all {collection_model.resource.type} '
- f'resources in the collection filtered by kwargs passed to '
- f'method. A {collection_model.resource.type} collection will '
- f'include all resources by default if no filters are provided, '
- f'and extreme caution should be taken when performing actions '
- f'on all resources.'
- ),
- 'example_prefix': '{}_iterator = {}.{}.filter'.format(
- xform_name(collection_model.resource.type),
- example_resource_name,
- collection_model.name,
- ),
- 'exclude_input': get_resource_ignore_params(
- collection_model.request.params
- ),
- },
- 'limit': {
- 'method_description': (
- f'Creates an iterable up to a specified amount of '
- f'{collection_model.resource.type} resources in the collection.'
- ),
- 'example_prefix': '{}_iterator = {}.{}.limit'.format(
- xform_name(collection_model.resource.type),
- example_resource_name,
- collection_model.name,
- ),
- 'include_input': [
- DocumentedShape(
- name='count',
- type_name='integer',
- documentation=(
- 'The limit to the number of resources '
- 'in the iterable.'
- ),
- )
- ],
- 'exclude_input': underlying_operation_members,
- },
- 'page_size': {
- 'method_description': (
- f'Creates an iterable of all {collection_model.resource.type} '
- f'resources in the collection, but limits the number of '
- f'items returned by each service call by the specified amount.'
- ),
- 'example_prefix': '{}_iterator = {}.{}.page_size'.format(
- xform_name(collection_model.resource.type),
- example_resource_name,
- collection_model.name,
- ),
- 'include_input': [
- DocumentedShape(
- name='count',
- type_name='integer',
- documentation=(
- 'The number of items returned by each ' 'service call'
- ),
- )
- ],
- 'exclude_input': underlying_operation_members,
- },
- }
- if action_name in custom_action_info_dict:
- action_info = custom_action_info_dict[action_name]
- document_model_driven_resource_method(
- section=section,
- method_name=action_name,
- operation_model=operation_model,
- event_emitter=event_emitter,
- resource_action_model=collection_model,
- include_signature=include_signature,
- **action_info,
- )
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/parser.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/parser.py
deleted file mode 100644
index 4706688040a2730d50d0805ae68dc7481f73b08e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/parser.py
+++ /dev/null
@@ -1,527 +0,0 @@
-"""Top down operator precedence parser.
-
-This is an implementation of Vaughan R. Pratt's
-"Top Down Operator Precedence" parser.
-(http://dl.acm.org/citation.cfm?doid=512927.512931).
-
-These are some additional resources that help explain the
-general idea behind a Pratt parser:
-
-* http://effbot.org/zone/simple-top-down-parsing.htm
-* http://javascript.crockford.com/tdop/tdop.html
-
-A few notes on the implementation.
-
-* All the nud/led tokens are on the Parser class itself, and are dispatched
- using getattr(). This keeps all the parsing logic contained to a single
- class.
-* We use two passes through the data. One to create a list of token,
- then one pass through the tokens to create the AST. While the lexer actually
- yields tokens, we convert it to a list so we can easily implement two tokens
- of lookahead. A previous implementation used a fixed circular buffer, but it
- was significantly slower. Also, the average jmespath expression typically
- does not have a large amount of token so this is not an issue. And
- interestingly enough, creating a token list first is actually faster than
- consuming from the token iterator one token at a time.
-
-"""
-import random
-
-from jmespath import lexer
-from jmespath.compat import with_repr_method
-from jmespath import ast
-from jmespath import exceptions
-from jmespath import visitor
-
-
-class Parser(object):
- BINDING_POWER = {
- 'eof': 0,
- 'unquoted_identifier': 0,
- 'quoted_identifier': 0,
- 'literal': 0,
- 'rbracket': 0,
- 'rparen': 0,
- 'comma': 0,
- 'rbrace': 0,
- 'number': 0,
- 'current': 0,
- 'expref': 0,
- 'colon': 0,
- 'pipe': 1,
- 'or': 2,
- 'and': 3,
- 'eq': 5,
- 'gt': 5,
- 'lt': 5,
- 'gte': 5,
- 'lte': 5,
- 'ne': 5,
- 'flatten': 9,
- # Everything above stops a projection.
- 'star': 20,
- 'filter': 21,
- 'dot': 40,
- 'not': 45,
- 'lbrace': 50,
- 'lbracket': 55,
- 'lparen': 60,
- }
- # The maximum binding power for a token that can stop
- # a projection.
- _PROJECTION_STOP = 10
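-    # e.g. in "foo[*].bar | baz" the pipe (binding power 1) falls below
-    # _PROJECTION_STOP and therefore ends the projection started by [*],
-    # while dot (binding power 40) does not, so .bar applies per element.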
-    # Compiled expressions are cached in the _CACHE dict, which is
-    # bounded at _MAX_SIZE entries; a random half of the entries is
-    # evicted whenever the bound is exceeded.
- _CACHE = {}
- _MAX_SIZE = 128
-
- def __init__(self, lookahead=2):
- self.tokenizer = None
- self._tokens = [None] * lookahead
- self._buffer_size = lookahead
- self._index = 0
-
- def parse(self, expression):
- cached = self._CACHE.get(expression)
- if cached is not None:
- return cached
- parsed_result = self._do_parse(expression)
- self._CACHE[expression] = parsed_result
- if len(self._CACHE) > self._MAX_SIZE:
- self._free_cache_entries()
- return parsed_result
-
- def _do_parse(self, expression):
- try:
- return self._parse(expression)
- except exceptions.LexerError as e:
- e.expression = expression
- raise
- except exceptions.IncompleteExpressionError as e:
- e.set_expression(expression)
- raise
- except exceptions.ParseError as e:
- e.expression = expression
- raise
-
- def _parse(self, expression):
- self.tokenizer = lexer.Lexer().tokenize(expression)
- self._tokens = list(self.tokenizer)
- self._index = 0
- parsed = self._expression(binding_power=0)
- if not self._current_token() == 'eof':
- t = self._lookahead_token(0)
- raise exceptions.ParseError(t['start'], t['value'], t['type'],
- "Unexpected token: %s" % t['value'])
- return ParsedResult(expression, parsed)
-
- def _expression(self, binding_power=0):
- left_token = self._lookahead_token(0)
- self._advance()
- nud_function = getattr(
- self, '_token_nud_%s' % left_token['type'],
- self._error_nud_token)
- left = nud_function(left_token)
- current_token = self._current_token()
- while binding_power < self.BINDING_POWER[current_token]:
- led = getattr(self, '_token_led_%s' % current_token, None)
- if led is None:
- error_token = self._lookahead_token(0)
- self._error_led_token(error_token)
- else:
- self._advance()
- left = led(left)
- current_token = self._current_token()
- return left
-
- def _token_nud_literal(self, token):
- return ast.literal(token['value'])
-
- def _token_nud_unquoted_identifier(self, token):
- return ast.field(token['value'])
-
- def _token_nud_quoted_identifier(self, token):
- field = ast.field(token['value'])
- # You can't have a quoted identifier as a function
- # name.
- if self._current_token() == 'lparen':
- t = self._lookahead_token(0)
- raise exceptions.ParseError(
- 0, t['value'], t['type'],
- 'Quoted identifier not allowed for function names.')
- return field
-
- def _token_nud_star(self, token):
- left = ast.identity()
- if self._current_token() == 'rbracket':
- right = ast.identity()
- else:
- right = self._parse_projection_rhs(self.BINDING_POWER['star'])
- return ast.value_projection(left, right)
-
- def _token_nud_filter(self, token):
- return self._token_led_filter(ast.identity())
-
- def _token_nud_lbrace(self, token):
- return self._parse_multi_select_hash()
-
- def _token_nud_lparen(self, token):
- expression = self._expression()
- self._match('rparen')
- return expression
-
- def _token_nud_flatten(self, token):
- left = ast.flatten(ast.identity())
- right = self._parse_projection_rhs(
- self.BINDING_POWER['flatten'])
- return ast.projection(left, right)
-
- def _token_nud_not(self, token):
- expr = self._expression(self.BINDING_POWER['not'])
- return ast.not_expression(expr)
-
- def _token_nud_lbracket(self, token):
- if self._current_token() in ['number', 'colon']:
- right = self._parse_index_expression()
-            # We could optimize this and remove the identity() node.
-            # We don't really need an index_expression node; we can
-            # just emit an index node here if we're not dealing
-            # with a slice.
- return self._project_if_slice(ast.identity(), right)
- elif self._current_token() == 'star' and \
- self._lookahead(1) == 'rbracket':
- self._advance()
- self._advance()
- right = self._parse_projection_rhs(self.BINDING_POWER['star'])
- return ast.projection(ast.identity(), right)
- else:
- return self._parse_multi_select_list()
-
- def _parse_index_expression(self):
- # We're here:
- # [
- # ^
- # | current token
- if (self._lookahead(0) == 'colon' or
- self._lookahead(1) == 'colon'):
- return self._parse_slice_expression()
- else:
- # Parse the syntax [number]
- node = ast.index(self._lookahead_token(0)['value'])
- self._advance()
- self._match('rbracket')
- return node
-
- def _parse_slice_expression(self):
- # [start:end:step]
- # Where start, end, and step are optional.
- # The last colon is optional as well.
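-        # e.g. [1:10:2] -> parts == [1, 10, 2]; [::2] -> parts == [None, None, 2]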
- parts = [None, None, None]
- index = 0
- current_token = self._current_token()
- while not current_token == 'rbracket' and index < 3:
- if current_token == 'colon':
- index += 1
- if index == 3:
- self._raise_parse_error_for_token(
- self._lookahead_token(0), 'syntax error')
- self._advance()
- elif current_token == 'number':
- parts[index] = self._lookahead_token(0)['value']
- self._advance()
- else:
- self._raise_parse_error_for_token(
- self._lookahead_token(0), 'syntax error')
- current_token = self._current_token()
- self._match('rbracket')
- return ast.slice(*parts)
-
- def _token_nud_current(self, token):
- return ast.current_node()
-
- def _token_nud_expref(self, token):
- expression = self._expression(self.BINDING_POWER['expref'])
- return ast.expref(expression)
-
- def _token_led_dot(self, left):
- if not self._current_token() == 'star':
- right = self._parse_dot_rhs(self.BINDING_POWER['dot'])
- if left['type'] == 'subexpression':
- left['children'].append(right)
- return left
- else:
- return ast.subexpression([left, right])
- else:
- # We're creating a projection.
- self._advance()
- right = self._parse_projection_rhs(
- self.BINDING_POWER['dot'])
- return ast.value_projection(left, right)
-
- def _token_led_pipe(self, left):
- right = self._expression(self.BINDING_POWER['pipe'])
- return ast.pipe(left, right)
-
- def _token_led_or(self, left):
- right = self._expression(self.BINDING_POWER['or'])
- return ast.or_expression(left, right)
-
- def _token_led_and(self, left):
- right = self._expression(self.BINDING_POWER['and'])
- return ast.and_expression(left, right)
-
- def _token_led_lparen(self, left):
- if left['type'] != 'field':
- # 0 - first func arg or closing paren.
- # -1 - '(' token
- # -2 - invalid function "name".
- prev_t = self._lookahead_token(-2)
- raise exceptions.ParseError(
- prev_t['start'], prev_t['value'], prev_t['type'],
- "Invalid function name '%s'" % prev_t['value'])
- name = left['value']
- args = []
- while not self._current_token() == 'rparen':
- expression = self._expression()
- if self._current_token() == 'comma':
- self._match('comma')
- args.append(expression)
- self._match('rparen')
- function_node = ast.function_expression(name, args)
- return function_node
-
- def _token_led_filter(self, left):
- # Filters are projections.
- condition = self._expression(0)
- self._match('rbracket')
- if self._current_token() == 'flatten':
- right = ast.identity()
- else:
- right = self._parse_projection_rhs(self.BINDING_POWER['filter'])
- return ast.filter_projection(left, right, condition)
-
- def _token_led_eq(self, left):
- return self._parse_comparator(left, 'eq')
-
- def _token_led_ne(self, left):
- return self._parse_comparator(left, 'ne')
-
- def _token_led_gt(self, left):
- return self._parse_comparator(left, 'gt')
-
- def _token_led_gte(self, left):
- return self._parse_comparator(left, 'gte')
-
- def _token_led_lt(self, left):
- return self._parse_comparator(left, 'lt')
-
- def _token_led_lte(self, left):
- return self._parse_comparator(left, 'lte')
-
- def _token_led_flatten(self, left):
- left = ast.flatten(left)
- right = self._parse_projection_rhs(
- self.BINDING_POWER['flatten'])
- return ast.projection(left, right)
-
- def _token_led_lbracket(self, left):
- token = self._lookahead_token(0)
- if token['type'] in ['number', 'colon']:
- right = self._parse_index_expression()
- if left['type'] == 'index_expression':
- # Optimization: if the left node is an index expr,
- # we can avoid creating another node and instead just add
- # the right node as a child of the left.
- left['children'].append(right)
- return left
- else:
- return self._project_if_slice(left, right)
- else:
- # We have a projection
- self._match('star')
- self._match('rbracket')
- right = self._parse_projection_rhs(self.BINDING_POWER['star'])
- return ast.projection(left, right)
-
- def _project_if_slice(self, left, right):
- index_expr = ast.index_expression([left, right])
- if right['type'] == 'slice':
- return ast.projection(
- index_expr,
- self._parse_projection_rhs(self.BINDING_POWER['star']))
- else:
- return index_expr
-
- def _parse_comparator(self, left, comparator):
- right = self._expression(self.BINDING_POWER[comparator])
- return ast.comparator(comparator, left, right)
-
- def _parse_multi_select_list(self):
- expressions = []
- while True:
- expression = self._expression()
- expressions.append(expression)
- if self._current_token() == 'rbracket':
- break
- else:
- self._match('comma')
- self._match('rbracket')
- return ast.multi_select_list(expressions)
-
- def _parse_multi_select_hash(self):
- pairs = []
- while True:
- key_token = self._lookahead_token(0)
- # Before getting the token value, verify it's
- # an identifier.
- self._match_multiple_tokens(
- token_types=['quoted_identifier', 'unquoted_identifier'])
- key_name = key_token['value']
- self._match('colon')
- value = self._expression(0)
- node = ast.key_val_pair(key_name=key_name, node=value)
- pairs.append(node)
- if self._current_token() == 'comma':
- self._match('comma')
- elif self._current_token() == 'rbrace':
- self._match('rbrace')
- break
- return ast.multi_select_dict(nodes=pairs)
-
- def _parse_projection_rhs(self, binding_power):
- # Parse the right hand side of the projection.
- if self.BINDING_POWER[self._current_token()] < self._PROJECTION_STOP:
- # BP of 10 are all the tokens that stop a projection.
- right = ast.identity()
- elif self._current_token() == 'lbracket':
- right = self._expression(binding_power)
- elif self._current_token() == 'filter':
- right = self._expression(binding_power)
- elif self._current_token() == 'dot':
- self._match('dot')
- right = self._parse_dot_rhs(binding_power)
- else:
- self._raise_parse_error_for_token(self._lookahead_token(0),
- 'syntax error')
- return right
-
- def _parse_dot_rhs(self, binding_power):
- # From the grammar:
- # expression '.' ( identifier /
- # multi-select-list /
- # multi-select-hash /
- # function-expression /
- # *
- # In terms of tokens that means that after a '.',
- # you can have:
- lookahead = self._current_token()
- # Common case "foo.bar", so first check for an identifier.
- if lookahead in ['quoted_identifier', 'unquoted_identifier', 'star']:
- return self._expression(binding_power)
- elif lookahead == 'lbracket':
- self._match('lbracket')
- return self._parse_multi_select_list()
- elif lookahead == 'lbrace':
- self._match('lbrace')
- return self._parse_multi_select_hash()
- else:
- t = self._lookahead_token(0)
- allowed = ['quoted_identifier', 'unquoted_identifier',
- 'lbracket', 'lbrace']
- msg = (
- "Expecting: %s, got: %s" % (allowed, t['type'])
- )
- self._raise_parse_error_for_token(t, msg)
-
- def _error_nud_token(self, token):
- if token['type'] == 'eof':
- raise exceptions.IncompleteExpressionError(
- token['start'], token['value'], token['type'])
- self._raise_parse_error_for_token(token, 'invalid token')
-
- def _error_led_token(self, token):
- self._raise_parse_error_for_token(token, 'invalid token')
-
-    def _match(self, token_type=None):
-        if self._current_token() == token_type:
-            self._advance()
- else:
- self._raise_parse_error_maybe_eof(
- token_type, self._lookahead_token(0))
-
- def _match_multiple_tokens(self, token_types):
- if self._current_token() not in token_types:
- self._raise_parse_error_maybe_eof(
- token_types, self._lookahead_token(0))
- self._advance()
-
- def _advance(self):
- self._index += 1
-
- def _current_token(self):
- return self._tokens[self._index]['type']
-
- def _lookahead(self, number):
- return self._tokens[self._index + number]['type']
-
- def _lookahead_token(self, number):
- return self._tokens[self._index + number]
-
- def _raise_parse_error_for_token(self, token, reason):
- lex_position = token['start']
- actual_value = token['value']
- actual_type = token['type']
- raise exceptions.ParseError(lex_position, actual_value,
- actual_type, reason)
-
- def _raise_parse_error_maybe_eof(self, expected_type, token):
- lex_position = token['start']
- actual_value = token['value']
- actual_type = token['type']
- if actual_type == 'eof':
- raise exceptions.IncompleteExpressionError(
- lex_position, actual_value, actual_type)
- message = 'Expecting: %s, got: %s' % (expected_type,
- actual_type)
- raise exceptions.ParseError(
- lex_position, actual_value, actual_type, message)
-
- def _free_cache_entries(self):
- for key in random.sample(list(self._CACHE.keys()), int(self._MAX_SIZE / 2)):
- self._CACHE.pop(key, None)
-
- @classmethod
- def purge(cls):
- """Clear the expression compilation cache."""
- cls._CACHE.clear()
-
-
-@with_repr_method
-class ParsedResult(object):
- def __init__(self, expression, parsed):
- self.expression = expression
- self.parsed = parsed
-
- def search(self, value, options=None):
- interpreter = visitor.TreeInterpreter(options)
- result = interpreter.visit(self.parsed, value)
- return result
-
- def _render_dot_file(self):
- """Render the parsed AST as a dot file.
-
- Note that this is marked as an internal method because
- the AST is an implementation detail and is subject
- to change. This method can be used to help troubleshoot
- or for development purposes, but is not considered part
- of the public supported API. Use at your own risk.
-
- """
- renderer = visitor.GraphvizVisitor()
- contents = renderer.visit(self.parsed)
- return contents
-
- def __repr__(self):
- return repr(self.parsed)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py
deleted file mode 100644
index d16e326024c05a59548619e13258acad781e0a6d..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# coding: utf-8
-"""
-
- webencodings.x_user_defined
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- An implementation of the x-user-defined encoding.
-
- :copyright: Copyright 2012 by Simon Sapin
- :license: BSD, see LICENSE for details.
-
-"""
-
-from __future__ import unicode_literals
-
-import codecs
-
-
-### Codec APIs
-
-class Codec(codecs.Codec):
-
- def encode(self, input, errors='strict'):
- return codecs.charmap_encode(input, errors, encoding_table)
-
- def decode(self, input, errors='strict'):
- return codecs.charmap_decode(input, errors, decoding_table)
-
-
-class IncrementalEncoder(codecs.IncrementalEncoder):
- def encode(self, input, final=False):
- return codecs.charmap_encode(input, self.errors, encoding_table)[0]
-
-
-class IncrementalDecoder(codecs.IncrementalDecoder):
- def decode(self, input, final=False):
- return codecs.charmap_decode(input, self.errors, decoding_table)[0]
-
-
-class StreamWriter(Codec, codecs.StreamWriter):
- pass
-
-
-class StreamReader(Codec, codecs.StreamReader):
- pass
-
-
-### encodings module API
-
-codec_info = codecs.CodecInfo(
- name='x-user-defined',
- encode=Codec().encode,
- decode=Codec().decode,
- incrementalencoder=IncrementalEncoder,
- incrementaldecoder=IncrementalDecoder,
- streamreader=StreamReader,
- streamwriter=StreamWriter,
-)
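-# Illustrative round-trip, assuming the codec were registered by name via
-# codecs.register (callers may also use codec_info directly):
-#   b'A\xf1'.decode('x-user-defined')   -> 'A\uf7f1'
-#   'A\uf7f1'.encode('x-user-defined')  -> b'A\xf1'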
-
-
-### Decoding Table
-
-# Python 3:
-# for c in range(256): print(' %r' % chr(c if c < 128 else c + 0xF700))
-decoding_table = (
- '\x00'
- '\x01'
- '\x02'
- '\x03'
- '\x04'
- '\x05'
- '\x06'
- '\x07'
- '\x08'
- '\t'
- '\n'
- '\x0b'
- '\x0c'
- '\r'
- '\x0e'
- '\x0f'
- '\x10'
- '\x11'
- '\x12'
- '\x13'
- '\x14'
- '\x15'
- '\x16'
- '\x17'
- '\x18'
- '\x19'
- '\x1a'
- '\x1b'
- '\x1c'
- '\x1d'
- '\x1e'
- '\x1f'
- ' '
- '!'
- '"'
- '#'
- '$'
- '%'
- '&'
- "'"
- '('
- ')'
- '*'
- '+'
- ','
- '-'
- '.'
- '/'
- '0'
- '1'
- '2'
- '3'
- '4'
- '5'
- '6'
- '7'
- '8'
- '9'
- ':'
- ';'
- '<'
- '='
- '>'
- '?'
- '@'
- 'A'
- 'B'
- 'C'
- 'D'
- 'E'
- 'F'
- 'G'
- 'H'
- 'I'
- 'J'
- 'K'
- 'L'
- 'M'
- 'N'
- 'O'
- 'P'
- 'Q'
- 'R'
- 'S'
- 'T'
- 'U'
- 'V'
- 'W'
- 'X'
- 'Y'
- 'Z'
- '['
- '\\'
- ']'
- '^'
- '_'
- '`'
- 'a'
- 'b'
- 'c'
- 'd'
- 'e'
- 'f'
- 'g'
- 'h'
- 'i'
- 'j'
- 'k'
- 'l'
- 'm'
- 'n'
- 'o'
- 'p'
- 'q'
- 'r'
- 's'
- 't'
- 'u'
- 'v'
- 'w'
- 'x'
- 'y'
- 'z'
- '{'
- '|'
- '}'
- '~'
- '\x7f'
- '\uf780'
- '\uf781'
- '\uf782'
- '\uf783'
- '\uf784'
- '\uf785'
- '\uf786'
- '\uf787'
- '\uf788'
- '\uf789'
- '\uf78a'
- '\uf78b'
- '\uf78c'
- '\uf78d'
- '\uf78e'
- '\uf78f'
- '\uf790'
- '\uf791'
- '\uf792'
- '\uf793'
- '\uf794'
- '\uf795'
- '\uf796'
- '\uf797'
- '\uf798'
- '\uf799'
- '\uf79a'
- '\uf79b'
- '\uf79c'
- '\uf79d'
- '\uf79e'
- '\uf79f'
- '\uf7a0'
- '\uf7a1'
- '\uf7a2'
- '\uf7a3'
- '\uf7a4'
- '\uf7a5'
- '\uf7a6'
- '\uf7a7'
- '\uf7a8'
- '\uf7a9'
- '\uf7aa'
- '\uf7ab'
- '\uf7ac'
- '\uf7ad'
- '\uf7ae'
- '\uf7af'
- '\uf7b0'
- '\uf7b1'
- '\uf7b2'
- '\uf7b3'
- '\uf7b4'
- '\uf7b5'
- '\uf7b6'
- '\uf7b7'
- '\uf7b8'
- '\uf7b9'
- '\uf7ba'
- '\uf7bb'
- '\uf7bc'
- '\uf7bd'
- '\uf7be'
- '\uf7bf'
- '\uf7c0'
- '\uf7c1'
- '\uf7c2'
- '\uf7c3'
- '\uf7c4'
- '\uf7c5'
- '\uf7c6'
- '\uf7c7'
- '\uf7c8'
- '\uf7c9'
- '\uf7ca'
- '\uf7cb'
- '\uf7cc'
- '\uf7cd'
- '\uf7ce'
- '\uf7cf'
- '\uf7d0'
- '\uf7d1'
- '\uf7d2'
- '\uf7d3'
- '\uf7d4'
- '\uf7d5'
- '\uf7d6'
- '\uf7d7'
- '\uf7d8'
- '\uf7d9'
- '\uf7da'
- '\uf7db'
- '\uf7dc'
- '\uf7dd'
- '\uf7de'
- '\uf7df'
- '\uf7e0'
- '\uf7e1'
- '\uf7e2'
- '\uf7e3'
- '\uf7e4'
- '\uf7e5'
- '\uf7e6'
- '\uf7e7'
- '\uf7e8'
- '\uf7e9'
- '\uf7ea'
- '\uf7eb'
- '\uf7ec'
- '\uf7ed'
- '\uf7ee'
- '\uf7ef'
- '\uf7f0'
- '\uf7f1'
- '\uf7f2'
- '\uf7f3'
- '\uf7f4'
- '\uf7f5'
- '\uf7f6'
- '\uf7f7'
- '\uf7f8'
- '\uf7f9'
- '\uf7fa'
- '\uf7fb'
- '\uf7fc'
- '\uf7fd'
- '\uf7fe'
- '\uf7ff'
-)
-
-### Encoding table
-encoding_table = codecs.charmap_build(decoding_table)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/__init__.py
deleted file mode 100644
index d59226af9d7fe1b5279e99ff6e333032d1cec274..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/__init__.py
+++ /dev/null
@@ -1,3296 +0,0 @@
-"""
-Package resource API
---------------------
-
-A resource is a logical file contained within a package, or a logical
-subdirectory thereof. The package resource API expects resource names
-to have their path parts separated with ``/``, *not* whatever the local
-path separator is. Do not use os.path operations to manipulate resource
-names being passed into the API.
-
-The package resource API is designed to work with normal filesystem packages,
-.egg files, and unpacked .egg files. It can also work in a limited way with
-.zip files and with custom PEP 302 loaders that support the ``get_data()``
-method.
-"""
-
-import sys
-import os
-import io
-import time
-import re
-import types
-import zipfile
-import zipimport
-import warnings
-import stat
-import functools
-import pkgutil
-import operator
-import platform
-import collections
-import plistlib
-import email.parser
-import errno
-import tempfile
-import textwrap
-import itertools
-import inspect
-import ntpath
-import posixpath
-import importlib
-from pkgutil import get_importer
-
-try:
- import _imp
-except ImportError:
- # Python 3.2 compatibility
- import imp as _imp
-
-try:
- FileExistsError
-except NameError:
- FileExistsError = OSError
-
-# capture these to bypass sandboxing
-from os import utime
-try:
- from os import mkdir, rename, unlink
- WRITE_SUPPORT = True
-except ImportError:
- # no write support, probably under GAE
- WRITE_SUPPORT = False
-
-from os import open as os_open
-from os.path import isdir, split
-
-try:
- import importlib.machinery as importlib_machinery
- # access attribute to force import under delayed import mechanisms.
- importlib_machinery.__name__
-except ImportError:
- importlib_machinery = None
-
-from pkg_resources.extern.jaraco.text import (
- yield_lines,
- drop_comment,
- join_continuation,
-)
-
-from pkg_resources.extern import appdirs
-from pkg_resources.extern import packaging
-__import__('pkg_resources.extern.packaging.version')
-__import__('pkg_resources.extern.packaging.specifiers')
-__import__('pkg_resources.extern.packaging.requirements')
-__import__('pkg_resources.extern.packaging.markers')
-__import__('pkg_resources.extern.packaging.utils')
-
-if sys.version_info < (3, 5):
- raise RuntimeError("Python 3.5 or later is required")
-
-# declare some globals that will be defined later to
-# satisfy the linters.
-require = None
-working_set = None
-add_activation_listener = None
-resources_stream = None
-cleanup_resources = None
-resource_dir = None
-resource_stream = None
-set_extraction_path = None
-resource_isdir = None
-resource_string = None
-iter_entry_points = None
-resource_listdir = None
-resource_filename = None
-resource_exists = None
-_distribution_finders = None
-_namespace_handlers = None
-_namespace_packages = None
-
-
-class PEP440Warning(RuntimeWarning):
- """
- Used when there is an issue with a version or specifier not complying with
- PEP 440.
- """
-
-
-def parse_version(v):
- try:
- return packaging.version.Version(v)
- except packaging.version.InvalidVersion:
- warnings.warn(
- f"{v} is an invalid version and will not be supported in "
- "a future release",
- PkgResourcesDeprecationWarning,
- )
- return packaging.version.LegacyVersion(v)
-
-
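-# Illustrative sketch (not part of the original module): parse_version
-# returns PEP 440 version objects, so comparisons are numeric rather
-# than lexicographic.
-def _parse_version_demo():
- assert parse_version('1.9') < parse_version('1.10')
- assert parse_version('1.0') == parse_version('1.0.0')
-
-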
-_state_vars = {}
-
-
-def _declare_state(vartype, **kw):
- globals().update(kw)
- _state_vars.update(dict.fromkeys(kw, vartype))
-
-
-def __getstate__():
- state = {}
- g = globals()
- for k, v in _state_vars.items():
- state[k] = g['_sget_' + v](g[k])
- return state
-
-
-def __setstate__(state):
- g = globals()
- for k, v in state.items():
- g['_sset_' + _state_vars[k]](k, g[k], v)
- return state
-
-
-def _sget_dict(val):
- return val.copy()
-
-
-def _sset_dict(key, ob, state):
- ob.clear()
- ob.update(state)
-
-
-def _sget_object(val):
- return val.__getstate__()
-
-
-def _sset_object(key, ob, state):
- ob.__setstate__(state)
-
-
-_sget_none = _sset_none = lambda *args: None
-
-
-def get_supported_platform():
- """Return this platform's maximum compatible version.
-
- distutils.util.get_platform() normally reports the minimum version
- of macOS that would be required to *use* extensions produced by
- distutils. But what we want when checking compatibility is to know the
- version of macOS that we are *running*. To allow usage of packages that
- explicitly require a newer version of macOS, we must also know the
- current version of the OS.
-
- If this condition occurs for any other platform with a version in its
- platform strings, this function should be extended accordingly.
- """
- plat = get_build_platform()
- m = macosVersionString.match(plat)
- if m is not None and sys.platform == "darwin":
- try:
- plat = 'macosx-%s-%s' % ('.'.join(_macos_vers()[:2]), m.group(3))
- except ValueError:
- # not macOS
- pass
- return plat
-
-
-__all__ = [
- # Basic resource access and distribution/entry point discovery
- 'require', 'run_script', 'get_provider', 'get_distribution',
- 'load_entry_point', 'get_entry_map', 'get_entry_info',
- 'iter_entry_points',
- 'resource_string', 'resource_stream', 'resource_filename',
- 'resource_listdir', 'resource_exists', 'resource_isdir',
-
- # Environmental control
- 'declare_namespace', 'working_set', 'add_activation_listener',
- 'find_distributions', 'set_extraction_path', 'cleanup_resources',
- 'get_default_cache',
-
- # Primary implementation classes
- 'Environment', 'WorkingSet', 'ResourceManager',
- 'Distribution', 'Requirement', 'EntryPoint',
-
- # Exceptions
- 'ResolutionError', 'VersionConflict', 'DistributionNotFound',
- 'UnknownExtra', 'ExtractionError',
-
- # Warnings
- 'PEP440Warning',
-
- # Parsing functions and string utilities
- 'parse_requirements', 'parse_version', 'safe_name', 'safe_version',
- 'get_platform', 'compatible_platforms', 'yield_lines', 'split_sections',
- 'safe_extra', 'to_filename', 'invalid_marker', 'evaluate_marker',
-
- # filesystem utilities
- 'ensure_directory', 'normalize_path',
-
- # Distribution "precedence" constants
- 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST',
-
- # "Provider" interfaces, implementations, and registration/lookup APIs
- 'IMetadataProvider', 'IResourceProvider', 'FileMetadata',
- 'PathMetadata', 'EggMetadata', 'EmptyProvider', 'empty_provider',
- 'NullProvider', 'EggProvider', 'DefaultProvider', 'ZipProvider',
- 'register_finder', 'register_namespace_handler', 'register_loader_type',
- 'fixup_namespace_packages', 'get_importer',
-
- # Warnings
- 'PkgResourcesDeprecationWarning',
-
- # Deprecated/backward compatibility only
- 'run_main', 'AvailableDistributions',
-]
-
-
-class ResolutionError(Exception):
- """Abstract base for dependency resolution errors"""
-
- def __repr__(self):
- return self.__class__.__name__ + repr(self.args)
-
-
-class VersionConflict(ResolutionError):
- """
- An already-installed version conflicts with the requested version.
-
- Should be initialized with the installed Distribution and the requested
- Requirement.
- """
-
- _template = "{self.dist} is installed but {self.req} is required"
-
- @property
- def dist(self):
- return self.args[0]
-
- @property
- def req(self):
- return self.args[1]
-
- def report(self):
- return self._template.format(**locals())
-
- def with_context(self, required_by):
- """
- If required_by is non-empty, return a version of self that is a
- ContextualVersionConflict.
- """
- if not required_by:
- return self
- args = self.args + (required_by,)
- return ContextualVersionConflict(*args)
-
-
-class ContextualVersionConflict(VersionConflict):
- """
- A VersionConflict that accepts a third parameter, the set of the
- requirements that required the installed Distribution.
- """
-
- _template = VersionConflict._template + ' by {self.required_by}'
-
- @property
- def required_by(self):
- return self.args[2]
-
-
-class DistributionNotFound(ResolutionError):
- """A requested distribution was not found"""
-
- _template = ("The '{self.req}' distribution was not found "
- "and is required by {self.requirers_str}")
-
- @property
- def req(self):
- return self.args[0]
-
- @property
- def requirers(self):
- return self.args[1]
-
- @property
- def requirers_str(self):
- if not self.requirers:
- return 'the application'
- return ', '.join(self.requirers)
-
- def report(self):
- return self._template.format(**locals())
-
- def __str__(self):
- return self.report()
-
-
-class UnknownExtra(ResolutionError):
- """Distribution doesn't have an "extra feature" of the given name"""
-
-
-_provider_factories = {}
-
-PY_MAJOR = '{}.{}'.format(*sys.version_info)
-EGG_DIST = 3
-BINARY_DIST = 2
-SOURCE_DIST = 1
-CHECKOUT_DIST = 0
-DEVELOP_DIST = -1
-
-
-def register_loader_type(loader_type, provider_factory):
- """Register `provider_factory` to make providers for `loader_type`
-
- `loader_type` is the type or class of a PEP 302 ``module.__loader__``,
- and `provider_factory` is a function that, passed a *module* object,
- returns an ``IResourceProvider`` for that module.
- """
- _provider_factories[loader_type] = provider_factory
-
-
-def get_provider(moduleOrReq):
- """Return an IResourceProvider for the named module or requirement"""
- if isinstance(moduleOrReq, Requirement):
- return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
- try:
- module = sys.modules[moduleOrReq]
- except KeyError:
- __import__(moduleOrReq)
- module = sys.modules[moduleOrReq]
- loader = getattr(module, '__loader__', None)
- return _find_adapter(_provider_factories, loader)(module)
-
-
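-# Illustrative sketch (not part of the original module): providers can be
-# looked up by module name; resource names always use '/' separators.
-#
-# provider = get_provider('pkg_resources')
-# provider.has_resource('extern/__init__.py')
-
-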
-def _macos_vers(_cache=[]):
- if not _cache:
- version = platform.mac_ver()[0]
- # fallback for MacPorts
- if version == '':
- plist = '/System/Library/CoreServices/SystemVersion.plist'
- if os.path.exists(plist):
- if hasattr(plistlib, 'readPlist'):
- plist_content = plistlib.readPlist(plist)
- if 'ProductVersion' in plist_content:
- version = plist_content['ProductVersion']
-
- _cache.append(version.split('.'))
- return _cache[0]
-
-
-def _macos_arch(machine):
- return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine)
-
-
-def get_build_platform():
- """Return this platform's string for platform-specific distributions
-
- XXX Currently this is the same as ``distutils.util.get_platform()``, but it
- needs some hacks for Linux and macOS.
- """
- from sysconfig import get_platform
-
- plat = get_platform()
- if sys.platform == "darwin" and not plat.startswith('macosx-'):
- try:
- version = _macos_vers()
- machine = os.uname()[4].replace(" ", "_")
- return "macosx-%d.%d-%s" % (
- int(version[0]), int(version[1]),
- _macos_arch(machine),
- )
- except ValueError:
- # if someone is running a non-Mac darwin system, this will fall
- # through to the default implementation
- pass
- return plat
-
-
-macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)")
-darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)")
-# XXX backward compat
-get_platform = get_build_platform
-
-
-def compatible_platforms(provided, required):
- """Can code for the `provided` platform run on the `required` platform?
-
- Returns true if either platform is ``None``, or the platforms are equal.
-
- XXX Needs compatibility checks for Linux and other unixy OSes.
- """
- if provided is None or required is None or provided == required:
- # easy case
- return True
-
- # macOS special cases
- reqMac = macosVersionString.match(required)
- if reqMac:
- provMac = macosVersionString.match(provided)
-
- # is this a Mac package?
- if not provMac:
- # this is backwards compatibility for packages built before
- # setuptools 0.6. All packages built after this point will
- # use the new macOS designation.
- provDarwin = darwinVersionString.match(provided)
- if provDarwin:
- dversion = int(provDarwin.group(1))
- macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2))
- if dversion == 7 and macosversion >= "10.3" or \
- dversion == 8 and macosversion >= "10.4":
- return True
- # egg isn't macOS or legacy darwin
- return False
-
- # are they the same major version and machine type?
- if provMac.group(1) != reqMac.group(1) or \
- provMac.group(3) != reqMac.group(3):
- return False
-
- # is the required OS major update >= the provided one?
- if int(provMac.group(2)) > int(reqMac.group(2)):
- return False
-
- return True
-
- # XXX Linux and other platforms' special cases should go here
- return False
-
-
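-# Illustrative sketch (not part of the original module): an egg built for
-# an older macOS minor release runs on a newer one, but not vice versa.
-def _compatible_platforms_demo():
- assert compatible_platforms('macosx-10.3-fat', 'macosx-10.5-fat')
- assert not compatible_platforms('macosx-10.5-fat', 'macosx-10.3-fat')
-
-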
-def run_script(dist_spec, script_name):
- """Locate distribution `dist_spec` and run its `script_name` script"""
- ns = sys._getframe(1).f_globals
- name = ns['__name__']
- ns.clear()
- ns['__name__'] = name
- require(dist_spec)[0].run_script(script_name, ns)
-
-
-# backward compatibility
-run_main = run_script
-
-
-def get_distribution(dist):
- """Return a current distribution object for a Requirement or string"""
- if isinstance(dist, str):
- dist = Requirement.parse(dist)
- if isinstance(dist, Requirement):
- dist = get_provider(dist)
- if not isinstance(dist, Distribution):
- raise TypeError("Expected string, Requirement, or Distribution", dist)
- return dist
-
-
-def load_entry_point(dist, group, name):
- """Return `name` entry point of `group` for `dist` or raise ImportError"""
- return get_distribution(dist).load_entry_point(group, name)
-
-
-def get_entry_map(dist, group=None):
- """Return the entry point map for `group`, or the full entry map"""
- return get_distribution(dist).get_entry_map(group)
-
-
-def get_entry_info(dist, group, name):
- """Return the EntryPoint object for `group`+`name`, or ``None``"""
- return get_distribution(dist).get_entry_info(group, name)
-
-
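-# Illustrative sketch (not part of the original module): the three helpers
-# above all funnel through get_distribution(). The 'console_scripts' group
-# and 'pip' names below are just plausible examples.
-#
-# ep = get_entry_info('pip', 'console_scripts', 'pip')
-# main = load_entry_point('pip', 'console_scripts', 'pip')
-
-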
-class IMetadataProvider:
- def has_metadata(name):
- """Does the package's distribution contain the named metadata?"""
-
- def get_metadata(name):
- """The named metadata resource as a string"""
-
- def get_metadata_lines(name):
- """Yield named metadata resource as list of non-blank non-comment lines
-
- Leading and trailing whitespace is stripped from each line, and lines
- with ``#`` as the first non-blank character are omitted."""
-
- def metadata_isdir(name):
- """Is the named metadata a directory? (like ``os.path.isdir()``)"""
-
- def metadata_listdir(name):
- """List of metadata names in the directory (like ``os.listdir()``)"""
-
- def run_script(script_name, namespace):
- """Execute the named script in the supplied namespace dictionary"""
-
-
-class IResourceProvider(IMetadataProvider):
- """An object that provides access to package resources"""
-
- def get_resource_filename(manager, resource_name):
- """Return a true filesystem path for `resource_name`
-
- `manager` must be an ``IResourceManager``"""
-
- def get_resource_stream(manager, resource_name):
- """Return a readable file-like object for `resource_name`
-
- `manager` must be an ``IResourceManager``"""
-
- def get_resource_string(manager, resource_name):
- """Return a string containing the contents of `resource_name`
-
- `manager` must be an ``IResourceManager``"""
-
- def has_resource(resource_name):
- """Does the package contain the named resource?"""
-
- def resource_isdir(resource_name):
- """Is the named resource a directory? (like ``os.path.isdir()``)"""
-
- def resource_listdir(resource_name):
- """List of resource names in the directory (like ``os.listdir()``)"""
-
-
-class WorkingSet:
- """A collection of active distributions on sys.path (or a similar list)"""
-
- def __init__(self, entries=None):
- """Create working set from list of path entries (default=sys.path)"""
- self.entries = []
- self.entry_keys = {}
- self.by_key = {}
- self.normalized_to_canonical_keys = {}
- self.callbacks = []
-
- if entries is None:
- entries = sys.path
-
- for entry in entries:
- self.add_entry(entry)
-
- @classmethod
- def _build_master(cls):
- """
- Prepare the master working set.
- """
- ws = cls()
- try:
- from __main__ import __requires__
- except ImportError:
- # The main program does not list any requirements
- return ws
-
- # ensure the requirements are met
- try:
- ws.require(__requires__)
- except VersionConflict:
- return cls._build_from_requirements(__requires__)
-
- return ws
-
- @classmethod
- def _build_from_requirements(cls, req_spec):
- """
- Build a working set from a requirement spec. Rewrites sys.path.
- """
- # try it without defaults already on sys.path
- # by starting with an empty path
- ws = cls([])
- reqs = parse_requirements(req_spec)
- dists = ws.resolve(reqs, Environment())
- for dist in dists:
- ws.add(dist)
-
- # add any missing entries from sys.path
- for entry in sys.path:
- if entry not in ws.entries:
- ws.add_entry(entry)
-
- # then copy back to sys.path
- sys.path[:] = ws.entries
- return ws
-
- def add_entry(self, entry):
- """Add a path item to ``.entries``, finding any distributions on it
-
- ``find_distributions(entry, True)`` is used to find distributions
- corresponding to the path entry, and they are added. `entry` is
- always appended to ``.entries``, even if it is already present.
- (This is because ``sys.path`` can contain the same value more than
- once, and the ``.entries`` of the ``sys.path`` WorkingSet should always
- equal ``sys.path``.)
- """
- self.entry_keys.setdefault(entry, [])
- self.entries.append(entry)
- for dist in find_distributions(entry, True):
- self.add(dist, entry, False)
-
- def __contains__(self, dist):
- """True if `dist` is the active distribution for its project"""
- return self.by_key.get(dist.key) == dist
-
- def find(self, req):
- """Find a distribution matching requirement `req`
-
- If there is an active distribution for the requested project, this
- returns it as long as it meets the version requirement specified by
- `req`. But, if there is an active distribution for the project and it
- does *not* meet the `req` requirement, ``VersionConflict`` is raised.
- If there is no active distribution for the requested project, ``None``
- is returned.
- """
- dist = self.by_key.get(req.key)
-
- if dist is None:
- canonical_key = self.normalized_to_canonical_keys.get(req.key)
-
- if canonical_key is not None:
- req.key = canonical_key
- dist = self.by_key.get(canonical_key)
-
- if dist is not None and dist not in req:
- # XXX add more info
- raise VersionConflict(dist, req)
- return dist
-
- def iter_entry_points(self, group, name=None):
- """Yield entry point objects from `group` matching `name`
-
- If `name` is None, yields all entry points in `group` from all
- distributions in the working set, otherwise only ones matching
- both `group` and `name` are yielded (in distribution order).
- """
- return (
- entry
- for dist in self
- for entry in dist.get_entry_map(group).values()
- if name is None or name == entry.name
- )
-
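- # Illustrative sketch (not part of the original class): plugin hosts
- # typically iterate a named group on the global working_set, e.g.:
- #
- # for ep in working_set.iter_entry_points('console_scripts'):
- # print(ep.name, ep.module_name)
-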
- def run_script(self, requires, script_name):
- """Locate distribution for `requires` and run `script_name` script"""
- ns = sys._getframe(1).f_globals
- name = ns['__name__']
- ns.clear()
- ns['__name__'] = name
- self.require(requires)[0].run_script(script_name, ns)
-
- def __iter__(self):
- """Yield distributions for non-duplicate projects in the working set
-
- The yield order is the order in which the items' path entries were
- added to the working set.
- """
- seen = {}
- for item in self.entries:
- if item not in self.entry_keys:
- # workaround a cache issue
- continue
-
- for key in self.entry_keys[item]:
- if key not in seen:
- seen[key] = 1
- yield self.by_key[key]
-
- def add(self, dist, entry=None, insert=True, replace=False):
- """Add `dist` to working set, associated with `entry`
-
- If `entry` is unspecified, it defaults to the ``.location`` of `dist`.
- On exit from this routine, `entry` is added to the end of the working
- set's ``.entries`` (if it wasn't already present).
-
- `dist` is only added to the working set if it's for a project that
- doesn't already have a distribution in the set, unless `replace=True`.
- If it's added, any callbacks registered with the ``subscribe()`` method
- will be called.
- """
- if insert:
- dist.insert_on(self.entries, entry, replace=replace)
-
- if entry is None:
- entry = dist.location
- keys = self.entry_keys.setdefault(entry, [])
- keys2 = self.entry_keys.setdefault(dist.location, [])
- if not replace and dist.key in self.by_key:
- # ignore hidden distros
- return
-
- self.by_key[dist.key] = dist
- normalized_name = packaging.utils.canonicalize_name(dist.key)
- self.normalized_to_canonical_keys[normalized_name] = dist.key
- if dist.key not in keys:
- keys.append(dist.key)
- if dist.key not in keys2:
- keys2.append(dist.key)
- self._added_new(dist)
-
- # FIXME: 'WorkingSet.resolve' is too complex (11)
- def resolve(self, requirements, env=None, installer=None, # noqa: C901
- replace_conflicting=False, extras=None):
- """List all distributions needed to (recursively) meet `requirements`
-
- `requirements` must be a sequence of ``Requirement`` objects. `env`,
- if supplied, should be an ``Environment`` instance. If
- not supplied, it defaults to all distributions available within any
- entry or distribution in the working set. `installer`, if supplied,
- will be invoked with each requirement that cannot be met by an
- already-installed distribution; it should return a ``Distribution`` or
- ``None``.
-
- Unless `replace_conflicting=True`, raises a VersionConflict exception
- if any requirements are found on the path that have the correct name
- but the wrong version. Otherwise, if an `installer` is supplied it
- will be invoked to obtain the correct version of the requirement and
- activate it.
-
- `extras` is a list of the extras to be used with these requirements.
- This is important because extra requirements may look like `my_req;
- extra = "my_extra"`, which would otherwise be interpreted as a purely
- optional requirement. Instead, we want to be able to assert that these
- requirements are truly required.
- """
-
- # set up the stack
- requirements = list(requirements)[::-1]
- # set of processed requirements
- processed = {}
- # key -> dist
- best = {}
- to_activate = []
-
- req_extras = _ReqExtras()
-
- # Mapping of requirement to set of distributions that required it;
- # useful for reporting info about conflicts.
- required_by = collections.defaultdict(set)
-
- while requirements:
- # process dependencies breadth-first
- req = requirements.pop(0)
- if req in processed:
- # Ignore cyclic or redundant dependencies
- continue
-
- if not req_extras.markers_pass(req, extras):
- continue
-
- dist = best.get(req.key)
- if dist is None:
- # Find the best distribution and add it to the map
- dist = self.by_key.get(req.key)
- if dist is None or (dist not in req and replace_conflicting):
- ws = self
- if env is None:
- if dist is None:
- env = Environment(self.entries)
- else:
- # Use an empty environment and workingset to avoid
- # any further conflicts with the conflicting
- # distribution
- env = Environment([])
- ws = WorkingSet([])
- dist = best[req.key] = env.best_match(
- req, ws, installer,
- replace_conflicting=replace_conflicting
- )
- if dist is None:
- requirers = required_by.get(req, None)
- raise DistributionNotFound(req, requirers)
- to_activate.append(dist)
- if dist not in req:
- # Oops, the "best" so far conflicts with a dependency
- dependent_req = required_by[req]
- raise VersionConflict(dist, req).with_context(dependent_req)
-
- # push the new requirements onto the stack
- new_requirements = dist.requires(req.extras)[::-1]
- requirements.extend(new_requirements)
-
- # Register the new requirements needed by req
- for new_requirement in new_requirements:
- required_by[new_requirement].add(req.project_name)
- req_extras[new_requirement] = req.extras
-
- processed[req] = True
-
- # return list of distros to activate
- return to_activate
-
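- # Illustrative sketch (not part of the original class): a typical
- # resolve-then-activate loop over the module-level working_set, e.g.:
- #
- # for dist in working_set.resolve(parse_requirements('packaging')):
- # working_set.add(dist)
-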
- def find_plugins(
- self, plugin_env, full_env=None, installer=None, fallback=True):
- """Find all activatable distributions in `plugin_env`
-
- Example usage::
-
- distributions, errors = working_set.find_plugins(
- Environment(plugin_dirlist)
- )
- # add plugins+libs to sys.path
- map(working_set.add, distributions)
- # display errors
- print('Could not load', errors)
-
- The `plugin_env` should be an ``Environment`` instance that contains
- only distributions that are in the project's "plugin directory" or
- directories. The `full_env`, if supplied, should be an ``Environment``
- that contains all currently-available distributions. If `full_env` is not
- supplied, one is created automatically from the ``WorkingSet`` this
- method is called on, which will typically mean that every directory on
- ``sys.path`` will be scanned for distributions.
-
- `installer` is a standard installer callback as used by the
- ``resolve()`` method. The `fallback` flag indicates whether we should
- attempt to resolve older versions of a plugin if the newest version
- cannot be resolved.
-
- This method returns a 2-tuple: (`distributions`, `error_info`), where
- `distributions` is a list of the distributions found in `plugin_env`
- that were loadable, along with any other distributions that are needed
- to resolve their dependencies. `error_info` is a dictionary mapping
- unloadable plugin distributions to an exception instance describing the
- error that occurred. Usually this will be a ``DistributionNotFound`` or
- ``VersionConflict`` instance.
- """
-
- plugin_projects = list(plugin_env)
- # scan project names in alphabetic order
- plugin_projects.sort()
-
- error_info = {}
- distributions = {}
-
- if full_env is None:
- env = Environment(self.entries)
- env += plugin_env
- else:
- env = full_env + plugin_env
-
- shadow_set = self.__class__([])
- # put all our entries in shadow_set
- list(map(shadow_set.add, self))
-
- for project_name in plugin_projects:
-
- for dist in plugin_env[project_name]:
-
- req = [dist.as_requirement()]
-
- try:
- resolvees = shadow_set.resolve(req, env, installer)
-
- except ResolutionError as v:
- # save error info
- error_info[dist] = v
- if fallback:
- # try the next older version of project
- continue
- else:
- # give up on this project, keep going
- break
-
- else:
- list(map(shadow_set.add, resolvees))
- distributions.update(dict.fromkeys(resolvees))
-
- # success, no need to try any more versions of this project
- break
-
- distributions = list(distributions)
- distributions.sort()
-
- return distributions, error_info
-
- def require(self, *requirements):
- """Ensure that distributions matching `requirements` are activated
-
- `requirements` must be a string or a (possibly-nested) sequence
- thereof, specifying the distributions and versions required. The
- return value is a sequence of the distributions that needed to be
- activated to fulfill the requirements; all relevant distributions are
- included, even if they were already activated in this working set.
- """
- needed = self.resolve(parse_requirements(requirements))
-
- for dist in needed:
- self.add(dist)
-
- return needed
-
- def subscribe(self, callback, existing=True):
- """Invoke `callback` for all distributions
-
- If `existing=True` (default),
- call on all existing ones, as well.
- """
- if callback in self.callbacks:
- return
- self.callbacks.append(callback)
- if not existing:
- return
- for dist in self:
- callback(dist)
-
- def _added_new(self, dist):
- for callback in self.callbacks:
- callback(dist)
-
- def __getstate__(self):
- return (
- self.entries[:], self.entry_keys.copy(), self.by_key.copy(),
- self.normalized_to_canonical_keys.copy(), self.callbacks[:]
- )
-
- def __setstate__(self, e_k_b_n_c):
- entries, keys, by_key, normalized_to_canonical_keys, callbacks = e_k_b_n_c
- self.entries = entries[:]
- self.entry_keys = keys.copy()
- self.by_key = by_key.copy()
- self.normalized_to_canonical_keys = normalized_to_canonical_keys.copy()
- self.callbacks = callbacks[:]
-
-
-class _ReqExtras(dict):
- """
- Map each requirement to the extras that demanded it.
- """
-
- def markers_pass(self, req, extras=None):
- """
- Evaluate markers for req against each extra that
- demanded it.
-
- Return False if the req has a marker and fails
- evaluation. Otherwise, return True.
- """
- extra_evals = (
- req.marker.evaluate({'extra': extra})
- for extra in self.get(req, ()) + (extras or (None,))
- )
- return not req.marker or any(extra_evals)
-
-
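-# Illustrative sketch (not part of the original module): a requirement
-# guarded by an extra-marker only passes once some extra demanded it.
-def _req_extras_demo():
- req = next(iter(parse_requirements('foo; extra == "bar"')))
- extras = _ReqExtras()
- assert not extras.markers_pass(req)
- assert extras.markers_pass(req, extras=('bar',))
-
-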
-class Environment:
- """Searchable snapshot of distributions on a search path"""
-
- def __init__(
- self, search_path=None, platform=get_supported_platform(),
- python=PY_MAJOR):
- """Snapshot distributions available on a search path
-
- Any distributions found on `search_path` are added to the environment.
- `search_path` should be a sequence of ``sys.path`` items. If not
- supplied, ``sys.path`` is used.
-
- `platform` is an optional string specifying the name of the platform
- that platform-specific distributions must be compatible with. If
- unspecified, it defaults to the current platform. `python` is an
- optional string naming the desired version of Python (e.g. ``'3.6'``);
- it defaults to the current version.
-
- You may explicitly set `platform` (and/or `python`) to ``None`` if you
- wish to map *all* distributions, not just those compatible with the
- running platform or Python version.
- """
- self._distmap = {}
- self.platform = platform
- self.python = python
- self.scan(search_path)
-
- def can_add(self, dist):
- """Is distribution `dist` acceptable for this environment?
-
- The distribution must match the platform and python version
- requirements specified when this environment was created, or False
- is returned.
- """
- py_compat = (
- self.python is None
- or dist.py_version is None
- or dist.py_version == self.python
- )
- return py_compat and compatible_platforms(dist.platform, self.platform)
-
- def remove(self, dist):
- """Remove `dist` from the environment"""
- self._distmap[dist.key].remove(dist)
-
- def scan(self, search_path=None):
- """Scan `search_path` for distributions usable in this environment
-
- Any distributions found are added to the environment.
- `search_path` should be a sequence of ``sys.path`` items. If not
- supplied, ``sys.path`` is used. Only distributions conforming to
- the platform/python version defined at initialization are added.
- """
- if search_path is None:
- search_path = sys.path
-
- for item in search_path:
- for dist in find_distributions(item):
- self.add(dist)
-
- def __getitem__(self, project_name):
- """Return a newest-to-oldest list of distributions for `project_name`
-
- Uses case-insensitive `project_name` comparison, assuming all the
- project's distributions use their project's name converted to all
- lowercase as their key.
- """
- distribution_key = project_name.lower()
- return self._distmap.get(distribution_key, [])
-
- def add(self, dist):
- """Add `dist` if we ``can_add()`` it and it has not already been added
- """
- if self.can_add(dist) and dist.has_version():
- dists = self._distmap.setdefault(dist.key, [])
- if dist not in dists:
- dists.append(dist)
- dists.sort(key=operator.attrgetter('hashcmp'), reverse=True)
-
- def best_match(
- self, req, working_set, installer=None, replace_conflicting=False):
- """Find distribution best matching `req` and usable on `working_set`
-
- This calls the ``find(req)`` method of the `working_set` to see if a
- suitable distribution is already active. (This may raise
- ``VersionConflict`` if an unsuitable version of the project is already
- active in the specified `working_set`.) If a suitable distribution
- isn't active, this method returns the newest distribution in the
- environment that meets the ``Requirement`` in `req`. If no suitable
- distribution is found, and `installer` is supplied, then the result of
- calling the environment's ``obtain(req, installer)`` method will be
- returned.
- """
- try:
- dist = working_set.find(req)
- except VersionConflict:
- if not replace_conflicting:
- raise
- dist = None
- if dist is not None:
- return dist
- for dist in self[req.key]:
- if dist in req:
- return dist
- # try to download/install
- return self.obtain(req, installer)
-
- def obtain(self, requirement, installer=None):
- """Obtain a distribution matching `requirement` (e.g. via download)
-
- In the base ``Environment`` class, this routine just returns
- ``installer(requirement)``, unless `installer` is None, in which case
- None is returned instead. This method is a hook that allows subclasses
- to attempt other ways of obtaining a distribution before falling back
- to the `installer` argument."""
- if installer is not None:
- return installer(requirement)
-
- def __iter__(self):
- """Yield the unique project names of the available distributions"""
- for key in self._distmap.keys():
- if self[key]:
- yield key
-
- def __iadd__(self, other):
- """In-place addition of a distribution or environment"""
- if isinstance(other, Distribution):
- self.add(other)
- elif isinstance(other, Environment):
- for project in other:
- for dist in other[project]:
- self.add(dist)
- else:
- raise TypeError("Can't add %r to environment" % (other,))
- return self
-
- def __add__(self, other):
- """Add an environment or distribution to an environment"""
- new = self.__class__([], platform=None, python=None)
- for env in self, other:
- new += env
- return new
-
-
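-# Illustrative sketch (not part of the original module): environments
-# combine with `+` and `+=`, accepting distributions or environments.
-#
-# merged = Environment(['plugins']) + Environment([]) # hypothetical dir
-# merged += some_distribution # hypothetical Distribution instance
-
-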
-# XXX backward compatibility
-AvailableDistributions = Environment
-
-
-class ExtractionError(RuntimeError):
- """An error occurred extracting a resource
-
- The following attributes are available from instances of this exception:
-
- manager
- The resource manager that raised this exception
-
- cache_path
- The base directory for resource extraction
-
- original_error
- The exception instance that caused extraction to fail
- """
-
-
-class ResourceManager:
- """Manage resource extraction and packages"""
- extraction_path = None
-
- def __init__(self):
- self.cached_files = {}
-
- def resource_exists(self, package_or_requirement, resource_name):
- """Does the named resource exist?"""
- return get_provider(package_or_requirement).has_resource(resource_name)
-
- def resource_isdir(self, package_or_requirement, resource_name):
- """Is the named resource an existing directory?"""
- return get_provider(package_or_requirement).resource_isdir(
- resource_name
- )
-
- def resource_filename(self, package_or_requirement, resource_name):
- """Return a true filesystem path for specified resource"""
- return get_provider(package_or_requirement).get_resource_filename(
- self, resource_name
- )
-
- def resource_stream(self, package_or_requirement, resource_name):
- """Return a readable file-like object for specified resource"""
- return get_provider(package_or_requirement).get_resource_stream(
- self, resource_name
- )
-
- def resource_string(self, package_or_requirement, resource_name):
- """Return specified resource as a string"""
- return get_provider(package_or_requirement).get_resource_string(
- self, resource_name
- )
-
- def resource_listdir(self, package_or_requirement, resource_name):
- """List the contents of the named resource directory"""
- return get_provider(package_or_requirement).resource_listdir(
- resource_name
- )
-
- def extraction_error(self):
- """Give an error message for problems extracting file(s)"""
-
- old_exc = sys.exc_info()[1]
- cache_path = self.extraction_path or get_default_cache()
-
- tmpl = textwrap.dedent("""
- Can't extract file(s) to egg cache
-
- The following error occurred while trying to extract file(s)
- to the Python egg cache:
-
- {old_exc}
-
- The Python egg cache directory is currently set to:
-
- {cache_path}
-
- Perhaps your account does not have write access to this directory?
- You can change the cache directory by setting the PYTHON_EGG_CACHE
- environment variable to point to an accessible directory.
- """).lstrip()
- err = ExtractionError(tmpl.format(**locals()))
- err.manager = self
- err.cache_path = cache_path
- err.original_error = old_exc
- raise err
-
- def get_cache_path(self, archive_name, names=()):
- """Return absolute location in cache for `archive_name` and `names`
-
- The parent directory of the resulting path will be created if it does
- not already exist. `archive_name` should be the base filename of the
- enclosing egg (which may not be the name of the enclosing zipfile!),
- including its ".egg" extension. `names`, if provided, should be a
- sequence of path name parts "under" the egg's extraction location.
-
- This method should only be called by resource providers that need to
- obtain an extraction location, and only for names they intend to
- extract, as it tracks the generated names for possible cleanup later.
- """
- extract_path = self.extraction_path or get_default_cache()
- target_path = os.path.join(extract_path, archive_name + '-tmp', *names)
- try:
- _bypass_ensure_directory(target_path)
- except Exception:
- self.extraction_error()
-
- self._warn_unsafe_extraction_path(extract_path)
-
- self.cached_files[target_path] = 1
- return target_path
-
- @staticmethod
- def _warn_unsafe_extraction_path(path):
- """
- If the default extraction path is overridden and set to an insecure
- location, such as /tmp, it opens up an opportunity for an attacker to
- replace an extracted file with an unauthorized payload. Warn the user
- if a known insecure location is used.
-
- See Distribute #375 for more details.
- """
- if os.name == 'nt' and not path.startswith(os.environ['windir']):
- # On Windows, permissions are generally restrictive by default
- # and temp directories are not writable by other users, so
- # bypass the warning.
- return
- mode = os.stat(path).st_mode
- if mode & stat.S_IWOTH or mode & stat.S_IWGRP:
- msg = (
- "Extraction path is writable by group/others "
- "and vulnerable to attack when "
- "used with get_resource_filename ({path}). "
- "Consider a more secure "
- "location (set with .set_extraction_path or the "
- "PYTHON_EGG_CACHE environment variable)."
- ).format(**locals())
- warnings.warn(msg, UserWarning)
-
- def postprocess(self, tempname, filename):
- """Perform any platform-specific postprocessing of `tempname`
-
- This is where Mac header rewrites should be done; other platforms don't
- have anything special they should do.
-
- Resource providers should call this method ONLY after successfully
- extracting a compressed resource. They must NOT call it on resources
- that are already in the filesystem.
-
- `tempname` is the current (temporary) name of the file, and `filename`
- is the name it will be renamed to by the caller after this routine
- returns.
- """
-
- if os.name == 'posix':
- # Make the resource executable
- mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777
- os.chmod(tempname, mode)
-
- def set_extraction_path(self, path):
- """Set the base path where resources will be extracted to, if needed.
-
- If you do not call this routine before any extractions take place, the
- path defaults to the return value of ``get_default_cache()``. (Which
- is based on the ``PYTHON_EGG_CACHE`` environment variable, with various
- platform-specific fallbacks. See that routine's documentation for more
- details.)
-
- Resources are extracted to subdirectories of this path based upon
- information given by the ``IResourceProvider``. You may set this to a
- temporary directory, but then you must call ``cleanup_resources()`` to
- delete the extracted files when done. There is no guarantee that
- ``cleanup_resources()`` will be able to remove all extracted files.
-
- (Note: you may not change the extraction path for a given resource
- manager once resources have been extracted, unless you first call
- ``cleanup_resources()``.)
- """
- if self.cached_files:
- raise ValueError(
- "Can't change extraction path, files already extracted"
- )
-
- self.extraction_path = path
-
- def cleanup_resources(self, force=False):
- """
- Delete all extracted resource files and directories, returning a list
- of the file and directory names that could not be successfully removed.
- This function does not have any concurrency protection, so it should
- generally only be called when the extraction path is a temporary
- directory exclusive to a single process. This method is not
- automatically called; you must call it explicitly or register it as an
- ``atexit`` function if you wish to ensure cleanup of a temporary
- directory used for extractions.
- """
- # XXX
-
-
-def get_default_cache():
- """
- Return the ``PYTHON_EGG_CACHE`` environment variable
- or a platform-relevant user cache dir for an app
- named "Python-Eggs".
- """
- return (
- os.environ.get('PYTHON_EGG_CACHE')
- or appdirs.user_cache_dir(appname='Python-Eggs')
- )
-
-
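-# Illustrative sketch (not part of the original module): the environment
-# variable always wins over the per-user cache directory.
-#
-# os.environ['PYTHON_EGG_CACHE'] = '/tmp/my-eggs' # hypothetical path
-# assert get_default_cache() == '/tmp/my-eggs'
-
-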
-def safe_name(name):
- """Convert an arbitrary string to a standard distribution name
-
- Any runs of non-alphanumeric/. characters are replaced with a single '-'.
- """
- return re.sub('[^A-Za-z0-9.]+', '-', name)
-
-
-def safe_version(version):
- """
- Convert an arbitrary string to a standard version string
- """
- try:
- # normalize the version
- return str(packaging.version.Version(version))
- except packaging.version.InvalidVersion:
- version = version.replace(' ', '.')
- return re.sub('[^A-Za-z0-9.]+', '-', version)
-
-
-def safe_extra(extra):
- """Convert an arbitrary string to a standard 'extra' name
-
- Any runs of characters other than alphanumerics, '.', or '-' are
- replaced with a single '_', and the result is always lowercased.
- """
- return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower()
-
-
-def to_filename(name):
- """Convert a project or version name to its filename-escaped form
-
- Any '-' characters are currently replaced with '_'.
- """
- return name.replace('-', '_')
-
-
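-# Illustrative sketch (not part of the original module): the sanitizers
-# above compose when turning a project name into an on-disk filename.
-def _sanitizer_demo():
- assert safe_name('my.project!!name') == 'my.project-name'
- assert safe_version('2.1-rc2') == '2.1rc2'
- assert to_filename('my-project') == 'my_project'
-
-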
-def invalid_marker(text):
- """
- Validate text as a PEP 508 environment marker; return an exception
- instance if the marker is invalid, or False otherwise.
- """
- try:
- evaluate_marker(text)
- except SyntaxError as e:
- e.filename = None
- e.lineno = None
- return e
- return False
-
-
-def evaluate_marker(text, extra=None):
- """
- Evaluate a PEP 508 environment marker.
- Return a boolean indicating the marker result in this environment.
- Raise SyntaxError if marker is invalid.
-
- This implementation delegates to the vendored 'packaging.markers' module.
- """
- try:
- marker = packaging.markers.Marker(text)
- return marker.evaluate()
- except packaging.markers.InvalidMarker as e:
- raise SyntaxError(e) from e
-
-
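-# Illustrative sketch (not part of the original module): validate, then
-# evaluate a marker against the running interpreter.
-def _marker_demo():
- assert not invalid_marker('python_version >= "2.7"')
- assert evaluate_marker('python_version >= "2.7"')
-
-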
-class NullProvider:
- """Try to implement resources and metadata for arbitrary PEP 302 loaders"""
-
- egg_name = None
- egg_info = None
- loader = None
-
- def __init__(self, module):
- self.loader = getattr(module, '__loader__', None)
- self.module_path = os.path.dirname(getattr(module, '__file__', ''))
-
- def get_resource_filename(self, manager, resource_name):
- return self._fn(self.module_path, resource_name)
-
- def get_resource_stream(self, manager, resource_name):
- return io.BytesIO(self.get_resource_string(manager, resource_name))
-
- def get_resource_string(self, manager, resource_name):
- return self._get(self._fn(self.module_path, resource_name))
-
- def has_resource(self, resource_name):
- return self._has(self._fn(self.module_path, resource_name))
-
- def _get_metadata_path(self, name):
- return self._fn(self.egg_info, name)
-
- def has_metadata(self, name):
- if not self.egg_info:
- return self.egg_info
-
- path = self._get_metadata_path(name)
- return self._has(path)
-
- def get_metadata(self, name):
- if not self.egg_info:
- return ""
- path = self._get_metadata_path(name)
- value = self._get(path)
- try:
- return value.decode('utf-8')
- except UnicodeDecodeError as exc:
- # Include the path in the error message to simplify
- # troubleshooting, and without changing the exception type.
- exc.reason += ' in {} file at path: {}'.format(name, path)
- raise
-
- def get_metadata_lines(self, name):
- return yield_lines(self.get_metadata(name))
-
- def resource_isdir(self, resource_name):
- return self._isdir(self._fn(self.module_path, resource_name))
-
- def metadata_isdir(self, name):
- return self.egg_info and self._isdir(self._fn(self.egg_info, name))
-
- def resource_listdir(self, resource_name):
- return self._listdir(self._fn(self.module_path, resource_name))
-
- def metadata_listdir(self, name):
- if self.egg_info:
- return self._listdir(self._fn(self.egg_info, name))
- return []
-
- def run_script(self, script_name, namespace):
- script = 'scripts/' + script_name
- if not self.has_metadata(script):
- raise ResolutionError(
- "Script {script!r} not found in metadata at {self.egg_info!r}"
- .format(**locals()),
- )
- script_text = self.get_metadata(script).replace('\r\n', '\n')
- script_text = script_text.replace('\r', '\n')
- script_filename = self._fn(self.egg_info, script)
- namespace['__file__'] = script_filename
- if os.path.exists(script_filename):
- with open(script_filename) as fid:
- source = fid.read()
- code = compile(source, script_filename, 'exec')
- exec(code, namespace, namespace)
- else:
- from linecache import cache
- cache[script_filename] = (
- len(script_text), 0, script_text.split('\n'), script_filename
- )
- script_code = compile(script_text, script_filename, 'exec')
- exec(script_code, namespace, namespace)
-
- def _has(self, path):
- raise NotImplementedError(
- "Can't perform this operation for unregistered loader type"
- )
-
- def _isdir(self, path):
- raise NotImplementedError(
- "Can't perform this operation for unregistered loader type"
- )
-
- def _listdir(self, path):
- raise NotImplementedError(
- "Can't perform this operation for unregistered loader type"
- )
-
- def _fn(self, base, resource_name):
- self._validate_resource_path(resource_name)
- if resource_name:
- return os.path.join(base, *resource_name.split('/'))
- return base
-
- @staticmethod
- def _validate_resource_path(path):
- """
- Validate the resource paths according to the docs.
- https://setuptools.pypa.io/en/latest/pkg_resources.html#basic-resource-access
-
- >>> warned = getfixture('recwarn')
- >>> warnings.simplefilter('always')
- >>> vrp = NullProvider._validate_resource_path
- >>> vrp('foo/bar.txt')
- >>> bool(warned)
- False
- >>> vrp('../foo/bar.txt')
- >>> bool(warned)
- True
- >>> warned.clear()
- >>> vrp('/foo/bar.txt')
- >>> bool(warned)
- True
- >>> vrp('foo/../../bar.txt')
- >>> bool(warned)
- True
- >>> warned.clear()
- >>> vrp('foo/f../bar.txt')
- >>> bool(warned)
- False
-
- Windows path separators are straight-up disallowed.
- >>> vrp(r'\\foo/bar.txt')
- Traceback (most recent call last):
- ...
- ValueError: Use of .. or absolute path in a resource path \
-is not allowed.
-
- >>> vrp(r'C:\\foo/bar.txt')
- Traceback (most recent call last):
- ...
- ValueError: Use of .. or absolute path in a resource path \
-is not allowed.
-
- Blank values are allowed
-
- >>> vrp('')
- >>> bool(warned)
- False
-
- Non-string values are not.
-
- >>> vrp(None)
- Traceback (most recent call last):
- ...
- AttributeError: ...
- """
- invalid = (
- os.path.pardir in path.split(posixpath.sep) or
- posixpath.isabs(path) or
- ntpath.isabs(path)
- )
- if not invalid:
- return
-
- msg = "Use of .. or absolute path in a resource path is not allowed."
-
- # Aggressively disallow Windows absolute paths
- if ntpath.isabs(path) and not posixpath.isabs(path):
- raise ValueError(msg)
-
- # for compatibility, warn; in future
- # raise ValueError(msg)
- warnings.warn(
- msg[:-1] + " and will raise exceptions in a future release.",
- DeprecationWarning,
- stacklevel=4,
- )
-
- def _get(self, path):
- if hasattr(self.loader, 'get_data'):
- return self.loader.get_data(path)
- raise NotImplementedError(
- "Can't perform this operation for loaders without 'get_data()'"
- )
-
-
-register_loader_type(object, NullProvider)
-
-
-def _parents(path):
- """
- yield all parents of path including path
- """
- last = None
- while path != last:
- yield path
- last = path
- path, _ = os.path.split(path)
-
-
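-# Illustrative sketch (not part of the original module), on POSIX paths:
-#
-# list(_parents('/a/b/c')) -> ['/a/b/c', '/a/b', '/a', '/']
-
-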
-class EggProvider(NullProvider):
- """Provider based on a virtual filesystem"""
-
- def __init__(self, module):
- super().__init__(module)
- self._setup_prefix()
-
- def _setup_prefix(self):
- # Assume that metadata may be nested inside a "basket"
- # of multiple eggs and use module_path instead of .archive.
- eggs = filter(_is_egg_path, _parents(self.module_path))
- egg = next(eggs, None)
- egg and self._set_egg(egg)
-
- def _set_egg(self, path):
- self.egg_name = os.path.basename(path)
- self.egg_info = os.path.join(path, 'EGG-INFO')
- self.egg_root = path
-
-
-class DefaultProvider(EggProvider):
- """Provides access to package resources in the filesystem"""
-
- def _has(self, path):
- return os.path.exists(path)
-
- def _isdir(self, path):
- return os.path.isdir(path)
-
- def _listdir(self, path):
- return os.listdir(path)
-
- def get_resource_stream(self, manager, resource_name):
- return open(self._fn(self.module_path, resource_name), 'rb')
-
- def _get(self, path):
- with open(path, 'rb') as stream:
- return stream.read()
-
- @classmethod
- def _register(cls):
- loader_names = 'SourceFileLoader', 'SourcelessFileLoader',
- for name in loader_names:
- loader_cls = getattr(importlib_machinery, name, type(None))
- register_loader_type(loader_cls, cls)
-
-
-DefaultProvider._register()
-
-
-class EmptyProvider(NullProvider):
- """Provider that returns nothing for all requests"""
-
- module_path = None
-
- _isdir = _has = lambda self, path: False
-
- def _get(self, path):
- return ''
-
- def _listdir(self, path):
- return []
-
- def __init__(self):
- pass
-
-
-empty_provider = EmptyProvider()
-
-
-class ZipManifests(dict):
- """
- zip manifest builder
- """
-
- @classmethod
- def build(cls, path):
- """
- Build a dictionary similar to the zipimport directory
- caches, except instead of tuples, store ZipInfo objects.
-
- Use a platform-specific path separator (os.sep) for the path keys
- for compatibility with pypy on Windows.
- """
- with zipfile.ZipFile(path) as zfile:
- items = (
- (
- name.replace('/', os.sep),
- zfile.getinfo(name),
- )
- for name in zfile.namelist()
- )
- return dict(items)
-
- load = build
-
-
-class MemoizedZipManifests(ZipManifests):
- """
- Memoized zipfile manifests.
- """
- manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime')
-
- def load(self, path):
- """
- Load a manifest at path or return a suitable manifest already loaded.
- """
- path = os.path.normpath(path)
- mtime = os.stat(path).st_mtime
-
- if path not in self or self[path].mtime != mtime:
- manifest = self.build(path)
- self[path] = self.manifest_mod(manifest, mtime)
-
- return self[path].manifest
-
-
-class ZipProvider(EggProvider):
- """Resource support for zips and eggs"""
-
- eagers = None
- _zip_manifests = MemoizedZipManifests()
-
- def __init__(self, module):
- super().__init__(module)
- self.zip_pre = self.loader.archive + os.sep
-
- def _zipinfo_name(self, fspath):
- # Convert a virtual filename (full path to file) into a zipfile subpath
- # usable with the zipimport directory cache for our target archive
- fspath = fspath.rstrip(os.sep)
- if fspath == self.loader.archive:
- return ''
- if fspath.startswith(self.zip_pre):
- return fspath[len(self.zip_pre):]
- raise AssertionError(
- "%s is not a subpath of %s" % (fspath, self.zip_pre)
- )
-
- def _parts(self, zip_path):
- # Convert a zipfile subpath into an egg-relative path part list.
- # pseudo-fs path
- fspath = self.zip_pre + zip_path
- if fspath.startswith(self.egg_root + os.sep):
- return fspath[len(self.egg_root) + 1:].split(os.sep)
- raise AssertionError(
- "%s is not a subpath of %s" % (fspath, self.egg_root)
- )
-
- @property
- def zipinfo(self):
- return self._zip_manifests.load(self.loader.archive)
-
- def get_resource_filename(self, manager, resource_name):
- if not self.egg_name:
- raise NotImplementedError(
- "resource_filename() only supported for .egg, not .zip"
- )
- # no need to lock for extraction, since we use temp names
- zip_path = self._resource_to_zip(resource_name)
- eagers = self._get_eager_resources()
- if '/'.join(self._parts(zip_path)) in eagers:
- for name in eagers:
- self._extract_resource(manager, self._eager_to_zip(name))
- return self._extract_resource(manager, zip_path)
-
- @staticmethod
- def _get_date_and_size(zip_stat):
- size = zip_stat.file_size
- # ymdhms+wday, yday, dst
- date_time = zip_stat.date_time + (0, 0, -1)
- # 1980 offset already done
- timestamp = time.mktime(date_time)
- return timestamp, size
-
- # FIXME: 'ZipProvider._extract_resource' is too complex (12)
- def _extract_resource(self, manager, zip_path): # noqa: C901
-
- if zip_path in self._index():
- for name in self._index()[zip_path]:
- last = self._extract_resource(
- manager, os.path.join(zip_path, name)
- )
- # return the extracted directory name
- return os.path.dirname(last)
-
- timestamp, size = self._get_date_and_size(self.zipinfo[zip_path])
-
- if not WRITE_SUPPORT:
- raise IOError('"os.rename" and "os.unlink" are not supported '
- 'on this platform')
- try:
-
- real_path = manager.get_cache_path(
- self.egg_name, self._parts(zip_path)
- )
-
- if self._is_current(real_path, zip_path):
- return real_path
-
- outf, tmpnam = _mkstemp(
- ".$extract",
- dir=os.path.dirname(real_path),
- )
- os.write(outf, self.loader.get_data(zip_path))
- os.close(outf)
- utime(tmpnam, (timestamp, timestamp))
- manager.postprocess(tmpnam, real_path)
-
- try:
- rename(tmpnam, real_path)
-
- except os.error:
- if os.path.isfile(real_path):
- if self._is_current(real_path, zip_path):
- # the file became current since it was checked above,
- # so proceed.
- return real_path
- # on Windows, delete the old file and retry
- elif os.name == 'nt':
- unlink(real_path)
- rename(tmpnam, real_path)
- return real_path
- raise
-
- except os.error:
- # report a user-friendly error
- manager.extraction_error()
-
- return real_path
-
- def _is_current(self, file_path, zip_path):
- """
- Return True if the file_path is current for this zip_path
- """
- timestamp, size = self._get_date_and_size(self.zipinfo[zip_path])
- if not os.path.isfile(file_path):
- return False
- stat = os.stat(file_path)
- if stat.st_size != size or stat.st_mtime != timestamp:
- return False
- # check that the contents match
- zip_contents = self.loader.get_data(zip_path)
- with open(file_path, 'rb') as f:
- file_contents = f.read()
- return zip_contents == file_contents
-
- def _get_eager_resources(self):
- if self.eagers is None:
- eagers = []
- for name in ('native_libs.txt', 'eager_resources.txt'):
- if self.has_metadata(name):
- eagers.extend(self.get_metadata_lines(name))
- self.eagers = eagers
- return self.eagers
-
- def _index(self):
- try:
- return self._dirindex
- except AttributeError:
- ind = {}
- for path in self.zipinfo:
- parts = path.split(os.sep)
- while parts:
- parent = os.sep.join(parts[:-1])
- if parent in ind:
- ind[parent].append(parts[-1])
- break
- else:
- ind[parent] = [parts.pop()]
- self._dirindex = ind
- return ind
-
- def _has(self, fspath):
- zip_path = self._zipinfo_name(fspath)
- return zip_path in self.zipinfo or zip_path in self._index()
-
- def _isdir(self, fspath):
- return self._zipinfo_name(fspath) in self._index()
-
- def _listdir(self, fspath):
- return list(self._index().get(self._zipinfo_name(fspath), ()))
-
- def _eager_to_zip(self, resource_name):
- return self._zipinfo_name(self._fn(self.egg_root, resource_name))
-
- def _resource_to_zip(self, resource_name):
- return self._zipinfo_name(self._fn(self.module_path, resource_name))
-
-
-register_loader_type(zipimport.zipimporter, ZipProvider)
-
-
-class FileMetadata(EmptyProvider):
- """Metadata handler for standalone PKG-INFO files
-
- Usage::
-
- metadata = FileMetadata("/path/to/PKG-INFO")
-
- This provider rejects all data and metadata requests except for PKG-INFO,
- which is treated as existing, and will be the contents of the file at
- the provided location.
- """
-
- def __init__(self, path):
- self.path = path
-
- def _get_metadata_path(self, name):
- return self.path
-
- def has_metadata(self, name):
- return name == 'PKG-INFO' and os.path.isfile(self.path)
-
- def get_metadata(self, name):
- if name != 'PKG-INFO':
- raise KeyError("No metadata except PKG-INFO is available")
-
- with io.open(self.path, encoding='utf-8', errors="replace") as f:
- metadata = f.read()
- self._warn_on_replacement(metadata)
- return metadata
-
- def _warn_on_replacement(self, metadata):
- replacement_char = '�'
- if replacement_char in metadata:
- tmpl = "{self.path} could not be properly decoded in UTF-8"
- msg = tmpl.format(**locals())
- warnings.warn(msg)
-
- def get_metadata_lines(self, name):
- return yield_lines(self.get_metadata(name))
-
-
-class PathMetadata(DefaultProvider):
- """Metadata provider for egg directories
-
- Usage::
-
- # Development eggs:
-
- egg_info = "/path/to/PackageName.egg-info"
- base_dir = os.path.dirname(egg_info)
- metadata = PathMetadata(base_dir, egg_info)
- dist_name = os.path.splitext(os.path.basename(egg_info))[0]
- dist = Distribution(basedir, project_name=dist_name, metadata=metadata)
-
- # Unpacked egg directories:
-
- egg_path = "/path/to/PackageName-ver-pyver-etc.egg"
- metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO'))
- dist = Distribution.from_filename(egg_path, metadata=metadata)
- """
-
- def __init__(self, path, egg_info):
- self.module_path = path
- self.egg_info = egg_info
-
-
-class EggMetadata(ZipProvider):
- """Metadata provider for .egg files"""
-
- def __init__(self, importer):
- """Create a metadata provider from a zipimporter"""
-
- self.zip_pre = importer.archive + os.sep
- self.loader = importer
- if importer.prefix:
- self.module_path = os.path.join(importer.archive, importer.prefix)
- else:
- self.module_path = importer.archive
- self._setup_prefix()
-
-
-_declare_state('dict', _distribution_finders={})
-
-
-def register_finder(importer_type, distribution_finder):
- """Register `distribution_finder` to find distributions in sys.path items
-
- `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item
- handler), and `distribution_finder` is a callable that, passed a path
- item and the importer instance, yields ``Distribution`` instances found on
- that path item. See ``pkg_resources.find_on_path`` for an example."""
- _distribution_finders[importer_type] = distribution_finder
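-
-
-# Illustrative sketch (not part of the original module): any callable with
-# the (importer, path_item, only) signature can be registered for a
-# hypothetical importer type; this one simply yields no distributions.
-#
-#     class NullImporter:
-#         pass
-#
-#     def find_none(importer, path_item, only=False):
-#         return iter(())
-#
-#     register_finder(NullImporter, find_none)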
-
-
-def find_distributions(path_item, only=False):
- """Yield distributions accessible via `path_item`"""
- importer = get_importer(path_item)
- finder = _find_adapter(_distribution_finders, importer)
- return finder(importer, path_item, only)
-
-
-def find_eggs_in_zip(importer, path_item, only=False):
- """
- Find eggs in zip files; possibly multiple nested eggs.
- """
- if importer.archive.endswith('.whl'):
- # wheels are not supported with this finder
- # they don't have PKG-INFO metadata, and won't ever contain eggs
- return
- metadata = EggMetadata(importer)
- if metadata.has_metadata('PKG-INFO'):
- yield Distribution.from_filename(path_item, metadata=metadata)
- if only:
- # don't yield nested distros
- return
- for subitem in metadata.resource_listdir(''):
- if _is_egg_path(subitem):
- subpath = os.path.join(path_item, subitem)
- dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath)
- for dist in dists:
- yield dist
- elif subitem.lower().endswith(('.dist-info', '.egg-info')):
- subpath = os.path.join(path_item, subitem)
- submeta = EggMetadata(zipimport.zipimporter(subpath))
- submeta.egg_info = subpath
- yield Distribution.from_location(path_item, subitem, submeta)
-
-
-register_finder(zipimport.zipimporter, find_eggs_in_zip)
-
-
-def find_nothing(importer, path_item, only=False):
- return ()
-
-
-register_finder(object, find_nothing)
-
-
-def _by_version_descending(names):
- """
- Given a list of filenames, return them in descending order
- by version number.
-
- >>> names = 'bar', 'foo', 'Python-2.7.10.egg', 'Python-2.7.2.egg'
- >>> _by_version_descending(names)
- ['Python-2.7.10.egg', 'Python-2.7.2.egg', 'bar', 'foo']
- >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.egg'
- >>> _by_version_descending(names)
- ['Setuptools-1.2.3.egg', 'Setuptools-1.2.3b1.egg']
- >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.post1.egg'
- >>> _by_version_descending(names)
- ['Setuptools-1.2.3.post1.egg', 'Setuptools-1.2.3b1.egg']
- """
- def try_parse(name):
- """
- Attempt to parse as a version or return a null version.
- """
- try:
- return packaging.version.Version(name)
- except Exception:
- return packaging.version.Version('0')
-
- def _by_version(name):
- """
- Parse each component of the filename
- """
- name, ext = os.path.splitext(name)
- parts = itertools.chain(name.split('-'), [ext])
- return [try_parse(part) for part in parts]
-
- return sorted(names, key=_by_version, reverse=True)
-
-
-def find_on_path(importer, path_item, only=False):
- """Yield distributions accessible on a sys.path directory"""
- path_item = _normalize_cached(path_item)
-
- if _is_unpacked_egg(path_item):
- yield Distribution.from_filename(
- path_item, metadata=PathMetadata(
- path_item, os.path.join(path_item, 'EGG-INFO')
- )
- )
- return
-
- entries = (
- os.path.join(path_item, child)
- for child in safe_listdir(path_item)
- )
-
- # for performance, before sorting by version,
- # screen entries for only those that will yield
- # distributions
- filtered = (
- entry
- for entry in entries
- if dist_factory(path_item, entry, only)
- )
-
- # scan for .egg and .egg-info in directory
- path_item_entries = _by_version_descending(filtered)
- for entry in path_item_entries:
- fullpath = os.path.join(path_item, entry)
- factory = dist_factory(path_item, entry, only)
- for dist in factory(fullpath):
- yield dist
-
-
-def dist_factory(path_item, entry, only):
- """Return a dist_factory for the given entry."""
- lower = entry.lower()
- is_egg_info = lower.endswith('.egg-info')
- is_dist_info = (
- lower.endswith('.dist-info') and
- os.path.isdir(os.path.join(path_item, entry))
- )
- is_meta = is_egg_info or is_dist_info
- return (
- distributions_from_metadata
- if is_meta else
- find_distributions
- if not only and _is_egg_path(entry) else
- resolve_egg_link
- if not only and lower.endswith('.egg-link') else
- NoDists()
- )
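-
-
-# Illustrative mapping (not in the original source): a 'pkg.egg-info'
-# entry resolves to distributions_from_metadata, a 'pkg.egg' entry to
-# find_distributions, a 'pkg.egg-link' entry to resolve_egg_link, and
-# anything else to the falsy NoDists() sentinel below.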
-
-
-class NoDists:
- """
- >>> bool(NoDists())
- False
-
- >>> list(NoDists()('anything'))
- []
- """
- def __bool__(self):
- return False
-
- def __call__(self, fullpath):
- return iter(())
-
-
-def safe_listdir(path):
- """
- Attempt to list contents of path, but suppress some exceptions.
- """
- try:
- return os.listdir(path)
- except (PermissionError, NotADirectoryError):
- pass
- except OSError as e:
-        # Ignore the directory if it does not exist, is not a directory,
-        # or permission is denied
- if e.errno not in (errno.ENOTDIR, errno.EACCES, errno.ENOENT):
- raise
- return ()
-
-
-def distributions_from_metadata(path):
- root = os.path.dirname(path)
- if os.path.isdir(path):
- if len(os.listdir(path)) == 0:
- # empty metadata dir; skip
- return
- metadata = PathMetadata(root, path)
- else:
- metadata = FileMetadata(path)
- entry = os.path.basename(path)
- yield Distribution.from_location(
- root, entry, metadata, precedence=DEVELOP_DIST,
- )
-
-
-def non_empty_lines(path):
- """
- Yield non-empty lines from file at path
- """
- with open(path) as f:
- for line in f:
- line = line.strip()
- if line:
- yield line
-
-
-def resolve_egg_link(path):
- """
- Given a path to an .egg-link, resolve distributions
- present in the referenced path.
- """
- referenced_paths = non_empty_lines(path)
- resolved_paths = (
- os.path.join(os.path.dirname(path), ref)
- for ref in referenced_paths
- )
- dist_groups = map(find_distributions, resolved_paths)
- return next(dist_groups, ())
-
-
-register_finder(pkgutil.ImpImporter, find_on_path)
-
-if hasattr(importlib_machinery, 'FileFinder'):
- register_finder(importlib_machinery.FileFinder, find_on_path)
-
-_declare_state('dict', _namespace_handlers={})
-_declare_state('dict', _namespace_packages={})
-
-
-def register_namespace_handler(importer_type, namespace_handler):
- """Register `namespace_handler` to declare namespace packages
-
- `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item
- handler), and `namespace_handler` is a callable like this::
-
- def namespace_handler(importer, path_entry, moduleName, module):
- # return a path_entry to use for child packages
-
- Namespace handlers are only called if the importer object has already
- agreed that it can handle the relevant path item, and they should only
- return a subpath if the module __path__ does not already contain an
- equivalent subpath. For an example namespace handler, see
- ``pkg_resources.file_ns_handler``.
- """
- _namespace_handlers[importer_type] = namespace_handler
-
-
-def _handle_ns(packageName, path_item):
- """Ensure that named package includes a subpath of path_item (if needed)"""
-
- importer = get_importer(path_item)
- if importer is None:
- return None
-
-    # use find_spec (PEP 451) and fall back to find_module (PEP 302)
- try:
- spec = importer.find_spec(packageName)
- except AttributeError:
- # capture warnings due to #1111
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- loader = importer.find_module(packageName)
- else:
- loader = spec.loader if spec else None
-
- if loader is None:
- return None
- module = sys.modules.get(packageName)
- if module is None:
- module = sys.modules[packageName] = types.ModuleType(packageName)
- module.__path__ = []
- _set_parent_ns(packageName)
- elif not hasattr(module, '__path__'):
- raise TypeError("Not a package:", packageName)
- handler = _find_adapter(_namespace_handlers, importer)
- subpath = handler(importer, path_item, packageName, module)
- if subpath is not None:
- path = module.__path__
- path.append(subpath)
- importlib.import_module(packageName)
- _rebuild_mod_path(path, packageName, module)
- return subpath
-
-
-def _rebuild_mod_path(orig_path, package_name, module):
- """
- Rebuild module.__path__ ensuring that all entries are ordered
- corresponding to their sys.path order
- """
- sys_path = [_normalize_cached(p) for p in sys.path]
-
- def safe_sys_path_index(entry):
- """
- Workaround for #520 and #513.
- """
- try:
- return sys_path.index(entry)
- except ValueError:
- return float('inf')
-
- def position_in_sys_path(path):
- """
- Return the ordinal of the path based on its position in sys.path
- """
- path_parts = path.split(os.sep)
- module_parts = package_name.count('.') + 1
- parts = path_parts[:-module_parts]
- return safe_sys_path_index(_normalize_cached(os.sep.join(parts)))
-
- new_path = sorted(orig_path, key=position_in_sys_path)
- new_path = [_normalize_cached(p) for p in new_path]
-
- if isinstance(module.__path__, list):
- module.__path__[:] = new_path
- else:
- module.__path__ = new_path
-
-
-def declare_namespace(packageName):
- """Declare that package 'packageName' is a namespace package"""
-
- _imp.acquire_lock()
- try:
- if packageName in _namespace_packages:
- return
-
- path = sys.path
- parent, _, _ = packageName.rpartition('.')
-
- if parent:
- declare_namespace(parent)
- if parent not in _namespace_packages:
- __import__(parent)
- try:
- path = sys.modules[parent].__path__
- except AttributeError as e:
- raise TypeError("Not a package:", parent) from e
-
- # Track what packages are namespaces, so when new path items are added,
- # they can be updated
- _namespace_packages.setdefault(parent or None, []).append(packageName)
- _namespace_packages.setdefault(packageName, [])
-
- for path_item in path:
- # Ensure all the parent's path items are reflected in the child,
- # if they apply
- _handle_ns(packageName, path_item)
-
- finally:
- _imp.release_lock()
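-
-
-# Illustrative usage (not in the original source): a namespace package's
-# __init__.py traditionally calls
-#
-#     __import__('pkg_resources').declare_namespace(__name__)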
-
-
-def fixup_namespace_packages(path_item, parent=None):
- """Ensure that previously-declared namespace packages include path_item"""
- _imp.acquire_lock()
- try:
- for package in _namespace_packages.get(parent, ()):
- subpath = _handle_ns(package, path_item)
- if subpath:
- fixup_namespace_packages(subpath, package)
- finally:
- _imp.release_lock()
-
-
-def file_ns_handler(importer, path_item, packageName, module):
- """Compute an ns-package subpath for a filesystem or zipfile importer"""
-
- subpath = os.path.join(path_item, packageName.split('.')[-1])
- normalized = _normalize_cached(subpath)
- for item in module.__path__:
- if _normalize_cached(item) == normalized:
- break
- else:
- # Only return the path if it's not already there
- return subpath
-
-
-register_namespace_handler(pkgutil.ImpImporter, file_ns_handler)
-register_namespace_handler(zipimport.zipimporter, file_ns_handler)
-
-if hasattr(importlib_machinery, 'FileFinder'):
- register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler)
-
-
-def null_ns_handler(importer, path_item, packageName, module):
- return None
-
-
-register_namespace_handler(object, null_ns_handler)
-
-
-def normalize_path(filename):
- """Normalize a file/dir name for comparison purposes"""
- return os.path.normcase(os.path.realpath(os.path.normpath(
- _cygwin_patch(filename))))
-
-
-def _cygwin_patch(filename): # pragma: nocover
- """
-    Contrary to POSIX 2008, on Cygwin, getcwd(3) contains
-    symlink components. Using os.path.abspath() works around
-    this limitation. A fix in os.getcwd() would probably be
-    better, in Cygwin even more so, except that this seems
-    to be by design...
- """
- return os.path.abspath(filename) if sys.platform == 'cygwin' else filename
-
-
-def _normalize_cached(filename, _cache={}):
- try:
- return _cache[filename]
- except KeyError:
- _cache[filename] = result = normalize_path(filename)
- return result
-
-
-def _is_egg_path(path):
- """
- Determine if given path appears to be an egg.
- """
- return _is_zip_egg(path) or _is_unpacked_egg(path)
-
-
-def _is_zip_egg(path):
- return (
- path.lower().endswith('.egg') and
- os.path.isfile(path) and
- zipfile.is_zipfile(path)
- )
-
-
-def _is_unpacked_egg(path):
- """
- Determine if given path appears to be an unpacked egg.
- """
- return (
- path.lower().endswith('.egg') and
- os.path.isfile(os.path.join(path, 'EGG-INFO', 'PKG-INFO'))
- )
-
-
-def _set_parent_ns(packageName):
- parts = packageName.split('.')
- name = parts.pop()
- if parts:
- parent = '.'.join(parts)
- setattr(sys.modules[parent], name, sys.modules[packageName])
-
-
-MODULE = re.compile(r"\w+(\.\w+)*$").match
-EGG_NAME = re.compile(
- r"""
-    (?P<name>[^-]+) (
-        -(?P<ver>[^-]+) (
-            -py(?P<pyver>[^-]+) (
-                -(?P<plat>.+)
- )?
- )?
- )?
- """,
- re.VERBOSE | re.IGNORECASE,
-).match
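-
-# Illustrative example (not in the original source): for a basename such
-# as 'FooPkg-1.2-py3.8-win32', EGG_NAME captures name='FooPkg', ver='1.2',
-# pyver='3.8' and plat='win32'; each trailing group is optional.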
-
-
-class EntryPoint:
- """Object representing an advertised importable object"""
-
- def __init__(self, name, module_name, attrs=(), extras=(), dist=None):
- if not MODULE(module_name):
- raise ValueError("Invalid module name", module_name)
- self.name = name
- self.module_name = module_name
- self.attrs = tuple(attrs)
- self.extras = tuple(extras)
- self.dist = dist
-
- def __str__(self):
- s = "%s = %s" % (self.name, self.module_name)
- if self.attrs:
- s += ':' + '.'.join(self.attrs)
- if self.extras:
- s += ' [%s]' % ','.join(self.extras)
- return s
-
- def __repr__(self):
- return "EntryPoint.parse(%r)" % str(self)
-
- def load(self, require=True, *args, **kwargs):
- """
- Require packages for this EntryPoint, then resolve it.
- """
- if not require or args or kwargs:
- warnings.warn(
- "Parameters to load are deprecated. Call .resolve and "
- ".require separately.",
- PkgResourcesDeprecationWarning,
- stacklevel=2,
- )
- if require:
- self.require(*args, **kwargs)
- return self.resolve()
-
- def resolve(self):
- """
- Resolve the entry point from its module and attrs.
- """
- module = __import__(self.module_name, fromlist=['__name__'], level=0)
- try:
- return functools.reduce(getattr, self.attrs, module)
- except AttributeError as exc:
- raise ImportError(str(exc)) from exc
-
- def require(self, env=None, installer=None):
- if self.extras and not self.dist:
- raise UnknownExtra("Can't require() without a distribution", self)
-
- # Get the requirements for this entry point with all its extras and
- # then resolve them. We have to pass `extras` along when resolving so
- # that the working set knows what extras we want. Otherwise, for
- # dist-info distributions, the working set will assume that the
- # requirements for that extra are purely optional and skip over them.
- reqs = self.dist.requires(self.extras)
- items = working_set.resolve(reqs, env, installer, extras=self.extras)
- list(map(working_set.add, items))
-
- pattern = re.compile(
- r'\s*'
-        r'(?P<name>.+?)\s*'
-        r'=\s*'
-        r'(?P<module>[\w.]+)\s*'
-        r'(:\s*(?P<attr>[\w.]+))?\s*'
-        r'(?P<extras>\[.*\])?\s*$'
- )
-
- @classmethod
- def parse(cls, src, dist=None):
- """Parse a single entry point from string `src`
-
- Entry point syntax follows the form::
-
- name = some.module:some.attr [extra1, extra2]
-
- The entry name and module name are required, but the ``:attrs`` and
- ``[extras]`` parts are optional
- """
- m = cls.pattern.match(src)
- if not m:
- msg = "EntryPoint must be in 'name=module:attrs [extras]' format"
- raise ValueError(msg, src)
- res = m.groupdict()
- extras = cls._parse_extras(res['extras'])
- attrs = res['attr'].split('.') if res['attr'] else ()
- return cls(res['name'], res['module'], attrs, extras, dist)
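-        # Illustrative example (not in the original source):
-        #   EntryPoint.parse("main = pkg.cli:run [dev]") returns an
-        #   EntryPoint with name='main', module_name='pkg.cli',
-        #   attrs=('run',) and extras=('dev',).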
-
- @classmethod
- def _parse_extras(cls, extras_spec):
- if not extras_spec:
- return ()
- req = Requirement.parse('x' + extras_spec)
- if req.specs:
- raise ValueError()
- return req.extras
-
- @classmethod
- def parse_group(cls, group, lines, dist=None):
- """Parse an entry point group"""
- if not MODULE(group):
- raise ValueError("Invalid group name", group)
- this = {}
- for line in yield_lines(lines):
- ep = cls.parse(line, dist)
- if ep.name in this:
- raise ValueError("Duplicate entry point", group, ep.name)
- this[ep.name] = ep
- return this
-
- @classmethod
- def parse_map(cls, data, dist=None):
- """Parse a map of entry point groups"""
- if isinstance(data, dict):
- data = data.items()
- else:
- data = split_sections(data)
- maps = {}
- for group, lines in data:
- if group is None:
- if not lines:
- continue
- raise ValueError("Entry points must be listed in groups")
- group = group.strip()
- if group in maps:
- raise ValueError("Duplicate group name", group)
- maps[group] = cls.parse_group(group, lines, dist)
- return maps
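-        # Illustrative example (not in the original source): calling
-        #   EntryPoint.parse_map(['[console_scripts]', 'main = pkg.cli:run'])
-        # yields {'console_scripts': {'main': <EntryPoint main>}}.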
-
-
-def _version_from_file(lines):
- """
- Given an iterable of lines from a Metadata file, return
- the value of the Version field, if present, or None otherwise.
- """
- def is_version_line(line):
- return line.lower().startswith('version:')
- version_lines = filter(is_version_line, lines)
- line = next(iter(version_lines), '')
- _, _, value = line.partition(':')
- return safe_version(value.strip()) or None
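-
-# Illustrative example (not in the original source):
-#   _version_from_file(['Name: pkg', 'Version: 1.0']) returns '1.0',
-#   and None when no 'Version:' header is present.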
-
-
-class Distribution:
- """Wrap an actual or potential sys.path entry w/metadata"""
- PKG_INFO = 'PKG-INFO'
-
- def __init__(
- self, location=None, metadata=None, project_name=None,
- version=None, py_version=PY_MAJOR, platform=None,
- precedence=EGG_DIST):
- self.project_name = safe_name(project_name or 'Unknown')
- if version is not None:
- self._version = safe_version(version)
- self.py_version = py_version
- self.platform = platform
- self.location = location
- self.precedence = precedence
- self._provider = metadata or empty_provider
-
- @classmethod
- def from_location(cls, location, basename, metadata=None, **kw):
- project_name, version, py_version, platform = [None] * 4
- basename, ext = os.path.splitext(basename)
- if ext.lower() in _distributionImpl:
- cls = _distributionImpl[ext.lower()]
-
- match = EGG_NAME(basename)
- if match:
- project_name, version, py_version, platform = match.group(
- 'name', 'ver', 'pyver', 'plat'
- )
- return cls(
- location, metadata, project_name=project_name, version=version,
- py_version=py_version, platform=platform, **kw
- )._reload_version()
-
- def _reload_version(self):
- return self
-
- @property
- def hashcmp(self):
- return (
- self.parsed_version,
- self.precedence,
- self.key,
- self.location,
- self.py_version or '',
- self.platform or '',
- )
-
- def __hash__(self):
- return hash(self.hashcmp)
-
- def __lt__(self, other):
- return self.hashcmp < other.hashcmp
-
- def __le__(self, other):
- return self.hashcmp <= other.hashcmp
-
- def __gt__(self, other):
- return self.hashcmp > other.hashcmp
-
- def __ge__(self, other):
- return self.hashcmp >= other.hashcmp
-
- def __eq__(self, other):
- if not isinstance(other, self.__class__):
- # It's not a Distribution, so they are not equal
- return False
- return self.hashcmp == other.hashcmp
-
- def __ne__(self, other):
- return not self == other
-
- # These properties have to be lazy so that we don't have to load any
- # metadata until/unless it's actually needed. (i.e., some distributions
- # may not know their name or version without loading PKG-INFO)
-
- @property
- def key(self):
- try:
- return self._key
- except AttributeError:
- self._key = key = self.project_name.lower()
- return key
-
- @property
- def parsed_version(self):
- if not hasattr(self, "_parsed_version"):
- self._parsed_version = parse_version(self.version)
-
- return self._parsed_version
-
- def _warn_legacy_version(self):
- LV = packaging.version.LegacyVersion
- is_legacy = isinstance(self._parsed_version, LV)
- if not is_legacy:
- return
-
-        # While an empty version is technically a legacy version and
-        # is not a valid PEP 440 version, it is unlikely to have been
-        # supplied deliberately; more likely it comes from setuptools
-        # attempting to parse a filename and including it in the list.
-        # So we gate this warning on whether the version is non-empty.
- if not self.version:
- return
-
-        tmpl = textwrap.dedent("""
-            '{project_name} ({version})' is being parsed as a legacy,
-            non-PEP 440 version. You may find odd behavior and sort
-            order. In particular, it will be sorted as less than 0.0.
-            It is recommended to migrate to PEP 440 compatible
-            versions.
-            """).strip().replace('\n', ' ')
-
- warnings.warn(tmpl.format(**vars(self)), PEP440Warning)
-
- @property
- def version(self):
- try:
- return self._version
- except AttributeError as e:
- version = self._get_version()
- if version is None:
- path = self._get_metadata_path_for_display(self.PKG_INFO)
- msg = (
- "Missing 'Version:' header and/or {} file at path: {}"
- ).format(self.PKG_INFO, path)
- raise ValueError(msg, self) from e
-
- return version
-
- @property
- def _dep_map(self):
- """
- A map of extra to its list of (direct) requirements
- for this distribution, including the null extra.
- """
- try:
- return self.__dep_map
- except AttributeError:
- self.__dep_map = self._filter_extras(self._build_dep_map())
- return self.__dep_map
-
- @staticmethod
- def _filter_extras(dm):
- """
- Given a mapping of extras to dependencies, strip off
- environment markers and filter out any dependencies
- not matching the markers.
- """
- for extra in list(filter(None, dm)):
- new_extra = extra
- reqs = dm.pop(extra)
- new_extra, _, marker = extra.partition(':')
- fails_marker = marker and (
- invalid_marker(marker)
- or not evaluate_marker(marker)
- )
- if fails_marker:
- reqs = []
- new_extra = safe_extra(new_extra) or None
-
- dm.setdefault(new_extra, []).extend(reqs)
- return dm
-
- def _build_dep_map(self):
- dm = {}
- for name in 'requires.txt', 'depends.txt':
- for extra, reqs in split_sections(self._get_metadata(name)):
- dm.setdefault(extra, []).extend(parse_requirements(reqs))
- return dm
-
- def requires(self, extras=()):
- """List of Requirements needed for this distro if `extras` are used"""
- dm = self._dep_map
- deps = []
- deps.extend(dm.get(None, ()))
- for ext in extras:
- try:
- deps.extend(dm[safe_extra(ext)])
- except KeyError as e:
- raise UnknownExtra(
- "%s has no such extra feature %r" % (self, ext)
- ) from e
- return deps
-
- def _get_metadata_path_for_display(self, name):
- """
- Return the path to the given metadata file, if available.
- """
- try:
- # We need to access _get_metadata_path() on the provider object
- # directly rather than through this class's __getattr__()
- # since _get_metadata_path() is marked private.
- path = self._provider._get_metadata_path(name)
-
- # Handle exceptions e.g. in case the distribution's metadata
- # provider doesn't support _get_metadata_path().
- except Exception:
- return '[could not detect]'
-
- return path
-
- def _get_metadata(self, name):
- if self.has_metadata(name):
- for line in self.get_metadata_lines(name):
- yield line
-
- def _get_version(self):
- lines = self._get_metadata(self.PKG_INFO)
- version = _version_from_file(lines)
-
- return version
-
- def activate(self, path=None, replace=False):
- """Ensure distribution is importable on `path` (default=sys.path)"""
- if path is None:
- path = sys.path
- self.insert_on(path, replace=replace)
- if path is sys.path:
- fixup_namespace_packages(self.location)
- for pkg in self._get_metadata('namespace_packages.txt'):
- if pkg in sys.modules:
- declare_namespace(pkg)
-
- def egg_name(self):
- """Return what this distribution's standard .egg filename should be"""
- filename = "%s-%s-py%s" % (
- to_filename(self.project_name), to_filename(self.version),
- self.py_version or PY_MAJOR
- )
-
- if self.platform:
- filename += '-' + self.platform
- return filename
-
- def __repr__(self):
- if self.location:
- return "%s (%s)" % (self, self.location)
- else:
- return str(self)
-
- def __str__(self):
- try:
- version = getattr(self, 'version', None)
- except ValueError:
- version = None
- version = version or "[unknown version]"
- return "%s %s" % (self.project_name, version)
-
- def __getattr__(self, attr):
- """Delegate all unrecognized public attributes to .metadata provider"""
- if attr.startswith('_'):
- raise AttributeError(attr)
- return getattr(self._provider, attr)
-
- def __dir__(self):
- return list(
- set(super(Distribution, self).__dir__())
- | set(
- attr for attr in self._provider.__dir__()
- if not attr.startswith('_')
- )
- )
-
- @classmethod
- def from_filename(cls, filename, metadata=None, **kw):
- return cls.from_location(
- _normalize_cached(filename), os.path.basename(filename), metadata,
- **kw
- )
-
- def as_requirement(self):
- """Return a ``Requirement`` that matches this distribution exactly"""
- if isinstance(self.parsed_version, packaging.version.Version):
- spec = "%s==%s" % (self.project_name, self.parsed_version)
- else:
- spec = "%s===%s" % (self.project_name, self.parsed_version)
-
- return Requirement.parse(spec)
-
- def load_entry_point(self, group, name):
- """Return the `name` entry point of `group` or raise ImportError"""
- ep = self.get_entry_info(group, name)
- if ep is None:
- raise ImportError("Entry point %r not found" % ((group, name),))
- return ep.load()
-
- def get_entry_map(self, group=None):
- """Return the entry point map for `group`, or the full entry map"""
- try:
- ep_map = self._ep_map
- except AttributeError:
- ep_map = self._ep_map = EntryPoint.parse_map(
- self._get_metadata('entry_points.txt'), self
- )
- if group is not None:
- return ep_map.get(group, {})
- return ep_map
-
- def get_entry_info(self, group, name):
- """Return the EntryPoint object for `group`+`name`, or ``None``"""
- return self.get_entry_map(group).get(name)
-
- # FIXME: 'Distribution.insert_on' is too complex (13)
- def insert_on(self, path, loc=None, replace=False): # noqa: C901
- """Ensure self.location is on path
-
- If replace=False (default):
- - If location is already in path anywhere, do nothing.
- - Else:
- - If it's an egg and its parent directory is on path,
- insert just ahead of the parent.
- - Else: add to the end of path.
- If replace=True:
- - If location is already on path anywhere (not eggs)
- or higher priority than its parent (eggs)
- do nothing.
- - Else:
- - If it's an egg and its parent directory is on path,
- insert just ahead of the parent,
- removing any lower-priority entries.
- - Else: add it to the front of path.
- """
-
- loc = loc or self.location
- if not loc:
- return
-
- nloc = _normalize_cached(loc)
- bdir = os.path.dirname(nloc)
- npath = [(p and _normalize_cached(p) or p) for p in path]
-
- for p, item in enumerate(npath):
- if item == nloc:
- if replace:
- break
- else:
- # don't modify path (even removing duplicates) if
- # found and not replace
- return
- elif item == bdir and self.precedence == EGG_DIST:
- # if it's an .egg, give it precedence over its directory
- # UNLESS it's already been added to sys.path and replace=False
- if (not replace) and nloc in npath[p:]:
- return
- if path is sys.path:
- self.check_version_conflict()
- path.insert(p, loc)
- npath.insert(p, nloc)
- break
- else:
- if path is sys.path:
- self.check_version_conflict()
- if replace:
- path.insert(0, loc)
- else:
- path.append(loc)
- return
-
- # p is the spot where we found or inserted loc; now remove duplicates
- while True:
- try:
- np = npath.index(nloc, p + 1)
- except ValueError:
- break
- else:
- del npath[np], path[np]
- # ha!
- p = np
-
- return
-
- def check_version_conflict(self):
- if self.key == 'setuptools':
- # ignore the inevitable setuptools self-conflicts :(
- return
-
- nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt'))
- loc = normalize_path(self.location)
- for modname in self._get_metadata('top_level.txt'):
- if (modname not in sys.modules or modname in nsp
- or modname in _namespace_packages):
- continue
- if modname in ('pkg_resources', 'setuptools', 'site'):
- continue
- fn = getattr(sys.modules[modname], '__file__', None)
- if fn and (normalize_path(fn).startswith(loc) or
- fn.startswith(self.location)):
- continue
- issue_warning(
- "Module %s was already imported from %s, but %s is being added"
- " to sys.path" % (modname, fn, self.location),
- )
-
- def has_version(self):
- try:
- self.version
- except ValueError:
- issue_warning("Unbuilt egg for " + repr(self))
- return False
- return True
-
- def clone(self, **kw):
- """Copy this distribution, substituting in any changed keyword args"""
- names = 'project_name version py_version platform location precedence'
- for attr in names.split():
- kw.setdefault(attr, getattr(self, attr, None))
- kw.setdefault('metadata', self._provider)
- return self.__class__(**kw)
-
- @property
- def extras(self):
- return [dep for dep in self._dep_map if dep]
-
-
-class EggInfoDistribution(Distribution):
- def _reload_version(self):
- """
- Packages installed by distutils (e.g. numpy or scipy),
- which uses an old safe_version, and so
- their version numbers can get mangled when
- converted to filenames (e.g., 1.11.0.dev0+2329eae to
- 1.11.0.dev0_2329eae). These distributions will not be
- parsed properly
- downstream by Distribution and safe_version, so
- take an extra step and try to get the version number from
- the metadata file itself instead of the filename.
- """
- md_version = self._get_version()
- if md_version:
- self._version = md_version
- return self
-
-
-class DistInfoDistribution(Distribution):
- """
- Wrap an actual or potential sys.path entry
- w/metadata, .dist-info style.
- """
- PKG_INFO = 'METADATA'
- EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])")
-
- @property
- def _parsed_pkg_info(self):
- """Parse and cache metadata"""
- try:
- return self._pkg_info
- except AttributeError:
- metadata = self.get_metadata(self.PKG_INFO)
- self._pkg_info = email.parser.Parser().parsestr(metadata)
- return self._pkg_info
-
- @property
- def _dep_map(self):
- try:
- return self.__dep_map
- except AttributeError:
- self.__dep_map = self._compute_dependencies()
- return self.__dep_map
-
- def _compute_dependencies(self):
- """Recompute this distribution's dependencies."""
- dm = self.__dep_map = {None: []}
-
- reqs = []
- # Including any condition expressions
- for req in self._parsed_pkg_info.get_all('Requires-Dist') or []:
- reqs.extend(parse_requirements(req))
-
- def reqs_for_extra(extra):
- for req in reqs:
- if not req.marker or req.marker.evaluate({'extra': extra}):
- yield req
-
- common = types.MappingProxyType(dict.fromkeys(reqs_for_extra(None)))
- dm[None].extend(common)
-
- for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []:
- s_extra = safe_extra(extra.strip())
- dm[s_extra] = [r for r in reqs_for_extra(extra) if r not in common]
-
- return dm
-
-
-_distributionImpl = {
- '.egg': Distribution,
- '.egg-info': EggInfoDistribution,
- '.dist-info': DistInfoDistribution,
-}
-
-
-def issue_warning(*args, **kw):
- level = 1
- g = globals()
- try:
- # find the first stack frame that is *not* code in
- # the pkg_resources module, to use for the warning
- while sys._getframe(level).f_globals is g:
- level += 1
- except ValueError:
- pass
- warnings.warn(stacklevel=level + 1, *args, **kw)
-
-
-def parse_requirements(strs):
- """
- Yield ``Requirement`` objects for each specification in `strs`.
-
- `strs` must be a string, or a (possibly-nested) iterable thereof.
- """
- return map(Requirement, join_continuation(map(drop_comment, yield_lines(strs))))
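-
-
-# Illustrative example (not in the original source):
-#   [str(r) for r in parse_requirements("foo>=1.0\n# a comment\nbar")]
-#   evaluates to ['foo>=1.0', 'bar'].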
-
-
-class RequirementParseError(packaging.requirements.InvalidRequirement):
- "Compatibility wrapper for InvalidRequirement"
-
-
-class Requirement(packaging.requirements.Requirement):
- def __init__(self, requirement_string):
- """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!"""
- super(Requirement, self).__init__(requirement_string)
- self.unsafe_name = self.name
- project_name = safe_name(self.name)
- self.project_name, self.key = project_name, project_name.lower()
- self.specs = [
- (spec.operator, spec.version) for spec in self.specifier]
- self.extras = tuple(map(safe_extra, self.extras))
- self.hashCmp = (
- self.key,
- self.url,
- self.specifier,
- frozenset(self.extras),
- str(self.marker) if self.marker else None,
- )
- self.__hash = hash(self.hashCmp)
-
- def __eq__(self, other):
- return (
- isinstance(other, Requirement) and
- self.hashCmp == other.hashCmp
- )
-
- def __ne__(self, other):
- return not self == other
-
- def __contains__(self, item):
- if isinstance(item, Distribution):
- if item.key != self.key:
- return False
-
- item = item.version
-
- # Allow prereleases always in order to match the previous behavior of
- # this method. In the future this should be smarter and follow PEP 440
- # more accurately.
- return self.specifier.contains(item, prereleases=True)
-
- def __hash__(self):
- return self.__hash
-
- def __repr__(self):
- return "Requirement.parse(%r)" % str(self)
-
- @staticmethod
- def parse(s):
- req, = parse_requirements(s)
- return req
-
-
-def _always_object(classes):
- """
- Ensure object appears in the mro even
- for old-style classes.
- """
- if object not in classes:
- return classes + (object,)
- return classes
-
-
-def _find_adapter(registry, ob):
- """Return an adapter factory for `ob` from `registry`"""
- types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob))))
- for t in types:
- if t in registry:
- return registry[t]
-
-
-def ensure_directory(path):
- """Ensure that the parent directory of `path` exists"""
- dirname = os.path.dirname(path)
- os.makedirs(dirname, exist_ok=True)
-
-
-def _bypass_ensure_directory(path):
- """Sandbox-bypassing version of ensure_directory()"""
- if not WRITE_SUPPORT:
- raise IOError('"os.mkdir" not supported on this platform.')
- dirname, filename = split(path)
- if dirname and filename and not isdir(dirname):
- _bypass_ensure_directory(dirname)
- try:
- mkdir(dirname, 0o755)
- except FileExistsError:
- pass
-
-
-def split_sections(s):
- """Split a string or iterable thereof into (section, content) pairs
-
- Each ``section`` is a stripped version of the section header ("[section]")
- and each ``content`` is a list of stripped lines excluding blank lines and
- comment-only lines. If there are any such lines before the first section
- header, they're returned in a first ``section`` of ``None``.
- """
- section = None
- content = []
- for line in yield_lines(s):
- if line.startswith("["):
- if line.endswith("]"):
- if section or content:
- yield section, content
- section = line[1:-1].strip()
- content = []
- else:
- raise ValueError("Invalid section heading", line)
- else:
- content.append(line)
-
- # wrap up last segment
- yield section, content
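-
-# Illustrative example (not in the original source):
-#   list(split_sections(['a', '[x]', 'b'])) == [(None, ['a']), ('x', ['b'])]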
-
-
-def _mkstemp(*args, **kw):
- old_open = os.open
- try:
- # temporarily bypass sandboxing
- os.open = os_open
- return tempfile.mkstemp(*args, **kw)
- finally:
- # and then put it back
- os.open = old_open
-
-
-# Silence the PEP440Warning by default, so that end users don't get hit by it
-# randomly just because they use pkg_resources. We want to append the rule
-# because we want earlier uses of filterwarnings to take precedence over this
-# one.
-warnings.filterwarnings("ignore", category=PEP440Warning, append=True)
-
-
-# from jaraco.functools 1.3
-def _call_aside(f, *args, **kwargs):
- f(*args, **kwargs)
- return f
-
-
-@_call_aside
-def _initialize(g=globals()):
- "Set up global resource manager (deliberately not state-saved)"
- manager = ResourceManager()
- g['_manager'] = manager
- g.update(
- (name, getattr(manager, name))
- for name in dir(manager)
- if not name.startswith('_')
- )
-
-
-class PkgResourcesDeprecationWarning(Warning):
- """
- Base class for warning about deprecations in ``pkg_resources``
-
- This class is not derived from ``DeprecationWarning``, and as such is
- visible by default.
- """
-
-
-@_call_aside
-def _initialize_master_working_set():
- """
- Prepare the master working set and make the ``require()``
- API available.
-
- This function has explicit effects on the global state
- of pkg_resources. It is intended to be invoked once at
- the initialization of this module.
-
- Invocation by other packages is unsupported and done
- at their own risk.
- """
- working_set = WorkingSet._build_master()
- _declare_state('object', working_set=working_set)
-
- require = working_set.require
- iter_entry_points = working_set.iter_entry_points
- add_activation_listener = working_set.subscribe
- run_script = working_set.run_script
- # backward compatibility
- run_main = run_script
- # Activate all distributions already on sys.path with replace=False and
- # ensure that all distributions added to the working set in the future
- # (e.g. by calling ``require()``) will get activated as well,
- # with higher priority (replace=True).
- tuple(
- dist.activate(replace=False)
- for dist in working_set
- )
- add_activation_listener(
- lambda dist: dist.activate(replace=True),
- existing=False,
- )
- working_set.entries = []
- # match order
- list(map(working_set.add_entry, sys.path))
- globals().update(locals())
diff --git a/spaces/BigSalmon/MASKK/README.md b/spaces/BigSalmon/MASKK/README.md
deleted file mode 100644
index 452f57f40960ee601ae64f905745c79532c22980..0000000000000000000000000000000000000000
--- a/spaces/BigSalmon/MASKK/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: MASKK
-emoji: 🏢
-colorFrom: blue
-colorTo: pink
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/classification.py b/spaces/Boadiwaa/Recipes/openai/api_resources/classification.py
deleted file mode 100644
index 6423c6946a0f2bf54301b9f632de8a8a3baf493c..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/api_resources/classification.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from openai.openai_object import OpenAIObject
-
-
-class Classification(OpenAIObject):
- @classmethod
-    def get_url(cls):
- return "/classifications"
-
- @classmethod
- def create(cls, **params):
- instance = cls()
- return instance.request("post", cls.get_url(), params)
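-
-# Illustrative usage (assumed; mirrors the legacy Classifications API,
-# not taken from this file):
-#   openai.Classification.create(
-#       model="ada",
-#       query="It is a rainy day :(",
-#       examples=[["A happy moment", "Positive"]],
-#       labels=["Positive", "Negative"],
-#   )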
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/__init__.py
deleted file mode 100644
index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
diff --git a/spaces/CVPR/LIVE/thrust/examples/cmake/add_subdir/dummy.cpp b/spaces/CVPR/LIVE/thrust/examples/cmake/add_subdir/dummy.cpp
deleted file mode 100644
index ad7b9435fff3274ee11551d0191c70cf581fcb86..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/examples/cmake/add_subdir/dummy.cpp
+++ /dev/null
@@ -1,32 +0,0 @@
-#include <thrust/version.h>
-
-#include <iostream>
-
-int main()
-{
- std::cout << "Hello from Thrust version " << THRUST_VERSION << ":\n"
-
- << "Host system: "
-#if THRUST_HOST_SYSTEM == THRUST_HOST_SYSTEM_CPP
- << "CPP\n"
-#elif THRUST_HOST_SYSTEM == THRUST_HOST_SYSTEM_OMP
- << "OMP\n"
-#elif THRUST_HOST_SYSTEM == THRUST_HOST_SYSTEM_TBB
- << "TBB\n"
-#else
- << "Unknown\n"
-#endif
-
- << "Device system: "
-#if THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_CPP
- << "CPP\n";
-#elif THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_CUDA
- << "CUDA\n";
-#elif THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_OMP
- << "OMP\n";
-#elif THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_TBB
- << "TBB\n";
-#else
- << "Unknown\n";
-#endif
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/overlapped_copy.h b/spaces/CVPR/LIVE/thrust/thrust/detail/overlapped_copy.h
deleted file mode 100644
index f6bb85a91129bfbca84721ab5c4943b3a2034698..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/overlapped_copy.h
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/copy.h>
-#include <thrust/detail/temporary_array.h>
-#include <thrust/iterator/iterator_traits.h>
-#include <thrust/iterator/detail/minimum_system.h>
-#include <thrust/system/cpp/detail/execution_policy.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-
-template<typename InputIterator,
-         typename OutputIterator>
- OutputIterator sequential_copy(InputIterator first,
- InputIterator last,
- OutputIterator result)
-{
- for(; first != last; ++first, ++result)
- {
- *result = *first;
- } // end for
-
- return result;
-} // end sequential_copy()
-
-
-template<typename BidirectionalIterator1,
-         typename BidirectionalIterator2>
- BidirectionalIterator2 sequential_copy_backward(BidirectionalIterator1 first,
- BidirectionalIterator1 last,
- BidirectionalIterator2 result)
-{
- // yes, we preincrement
- // the ranges are open on the right, i.e. [first, last)
- while(first != last)
- {
- *--result = *--last;
- } // end while
-
- return result;
-} // end sequential_copy_backward()
-
-
-namespace dispatch
-{
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2>
-  RandomAccessIterator2 overlapped_copy(thrust::system::cpp::detail::execution_policy<DerivedPolicy> &,
- RandomAccessIterator1 first,
- RandomAccessIterator1 last,
- RandomAccessIterator2 result)
-{
- if(first < last && first <= result && result < last)
- {
- // result lies in [first, last)
-    // it's safe to use sequential_copy_backward here
- thrust::detail::sequential_copy_backward(first, last, result + (last - first));
- result += (last - first);
- } // end if
- else
- {
- // result + (last - first) lies in [first, last)
- // it's safe to use sequential_copy here
- result = thrust::detail::sequential_copy(first, last, result);
- } // end else
-
- return result;
-} // end overlapped_copy()
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2>
-  RandomAccessIterator2 overlapped_copy(thrust::execution_policy<DerivedPolicy> &exec,
- RandomAccessIterator1 first,
- RandomAccessIterator1 last,
- RandomAccessIterator2 result)
-{
-  typedef typename thrust::iterator_value<RandomAccessIterator1>::type value_type;
-
- // make a temporary copy of [first,last), and copy into it first
-  thrust::detail::temporary_array<value_type, DerivedPolicy> temp(exec, first, last);
- return thrust::copy(exec, temp.begin(), temp.end(), result);
-} // end overlapped_copy()
-
-} // end dispatch
-
-
-template<typename RandomAccessIterator1,
-         typename RandomAccessIterator2>
- RandomAccessIterator2 overlapped_copy(RandomAccessIterator1 first,
- RandomAccessIterator1 last,
- RandomAccessIterator2 result)
-{
-  typedef typename thrust::iterator_system<RandomAccessIterator1>::type System1;
-  typedef typename thrust::iterator_system<RandomAccessIterator2>::type System2;
-
-  typedef typename thrust::detail::minimum_system<System1,System2>::type System;
-
- // XXX presumes System is default constructible
- System system;
-
- return thrust::detail::dispatch::overlapped_copy(system, first, last, result);
-} // end overlapped_copy()
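-
-// Illustrative usage (not part of the original header): shifting a range
-// one slot to the right within the same container, e.g.
-//   thrust::detail::overlapped_copy(v.begin(), v.begin() + 3, v.begin() + 1);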
-
-} // end detail
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform.h
deleted file mode 100644
index 1aa2f4993fead2b6de01cc2faa29f2a49d950fd3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform.h
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename OutputIterator,
-         typename UnaryFunction>
-__host__ __device__
-  OutputIterator transform(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- UnaryFunction op);
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename BinaryFunction>
-__host__ __device__
-  OutputIterator transform(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- OutputIterator result,
- BinaryFunction op);
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename ForwardIterator,
-         typename UnaryFunction,
-         typename Predicate>
-__host__ __device__
-  ForwardIterator transform_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- ForwardIterator result,
- UnaryFunction unary_op,
- Predicate pred);
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename ForwardIterator,
-         typename UnaryFunction,
-         typename Predicate>
-__host__ __device__
-  ForwardIterator transform_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- ForwardIterator result,
- UnaryFunction unary_op,
- Predicate pred);
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename InputIterator3,
-         typename ForwardIterator,
-         typename BinaryFunction,
-         typename Predicate>
-__host__ __device__
-  ForwardIterator transform_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- InputIterator3 stencil,
- ForwardIterator result,
- BinaryFunction binary_op,
- Predicate pred);
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/transform.inl>
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/cocoeval/cocoeval.h b/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/cocoeval/cocoeval.h
deleted file mode 100644
index db246e49a026b7cd989b305f4d3d98100be3c912..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/cocoeval/cocoeval.h
+++ /dev/null
@@ -1,88 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-
-#include <pybind11/numpy.h>
-#include <pybind11/pybind11.h>
-#include <pybind11/stl.h>
-#include <pybind11/stl_bind.h>
-#include <vector>
-
-namespace py = pybind11;
-
-namespace detectron2 {
-
-namespace COCOeval {
-
-// Annotation data for a single object instance in an image
-struct InstanceAnnotation {
- InstanceAnnotation(
- uint64_t id,
- double score,
- double area,
- bool is_crowd,
- bool ignore)
- : id{id}, score{score}, area{area}, is_crowd{is_crowd}, ignore{ignore} {}
- uint64_t id;
- double score = 0.;
- double area = 0.;
- bool is_crowd = false;
- bool ignore = false;
-};
-
-// Stores intermediate results for evaluating detection results for a single
-// image that has D detected instances and G ground truth instances. This stores
-// matches between detected and ground truth instances
-struct ImageEvaluation {
- // For each of the D detected instances, the id of the matched ground truth
- // instance, or 0 if unmatched
-  std::vector<uint64_t> detection_matches;
-
- // The detection score of each of the D detected instances
-  std::vector<double> detection_scores;
-
- // Marks whether or not each of G instances was ignored from evaluation (e.g.,
- // because it's outside area_range)
-  std::vector<bool> ground_truth_ignores;
-
-  // Marks whether or not each of D instances was ignored from evaluation
-  // (e.g., because it's outside area_range)
-  std::vector<bool> detection_ignores;
-};
-
-template <class T>
-using ImageCategoryInstances = std::vector<std::vector<std::vector<T>>>;
-
-// C++ implementation of COCO API cocoeval.py::COCOeval.evaluateImg(). For each
-// combination of image, category, area range settings, and IOU thresholds to
-// evaluate, it matches detected instances to ground truth instances and stores
-// the results into a vector of ImageEvaluation results, which will be
-// interpreted by the COCOeval::Accumulate() function to produce precision-recall
-// curves. The parameters of nested vectors have the following semantics:
-// image_category_ious[i][c][d][g] is the intersection over union of the d'th
-// detected instance and g'th ground truth instance of
-// category category_ids[c] in image image_ids[i]
-// image_category_ground_truth_instances[i][c] is a vector of ground truth
-// instances in image image_ids[i] of category category_ids[c]
-// image_category_detection_instances[i][c] is a vector of detected
-// instances in image image_ids[i] of category category_ids[c]
-std::vector<ImageEvaluation> EvaluateImages(
-    const std::vector<std::array<double, 2>>& area_ranges, // vector of 2-tuples
-    int max_detections,
-    const std::vector<double>& iou_thresholds,
-    const ImageCategoryInstances<std::vector<double>>& image_category_ious,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_ground_truth_instances,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_detection_instances);
-
-// C++ implementation of COCOeval.accumulate(), which generates precision
-// recall curves for each set of category, IOU threshold, detection area range,
-// and max number of detections parameters. It is assumed that the parameter
-// evaluations is the return value of the function COCOeval::EvaluateImages(),
-// which was called with the same parameter settings (params)
-py::dict Accumulate(
-    const py::object& params,
-    const std::vector<ImageEvaluation>& evaluations);
-
-} // namespace COCOeval
-} // namespace detectron2
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/clip_roi_heads.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/clip_roi_heads.py
deleted file mode 100644
index 7144e8f3b1f25b395c290c218ddee129148ea3b5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/clip_roi_heads.py
+++ /dev/null
@@ -1,747 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import inspect
-import logging
-import numpy as np
-from typing import Dict, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, nonzero_tuple
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.registry import Registry
-
-from ..backbone.resnet import BottleneckBlock, ResNet
-from ..matcher import Matcher
-from ..poolers import ROIPooler
-from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals
-from ..sampling import subsample_labels
-from .box_head import build_box_head
-from .fast_rcnn import FastRCNNOutputLayers
-from .keypoint_head import build_keypoint_head
-from .mask_head import build_mask_head
-
-from .roi_heads import ROI_HEADS_REGISTRY, select_foreground_proposals, ROIHeads
-
-@ROI_HEADS_REGISTRY.register()
-class CLIPRes5ROIHeads(ROIHeads):
- """
- Created for CLIP ResNet. This head uses the last resnet layer from backbone.
- The ROIHeads in a typical "C4" R-CNN model, where
- the box and mask head share the cropping and
- the per-region feature computation by a Res5 block.
- See :paper:`ResNet` Appendix A.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- pooler: ROIPooler,
- res5: None,
- box_predictor: nn.Module,
- mask_head: Optional[nn.Module] = None,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of backbone feature map names to use for
- feature extraction
-            pooler (ROIPooler): pooler to extract region features from backbone
- res5 (nn.Sequential): a CNN to compute per-region features, to be used by
- ``box_predictor`` and ``mask_head``. Typically this is a "res5"
- block from a ResNet.
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_head (nn.Module): transform features to make mask predictions
- """
- super().__init__(**kwargs)
- self.in_features = in_features
- self.pooler = pooler
- # if isinstance(res5, (list, tuple)):
- # res5 = nn.Sequential(*res5)
- self.res5 = res5 # None, this head uses the res5 from backbone
- self.box_predictor = box_predictor
- self.mask_on = mask_head is not None
- if self.mask_on:
- self.mask_head = mask_head
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # fmt: off
- ret = super().from_config(cfg)
- in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- pooler_scales = (1.0 / input_shape[in_features[0]].stride, )
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- mask_on = cfg.MODEL.MASK_ON
- # fmt: on
- assert not cfg.MODEL.KEYPOINT_ON
- assert len(in_features) == 1
-
- ret["pooler"] = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
-
-        # Compatibility with old moco code. Might be useful.
- # See notes in StandardROIHeads.from_config
- # if not inspect.ismethod(cls._build_res5_block):
- # logger.warning(
- # "The behavior of _build_res5_block may change. "
- # "Please do not depend on private methods."
- # )
- # cls._build_res5_block = classmethod(cls._build_res5_block)
-
- ret["res5"], out_channels = None, cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * 8 # cls._build_res5_block(cfg)
- ret["box_predictor"] = FastRCNNOutputLayers(
- cfg, ShapeSpec(channels=out_channels, height=1, width=1)
- )
-
- if mask_on:
- ret["mask_head"] = build_mask_head(
- cfg,
- ShapeSpec(channels=out_channels, width=pooler_resolution, height=pooler_resolution),
- )
- return ret
-
- def _shared_roi_transform(self, features, boxes, backbone_res5):
- x = self.pooler(features, boxes)
- return backbone_res5(x)
-
- def forward(self, images, features, proposals, queries, targets=None,
- res5=None, ds=None, norm=None, vision_projection=None, attnpool=None):
- """
- See :meth:`ROIHeads.forward`.
- """
- del images
-
- if self.training:
- assert targets
- proposals = self.label_and_sample_proposals(proposals, targets)
- del targets
-
- proposal_boxes = [x.proposal_boxes for x in proposals]
- box_features = self._shared_roi_transform(
- [features[f] for f in self.in_features], proposal_boxes, res5
- )
- if attnpool: # att pooling
- att_feats = attnpool(box_features)
- predictions = self.box_predictor(att_feats, queries)
- else: # mean pooling
- predictions = self.box_predictor(box_features.mean(dim=[2, 3]))
- if self.training:
- del features
- losses = self.box_predictor.losses(predictions, proposals)
- if self.mask_on:
- proposals, fg_selection_masks = select_foreground_proposals(
- proposals, self.num_classes
- )
- # Since the ROI feature transform is shared between boxes and masks,
- # we don't need to recompute features. The mask loss is only defined
- # on foreground proposals, so we need to select out the foreground
- # features.
- mask_features = box_features[torch.cat(fg_selection_masks, dim=0)]
- del box_features
- losses.update(self.mask_head(mask_features, proposals))
- return [], losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- pred_instances = self.forward_with_given_boxes(features, pred_instances, res5)
- return pred_instances, {}
-
- def forward_with_given_boxes(self, features, instances, res5=None):
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- instances (Instances):
- the same `Instances` object, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- if self.mask_on:
- features = [features[f] for f in self.in_features]
- x = self._shared_roi_transform(features, [x.pred_boxes for x in instances], res5)
- return self.mask_head(x, instances)
- else:
- return instances
-
-@ROI_HEADS_REGISTRY.register()
-class CLIPSwinROIHeads(ROIHeads):
- """
-    Created for CLIP with a Swin Transformer backbone. This head uses the last stage of the backbone.
- The ROIHeads in a typical "C4" R-CNN model, where
- the box and mask head share the cropping and
- the per-region feature computation by a Res5 block.
- See :paper:`ResNet` Appendix A.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- pooler: ROIPooler,
-        res5: Optional[nn.Module],
- box_predictor: nn.Module,
- mask_head: Optional[nn.Module] = None,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of backbone feature map names to use for
- feature extraction
-            pooler (ROIPooler): pooler to extract region features from backbone
-            res5 (nn.Module or None): a module to compute per-region features, to be used by
-                ``box_predictor`` and ``mask_head``. Here it is None: the backbone's own
-                last stage is passed in at forward time instead.
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_head (nn.Module): transform features to make mask predictions
- """
- super().__init__(**kwargs)
- self.in_features = in_features
- self.pooler = pooler
- # if isinstance(res5, (list, tuple)):
- # res5 = nn.Sequential(*res5)
- self.res5 = res5 # None, this head uses the res5 from backbone
- self.box_predictor = box_predictor
- self.mask_on = mask_head is not None
- if self.mask_on:
- self.mask_head = mask_head
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # fmt: off
- ret = super().from_config(cfg)
- in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- pooler_scales = (1.0 / input_shape[in_features[0]].stride, )
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- mask_on = cfg.MODEL.MASK_ON
- # fmt: on
- assert not cfg.MODEL.KEYPOINT_ON
- assert len(in_features) == 1
-
- ret["pooler"] = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
-
-        # Compatibility with old moco code. Might be useful.
- # See notes in StandardROIHeads.from_config
- # if not inspect.ismethod(cls._build_res5_block):
- # logger.warning(
- # "The behavior of _build_res5_block may change. "
- # "Please do not depend on private methods."
- # )
- # cls._build_res5_block = classmethod(cls._build_res5_block)
-
- ret["res5"], out_channels = None, cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * 8 # cls._build_res5_block(cfg)
- ret["box_predictor"] = FastRCNNOutputLayers(
- cfg, ShapeSpec(channels=out_channels, height=1, width=1)
- )
-
- if mask_on:
- ret["mask_head"] = build_mask_head(
- cfg,
- ShapeSpec(channels=out_channels, width=pooler_resolution, height=pooler_resolution),
- )
- return ret
-
- def _shared_roi_transform(self, features, boxes, backbone_res5, backbone_ds):
- x = self.pooler(features, boxes)
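-        # For a Swin backbone, flatten the pooled ROI map into a token sequence
-        # (B, H*W, C) for the optional downsampling layer `backbone_ds`, which
-        # halves the spatial resolution; the final stage then runs at (H/2, W/2).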
- if backbone_ds:
- x_flattened = x.flatten(2).transpose(1, 2)
- x_ds = backbone_ds(x_flattened, x.shape[2], x.shape[3])
- return backbone_res5(x_ds, x.shape[2] // 2, x.shape[3] // 2)
- else:
- return backbone_res5(x)
-
- def forward(self, images, features, proposals, queries, targets=None,
- res5=None, ds=None, norm=None, vision_projection=None, attnpool=None):
- """
- See :meth:`ROIHeads.forward`.
- """
- del images
-
- if self.training:
- assert targets
- proposals = self.label_and_sample_proposals(proposals, targets)
- del targets
-
- proposal_boxes = [x.proposal_boxes for x in proposals]
- box_features = self._shared_roi_transform(
- [features[f] for f in self.in_features], proposal_boxes, res5, ds,
- )
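-        # A Swin stage returns a tuple: take the token features, layer-norm them,
-        # average over tokens, project into the CLIP embedding space, and
-        # L2-normalize before scoring against the queries.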
- if isinstance(box_features, tuple):
- box_features = norm(box_features[0]).mean(1)
- box_features = box_features @ vision_projection
- box_features = box_features / box_features.norm(dim=-1, keepdim=True)
-
- if attnpool: # att pooling
- att_feats = attnpool(box_features)
- predictions = self.box_predictor(att_feats)
- else: # mean pooling
- predictions = self.box_predictor(box_features, queries)
-
- if self.training:
- del features
- losses = self.box_predictor.losses(predictions, proposals)
- if self.mask_on:
- proposals, fg_selection_masks = select_foreground_proposals(
- proposals, self.num_classes
- )
- # Since the ROI feature transform is shared between boxes and masks,
- # we don't need to recompute features. The mask loss is only defined
- # on foreground proposals, so we need to select out the foreground
- # features.
- mask_features = box_features[torch.cat(fg_selection_masks, dim=0)]
- del box_features
- losses.update(self.mask_head(mask_features, proposals))
- return [], losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- # pred_instances = self.forward_with_given_boxes(features, pred_instances, res5)
- return pred_instances, {}
-
- def forward_with_given_boxes(self, features, instances, res5=None):
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- instances (Instances):
- the same `Instances` object, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- if self.mask_on:
- features = [features[f] for f in self.in_features]
- x = self._shared_roi_transform(features, [x.pred_boxes for x in instances], res5)
- return self.mask_head(x, instances)
- else:
- return instances
-
-@ROI_HEADS_REGISTRY.register()
-class PretrainRes5ROIHeads(ROIHeads):
- """
-    Created for pretraining a CLIP ResNet without a box predictor. This head uses the last resnet layer from the backbone.
- The ROIHeads in a typical "C4" R-CNN model, where
- the box and mask head share the cropping and
- the per-region feature computation by a Res5 block.
- See :paper:`ResNet` Appendix A.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- pooler: ROIPooler,
-        res5: Optional[nn.Module],
- box_predictor: Optional[nn.Module] = None,
- mask_head: Optional[nn.Module] = None,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of backbone feature map names to use for
- feature extraction
-            pooler (ROIPooler): pooler to extract region features from backbone
-            res5 (nn.Module or None): a module to compute per-region features. Here it
-                is None: the backbone's own res5 stage is passed in at forward time.
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_head (nn.Module): transform features to make mask predictions
- """
- super().__init__(**kwargs)
- self.in_features = in_features
- self.pooler = pooler
- # if isinstance(res5, (list, tuple)):
- # res5 = nn.Sequential(*res5)
- self.res5 = res5 # None, this head uses the res5 from backbone
- self.box_predictor = None
-        self.mask_on = False
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # fmt: off
- ret = super().from_config(cfg)
- in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- pooler_scales = (1.0 / input_shape[in_features[0]].stride, )
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- mask_on = cfg.MODEL.MASK_ON
- # fmt: on
- assert not cfg.MODEL.KEYPOINT_ON
- assert len(in_features) == 1
-
- ret["pooler"] = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
-
- ret["res5"], out_channels = None, cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * 8 # cls._build_res5_block(cfg)
- ret["box_predictor"] = None
- ret["mask_head"] = None
- return ret
-
- def _shared_roi_transform(self, features, boxes, backbone_res5, backbone_ds):
- x = self.pooler(features, boxes)
- if backbone_ds:
- return backbone_res5(backbone_ds(x))
- else:
- return backbone_res5(x)
-
- def forward(self, images, features, proposals, targets=None, res5=None, ds=None, attnpool=None):
- """
- See :meth:`ROIHeads.forward`.
- """
- # if self.training:
- # assert targets
- # proposals = self.label_and_sample_proposals(proposals, targets)
- # del targets
- if isinstance(proposals[0], Boxes): # grid boxes
- proposal_boxes = proposals
- else: # object proposals
- proposal_boxes = [x.proposal_boxes for x in proposals]
-        box_features = self._shared_roi_transform(
-            [features[f] for f in self.in_features], proposal_boxes, res5, ds
-        )
- if attnpool: # att pooling
- att_feats = attnpool(box_features)
- region_feats = att_feats # self.box_predictor(att_feats)
- else: # mean pooling
- region_feats = box_features.mean(dim=[2, 3]) # self.box_predictor(box_features.mean(dim=[2, 3]))
-
- return region_feats
-
- def forward_with_given_boxes(self, features, instances, res5=None):
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- instances (Instances):
- the same `Instances` object, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- return instances
-
-@ROI_HEADS_REGISTRY.register()
-class CLIPStandardROIHeads(ROIHeads):
- """
- Created for CLIP ResNet. This head uses the attention pool layers from backbone.
- It's "standard" in a sense that there is no ROI transform sharing
- or feature sharing between tasks.
- Each head independently processes the input features by each head's
- own pooler and head.
-
- This class is used by most models, such as FPN and C5.
- To implement more models, you can subclass it and implement a different
- :meth:`forward()` or a head.
- """
-
- @configurable
- def __init__(
- self,
- *,
- box_in_features: List[str],
- box_pooler: ROIPooler,
- box_head: nn.Module,
- box_predictor: nn.Module,
- mask_in_features: Optional[List[str]] = None,
- mask_pooler: Optional[ROIPooler] = None,
- mask_head: Optional[nn.Module] = None,
- train_on_pred_boxes: bool = False,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- box_in_features (list[str]): list of feature names to use for the box head.
-            box_pooler (ROIPooler): pooler to extract region features for box head
- box_head (nn.Module): transform features to make box predictions
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_in_features (list[str]): list of feature names to use for the mask
- pooler or mask head. None if not using mask head.
- mask_pooler (ROIPooler): pooler to extract region features from image features.
- The mask head will then take region features to make predictions.
- If None, the mask head will directly take the dict of image features
- defined by `mask_in_features`
- mask_head (nn.Module): transform features to make mask predictions
- train_on_pred_boxes (bool): whether to use proposal boxes or
- predicted boxes from the box head to train other heads.
- """
- super().__init__(**kwargs)
- # keep self.in_features for backward compatibility
- self.in_features = self.box_in_features = box_in_features
- self.box_pooler = box_pooler
- self.box_head = box_head
- self.box_predictor = box_predictor
-
- self.mask_on = mask_in_features is not None
- if self.mask_on:
- self.mask_in_features = mask_in_features
- self.mask_pooler = mask_pooler
- self.mask_head = mask_head
-
- self.train_on_pred_boxes = train_on_pred_boxes
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- ret = super().from_config(cfg)
- ret["train_on_pred_boxes"] = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES
- # Subclasses that have not been updated to use from_config style construction
- # may have overridden _init_*_head methods. In this case, those overridden methods
- # will not be classmethods and we need to avoid trying to call them here.
- # We test for this with ismethod which only returns True for bound methods of cls.
- # Such subclasses will need to handle calling their overridden _init_*_head methods.
- if inspect.ismethod(cls._init_box_head):
- ret.update(cls._init_box_head(cfg, input_shape))
- if inspect.ismethod(cls._init_mask_head):
- ret.update(cls._init_mask_head(cfg, input_shape))
- return ret
-
- @classmethod
- def _init_box_head(cls, cfg, input_shape):
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- # fmt: on
-
- # If StandardROIHeads is applied on multiple feature maps (as in FPN),
- # then we share the same predictors and therefore the channel counts must be the same
- in_channels = [input_shape[f].channels for f in in_features]
- # Check all channel counts are equal
- assert len(set(in_channels)) == 1, in_channels
- in_channels = in_channels[0]
-
- box_pooler = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- # Here we split "box head" and "box predictor", which is mainly due to historical reasons.
- # They are used together so the "box predictor" layers should be part of the "box head".
- # New subclasses of ROIHeads do not need "box predictor"s.
- box_head = None if cfg.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER else build_box_head(
- cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
- )
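-        # NOTE: hard-coded to the CLIP image-embedding width (1024 for RN50-style
-        # models); this presumably needs to match the attention-pool output dim.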
- box_head_output_shape = 1024
- box_predictor = FastRCNNOutputLayers(cfg, box_head_output_shape)
- return {
- "box_in_features": in_features,
- "box_pooler": box_pooler,
- "box_head": box_head,
- "box_predictor": box_predictor,
- }
-
- @classmethod
- def _init_mask_head(cls, cfg, input_shape):
- if not cfg.MODEL.MASK_ON:
- return {}
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
- sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE
- # fmt: on
-
- in_channels = [input_shape[f].channels for f in in_features][0]
-
- ret = {"mask_in_features": in_features}
- ret["mask_pooler"] = (
- ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- if pooler_type
- else None
- )
- if pooler_type:
- shape = ShapeSpec(
- channels=in_channels, width=pooler_resolution, height=pooler_resolution
- )
- else:
- shape = {f: input_shape[f] for f in in_features}
- ret["mask_head"] = build_mask_head(cfg, shape)
- return ret
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- proposals: List[Instances],
- targets: Optional[List[Instances]] = None,
- attnpool=None,
- ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
- """
- See :class:`ROIHeads.forward`.
- """
- del images
- if self.training:
- assert targets, "'targets' argument is required during training"
- proposals = self.label_and_sample_proposals(proposals, targets)
- del targets
-
- if self.training:
- losses = self._forward_box(features, proposals, attnpool=attnpool)
- # Usually the original proposals used by the box head are used by the mask, keypoint
- # heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes
- # predicted by the box head.
- losses.update(self._forward_mask(features, proposals))
- return proposals, losses
- else:
- pred_instances = self._forward_box(features, proposals, attnpool=attnpool)
- # During inference cascaded prediction is used: the mask and keypoints heads are only
- # applied to the top scoring box detections.
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- return pred_instances, {}
-
- def forward_with_given_boxes(
- self, features: Dict[str, torch.Tensor], instances: List[Instances]
- ) -> List[Instances]:
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- This is useful for downstream tasks where a box is known, but need to obtain
- other attributes (outputs of other heads).
- Test-time augmentation also uses this.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- list[Instances]:
- the same `Instances` objects, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- instances = self._forward_mask(features, instances)
- return instances
-
- def _forward_box(self, features: Dict[str, torch.Tensor], proposals: List[Instances], attnpool=None):
- """
- Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`,
- the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument.
-
- Args:
- features (dict[str, Tensor]): mapping from feature map names to tensor.
- Same as in :meth:`ROIHeads.forward`.
- proposals (list[Instances]): the per-image object proposals with
- their matching ground truth.
- Each has fields "proposal_boxes", and "objectness_logits",
- "gt_classes", "gt_boxes".
-
- Returns:
- In training, a dict of losses.
- In inference, a list of `Instances`, the predicted instances.
- """
- features = [features[f] for f in self.box_in_features]
- box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
- if attnpool: # att pooling
- box_features = attnpool(box_features)
- else: # default FPN pooling (FastRCNNConvFCHead)
- box_features = self.box_head(box_features)
- predictions = self.box_predictor(box_features)
- del box_features
-
- if self.training:
- losses = self.box_predictor.losses(predictions, proposals)
- # proposals is modified in-place below, so losses must be computed first.
- if self.train_on_pred_boxes:
- with torch.no_grad():
- pred_boxes = self.box_predictor.predict_boxes_for_gt_classes(
- predictions, proposals
- )
- for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes):
- proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image)
- return losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- return pred_instances
-
- def _forward_mask(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
- """
- Forward logic of the mask prediction branch.
-
- Args:
- features (dict[str, Tensor]): mapping from feature map names to tensor.
- Same as in :meth:`ROIHeads.forward`.
- instances (list[Instances]): the per-image instances to train/predict masks.
- In training, they can be the proposals.
- In inference, they can be the boxes predicted by R-CNN box head.
-
- Returns:
- In training, a dict of losses.
- In inference, update `instances` with new fields "pred_masks" and return it.
- """
- if not self.mask_on:
- return {} if self.training else instances
-
- if self.training:
- # head is only trained on positive proposals.
- instances, _ = select_foreground_proposals(instances, self.num_classes)
-
- if self.mask_pooler is not None:
- features = [features[f] for f in self.mask_in_features]
- boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances]
- features = self.mask_pooler(features, boxes)
- else:
- features = {f: features[f] for f in self.mask_in_features}
- return self.mask_head(features, instances)
\ No newline at end of file
diff --git a/spaces/Chris4K/llms_compare/Tachosoft 23.1 Download.md b/spaces/Chris4K/llms_compare/Tachosoft 23.1 Download.md
deleted file mode 100644
index 80dad4d5db5377a8c9ef252bfcee4bae37cc7be2..0000000000000000000000000000000000000000
--- a/spaces/Chris4K/llms_compare/Tachosoft 23.1 Download.md
+++ /dev/null
@@ -1,66 +0,0 @@
-## Tachosoft 23.1 Download
-
-
-
-
-
- 
-
-
-
-
-
-**DOWNLOAD ===> [https://urluso.com/2tBNDY](https://urluso.com/2tBNDY)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Tachosoft 23.1: A Powerful Tool for Odometer Correction
-
-
-
-If you are looking for a reliable and easy-to-use software for odometer correction, you might want to check out Tachosoft 23.1. This software is one of the world's largest digital odometer calculators, covering more than 2246 vehicle models from various manufacturers[^1^] [^2^]. It can help you adjust the mileage of your car's dashboard by reading and modifying the data stored in the memory chip.
-
-
-
-Tachosoft 23.1 is compatible with the Windows operating system and supports multiple languages. You can download it from the official website or from some trusted online sources[^1^] [^3^]. However, you need to be careful when using this software, as it may be illegal in some countries to tamper with the odometer reading. You should only use it for personal or educational purposes, and not for fraud or deception.
-
-
-
-To use Tachosoft 23.1, you need to have a programmer device that can read and write the memory chip of your dashboard. You also need to know the type and model of your chip, as well as the location and format of the mileage data. Tachosoft 23.1 can provide you with some useful tips and pinouts for different chips and vehicles[^1^] [^4^]. You can also refer to the user manual or online tutorials for more guidance.
-
-
-
-Once you have read the dump file from the chip, you can open it with Tachosoft 23.1 and enter the desired mileage value. The software will calculate the new checksum and display the modified dump file. You can then write it back to the chip using your programmer device. After that, you can reinstall the dashboard and enjoy your new mileage reading.
-
-
-
-Tachosoft 23.1 is a powerful tool for odometer correction that can save you time and money. It is easy to use for everyone, from beginners to professionals. However, you should always be responsible and ethical when using this software, and respect the laws of your country.
-
-
-
-Tachosoft 23.1 is not only a mileage calculator but also a data provider. It can show you where the mileage data is located within the memory dump, so you can change it yourself. It can also help you find the correct chip type and pinout for your dashboard, as well as the offset and swap values for the mileage data. You can also use the search function to find the vehicle model and chip type that you need.
-
-
-
-Tachosoft 23.1 supports a wide range of vehicles, including cars, trucks, motorcycles, and boats. It can work with various dashboards and memory chips, such as 93c46, 93c56, 93c66, 93c86, 24c01, 24c02, 24c04, 24c08, 24c16, 24c32, 24c64, 95xxx, ST6249, NEC, and many more. It can also handle different calculation methods and algorithms used by different manufacturers and dashboards.
-
-
-
-Tachosoft 23.1 is constantly updated with new models and features. The latest version, 23.1, was released in 2013 and added more than 200 new models to the database from manufacturers such as Alfa Romeo, Audi, BMW, Chevrolet, Citroen, Fiat, Ford, Honda, Hyundai, Kia, Land Rover, and Mazda.
-
-
-
-
-
-
diff --git a/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/app.py b/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/app.py
deleted file mode 100644
index d141b6f437200faedb9fd418a5e611ab720cd33f..0000000000000000000000000000000000000000
--- a/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/app.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import gradio as gr
-import jax.numpy as jnp
-from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel
-from diffusers import FlaxScoreSdeVeScheduler, FlaxDPMSolverMultistepScheduler
-import torch
-torch.backends.cuda.matmul.allow_tf32 = True
-import torchvision
-import torchvision.transforms as T
-from flax.jax_utils import replicate
-from flax.training.common_utils import shard
-#from torchvision.transforms import v2 as T2
-import cv2
-import PIL
-from PIL import Image
-import numpy as np
-import jax
-
-import torchvision.transforms.functional as F
-
-output_res = (768,768)
-
-conditioning_image_transforms = T.Compose(
- [
- #T2.ScaleJitter(target_size=output_res, scale_range=(0.5, 3.0))),
- T.RandomCrop(size=output_res, pad_if_needed=True, padding_mode="symmetric"),
- T.ToTensor(),
- #T.Normalize([0.5], [0.5]),
- ]
-)
-
-cnet, cnet_params = FlaxControlNetModel.from_pretrained("./models/catcon-controlnet-wd", dtype=jnp.bfloat16, from_flax=True)
-pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained(
- "./models/wd-1-5-b2-flax",
- controlnet=cnet,
- revision="flax",
- dtype=jnp.bfloat16,
- )
-#scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
-# "./models/wd-1-5-b2-flax",
-# subfolder="scheduler"
-#)
-#params["scheduler"] = scheduler_state
-
-#scheduler = FlaxDPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-#pipe.enable_model_cpu_offload()
-#pipe.enable_xformers_memory_efficient_attention()
-
-def get_random(seed):
- return jax.random.PRNGKey(seed)
-
-# inference function takes prompt, negative prompt and image
-def infer(prompt, negative_prompt, image):
-    # run the ControlNet-conditioned diffusion pipeline on the prompt and style image
- params["controlnet"] = cnet_params
- num_samples = 1
-
- inp = Image.fromarray(image)
-
- cond_input = conditioning_image_transforms(inp)
- cond_input = T.ToPILImage()(cond_input)
-
- cond_img_in = pipe.prepare_image_inputs([cond_input] * num_samples)
- cond_img_in = shard(cond_img_in)
-
- prompt_in = pipe.prepare_text_inputs([prompt] * num_samples)
- prompt_in = shard(prompt_in)
-
- n_prompt_in = pipe.prepare_text_inputs([negative_prompt] * num_samples)
- n_prompt_in = shard(n_prompt_in)
-
- rng = get_random(0)
- rng = jax.random.split(rng, jax.device_count())
-
- p_params = replicate(params)
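-    # `replicate` copies the params to every local device, while `shard` (above)
-    # splits each batched input across devices; with jit=True the pipeline then
-    # runs data-parallel via pmap.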
-
- output = pipe(
- prompt_ids=prompt_in,
- image=cond_img_in,
- params=p_params,
- prng_seed=rng,
- num_inference_steps=70,
- neg_prompt_ids=n_prompt_in,
- jit=True,
- ).images
-
- output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
- return output_images
-
-gr.Interface(
- infer,
- inputs=[
- gr.Textbox(
- label="Enter prompt",
- max_lines=1,
- placeholder="1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck",
- ),
- gr.Textbox(
- label="Enter negative prompt",
- max_lines=1,
- placeholder="low quality",
- ),
- gr.Image(),
- ],
- outputs=gr.Gallery().style(grid=[2], height="auto"),
- title="Generate controlled outputs with Categorical Conditioning on Waifu Diffusion 1.5 beta 2.",
- description="This Space uses image examples as style conditioning.",
- examples=[
- ["1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck", "realistic, real life", "wikipe_cond_1.png"],
- ["1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck", "realistic, real life", "wikipe_cond_2.png"],
- ["1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck", "realistic, real life", "wikipe_cond_3.png"]
- ],
- allow_flagging=False,
-).launch(enable_queue=True)
-
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/box_coder.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/box_coder.py
deleted file mode 100644
index 5579503fa55c92b82690fe55dd9715447ab8f081..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/box_coder.py
+++ /dev/null
@@ -1,193 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import math
-
-import torch
-
-
-class BoxCoder(object):
- """
- This class encodes and decodes a set of bounding boxes into the representation used for training the regressors.
- """
-
- def __init__(self, weights, bbox_xform_clip=math.log(1000. / 16)):
- """
- Arguments:
- weights (4-element tuple)
- bbox_xform_clip (float)
- """
- self.weights = weights
- self.bbox_xform_clip = bbox_xform_clip
-
- def encode(self, reference_boxes, proposals):
- """
- Encode a set of proposals with respect to some
- reference boxes
-
- Arguments:
- reference_boxes (Tensor): reference boxes
- proposals (Tensor): boxes to be encoded
- """
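-        # Standard Faster R-CNN box parameterization:
-        #   dx = wx * (gx - px) / pw,   dy = wy * (gy - py) / ph
-        #   dw = ww * log(gw / pw),     dh = wh * log(gh / ph)
-        # where (px, py, pw, ph) is the proposal and (gx, gy, gw, gh) the reference box.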
- TO_REMOVE = 1 # TODO remove
- ex_widths = proposals[:, 2] - proposals[:, 0] + TO_REMOVE
- ex_heights = proposals[:, 3] - proposals[:, 1] + TO_REMOVE
- ex_ctr_x = proposals[:, 0] + 0.5 * ex_widths
- ex_ctr_y = proposals[:, 1] + 0.5 * ex_heights
-
- gt_widths = reference_boxes[:, 2] - reference_boxes[:, 0] + TO_REMOVE
- gt_heights = reference_boxes[:, 3] - reference_boxes[:, 1] + TO_REMOVE
- gt_ctr_x = reference_boxes[:, 0] + 0.5 * gt_widths
- gt_ctr_y = reference_boxes[:, 1] + 0.5 * gt_heights
-
- wx, wy, ww, wh = self.weights
- targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths
- targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights
- targets_dw = ww * torch.log(gt_widths / ex_widths)
- targets_dh = wh * torch.log(gt_heights / ex_heights)
-
- targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1)
- return targets
-
- def encode_iou(self, reference_boxes, proposals):
- """
- Encode a set of proposals with respect to some
- reference boxes
-
- Arguments:
- reference_boxes (Tensor): reference boxes
- proposals (Tensor): boxes to be encoded
- """
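-        # NOTE: currently byte-for-byte identical to `encode`; presumably kept as a
-        # separate hook for IoU-specific targets.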
- TO_REMOVE = 1 # TODO remove
- ex_widths = proposals[:, 2] - proposals[:, 0] + TO_REMOVE
- ex_heights = proposals[:, 3] - proposals[:, 1] + TO_REMOVE
- ex_ctr_x = proposals[:, 0] + 0.5 * ex_widths
- ex_ctr_y = proposals[:, 1] + 0.5 * ex_heights
-
- gt_widths = reference_boxes[:, 2] - reference_boxes[:, 0] + TO_REMOVE
- gt_heights = reference_boxes[:, 3] - reference_boxes[:, 1] + TO_REMOVE
- gt_ctr_x = reference_boxes[:, 0] + 0.5 * gt_widths
- gt_ctr_y = reference_boxes[:, 1] + 0.5 * gt_heights
-
- wx, wy, ww, wh = self.weights
- targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths
- targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights
- targets_dw = ww * torch.log(gt_widths / ex_widths)
- targets_dh = wh * torch.log(gt_heights / ex_heights)
-
- targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1)
- return targets
-
-
- def decode(self, rel_codes, boxes):
- """
- From a set of original boxes and encoded relative box offsets,
- get the decoded boxes.
-
- Arguments:
- rel_codes (Tensor): encoded boxes # predict [2, 12000, 4]
- boxes (Tensor): reference boxes. # anchor [2, 12000, 4] xmin0 ymin1 xmax2 ymax3
- """
- boxes = boxes.to(rel_codes.dtype)
-
-
- TO_REMOVE = 1 # TODO remove
- widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE
- heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE
- ctr_x = boxes[:, 0] + 0.5 * widths
- ctr_y = boxes[:, 1] + 0.5 * heights
-
- wx, wy, ww, wh = self.weights
- dx = rel_codes[:, 0::4] / wx
- dy = rel_codes[:, 1::4] / wy
- dw = rel_codes[:, 2::4] / ww
- dh = rel_codes[:, 3::4] / wh
-
- dw = torch.clamp(dw, max=self.bbox_xform_clip)
- dh = torch.clamp(dh, max=self.bbox_xform_clip)
-
- pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
- pred_ctr_y = dy * heights[:, None] + ctr_y[:, None]
- pred_w = torch.exp(dw) * widths[:, None]
- pred_h = torch.exp(dh) * heights[:, None]
-
- ##############################
-
- pred_boxes = torch.zeros_like(rel_codes)
- pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w
- pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * pred_h
- pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * pred_w - 1
- pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * pred_h - 1
-
- return pred_boxes
-
-
- def decode_iou(self, rel_codes, boxes, num_p = 8):
- """
- From a set of original boxes and encoded relative box offsets,
- get the decoded boxes.
-
- Arguments:
- rel_codes (Tensor): encoded boxes # predict [2, 12000, 4]
- boxes (Tensor): reference boxes. # anchor [2, 12000, 4] xmin0 ymin1 xmax2 ymax3
- """
- boxes = boxes.to(rel_codes.dtype)
-
- TO_REMOVE = 1 # TODO remove
- widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE
- heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE
-
- ctr_x = boxes[:, 0] + 0.5 * widths
- ctr_y = boxes[:, 1] + 0.5 * heights
-        # Boundary-point layout around the box (# marks the center):
-        #   1 2 3
-        #   8 # 4
-        #   7 6 5
- if num_p == 8: # 8 boundary points
- x_1 = boxes[:, 0] + widths * rel_codes[:, 0]
- y_1 = boxes[:, 1] + heights * rel_codes[:, 1]
- x_2 = ctr_x + widths * rel_codes[:, 2]
- y_2 = boxes[:, 1] + heights * rel_codes[:, 3]
- x_3 = boxes[:, 2] + widths * rel_codes[:, 4]
- y_3 = boxes[:, 1] + heights * rel_codes[:, 5]
- x_4 = boxes[:, 2] + widths * rel_codes[:, 6]
- y_4 = ctr_y + heights * rel_codes[:, 7]
- x_5 = boxes[:, 2] + widths * rel_codes[:, 8]
- y_5 = boxes[:, 3] + heights * rel_codes[:, 9]
- x_6 = ctr_x + widths * rel_codes[:, 10]
- y_6 = boxes[:, 3] + heights * rel_codes[:, 11]
- x_7 = boxes[:, 0] + widths * rel_codes[:, 12]
- y_7 = boxes[:, 3] + heights * rel_codes[:, 13]
- x_8 = boxes[:, 0] + widths * rel_codes[:, 14]
- y_8 = ctr_y + heights * rel_codes[:, 15]
- x_total = torch.stack([x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8], 0)
- y_total = torch.stack([y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8], 0)
-
- x_min = torch.min(x_total, 0, keepdim=True) # [1, N]
- x_max = torch.max(x_total, 0, keepdim=True)
-
- y_min = torch.min(y_total, 0, keepdim=True)
- y_max = torch.max(y_total, 0, keepdim=True)
-
- N1, N2 = x_min[0].shape
- x_min = x_min[0].view([N2])
- x_max = x_max[0].view([N2])
- y_min = y_min[0].view([N2])
- y_max = y_max[0].view([N2])
-
- x_min = torch.stack([x_min, ctr_x], 0)
- x_max = torch.stack([x_max, ctr_x], 0)
- y_min = torch.stack([y_min, ctr_y], 0)
- y_max = torch.stack([y_max, ctr_y], 0)
-
- x_min = torch.min(x_min, 0, keepdim=True) # [1, N]
- x_max = torch.max(x_max, 0, keepdim=True)
- y_min = torch.min(y_min, 0, keepdim=True)
- y_max = torch.max(y_max, 0, keepdim=True)
-
- pred_boxes = torch.zeros_like(boxes)
-
- pred_boxes[:, 0] = x_min[0][0, :]
- pred_boxes[:, 1] = y_min[0][0, :]
- pred_boxes[:, 2] = x_max[0][0, :]
- pred_boxes[:, 3] = y_max[0][0, :]
-
-
- return pred_boxes
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Image-93033d87.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Image-93033d87.js
deleted file mode 100644
index c2b974926ec748c990f95858fff7ba3be0678cb4..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Image-93033d87.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as g,s as d,J as n,K as e,p as m,M as i,n as l,A as u}from"./index-3370be2a.js";function f(c){let t,r,s,o;return{c(){t=n("svg"),r=n("rect"),s=n("circle"),o=n("polyline"),e(r,"x","3"),e(r,"y","3"),e(r,"width","18"),e(r,"height","18"),e(r,"rx","2"),e(r,"ry","2"),e(s,"cx","8.5"),e(s,"cy","8.5"),e(s,"r","1.5"),e(o,"points","21 15 16 10 5 21"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 24 24"),e(t,"fill","none"),e(t,"stroke","currentColor"),e(t,"stroke-width","1.5"),e(t,"stroke-linecap","round"),e(t,"stroke-linejoin","round"),e(t,"class","feather feather-image")},m(a,p){m(a,t,p),i(t,r),i(t,s),i(t,o)},p:l,i:l,o:l,d(a){a&&u(t)}}}class x extends h{constructor(t){super(),g(this,t,null,f,d,{})}}export{x as I};
-//# sourceMappingURL=Image-93033d87.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-03d58ab8.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-03d58ab8.css
deleted file mode 100644
index c02568c42d3cf011dc008a256fdece5721dbccab..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-03d58ab8.css
+++ /dev/null
@@ -1 +0,0 @@
-.hide.svelte-ydeks8{display:none}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/wasm_utils.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/wasm_utils.py
deleted file mode 100644
index 205892bdb3531f1c28c4ecb782b860de7f7a325a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/wasm_utils.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import sys
-
-# See https://pyodide.org/en/stable/usage/faq.html#how-to-detect-that-code-is-run-with-pyodide
-IS_WASM = sys.platform == "emscripten"
-
-
-class WasmUnsupportedError(Exception):
- pass
-
-
-app = None
-
-
-# `register_app` and `get_registered_app` are used
-# for the Wasm worker to get a reference to
-# the Gradio's FastAPI app instance (`app`).
-def register_app(_app):
- global app
- app = _app
-
-
-def get_registered_app():
-    return app
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/modelEndpoint.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/modelEndpoint.ts
deleted file mode 100644
index 1edd45cf91e251ff13c89f2f56ee6130a74bce19..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/server/modelEndpoint.ts
+++ /dev/null
@@ -1,32 +0,0 @@
-import { HF_ACCESS_TOKEN } from "$env/static/private";
-import { sum } from "$lib/utils/sum";
-import type { BackendModel } from "./models";
-
-/**
- * Pick an endpoint at random, weighted by each endpoint's configured weight
- */
-export function modelEndpoint(model: BackendModel): {
- url: string;
- authorization: string;
- weight: number;
-} {
- if (!model.endpoints) {
- return {
- url: `https://api-inference.huggingface.co/models/${model.name}`,
- authorization: `Bearer ${HF_ACCESS_TOKEN}`,
- weight: 1,
- };
- }
- const endpoints = model.endpoints;
- const totalWeight = sum(endpoints.map((e) => e.weight));
-
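-	// Weighted sampling: draw u ~ U[0, totalWeight) and walk the list,
-	// subtracting each weight until u falls inside an endpoint's interval.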
- let random = Math.random() * totalWeight;
- for (const endpoint of endpoints) {
- if (random < endpoint.weight) {
- return endpoint;
- }
- random -= endpoint.weight;
- }
-
- throw new Error("Invalid config, no endpoint found");
-}
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/rope.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/rope.py
deleted file mode 100644
index 4b8c70b9aba28eeb53d12ddc3de8852492847808..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/modules/rope.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from torch import nn
-import torch
-
-
-class XPos(nn.Module):
- """Length-extrapolatable positional embedding (xPos) from [Sun et al 2022](https://arxiv.org/abs/2212.10554v1).
- This applies an exponential decay to the RoPE rotation matrix.
-
- Args:
- dim (int): Embedding dimension.
- smoothing (float): Smoothing factor applied to the decay rates.
- base_scale (int): Base decay rate, given in terms of scaling time.
- device (torch.device or None): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, smoothing: float = 0.4, base_scale: int = 512,
- device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
- self.base_scale = base_scale
-
- half_dim = dim // 2
- adim = torch.arange(half_dim, device=device, dtype=dtype)
- decay_rates = (adim / half_dim + smoothing) / (1.0 + smoothing)
- self.register_buffer("decay_rates", decay_rates)
- self.decay: tp.Optional[torch.Tensor] = None
-
- def get_decay(self, start: int, end: int):
- """Create complex decay tensor, cache values for fast computation.
- """
- if self.decay is None or end > self.decay.shape[0]:
- assert isinstance(self.decay_rates, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.decay_rates.device, dtype=self.dtype)
- power = idx / self.base_scale
- scale = self.decay_rates ** power.unsqueeze(-1)
- self.decay = torch.polar(scale, torch.zeros_like(scale))
- return self.decay[start:end] # [T, C/2]
-
-
-class RotaryEmbedding(nn.Module):
- """Rotary positional embedding (RoPE) from [Su et al 2022](https://arxiv.org/abs/2104.09864).
-
- Args:
- dim (int): Embedding dimension (twice the number of frequencies).
- max_period (float): Maximum period of the rotation frequencies.
- xpos (bool): Use xPos, applies an exponential decay to rotation matrix.
- scale (float): Scale of positional embedding, set to 0 to deactivate.
- device (torch.device or None): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, max_period: float = 10000.0, xpos: bool = False,
- scale: float = 1.0, device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- self.scale = scale
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
-
- adim = torch.arange(0, dim, 2, device=device, dtype=dtype)[: (dim // 2)]
- frequencies = 1.0 / (max_period ** (adim / dim))
- self.register_buffer("frequencies", frequencies)
- self.rotation: tp.Optional[torch.Tensor] = None
-
- self.xpos = XPos(dim, device=device, dtype=dtype) if xpos else None
-
- def get_rotation(self, start: int, end: int):
- """Create complex rotation tensor, cache values for fast computation.
- """
- if self.rotation is None or end > self.rotation.shape[0]:
- assert isinstance(self.frequencies, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.frequencies.device, dtype=self.dtype)
- angles = torch.outer(idx, self.frequencies)
- self.rotation = torch.polar(torch.ones_like(angles), angles)
- return self.rotation[start:end]
-
- def rotate(self, x: torch.Tensor, start: int = 0, invert_decay: bool = False):
-        """Apply RoPE rotation to a query or key tensor.
- """
- T = x.shape[1]
- rotation = self.get_rotation(start, start + T).unsqueeze(0).unsqueeze(2)
-
- if self.xpos:
- decay = self.xpos.get_decay(start, start + T).unsqueeze(0).unsqueeze(2)
- else:
- decay = 1.0
-
- if invert_decay:
- decay = decay ** -1
-
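-        # View the last dim as C/2 complex pairs; multiplying by a unit-modulus
-        # complex tensor rotates each pair, which is exactly the RoPE rotation
-        # (here additionally scaled by the optional xPos decay).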
- x_complex = torch.view_as_complex(x.to(self.dtype).reshape(*x.shape[:-1], -1, 2))
- scaled_rotation = (rotation * decay) * self.scale + (1.0 - self.scale)
- x_out = torch.view_as_real(x_complex * scaled_rotation).flatten(-2)
-
- return x_out.type_as(x)
-
- def rotate_qk(self, query: torch.Tensor, key: torch.Tensor, start: int = 0):
-        """Apply RoPE rotation to both query and key tensors.
-        Supports streaming mode, in which query and key are not expected to have the same shape.
-        In streaming mode, key will be of length [P + C] with P the cached past timesteps, but
- query will be [C] (typically C == 1).
-
- Args:
- query (torch.Tensor): Query to rotate.
- key (torch.Tensor): Key to rotate.
- start (int): Start index of the sequence for time offset.
- """
- query_timesteps = query.shape[1]
- key_timesteps = key.shape[1]
- streaming_offset = key_timesteps - query_timesteps
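-        # With a KV cache, key spans positions [0, P + C) while query holds only
-        # the C new steps, so the query rotation must start at offset P.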
-
- query_out = self.rotate(query, start + streaming_offset)
- key_out = self.rotate(key, start, invert_decay=True)
-
- return query_out, key_out
diff --git a/spaces/DeepFloyd/deepfloyd-if-license/style.css b/spaces/DeepFloyd/deepfloyd-if-license/style.css
deleted file mode 100644
index d4140f862f2f9398d75083a739277eaa1fe759bc..0000000000000000000000000000000000000000
--- a/spaces/DeepFloyd/deepfloyd-if-license/style.css
+++ /dev/null
@@ -1,12 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-li{
- margin-top: 1em;
-}
-
-ol ol{
- list-style-type: lower-alpha;
-}
diff --git a/spaces/Dryash/ChatGPT4/README.md b/spaces/Dryash/ChatGPT4/README.md
deleted file mode 100644
index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000
--- a/spaces/Dryash/ChatGPT4/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chat-with-GPT4
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ysharma/ChatGPT4
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Duskfallcrew/prompthero-openjourney/app.py b/spaces/Duskfallcrew/prompthero-openjourney/app.py
deleted file mode 100644
index 2193905172b6fb6d868bff88cc8311f491ec13b3..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/prompthero-openjourney/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/prompthero/openjourney").launch()
\ No newline at end of file
diff --git a/spaces/ECCV2022/storydalle/README.md b/spaces/ECCV2022/storydalle/README.md
deleted file mode 100644
index 43f21c2788ff47178a3c18b18c9b568dbd61ce84..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/storydalle/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Storydalle
-emoji: 📊
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/models/realesrgan_model.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/models/realesrgan_model.py
deleted file mode 100644
index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/models/realesrgan_model.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from collections import OrderedDict
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
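-        # Size of the training-pair pool used by `_dequeue_and_enqueue` to mix
-        # degradations across iterations; it must be divisible by the batch size.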
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
-        """Training pair pool for increasing degradation diversity within a batch.
-
-        Batch processing limits the diversity of synthetic degradations in a batch: for example,
-        samples in a batch cannot have different resize scaling factors. This pool therefore mixes
-        training pairs across iterations to increase the degradation diversity within a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/transforms.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/transforms.py
deleted file mode 100644
index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/transforms.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import numpy as np
-import torch
-from torch.nn import functional as F
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
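-
-# A minimal usage sketch (shapes assumed, following the 'linear' tails path): for K bins,
-# widths/heights carry K values each and derivatives K - 1 (padded to K + 1 internally):
-#   x = torch.rand(4, 10) * 2 - 1  # inputs in [-1, 1]
-#   w, h = torch.randn(4, 10, 8), torch.randn(4, 10, 8)
-#   d = torch.randn(4, 10, 7)
-#   y, logdet = piecewise_rational_quadratic_transform(x, w, h, d, tails='linear')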
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
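- # index of the bin containing each input; eps nudges the top edge so boundary values fall in the last bin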
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
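- # invert the monotonic rational-quadratic map by solving the quadratic
- # a * theta^2 + b * theta + c = 0 for the in-bin coordinate theta;
- # root = 2c / (-b - sqrt(disc)) below is the numerically stable form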
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/Fazen/ask-youtube/styles.css b/spaces/Fazen/ask-youtube/styles.css
deleted file mode 100644
index 01d9bfe12e7bedb926f910d7f8b7aa8edeaa0062..0000000000000000000000000000000000000000
--- a/spaces/Fazen/ask-youtube/styles.css
+++ /dev/null
@@ -1,14 +0,0 @@
-@import url("https://fonts.googleapis.com/css?family=Arimo:400,700");
-
-.container-fluid a {
- color: #000000 !important;
-}
-
-.container-fluid a:hover {
- color: #0000EE !important;
-}
-
-.stTextInput input {
- background-color: white;
- border: 1px solid #e6e6e6;
-}
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/modules/mel_processing.py b/spaces/FrankZxShen/so-vits-svc-models-ba/modules/mel_processing.py
deleted file mode 100644
index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/modules/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
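-# module-level caches keyed by dtype/device (plus window size or fmax) so windows and mel filterbanks are built once per configuration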
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
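-
-# example (values assumed, not prescribed by this module):
-#   spec = spectrogram_torch(wav, n_fft=2048, sampling_rate=44100, hop_size=512, win_size=2048)
-# -> magnitude spectrogram of shape [batch, n_fft // 2 + 1, frames]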
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/GMFTBY/PandaGPT/model/modeling_llama.py b/spaces/GMFTBY/PandaGPT/model/modeling_llama.py
deleted file mode 100644
index 12d980e189d902fb1a6d9ea05dc3ca91959b1c8c..0000000000000000000000000000000000000000
--- a/spaces/GMFTBY/PandaGPT/model/modeling_llama.py
+++ /dev/null
@@ -1,755 +0,0 @@
-# This script is based on https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
-
-""" PyTorch LLaMA model."""
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from transformers.activations import ACT2FN
-from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast
-from transformers.modeling_utils import PreTrainedModel
-from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
-from transformers.models.llama.configuration_llama import LlamaConfig
-
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "LlamaConfig"
-
-
-# Copied from transformers.models.bart.modeling_bart._make_causal_mask
-def _make_causal_mask(
- input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
-):
- """
- Make causal mask used for uni-directional (causal) self-attention.
- """
- bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
- mask_cond = torch.arange(mask.size(-1), device=device)
- mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
- mask = mask.to(dtype)
-
- if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
- return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
-
-
-# Copied from transformers.models.bart.modeling_bart._expand_mask
-def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-
-class LlamaRMSNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-6):
- """
- LlamaRMSNorm is equivalent to T5LayerNorm
- """
- super().__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, hidden_states):
- variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
- hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
-
- # convert into half-precision if necessary
- if self.weight.dtype in [torch.float16, torch.bfloat16]:
- hidden_states = hidden_states.to(self.weight.dtype)
-
- return self.weight * hidden_states
-
-
-class LlamaRotaryEmbedding(torch.nn.Module):
- def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
- super().__init__()
- inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
- self.register_buffer("inv_freq", inv_freq)
-
- # Build here to make `torch.jit.trace` work.
- self.max_seq_len_cached = max_position_embeddings
- t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # The permutation differs from the paper, but it yields the same result
- emb = torch.cat((freqs, freqs), dim=-1)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
-
- def forward(self, x, seq_len=None):
- # x: [bs, num_attention_heads, seq_len, head_size]
- # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
- if seq_len > self.max_seq_len_cached:
- self.max_seq_len_cached = seq_len
- t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # The permutation differs from the paper, but it yields the same result
- emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
- return (
- self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- )
-
-
-def rotate_half(x):
- """Rotates half the hidden dims of the input."""
- x1 = x[..., : x.shape[-1] // 2]
- x2 = x[..., x.shape[-1] // 2 :]
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
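- # gather each token's cos/sin by its position id, then rotate q and k half-dim-wise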
- gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1]
- gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3])
- cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- q_embed = (q * cos) + (rotate_half(q) * sin)
- k_embed = (k * cos) + (rotate_half(k) * sin)
- return q_embed, k_embed
-
-
-class LlamaMLP(nn.Module):
- def __init__(
- self,
- hidden_size: int,
- intermediate_size: int,
- hidden_act: str,
- ):
- super().__init__()
- self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
- self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.act_fn = ACT2FN[hidden_act]
-
- def forward(self, x):
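- # SwiGLU-style MLP: gate with act_fn(gate_proj(x)), multiply by up_proj(x), project back down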
- return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
-
-
-class LlamaAttention(nn.Module):
- """Multi-headed attention from 'Attention Is All You Need' paper"""
-
- def __init__(self, config: LlamaConfig):
- super().__init__()
- self.config = config
- self.hidden_size = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_dim = self.hidden_size // self.num_heads
- self.max_position_embeddings = config.max_position_embeddings
-
- if (self.head_dim * self.num_heads) != self.hidden_size:
- raise ValueError(
- f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
- f" and `num_heads`: {self.num_heads})."
- )
- self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
- self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)
-
- def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
- return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: bool = False,
- use_cache: bool = False,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- bsz, q_len, _ = hidden_states.size()
-
- query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
-
- kv_seq_len = key_states.shape[-2]
- if past_key_value is not None:
- kv_seq_len += past_key_value[0].shape[-2]
- cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
- query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
- # [bsz, nh, t, hd]
-
- if past_key_value is not None:
- # reuse k, v, self_attention
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
-
- past_key_value = (key_states, value_states) if use_cache else None
-
- attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
-
- if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is"
- f" {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights + attention_mask
- attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
-
- # upcast attention to fp32
- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
- attn_output = torch.matmul(attn_weights, value_states)
-
- if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
- f" {attn_output.size()}"
- )
-
- attn_output = attn_output.transpose(1, 2)
- attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
-
- attn_output = self.o_proj(attn_output)
-
- if not output_attentions:
- attn_weights = None
-
- return attn_output, attn_weights, past_key_value
-
-
-class LlamaDecoderLayer(nn.Module):
- def __init__(self, config: LlamaConfig):
- super().__init__()
- self.hidden_size = config.hidden_size
- self.self_attn = LlamaAttention(config=config)
- self.mlp = LlamaMLP(
- hidden_size=self.hidden_size,
- intermediate_size=config.intermediate_size,
- hidden_act=config.hidden_act,
- )
- self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
- self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: Optional[bool] = False,
- use_cache: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
- attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
- (see `past_key_values`).
- past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
- """
-
- residual = hidden_states
-
- hidden_states = self.input_layernorm(hidden_states)
-
- # Self Attention
- hidden_states, self_attn_weights, present_key_value = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
- hidden_states = residual + hidden_states
-
- # Fully Connected
- residual = hidden_states
- hidden_states = self.post_attention_layernorm(hidden_states)
- hidden_states = self.mlp(hidden_states)
- hidden_states = residual + hidden_states
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (self_attn_weights,)
-
- if use_cache:
- outputs += (present_key_value,)
-
- return outputs
-
-
-LLAMA_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`LlamaConfig`]):
- Model configuration class with all the parameters of the model. Initializing with a config file does not
- load the weights associated with the model, only the configuration. Check out the
- [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaPreTrainedModel(PreTrainedModel):
- config_class = LlamaConfig
- base_model_prefix = "model"
- supports_gradient_checkpointing = True
- _no_split_modules = ["LlamaDecoderLayer"]
- _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]
-
- def _init_weights(self, module):
- std = self.config.initializer_range
- if isinstance(module, nn.Linear):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, LlamaModel):
- module.gradient_checkpointing = value
-
-
-LLAMA_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
- `past_key_values`).
-
- If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
- and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
- information on the default strategy.
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.n_positions - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
- `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
- `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
- blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaModel(LlamaPreTrainedModel):
- """
- Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
-
- Args:
- config: LlamaConfig
- """
-
- def __init__(self, config: LlamaConfig):
- super().__init__(config)
- self.padding_idx = config.pad_token_id
- self.vocab_size = config.vocab_size
-
- self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
- self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)])
- self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
-
- self.gradient_checkpointing = False
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, value):
- self.embed_tokens = value
-
- # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
- def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
- # create causal mask
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- combined_attention_mask = None
- if input_shape[-1] > 1:
- combined_attention_mask = _make_causal_mask(
- input_shape,
- inputs_embeds.dtype,
- device=inputs_embeds.device,
- past_key_values_length=past_key_values_length,
- )
-
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
- inputs_embeds.device
- )
- combined_attention_mask = (
- expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
- )
-
- return combined_attention_mask
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- query_embeds: Optional[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPast]:
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
-
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # retrieve input_ids and inputs_embeds
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
- elif input_ids is not None:
- batch_size, seq_length = input_ids.shape
- elif inputs_embeds is not None:
- batch_size, seq_length, _ = inputs_embeds.shape
- else:
- raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
-
- if inputs_embeds is None:
- inputs_embeds = self.embed_tokens(input_ids)
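- # relative to the stock HF implementation: optional query_embeds (e.g. projected multimodal features) are prepended to the text embeddings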
- if query_embeds is not None:
- inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1)
- batch_size, seq_length, _ = inputs_embeds.shape
-
- seq_length_with_past = seq_length
- past_key_values_length = 0
-
- if past_key_values is not None:
- past_key_values_length = past_key_values[0][0].shape[2]
- seq_length_with_past = seq_length_with_past + past_key_values_length
-
- if position_ids is None:
- device = input_ids.device if input_ids is not None else inputs_embeds.device
- position_ids = torch.arange(
- past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
- )
- position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
- else:
- position_ids = position_ids.view(-1, seq_length).long()
-
- # embed positions
- if attention_mask is None:
- attention_mask = torch.ones(
- (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
- )
- attention_mask = self._prepare_decoder_attention_mask(
- attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
- )
-
- hidden_states = inputs_embeds
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning_once(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- # decoder layers
- all_hidden_states = () if output_hidden_states else None
- all_self_attns = () if output_attentions else None
- next_decoder_cache = () if use_cache else None
-
- for idx, decoder_layer in enumerate(self.layers):
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- past_key_value = past_key_values[idx] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- # None for past_key_value
- return module(*inputs, output_attentions, None)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(decoder_layer),
- hidden_states,
- attention_mask,
- position_ids,
- None,
- )
- else:
- layer_outputs = decoder_layer(
- hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
-
- hidden_states = layer_outputs[0]
-
- if use_cache:
- next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
-
- if output_attentions:
- all_self_attns += (layer_outputs[1],)
-
- hidden_states = self.norm(hidden_states)
-
- # add hidden states from the last decoder layer
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- next_cache = next_decoder_cache if use_cache else None
- if not return_dict:
- return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
- return BaseModelOutputWithPast(
- last_hidden_state=hidden_states,
- past_key_values=next_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attns,
- )
-
-
-class LlamaForCausalLM(LlamaPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.model = LlamaModel(config)
-
- self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.model.embed_tokens
-
- def set_input_embeddings(self, value):
- self.model.embed_tokens = value
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def set_decoder(self, decoder):
- self.model = decoder
-
- def get_decoder(self):
- return self.model
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- query_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithPast]:
- r"""
- Args:
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
-
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import AutoTokenizer, LlamaForCausalLM
-
- >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
- >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
-
- >>> prompt = "Hey, are you conscious? Can you talk to me?"
- >>> inputs = tokenizer(prompt, return_tensors="pt")
-
- >>> # Generate
- >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
- >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
- "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
- ```"""
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
- outputs = self.model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- query_embeds=query_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = outputs[0]
- logits = self.lm_head(hidden_states)
-
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous()
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- shift_logits = shift_logits.view(-1, self.config.vocab_size)
- shift_labels = shift_labels.view(-1)
- # Enable model parallelism
- shift_labels = shift_labels.to(shift_logits.device)
- loss = loss_fct(shift_logits, shift_labels)
-
- if not return_dict:
- output = (logits,) + outputs[1:]
- return (loss,) + output if loss is not None else output
-
- return CausalLMOutputWithPast(
- loss=loss,
- logits=logits,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
- def prepare_inputs_for_generation(
- self, input_ids, query_embeds=None, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
- ):
- if past_key_values:
- input_ids = input_ids[:, -1:]
-
- position_ids = kwargs.get("position_ids", None)
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_key_values:
- position_ids = position_ids[:, -1].unsqueeze(-1)
- query_embeds = None
-
- # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
- if inputs_embeds is not None and past_key_values is None:
- model_inputs = {"inputs_embeds": inputs_embeds}
- else:
- model_inputs = {"input_ids": input_ids}
-
- model_inputs.update(
- {
- "position_ids": position_ids,
- "query_embeds": query_embeds,
- "past_key_values": past_key_values,
- "use_cache": kwargs.get("use_cache"),
- "attention_mask": attention_mask,
- }
- )
- return model_inputs
-
- @staticmethod
- def _reorder_cache(past_key_values, beam_idx):
- reordered_past = ()
- for layer_past in past_key_values:
- reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
- return reordered_past
-
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/towers_of_hanoi.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/towers_of_hanoi.py
deleted file mode 100644
index 42a7ef65417e2011c1baaec6ae8f06b7f474baaa..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/towers_of_hanoi.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-
-class TowersOfHanoi(Task):
- """Sequentially move disks from one tower to another—only smaller disks can be on top of larger ones."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 14
- self.lang_template = "solve towers of hanoi"
- self.task_completed_desc = "solved towers of hanoi."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add stand.
- base_size = (0.12, 0.36, 0.01)
- base_urdf = 'hanoi/stand.urdf'
- base_pose = self.get_random_pose(env, base_size)
- env.add_object(base_urdf, base_pose, 'fixed')
-
- # Rod positions in base coordinates.
- rod_position = ((0, -0.12, 0.03), (0, 0, 0.03), (0, 0.12, 0.03))
-
- # Add disks.
- disks = []
- n_disks = 3
- for i in range(n_disks):
- disk_urdf = 'hanoi/disk%d.urdf' % i
- pos = utils.apply(base_pose, rod_position[0])
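- # stagger start heights by 0.015 m so the disks begin stacked on the first rod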
- z = 0.015 * (n_disks - i - 2)
- pos = (pos[0], pos[1], pos[2] + z)
- disks.append(env.add_object(disk_urdf, (pos, base_pose[1])))
-
- # Solve Hanoi sequence with dynamic programming.
- hanoi_steps = utils.solve_hanoi_all(n_disks)
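- # each step appears to be (disk index, source rod, target rod); only entries 0 and 2 are used below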
-
- # Goal: pick and place disks using Hanoi sequence.
- for step in hanoi_steps:
- disk_id = disks[step[0]]
- targ_position = rod_position[step[2]]
- targ_position = utils.apply(base_pose, targ_position)
- targ_pose = (targ_position, (0, 0, 0, 1))
- self.add_goal(objs=[disk_id], matches=np.int32([[1]]), targ_poses=[targ_pose], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / len(hanoi_steps),
- symmetries=[0], language_goal=self.lang_template)
\ No newline at end of file
diff --git a/spaces/Gertie01/enhanced-dalle2/app.py b/spaces/Gertie01/enhanced-dalle2/app.py
deleted file mode 100644
index 46a3702d5b710a4911f7c4b4b76485278eedb858..0000000000000000000000000000000000000000
--- a/spaces/Gertie01/enhanced-dalle2/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import openai
-import requests
-import gradio as gr
-from PIL import Image
-from io import BytesIO
-import os
-
-openai.api_key = os.environ["OPEN_AI_KEY"]
-
-
-
-png_bytes_array = []
-
-system_prompt = """Transcript of a dialog, where the User interacts with an Assistant named Stablediffy. Stablediffy knows much about prompt engineering for stable diffusion (an open-source image generation software). The User asks Stablediffy about prompts for stable diffusion Image Generation.
-
-Possible keywords for stable diffusion: "cinematic, colorful background, concept art, dramatic lighting, high detail, highly detailed, hyper realistic, intricate, intricate sharp details, octane render, smooth, studio lighting, trending on artstation, landscape, scenery, cityscape, underwater, salt flat, tundra, jungle, desert mountain, ocean, beach, lake, waterfall, ripples, swirl, waves, avenue, horizon, pasture, plateau, garden, fields, floating island, forest, cloud forest, grasslands, flower field, flower ocean, volcano, cliff, snowy mountain
-city, cityscape, street, downtown"
-
--- Transcript --
-
-USER: suggest a prompt for a young girl from Switzerland sitting by the window with headphones on
-ASSISTANT: gorgeous young Swiss girl sitting by window with headphones on, wearing white bra with translucent shirt over, soft lips, beach blonde hair, octane render, unreal engine, photograph, realistic skin texture, photorealistic, hyper realism, highly detailed, 85mm portrait photography, award winning, hard rim lighting photography
-
-USER: suggest a prompt for a mysterious city
-ASSISTANT: Mysterious city, cityscape, urban, downtown, street, noir style, cinematic lighting, dramatic lighting, intricate, sharp details, octane render, unreal engine, highly detailed, night scene, dark lighting, gritty atmosphere
-
-USER: suggest a prompt for a high quality render of a car in 1950
-ASSISTANT: Car in 1950, classic car, 1950's, highly detailed, dramatic lighting, cinematic lighting, unreal engine
-
-USER:"""
-
-def get_modified_text_response(user_query):
- try:
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[{"role": "system", "content": system_prompt},
- {"role": "user", "content": "suggest a prompt for" + user_query}]
- )
- res = response["choices"][0]["message"]["content"].replace('\'', '')
- print(res)
- return res
- except Exception:
- return "no gpt response"
-
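-# "New" generates four fresh images from the prompt; "VarN" requests variations
-# of the N-th image cached from the most recent "New" generation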
-def text_to_image(text, selected_value):
- global png_bytes_array
- images = []
- if(selected_value == "New"):
- png_bytes_array = []
- image_urls = generate_image(text)
- for img in image_urls:
- response = requests.get(img['url'])
- image = Image.open(BytesIO(response.content))
- images.append(image)
- bytesIO = BytesIO()
- image.save(bytesIO, format="PNG")
- png_bytes = bytesIO.getvalue()
- png_bytes_array.append(png_bytes)
- return images
-
- else:
- index = int(selected_value[-1])
- image_urls = variation_image(png_bytes_array[index])
- for img in image_urls:
- response = requests.get(img['url'])
- image = Image.open(BytesIO(response.content))
- images.append(image)
- return images
-
-def variation_image(image):
- response = openai.Image.create_variation(
- image = image,
- n=4,
- size="1024x1024"
- )
- return response['data']
-def generate_image(prompt):
- better_prompt = get_modified_text_response(prompt)
- response = openai.Image.create(
- prompt=better_prompt,
- n=4,
- size="1024x1024"
- )
-
- return response['data']
-def main():
- radio_buttons = gr.inputs.Radio(["New", "Var0","Var1", "Var2", "Var3"], label="Select a variation option")
- out = gr.Gallery(
- label="Generated images", show_label=False, elem_id="gallery"
- ).style(columns=[2], rows=[2], object_fit="contain", height="auto")
- iface = gr.Interface(fn=text_to_image, inputs=["text", radio_buttons], outputs=out)
- iface.launch()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Gradio-Blocks/poor-mans-duplex/README.md b/spaces/Gradio-Blocks/poor-mans-duplex/README.md
deleted file mode 100644
index b863aa1d0bb0bfdf7beeded17f6a9b8626252fdc..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/poor-mans-duplex/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Poor Man's Duplex
-emoji: 🗣️
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.0.5
-app_file: duplex.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco.py
deleted file mode 100644
index 6f886e1c407ff9376929a7092f82e5508d2b1ac9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './cascade_rcnn_r50_fpn_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 19])
-runner = dict(type='EpochBasedRunner', max_epochs=20)
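-# relative to the 12-epoch "1x" base schedule, this trains for 20 epochs with LR drops at epochs 16 and 19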
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 616984575dda73a13fc5870f60ae6ffa30d6b01b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './apcnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/README.md
deleted file mode 100644
index 190373e87922db8f26789fd2b29a9ca953bb0d4f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Dynamic Multi-scale Filters for Semantic Segmentation
-
-## Introduction
-
-
-
-```latex
-@InProceedings{He_2019_ICCV,
-author = {He, Junjun and Deng, Zhongying and Qiao, Yu},
-title = {Dynamic Multi-Scale Filters for Semantic Segmentation},
-booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
-month = {October},
-year = {2019}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| DMNet | R-50-D8 | 512x1024 | 40000 | 7.0 | 3.66 | 77.78 | 79.14 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes/dmnet_r50-d8_512x1024_40k_cityscapes_20201214_115717-5e88fa33.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes/dmnet_r50-d8_512x1024_40k_cityscapes-20201214_115717.log.json) |
-| DMNet | R-101-D8 | 512x1024 | 40000 | 10.6 | 2.54 | 78.37 | 79.72 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes/dmnet_r101-d8_512x1024_40k_cityscapes_20201214_115716-abc9d111.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes/dmnet_r101-d8_512x1024_40k_cityscapes-20201214_115716.log.json) |
-| DMNet | R-50-D8 | 769x769 | 40000 | 7.9 | 1.57 | 78.49 | 80.27 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_769x769_40k_cityscapes/dmnet_r50-d8_769x769_40k_cityscapes_20201214_115717-2a2628d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_769x769_40k_cityscapes/dmnet_r50-d8_769x769_40k_cityscapes-20201214_115717.log.json) |
-| DMNet | R-101-D8 | 769x769 | 40000 | 12.0 | 1.01 | 77.62 | 78.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_769x769_40k_cityscapes/dmnet_r101-d8_769x769_40k_cityscapes_20201214_115718-b650de90.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_769x769_40k_cityscapes/dmnet_r101-d8_769x769_40k_cityscapes-20201214_115718.log.json) |
-| DMNet | R-50-D8 | 512x1024 | 80000 | - | - | 79.07 | 80.22 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x1024_80k_cityscapes/dmnet_r50-d8_512x1024_80k_cityscapes_20201214_115716-987f51e3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x1024_80k_cityscapes/dmnet_r50-d8_512x1024_80k_cityscapes-20201214_115716.log.json) |
-| DMNet | R-101-D8 | 512x1024 | 80000 | - | - | 79.64 | 80.67 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x1024_80k_cityscapes/dmnet_r101-d8_512x1024_80k_cityscapes_20201214_115705-b1ff208a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x1024_80k_cityscapes/dmnet_r101-d8_512x1024_80k_cityscapes-20201214_115705.log.json) |
-| DMNet | R-50-D8 | 769x769 | 80000 | - | - | 79.22 | 80.55 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_769x769_80k_cityscapes/dmnet_r50-d8_769x769_80k_cityscapes_20201214_115718-7ea9fa12.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_769x769_80k_cityscapes/dmnet_r50-d8_769x769_80k_cityscapes-20201214_115718.log.json) |
-| DMNet | R-101-D8 | 769x769 | 80000 | - | - | 79.19 | 80.65 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_769x769_80k_cityscapes/dmnet_r101-d8_769x769_80k_cityscapes_20201214_115716-a7fbc2ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_769x769_80k_cityscapes/dmnet_r101-d8_769x769_80k_cityscapes-20201214_115716.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| DMNet | R-50-D8 | 512x512 | 80000 | 9.4 | 20.95 | 42.37 | 43.62 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x512_80k_ade20k/dmnet_r50-d8_512x512_80k_ade20k_20201214_115705-a8626293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x512_80k_ade20k/dmnet_r50-d8_512x512_80k_ade20k-20201214_115705.log.json) |
-| DMNet | R-101-D8 | 512x512 | 80000 | 13.0 | 13.88 | 45.34 | 46.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x512_80k_ade20k/dmnet_r101-d8_512x512_80k_ade20k_20201214_115704-c656c3fb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x512_80k_ade20k/dmnet_r101-d8_512x512_80k_ade20k-20201214_115704.log.json) |
-| DMNet | R-50-D8 | 512x512 | 160000 | - | - | 43.15 | 44.17 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x512_160k_ade20k/dmnet_r50-d8_512x512_160k_ade20k_20201214_115706-25fb92c2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r50-d8_512x512_160k_ade20k/dmnet_r50-d8_512x512_160k_ade20k-20201214_115706.log.json) |
-| DMNet | R-101-D8 | 512x512 | 160000 | - | - | 45.42 | 46.76 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dmnet/dmnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x512_160k_ade20k/dmnet_r101-d8_512x512_160k_ade20k_20201214_115705-73f9a8d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/dmnet/dmnet_r101-d8_512x512_160k_ade20k/dmnet_r101-d8_512x512_160k_ade20k-20201214_115705.log.json) |
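-
-To try one of these checkpoints locally, a minimal inference sketch with the mmseg v0.x API is shown below (the config/checkpoint paths are placeholders for a pair downloaded from the links above):
-
-```python
-from mmseg.apis import init_segmentor, inference_segmentor
-
-config_file = 'configs/dmnet/dmnet_r50-d8_512x512_80k_ade20k.py'
-checkpoint_file = 'dmnet_r50-d8_512x512_80k_ade20k_20201214_115705-a8626293.pth'
-
-# Build the model from the config and load the trained weights.
-model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
-# Run inference on a single image; the result is a per-pixel class map.
-result = inference_segmentor(model, 'demo.png')
-```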
diff --git a/spaces/Gradio-Blocks/zero-and-few-shot-reasoning/app.py b/spaces/Gradio-Blocks/zero-and-few-shot-reasoning/app.py
deleted file mode 100644
index cc343294bade468f9479fd4a6d3922696676bd35..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/zero-and-few-shot-reasoning/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import os
-
-import gradio as gr
-
-# 'huggingface/facebook/opt-13b' and 'huggingface/EleutherAI/gpt-neox-20b'
-# are omitted because hosted inference is not supported for them.
-
-name_list = ['huggingface/bigscience/T0pp', 'huggingface/EleutherAI/gpt-j-6B', 'huggingface/gpt2-xl', 'huggingface/EleutherAI/gpt-neo-2.7B']
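-
-# gr.Interface.load("huggingface/<model>") wraps the hosted Inference API for
-# that model, so each name above becomes a callable text-in/text-out interface.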
-
-# Examples from Figure 1 of the paper
-examples = [
-    # zero-shot
-    ["Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?\nA: The answer (arabic numerals) is "],
-    # zero-shot-CoT
-    ["Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?\nA: Let’s think step by step."],
-    # few-shot
-    ["Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\nA: The answer is 11.\nQ: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?\nA:"],
-    # few-shot-CoT
-    ["Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\nA: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.\nQ: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?\nA:"],
-]
-
-def calculator(num1, operation, num2):
-    if operation == "add":
-        return num1 + num2
-    elif operation == "subtract":
-        return num1 - num2
-    elif operation == "multiply":
-        return num1 * num2
-    elif operation == "divide":
-        if num2 == 0:
-            return "Error: division by zero"
-        return num1 / num2
-
-
-# The API keys are stored as Space secrets and exposed as environment variables;
-# fall back to the raw name so the list is never empty during local testing.
-secret_names = ["API_KEY1", "API_KEY2", "API_KEY3", "API_KEY4", "API_KEY5", "API_KEY6"]
-secrets = [os.environ.get(name, name) for name in secret_names]
-
-
-def complete_with_gpt(text):
-    # Rotate through the available API keys: the hosted inference API
-    # rate-limits each key, so we fall back to the next one on failure.
-    for secret in secrets:
-        try:
-            interfaces = [gr.Interface.load(name, api_key=secret) for name in name_list]
-            return [interface(text) for interface in interfaces]
-        except Exception as e:
-            print(f"Error: API key failed ({e}), trying the next one")
-    raise RuntimeError("All API keys failed or were rate-limited")
-
-def set_example(example: list) -> dict:
-    return gr.Textbox.update(value=example[0])
-
-with gr.Blocks() as demo:
-    gr.Markdown(
-        """
-        # Is "Let's think step by step" all you need?
-        """
-    )
-    with gr.Box():
-        with gr.Row():
-            with gr.Column():
-                input_text = gr.Textbox(label="Write your riddle here", placeholder="Type the riddle here to see whether the LMs can solve it", lines=4)
-                with gr.Row():
-                    btn = gr.Button("Language model, think brrr ...")
-
-                gr.Markdown("Note: due to the high number of visitors, the inference API rate limit is sometimes exceeded and requests fail; we are looking for a way around this, thanks for understanding 🤗")
-                example_text = gr.Dataset(components=[input_text], samples=examples)
-                example_text.click(fn=set_example,
-                                   inputs=example_text,
-                                   outputs=example_text.components)
-
-            with gr.Column():
-                gr.Markdown("Let's see how the different LMs think 💭")
-                btn.click(complete_with_gpt, inputs=input_text, outputs=[gr.Textbox(label=name_list[i], lines=4) for i in range(len(name_list))])
-
-            with gr.Column():
-                gr.Markdown("In case you need to count to verify the answer, you can use the calculator below 😉")
-                num1 = gr.Number(label="First number")
-                num2 = gr.Number(label="Second number")
-                operation = gr.Dropdown(["add", "subtract", "multiply", "divide"], label="Operation")
-                with gr.Row():
-                    calculate = gr.Button("Calculate")
-                result_box = gr.Textbox(label="Result", lines=1)
-                calculate.click(calculator, inputs=[num1, operation, num2], outputs=result_box)
-
-
-
-demo.launch()
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/app_vc.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/app_vc.py
deleted file mode 100644
index 1d69b4a23c80f800b775705f53bc483108307d6c..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/app_vc.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from synthesizer.inference import Synthesizer
-from pydantic import BaseModel, Field
-from encoder import inference as speaker_encoder
-import torch
-import os
-from pathlib import Path
-from enum import Enum
-import ppg_extractor as Extractor
-import ppg2mel as Convertor
-import librosa
-from scipy.io.wavfile import write
-import re
-import numpy as np
-from mkgui.base.components.types import FileContent
-from vocoder.hifigan import inference as gan_vocoder
-from typing import Any, Tuple
-import matplotlib.pyplot as plt
-
-
-# Constants
-AUDIO_SAMPLES_DIR = f'sample{os.sep}'
-EXT_MODELS_DIRT = f'ppg_extractor{os.sep}saved_models'
-CONV_MODELS_DIRT = f'ppg2mel{os.sep}saved_models'
-VOC_MODELS_DIRT = f'vocoder{os.sep}saved_models'
-TEMP_SOURCE_AUDIO = f'wavs{os.sep}temp_source.wav'
-TEMP_TARGET_AUDIO = f'wavs{os.sep}temp_target.wav'
-TEMP_RESULT_AUDIO = f'wavs{os.sep}temp_result.wav'
-
-# Load local sample audio files as options. TODO: load dataset
-if os.path.isdir(AUDIO_SAMPLES_DIR):
-    audio_input_selection = Enum('samples', list((file.name, file) for file in Path(AUDIO_SAMPLES_DIR).glob("*.wav")))
-else:
-    raise Exception(f"Sample folder {AUDIO_SAMPLES_DIR} doesn't exist.")
-
-# Pre-load models
-if os.path.isdir(EXT_MODELS_DIRT):
-    extractors = Enum('extractors', list((file.name, file) for file in Path(EXT_MODELS_DIRT).glob("**/*.pt")))
-    print("Loaded extractor models: " + str(len(extractors)))
-else:
-    raise Exception(f"Model folder {EXT_MODELS_DIRT} doesn't exist.")
-
-if os.path.isdir(CONV_MODELS_DIRT):
-    convertors = Enum('convertors', list((file.name, file) for file in Path(CONV_MODELS_DIRT).glob("**/*.pth")))
-    print("Loaded convertor models: " + str(len(convertors)))
-else:
-    raise Exception(f"Model folder {CONV_MODELS_DIRT} doesn't exist.")
-
-if os.path.isdir(VOC_MODELS_DIRT):
-    vocoders = Enum('vocoders', list((file.name, file) for file in Path(VOC_MODELS_DIRT).glob("**/*gan*.pt")))
-    print("Loaded vocoder models: " + str(len(vocoders)))
-else:
-    raise Exception(f"Model folder {VOC_MODELS_DIRT} doesn't exist.")
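-
-# The file lists are wrapped in Enums so the pydantic-driven form UI can
-# render each one as a dropdown selector.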
-
-class Input(BaseModel):
-    local_audio_file: audio_input_selection = Field(
-        ..., alias="Input audio (local wav)",
-        description="Select a local audio file."
-    )
-    upload_audio_file: FileContent = Field(default=None, alias="Or upload audio",
-        description="Drag and drop or click to upload.", mime_type="audio/wav")
-    local_audio_file_target: audio_input_selection = Field(
-        ..., alias="Target audio (local wav)",
-        description="Select a local audio file."
-    )
-    upload_audio_file_target: FileContent = Field(default=None, alias="Or upload target audio",
-        description="Drag and drop or click to upload.", mime_type="audio/wav")
-    extractor: extractors = Field(
-        ..., alias="Encoder model",
-        description="Select the speech encoder model file."
-    )
-    convertor: convertors = Field(
-        ..., alias="Conversion model",
-        description="Select the voice conversion model file."
-    )
-    vocoder: vocoders = Field(
-        ..., alias="Vocoder model",
-        description="Select the vocoder model file (currently only HiFi-GAN is supported)."
-    )
-
-class AudioEntity(BaseModel):
-    content: bytes
-    mel: Any
-
-class Output(BaseModel):
-    __root__: Tuple[AudioEntity, AudioEntity, AudioEntity]
-
-    def render_output_ui(self, streamlit_app, input) -> None:  # type: ignore
-        """Custom output UI.
-        If this method is implemented, it is used instead of the default output UI renderer.
-        """
-        src, target, result = self.__root__
-
-        streamlit_app.subheader("Synthesized Audio")
-        streamlit_app.audio(result.content, format="audio/wav")
-
-        # Plot the mel spectrograms of the source, target, and converted audio.
-        for entity, title in ((src, "Mel spectrogram (source audio)"),
-                              (target, "Mel spectrogram (target audio)"),
-                              (result, "Mel spectrogram (result audio)")):
-            fig, ax = plt.subplots()
-            ax.imshow(entity.mel, aspect="equal", interpolation="none")
-            ax.set_title(title)
-            streamlit_app.pyplot(fig)
-
-def convert(input: Input) -> Output:
-    """convert"""
-    # Load models
-    extractor = Extractor.load_model(Path(input.extractor.value))
-    convertor = Convertor.load_model(Path(input.convertor.value))
-    gan_vocoder.load_model(Path(input.vocoder.value))
-
-    # Load the source audio: an uploaded file takes precedence over a local sample.
-    if input.upload_audio_file is not None:
-        with open(TEMP_SOURCE_AUDIO, "w+b") as f:
-            f.write(input.upload_audio_file.as_bytes())
-            f.seek(0)
-        src_wav, sample_rate = librosa.load(TEMP_SOURCE_AUDIO)
-    else:
-        src_wav, sample_rate = librosa.load(input.local_audio_file.value)
-        write(TEMP_SOURCE_AUDIO, sample_rate, src_wav)  # make sure we keep a copy of the wav
-
-    if input.upload_audio_file_target is not None:
-        with open(TEMP_TARGET_AUDIO, "w+b") as f:
-            f.write(input.upload_audio_file_target.as_bytes())
-            f.seek(0)
-        ref_wav, _ = librosa.load(TEMP_TARGET_AUDIO)
-    else:
-        ref_wav, _ = librosa.load(input.local_audio_file_target.value)
-        write(TEMP_TARGET_AUDIO, sample_rate, ref_wav)  # make sure we keep a copy of the wav
-
-    # Extract the phonetic posteriorgram (PPG) of the source speech.
-    ppg = extractor.extract_from_wav(src_wav)
-
-    # Import the voice-conversion helpers lazily to keep startup fast.
-    from utils.f0_utils import compute_f0, f02lf0, compute_mean_std, get_converted_lf0uv
-    ref_lf0_mean, ref_lf0_std = compute_mean_std(f02lf0(compute_f0(ref_wav)))
-    speaker_encoder.load_model(Path(f"encoder{os.sep}saved_models{os.sep}pretrained_bak_5805000.pt"))
-    embed = speaker_encoder.embed_utterance(ref_wav)
-    lf0_uv = get_converted_lf0uv(src_wav, ref_lf0_mean, ref_lf0_std, convert=True)
-    # Trim the PPG and the converted lf0/uv track to a common length.
-    min_len = min(ppg.shape[1], len(lf0_uv))
-    ppg = ppg[:, :min_len]
-    lf0_uv = lf0_uv[:min_len]
-    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-    _, mel_pred, att_ws = convertor.inference(
-        ppg,
-        logf0_uv=torch.from_numpy(lf0_uv).unsqueeze(0).float().to(device),
-        spembs=torch.from_numpy(embed).unsqueeze(0).to(device),
-    )
-    mel_pred = mel_pred.transpose(0, 1).detach().cpu().numpy()
-
-    # Vocode the predicted mel spectrogram back to a waveform.
-    wav, sample_rate = gan_vocoder.infer_waveform(mel_pred)
-
-    # Write the result and return all three audio clips with their spectrograms.
-    write(TEMP_RESULT_AUDIO, sample_rate, wav)
-    with open(TEMP_SOURCE_AUDIO, "rb") as f:
-        source_file = f.read()
-    with open(TEMP_TARGET_AUDIO, "rb") as f:
-        target_file = f.read()
-    with open(TEMP_RESULT_AUDIO, "rb") as f:
-        result_file = f.read()
-
-    return Output(__root__=(
-        AudioEntity(content=source_file, mel=Synthesizer.make_spectrogram(src_wav)),
-        AudioEntity(content=target_file, mel=Synthesizer.make_spectrogram(ref_wav)),
-        AudioEntity(content=result_file, mel=Synthesizer.make_spectrogram(wav)),
-    ))
\ No newline at end of file
diff --git a/spaces/KyanChen/FunSR/App_main.py b/spaces/KyanChen/FunSR/App_main.py
deleted file mode 100644
index ce2f4b1bc12bdb29e54d96b3e812ade871ad78f3..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/FunSR/App_main.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import numpy as np
-import os
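-# HuggingFace Spaces setup: report the GPU, then install the CUDA builds of
-# torch/torchvision and mmcv-full before the model code is imported.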
-os.system('nvidia-smi')
-os.system('ls /usr/local')
-os.system('pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116')
-os.system('pip install -U openmim')
-os.system('mim install mmcv-full')
-import models
-import gradio as gr
-
-import torch
-from torchvision import transforms
-from torchvision.transforms import InterpolationMode
-
-device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
-
-
-def construct_sample(img, mean=0.5, std=0.5):
-    """Convert a PIL image to a normalized tensor with a 48-pixel short side."""
-    img = transforms.ToTensor()(img)
-    img = transforms.Resize(48, InterpolationMode.BICUBIC)(img)
-    img = transforms.Normalize(mean, std)(img)
-    return img
-
-def build_model(cp):
-    # A checkpoint stores the model spec (architecture + args) next to its weights.
-    model_spec = torch.load(cp, map_location='cpu')['model']
-    print(model_spec['args'])
-    model = models.make(model_spec, load_sd=True).to(device)
-    return model
-
-
-# Super-resolution inference
-def sr_func(img, cp, scale):
-    if cp == 'UC':
-        checkpoint = 'pretrain/UC_FunSR_RDN.pth'
-    elif cp == 'AID':
-        checkpoint = 'pretrain/AID_FunSR_RDN.pth'
-    else:
-        raise NotImplementedError
-    sample = construct_sample(img)
-    print('Use: ', device)
-    model = build_model(checkpoint)
-    model.eval()
-    sample = sample.to(device).unsqueeze(0)
-
-    # FunSR is resolution-free: the model can be queried at any target size.
-    ori_size = torch.tensor(sample.shape[2:])  # BCHW -> (H, W)
-    target_size = (ori_size * scale).long()
-    # Nearest-neighbour upsampling of the input, kept for side-by-side comparison.
-    lr_target_size_img = torch.nn.functional.interpolate(sample, scale_factor=scale, mode='nearest')
-    with torch.no_grad():
-        pred = model(sample, target_size.tolist())
-
-    if isinstance(pred, list):
-        pred = pred[-1]
-    # Undo the (mean=0.5, std=0.5) normalization and convert to uint8 images.
-    pred = (pred * 0.5 + 0.5) * 255
-    pred = pred[0].detach().cpu()
-    lr_target_size_img = lr_target_size_img * 0.5 + 0.5
-    lr_target_size_img = 255 * lr_target_size_img[0].detach().cpu()
-
-    lr_target_size_img = torch.clamp(lr_target_size_img, 0, 255).permute(1, 2, 0).numpy().astype(np.uint8)
-    pred = torch.clamp(pred, 0, 255).permute(1, 2, 0).numpy().astype(np.uint8)
-
-    # Concatenate [nearest-neighbour | white divider | prediction] for display.
-    line = np.ones((pred.shape[0], 5, 3), dtype=np.uint8) * 255
-    pred = np.concatenate((lr_target_size_img, line, pred), axis=1)
-    return pred
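-
-# Hypothetical direct call, bypassing the Gradio UI (names as defined above):
-#   from PIL import Image
-#   side_by_side = sr_func(Image.open('demo.png').convert('RGB'), 'UC', 4.0)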
-
-title = "FunSR"
-description = "Gradio demo for continuous remote sensing image super-resolution. Upload image from UCMerced or AID Dataset or click any one of the examples, " \
- "Then change the upscaling magnification, and click \"Submit\" and wait for the super-resolved result. \n" \
- "Paper: Continuous Remote Sensing Image Super-Resolution based on Context Interaction in Implicit Function Space"
-
-article = "