diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md
deleted file mode 100644
index 99784d78e4eda2e62afa2bd155730617c546e398..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Cyberlink PowerDirector 11 Full Version with Crack: A Comprehensive Review
-
If you are looking for a powerful and easy-to-use video editing software, you might have heard of Cyberlink PowerDirector. It is one of the most popular and versatile video editors on the market, with a range of features and tools that can help you create stunning videos for any purpose. But what if you want to use the full version of Cyberlink PowerDirector without paying for it? Is there a way to get Cyberlink PowerDirector 11 full version with crack?
-
In this article, we will review Cyberlink PowerDirector 11, its features, pros and cons, and how to download and install it with a crack. We will also compare it with some alternatives that you might want to consider. By the end of this article, you will have a clear idea of whether Cyberlink PowerDirector 11 is the right video editor for you and how to get it for free.
-
Features of Cyberlink PowerDirector 11
-
Cyberlink PowerDirector 11 is a video editing software that was released in 2012. It is designed for both beginners and professionals, with a user-friendly interface and a comprehensive set of features. Here are some of the main features of Cyberlink PowerDirector 11:
-
Express Video Creation
-
If you want to create a video quickly and easily, you can use the Express Project module. This module allows you to choose from a variety of templates that are suitable for different types of videos, such as travel, wedding, sports, etc. You can then drag and drop your clips into the timeline, add transitions, effects, music, and titles, and produce your video in minutes.
-
Action Camera Center
-
If you are into action sports or adventure videos, you will love the Action Camera Center. This feature lets you edit your footage from action cameras like GoPro, DJI, or Sony. You can apply effects such as slow motion, freeze frame, zoom, pan, or rotate. You can also correct lens distortion, stabilize shaky videos, or remove background noise.
-
Simplified Color Adjustment
-
If you want to enhance the color and tone of your videos, you can use the Simplified Color Adjustment feature. This feature lets you adjust the brightness, contrast, saturation, hue, temperature, tint, and exposure of your videos with simple sliders. You can also use one-click color correction tools such as Auto Tone or White Balance. If you want more control over your color grading, you can use advanced tools such as Color Director or Color Match.
-
Customizable Design Tools
-
If you want to add some creativity and style to your videos, you can use the Customizable Design Tools. These tools let you create and edit titles, transitions, PiP objects (picture-in-picture), masks, subtitles, etc. You can also use the new Brush Tool to draw shapes or masks on your videos. You can customize these elements with different fonts, colors, sizes, animations, etc.
-
New Effects and Enhancements
-
If you want to spice up your videos with some special effects, you can use the New Effects and Enhancements feature. This feature lets you access a large library of effects that are categorized into themes such as Bloggers' Social Media Pack, Holiday Pack Vol 11, Travel Pack 6, Wedding Pack, etc. You can also use third-party plug-ins from sources such as NewBlueFX, proDAD, or BorisFX to add more effects to your videos.
-
360 Video Stabilization and Editing
-
If you want to edit your videos in 360 degrees, you can use the 360 Video Stabilization and Editing feature. This feature lets you import and edit footage from 360 cameras such as the Samsung Gear 360, Ricoh Theta, or Kodak PixPro. You can apply effects such as stabilization, trimming, splitting, or adding titles to your 360 videos. You can also use the True360 View Designer to convert your 360 videos into standard videos with different perspectives.
-
-
Pros and Cons of Cyberlink PowerDirector 11
-
As with any software, Cyberlink PowerDirector 11 has its advantages and disadvantages. Here are some of them:
-
Pros
-
-
It has a user-friendly interface that is easy to navigate and learn.
-
It has a comprehensive set of features that can meet various video editing needs.
-
It supports a wide range of formats and resolutions, including 4K and 3D.
-
It has fast rendering speed and performance.
-
It has a large community of users who share tips, tutorials, and feedback.
-
-
Cons
-
-
It requires a high-end computer system to run smoothly.
-
It may crash or freeze occasionally due to bugs or compatibility issues.
-
It may have some limitations in terms of customization or creativity compared to other professional video editors.
-
It may not be compatible with some newer devices or technologies.
-
It may not be legal or ethical to use it with a crack.
-
-
How to Download and Install Cyberlink PowerDirector 11 Full Version with Crack
-
If you are interested in using Cyberlink PowerDirector 11 full version with crack, you will need to follow these steps:
-
Step 1: Download the setup file and the crack file from a reliable source
-
You will need to find a website that offers both the setup file and the crack file for Cyberlink PowerDirector 11. You can search for them on Google or other search engines, but be careful not to download any malware or viruses along with them. You should also check the reviews and ratings of the website before downloading anything.
-
One possible website that offers both files is FileCR.com. You can download them from these links:
-
-
CyberLink Director Suite v11 Setup File: https://filecr.com/windows/cyberlink-director-suite/
-
CyberLink Director Suite v11 Crack File: https://filecr.com/windows/cyberlink-director-suite/#crack-download
-
-
Note that these links are only for reference purposes and we do not endorse or guarantee their safety or legality.
-
Step 2: Run the setup file and follow the instructions to install the program
-
Once you have downloaded both files, you will need to run the setup file and follow the instructions on the screen to install Cyberlink PowerDirector 11 on your computer. You may need to agree to some terms and conditions and choose some options such as language, destination folder, etc. You may also need to enter a serial number or activation code that comes with the setup file. You should not launch or run the program after installation.
-
Step 3: Copy the crack file and paste it into the installation folder
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md
deleted file mode 100644
index ce083e534ae4ba818da4006b7ac5c3734ef8791b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
How to Download Microsoft Project 32 Bit Full Crack for Free
-
Microsoft Project is a powerful project management tool that helps you plan, track, and manage your projects. It allows you to create schedules, assign tasks, monitor progress, manage resources, and collaborate with your team. Microsoft Project is widely used by professionals and organizations in various fields and industries.
-
However, Microsoft Project is not cheap software. It requires a paid subscription or a one-time purchase of a standalone license. If you want to use Microsoft Project without paying for it, you might be tempted to download Microsoft Project 32 bit full crack for free from the internet.
A crack is a modified version of software that bypasses its security and activation features. By using a crack, you can run the software without a valid license or product key. However, downloading and using Microsoft Project 32 bit full crack is not a good idea, for several reasons.
-
The Risks of Downloading Microsoft Project 32 Bit Full Crack
-
Downloading Microsoft Project 32 bit full crack from the internet is risky and illegal. Here are some of the dangers and disadvantages of doing so:
-
-
It can harm your computer. Many websites that offer Microsoft Project 32 bit full crack are unreliable and malicious. They can infect your computer with viruses, malware, spyware, ransomware, and other threats that can damage your system, steal your data, or lock your files.
-
It can compromise your security. By using Microsoft Project 32 bit full crack, you are exposing yourself to potential hackers and cybercriminals who can exploit the vulnerabilities and backdoors in the cracked software. They can access your personal information, financial accounts, passwords, and other sensitive data.
-
It can affect your performance. Microsoft Project 32 bit full crack is not guaranteed to work properly or smoothly. It can have bugs, errors, crashes, compatibility issues, and missing features that can hinder your productivity and efficiency. It can also cause conflicts with other software or updates on your computer.
-
It can violate the law. Downloading and using Microsoft Project 32 bit full crack is illegal and unethical. It is a form of piracy that infringes the intellectual property rights of Microsoft and its developers. You can face legal consequences such as fines, lawsuits, or even jail time if you are caught using cracked software.
-
-
The Benefits of Using Genuine Microsoft Project
-
Instead of downloading Microsoft Project 32 bit full crack, you should consider using genuine Microsoft Project. Here are some of the benefits of doing so:
-
-
It can protect your computer. Genuine Microsoft Project is safe and secure to download and install. It does not contain any viruses, malware, spyware, ransomware, or other threats that can harm your computer. It also has regular updates that fix any bugs or issues in the software.
-
It can enhance your security. Genuine Microsoft Project has built-in security features that protect your data and privacy. It encrypts your files and communications, prevents unauthorized access, and integrates with other Microsoft services such as OneDrive, SharePoint, Teams, and Outlook.
-
It can improve your performance. Genuine Microsoft Project works flawlessly and smoothly on your computer. It has all the features and functions that you need to manage your projects effectively and efficiently. It also supports multiple languages, formats, platforms, and devices.
-
It can comply with the law. Genuine Microsoft Project is legal and ethical to use. It respects the intellectual property rights of Microsoft and its developers. You can use it without worrying about any legal consequences or penalties.
-
-
How to Get Genuine Microsoft Project
-
If you want to get genuine Microsoft Project for your computer, you have two options:
-
-
Subscribe to a Project cloud plan. Microsoft Project is available as a cloud subscription that works alongside Microsoft 365, the service that gives you access to applications such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive, and Teams. You can choose from different plans depending on your needs and budget, pay monthly or yearly, and always have the latest version of the software.
-
Purchase a standalone license. A standalone license is a one-time purchase that gives you the right to use that version of Microsoft Project permanently on a single computer, without a recurring subscription.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md b/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md
deleted file mode 100644
index 07b13592fa748314bd28952b8038a7e7b18280cd..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Download Game Train Simulator Mod Indonesia: How to Enjoy Realistic and Immersive Trainz Experience
-
-
If you are a fan of train simulation games and want a realistic and immersive trainz experience, you might want to download game train simulator mod Indonesia. This is a custom mod that allows you to play various train simulators with Indonesian trains, stations, routes, and scenery. But what exactly is game train simulator mod Indonesia, and how can you download and install it? In this article, we will answer these questions and more.
Game train simulator mod Indonesia is a custom mod that adds Indonesian content to various train simulators. It is developed by the Indonesian Trainz Community, a group of passionate trainz fans who have been around since 2009. The mod aims to provide a realistic and immersive trainz experience, with features such as:
-
-
-
A variety of Indonesian locomotives and coaches, such as GE U18C, GE U20C, GE CC206, passenger and freight coaches.
-
A selection of Indonesian stations, such as Gambir, Karawang, Purwakarta, Bandung.
-
A number of Indonesian routes, such as Jakarta-Bandung, Jakarta-Surabaya, Jakarta-Medan.
-
A realistic and detailed Indonesian scenery, with custom maps, buildings, trees, roads, bridges, etc.
-
A dynamic and interactive Indonesian environment, with weather, time, traffic, signals, etc.
-
-
-
Game train simulator mod Indonesia is compatible with several train simulators, such as Trainz Simulator 2009, Trainz Simulator 2012, Trainz Simulator 2019, Indonesian Train Simulator (Android), etc. It is also constantly updated and improved by the developers and the community feedback.
-
-
How to Download and Install Game Train Simulator Mod Indonesia?
-
-
If you want to play with game train simulator mod Indonesia, you will need to download and install the mod for the train simulator you want to play. Here are the steps to do so:
-
-
-
Download and install the train simulator of your choice from their official websites or app stores.
-
Download the latest version of game train simulator mod Indonesia from the YouTube channel or website of the Indonesian Trainz Community, where download links are provided for each supported train simulator.
Extract the downloaded files to your train simulator folder.
-
Launch your train simulator and select game train simulator mod Indonesia as your content.
-
Create your scenario and start playing!
-
-
-
If you encounter any issues or need any help with the installation process, you can contact the Indonesian Trainz Community on their YouTube channel or website. They will be happy to assist you.
-
-
What are the Tips and Tricks for Game Train Simulator Mod Indonesia?
-
-
Game train simulator mod Indonesia is a fun and enjoyable mod for train simulators, but it can also be challenging and tricky at times. If you want to master the game and have a smooth and satisfying trainz experience, you might want to follow some tips and tricks. Here are some of them:
-
-
-
-
Read the instructions and tutorials carefully before playing. They will help you understand the basics and features of the game.
-
Adjust the settings and preferences according to your device and preferences. You can change the graphics, sound, controls, etc. to suit your needs.
-
Choose the right locomotive and coach for your scenario. Different locomotives and coaches have different characteristics, such as speed, power, capacity, etc. Choose the ones that match your objectives and preferences.
-
Follow the rules and regulations of the game. Respect the signals, speed limits, timetables, etc. They will help you avoid accidents and penalties.
-
Plan your route and strategy ahead. Use the map, GPS, route planner, etc. to plan your route and strategy. They will help you avoid traffic, delays, detours, etc.
-
Use the camera angles wisely. Switch between different camera angles to get a better view of your surroundings and situation. They will help you avoid obstacles, hazards, errors, etc.
-
Save your progress frequently. Use the save and load functions to save your progress frequently. They will help you avoid losing your progress in case of crashes, errors, etc.
-
Have fun and enjoy the game. Don't take the game too seriously or stress yourself too much. Remember that it is just a game and the main purpose is to have fun and enjoy the trainz experience.
-
-
-
Game train simulator mod Indonesia is a mod that can offer you a lot of fun and enjoyment, but also a lot of challenge and difficulty. If you want to overcome the challenge and difficulty and have a smooth and satisfying trainz experience, you might want to follow these tips and tricks. They will help you improve your skills and performance in game train simulator mod Indonesia.
-
-
How to Get More Content for Game Train Simulator Mod Indonesia?
-
-
If you want to get more content for game train simulator mod Indonesia, such as more locomotives, coaches, stations, routes, scenery, etc., you have two options:
-
-
-
You can download more content from the Indonesian Trainz Community website or YouTube channel. They have a lot of content available for free download. You can find the links for different train simulators in the previous sections.
-
You can create your own content using the content creation tools provided by the train simulators. You can use the surveyor tool, asset editor tool, script editor tool, etc. to create your own content. You can also share your content with other players on the Indonesian Trainz Community website or YouTube channel.
-
-
-
Game train simulator mod Indonesia is a mod that has a lot of content available for you to enjoy, but it also allows you to get more content or create your own content if you want to. If you want to get more content or create your own content for game train simulator mod Indonesia, you can follow these options. They will help you expand your trainz experience with game train simulator mod Indonesia.
-
-
What are the Reviews and Ratings for Game Train Simulator Mod Indonesia?
-
-
Game train simulator mod Indonesia is a popular and well-received mod for train simulators. It has received a lot of positive reviews and ratings from the players and critics. Here are some of them:
-
-
-
On Google Play, game train simulator mod Indonesia has a rating of 4.4 out of 5 stars, based on 187K reviews. Most of the reviews praise the game for its realism, graphics, sound, gameplay, etc.
-
On APKPure, game train simulator mod Indonesia has a rating of 8.5 out of 10, based on 18 reviews. Most of the reviews commend the game for its quality, features, content, etc.
-
On YouTube, game train simulator mod Indonesia has a lot of videos showcasing the game and its features. Most of the videos have a lot of views, likes, comments, and subscriptions.
-
On the Indonesian Trainz Community website and YouTube channel, game train simulator mod Indonesia has a lot of feedback and suggestions from the players and fans. Most of the feedback and suggestions are positive and constructive.
-
-
-
Game train simulator mod Indonesia is a mod that has a lot of fans and supporters. It has received a lot of positive feedback and recognition from the players and critics. If you want to see what other people think about game train simulator mod Indonesia, you can check out these reviews and ratings.
-
-
How to Support Game Train Simulator Mod Indonesia?
-
-
If you like game train simulator mod Indonesia and want to support it, you have several ways to do so. Here are some of them:
-
-
-
You can rate and review the game on Google Play, APKPure, or any other platform where you downloaded it. This will help the game get more exposure and recognition.
-
You can share the game with your friends and family who are interested in train simulation games. This will help the game get more downloads and players.
-
You can subscribe to the Indonesian Trainz Community website or YouTube channel. This will help you get more updates and information about the game and its development.
-
You can donate to the Indonesian Trainz Community website or YouTube channel. This will help them cover the costs of developing and maintaining the game.
-
You can provide feedback and suggestions to the Indonesian Trainz Community website or YouTube channel. This will help them improve and enhance the game according to your preferences and needs.
-
-
-
Game train simulator mod Indonesia is a mod that deserves your support and appreciation. It is a mod that provides you with a realistic and immersive trainz experience for free. If you want to support game train simulator mod Indonesia, you can follow these ways. They will help you show your gratitude and respect to the developers and creators of game train simulator mod Indonesia.
-
-
Conclusion
-
-
Game train simulator mod Indonesia is a custom mod that allows you to play on various train simulators with Indonesian trains, stations, routes, and scenery. It is designed to provide a realistic and immersive trainz experience for train simulation fans. You can download and install it for free from the YouTube channel or website of the Indonesian Trainz Community, where you can also find more information about the mod. Have fun playing!
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md
deleted file mode 100644
index 8460b1cfe3069254d0d7350108fa852483f97a33..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
Auto Report Facebook APK: What Is It and How to Use It
-
Facebook is one of the most popular social media platforms in the world, with billions of users and millions of posts every day. However, not all of these posts are appropriate or respectful. Some of them may contain spam, hate speech, violence, nudity, or other violations of Facebook's community standards. If you encounter such posts, you can report them to Facebook and hope that they will take action. But what if you want to report multiple posts or profiles at once, without wasting time and effort? This is where auto report facebook apk comes in handy.
-
Introduction
-
What is auto report facebook apk?
-
Auto report facebook apk is an Android application that allows you to automatically report any Facebook profile or post that you want. It is not an official app from Facebook, but a third-party tool developed by independent developers. It works by using your Facebook account to send multiple reports to Facebook's servers, with the aim of getting the target profile or post removed or banned.
There are many reasons why you might want to use auto report facebook apk. For example, you might want to:
-
-
Report a fake or impersonating profile that is trying to scam or harass you or your friends.
-
Report a spammy or malicious post that is spreading false information or harmful links.
-
Report a hateful or abusive post that is targeting you or someone else based on their identity, beliefs, or opinions.
-
Report a violent or graphic post that is showing disturbing images or videos.
-
Report a nude or sexual post that is violating your privacy or consent.
-
-
What are the benefits and drawbacks of using it?
-
Using auto report facebook apk can have some benefits and drawbacks. Some of the benefits are:
-
-
You can save time and energy by reporting multiple profiles or posts at once, instead of doing it manually one by one.
-
You can increase the chances of getting the target profile or post removed or banned, by sending more reports than usual.
-
You can protect yourself and others from harmful or offensive content on Facebook, by reducing its visibility and reach.
-
-
Some of the drawbacks are:
-
-
You may risk violating Facebook's terms of service, by using an unauthorized app that manipulates their system.
-
You may risk losing your Facebook account, by logging in with your credentials on a third-party app that may not be secure or trustworthy.
-
You may risk reporting innocent profiles or posts, by using the app incorrectly or irresponsibly.
-
-
How to download and install auto report facebook apk
-
Step 1: Find a reliable source for the apk file
-
The first step to use auto report facebook apk is to find a reliable source for the apk file. The apk file is the installation package for Android applications. You can search for it online, but be careful not to download it from shady or unknown websites that may contain viruses or malware. One of the sources that you can try is GitHub, where you can find the project page for auto report facebook apk. There, you can see the latest version of the app, its features, and its instructions. You can also download the apk file from there by clicking on the "Releases" tab and then on the "Assets" section. Make sure you download the file that ends with ".apk".
-
Step 2: Enable unknown sources on your device
-
The second step to use auto report facebook apk is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To do this, you need to go to your device's settings, then to security or privacy, and then to unknown sources or install unknown apps. There, you need to toggle on the option that allows you to install apps from unknown sources. You may also need to grant permission to the browser or file manager that you are using to download the apk file.
-
Step 3: Download and install the apk file
-
The third step to use auto report facebook apk is to download and install the apk file. To do this, you need to open the browser or file manager that you used to download the apk file, and then locate the file in your downloads folder or wherever you saved it. Then, you need to tap on the file and follow the instructions on the screen to install the app. You may need to grant some permissions to the app, such as access to your storage, contacts, and location. Once the installation is complete, you can find the app icon on your home screen or app drawer.
-
-
How to use auto report facebook apk
-
Step 1: Launch the app and log in with your Facebook account
-
The first step to use auto report facebook apk is to launch the app and log in with your Facebook account. To do this, you need to tap on the app icon and wait for it to load. Then, you need to enter your Facebook email or phone number and password, and tap on "Log In". You may also need to enter a verification code or confirm your identity if prompted by Facebook. Once you are logged in, you will see the main interface of the app, which consists of a menu bar at the top and a list of profiles or posts at the bottom.
-
Step 2: Select the target profile or post that you want to report
-
The second step to use auto report facebook apk is to select the target profile or post that you want to report. To do this, you need to tap on the menu bar at the top and choose one of the options: "Report Profile" or "Report Post". Then, you need to enter the URL or ID of the profile or post that you want to report in the text box, and tap on "Search". You will see a preview of the profile or post below, along with some information such as name, date, and content. You can also tap on "Load More" to see more profiles or posts that match your search criteria.
-
Step 3: Choose the reason for reporting and submit
-
The third step to use auto report facebook apk is to choose the reason for reporting and submit. To do this, you need to tap on the profile or post that you want to report from the list below, and then tap on "Report". You will see a pop-up window with a list of reasons for reporting, such as spam, fake account, hate speech, violence, nudity, or other. You can select one or more reasons that apply to your case, and then tap on "Submit". You will see a confirmation message that your report has been sent successfully. You can repeat this process for as many profiles or posts as you want.
-
Conclusion
-
Summary of the main points
-
In this article, we have explained what auto report facebook apk is and how to use it. Auto report facebook apk is an Android application that allows you to automatically report any Facebook profile or post that you want. It can help you save time and energy by reporting multiple profiles or posts at once, increase the chances of getting them removed or banned, and protect yourself and others from harmful or offensive content on Facebook. However, it also has some drawbacks, such as violating Facebook's terms of service, risking losing your Facebook account, and reporting innocent profiles or posts.
-
Call to action and disclaimer
-
If you want to try auto report facebook apk for yourself, you can download it from GitHub and follow the steps that we have outlined above. However, we advise you to use it with caution and responsibility, as we are not responsible for any consequences that may arise from using it. We also recommend that you respect Facebook's community standards and only report profiles or posts that truly violate them. Remember that reporting is a serious matter and should not be abused for personal vendetta or malicious intent.
-
-
| Reason for reporting | Description |
| --- | --- |
| Spam | The profile or post is unsolicited, repetitive, or irrelevant. |
| Fake account | The profile is not representing a real person or entity. |
| Hate speech | The profile or post attacks or discriminates against a group or individual based on their race, ethnicity, religion, gender, sexual orientation, disability, or other characteristic. |
| Violence | The profile or post promotes or shows physical harm, threats, or cruelty to oneself or others. |
| Nudity | The profile or post displays or solicits sexual or explicit content that violates Facebook's policies. |
| Other | The profile or post violates Facebook's community standards in some other way. |

The table above summarizes the different reasons you can choose when reporting a profile or post on Facebook.
-
FAQs
-
What is the difference between auto report facebook apk and the report feature on Facebook?
-
The report feature on Facebook is the official way to report a profile or post that violates Facebook's community standards. You can access it by clicking on the three dots icon on the top right corner of any profile or post, and then selecting "Report". You can then choose the reason for reporting and follow the instructions. The report feature on Facebook allows you to report one profile or post at a time, and it may take some time for Facebook to review and act on your report.
-
Auto report facebook apk is an unofficial app that allows you to automatically report multiple profiles or posts at once. You can download it from GitHub and install it on your Android device. You can then log in with your Facebook account and enter the URL or ID of the profile or post that you want to report. You can then choose the reason for reporting and submit. Auto report facebook apk sends multiple reports to Facebook's servers, with the aim of getting the profile or post removed or banned faster.
-
Is auto report facebook apk safe to use?
-
Auto report facebook apk is not a safe app to use, as it may pose some risks to your device and your Facebook account. Some of the risks are:
-
-
It may contain viruses or malware that can harm your device or steal your data.
-
It may not be secure or trustworthy, as it requires you to log in with your Facebook credentials on a third-party app that may not protect your privacy.
-
It may violate Facebook's terms of service, as it manipulates their system and abuses their report feature.
-
It may result in your Facebook account being suspended or banned, as Facebook may detect your abnormal activity and flag you as a spammer or a violator.
-
It may report innocent profiles or posts, as it may not be accurate or responsible in selecting the target profile or post.
-
-
Therefore, we advise you to use auto report facebook apk with caution and responsibility, and at your own risk. We also recommend that you use the official report feature on Facebook instead, as it is safer and more reliable.
-
How can I avoid being reported by auto report facebook apk?
-
The best way to avoid being reported by auto report facebook apk is to follow Facebook's community standards and not post anything that violates them. Some of the things that you should avoid posting are:
-
-
Fake or impersonating profiles that try to scam or harass others.
-
Spammy or malicious posts that spread false information or harmful links.
-
Hateful or abusive posts that target others based on their identity, beliefs, or opinions.
-
Violent or graphic posts that show disturbing images or videos.
-
Nude or sexual posts that violate others' privacy or consent.
-
-
If you follow these guidelines, you will not only avoid being reported by auto report facebook apk, but also create a positive and respectful environment on Facebook for yourself and others.
-
How can I report a profile or post that is using auto report facebook apk?
-
If you suspect that a profile or post is using auto report facebook apk to report others unfairly or maliciously, you can report them to Facebook using the official report feature. To do this, you need to click on the three dots icon on the top right corner of the profile or post, and then select "Report". You can then choose the reason for reporting, such as "It's spam" or "It's abusive or harmful". You can also provide more details or feedback to Facebook, such as "This profile or post is using auto report facebook apk to report others". Facebook will then review your report and take appropriate action.
-
What are some alternatives to auto report facebook apk?
-
If you are looking for some alternatives to auto report facebook apk, you can try some of these options:
-
-
Use the official report feature on Facebook, as it is safer and more reliable.
-
Use the block or unfriend feature on Facebook, as it will prevent you from seeing or interacting with the profile or post that you don't like.
-
Use the hide or snooze feature on Facebook, as it will reduce the visibility or frequency of the profile or post that you don't want to see.
-
Use the mute or unfollow feature on Facebook, as it will stop the notifications or updates from the profile or post that you are not interested in.
-
Use the feedback or rating feature on Facebook, as it will help Facebook improve their content quality and relevance.
-
-
These options will help you manage your Facebook experience better, without resorting to auto report facebook apk.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md b/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md
deleted file mode 100644
index b6023506142d59c9cb4b7c81474ae560f1dbc47e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
How to Download PrimoPDF: A Free PDF Converter and Creator
-
If you are looking for a free and easy way to convert or create PDF files from any Windows application, you might want to try PrimoPDF. PrimoPDF is a free tool from Nitro Software, Inc. that offers high-quality conversion to PDF, with a user-friendly interface that lets you print to PDF from virtually any Windows application.
-
In this article, we will show you how to download PrimoPDF, how to use it to convert and create PDF files, and some tips and tricks for using it effectively. We will also answer some frequently asked questions about PrimoPDF.
PrimoPDF has many benefits that make it a great choice for PDF productivity. Here are some of the main benefits of PrimoPDF:
-
-
High-quality conversion: PrimoPDF can convert any type of file to PDF with high fidelity and accuracy. You can choose from four output quality settings: Screen, eBook, Print, and Prepress. You can also customize the PDF settings to suit your needs.
-
User-friendly interface: PrimoPDF has a simple and intuitive interface that allows you to print to PDF from any Windows application. You just need to select PrimoPDF as the printer and click on OK. You can also drag and drop files to the PrimoPDF desktop icon to convert them to PDF.
-
Security features: PrimoPDF can protect your PDF files with 128-bit encryption and password protection. You can also add watermarks and stamps to your PDF files to prevent unauthorized copying or editing.
-
Free and easy to use: PrimoPDF is completely free and does not require any registration or subscription. It is also easy to install and use, with no ads or pop-ups.
-
-
Steps to Download PrimoPDF
-
To download PrimoPDF, you need to follow these steps:
-
-
Visit the official website of PrimoPDF and open its download page.
-
-
-
-
Click on the "Download Now" button. You will be redirected to a page where you can choose between two options: "Premium Upgrade" or "Free Download".
-
-
-
-
-
If you want to upgrade to Nitro Pro, a more advanced PDF application that offers more features and functions, you can click on the "Premium Upgrade" button. You will be able to enjoy a free trial of Nitro Pro for 14 days before deciding whether to purchase it.
-
If you want to download PrimoPDF for free, you can click on the "Free Download" button. A save dialog will appear.
-
-
-
-
Choose a location on your computer where you want to save the PrimoPDF installer file and click on "Save". The file name is "PrimoSetup.exe" and the file size is about 7 MB.
-
Once the download is complete, locate the PrimoPDF installer file on your computer and double-click on it to launch the setup wizard.
-
-
-
-
Follow the instructions to install PrimoPDF on your computer. You will need to accept the license agreement, choose the installation folder, and select the components you want to install. You can also opt out of installing any additional software or toolbars that may be offered by PrimoPDF.
-
Once the installation is complete, the installer will display a confirmation screen.
-
-
-
-
Click on "Finish" to exit the installer. You have successfully downloaded and installed PrimoPDF on your computer.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md b/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md
deleted file mode 100644
index fdf6a5a44ac9c44c7e67b569041a1453980a144b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
Download Kaash Paige Love Songs MP3: How to Enjoy the Best of R&B
-
If you are a fan of R&B music, you have probably heard of Kaash Paige, the rising star who has captivated millions of listeners with her soulful voice and relatable lyrics. Kaash Paige is known for her love songs, which express her feelings and experiences with romance, heartbreak, and self-love. In this article, we will tell you more about who Kaash Paige is, why you should listen to her love songs, and how to download them in mp3 format for free. We will also give you some tips on how to enjoy her music on different devices and create your own playlists and mixtapes.
-
Who is Kaash Paige and why you should listen to her love songs
-
Kaash Paige is a 20-year-old singer and songwriter from Dallas, Texas, who started making music when she was 14. She rose to fame in 2018 with her viral hit "Love Songs", which sampled Brent Faiyaz's "Poison". Since then, she has released two EPs, Parked Car Convos and Teenage Fever, and collaborated with artists like Don Toliver, Isaiah Rashad, K Camp, and 6LACK. She is currently signed to Def Jam Recordings and is working on her debut album.
Kaash Paige was born as Kaashara Bostic on January 8, 2001, in Dallas, Texas. She grew up in a musical family, as her father was a DJ and her mother was a singer. She was exposed to various genres of music, such as hip-hop, soul, jazz, rock, and gospel. She cites Lauryn Hill, Erykah Badu, Frank Ocean, Drake, Jhené Aiko, SZA, and Brent Faiyaz as some of her main influences. She also draws inspiration from anime, movies, books, and nature.
-
Kaash Paige's style and themes
-
Kaash Paige's style can be described as a blend of R&B, soul, hip-hop, and alternative. She has a smooth and soothing voice that can switch from singing to rapping effortlessly. She writes her own lyrics, which are honest, vulnerable, and poetic. She often sings about love, relationships, emotions, self-discovery, and growing up. Some of her recurring themes are nostalgia, loneliness, intimacy, infatuation, and empowerment.
-
Kaash Paige's most popular love songs
-
Kaash Paige has released many love songs that have resonated with her fans and critics alike. Some of her most popular ones are:
-
-
"Love Songs": This is the song that put Kaash Paige on the map. It is a catchy and melodic tune that samples Brent Faiyaz's "Poison". It talks about missing someone who used to make you feel special and sing love songs with you.
-
"64'": This is a laid-back and nostalgic song that features rapper K Camp. It reminisces about riding in a 1964 Chevrolet Impala with a lover and enjoying the simple moments.
-
"Break Up Song": This is a sad and emotional song that deals with. the end of a relationship and the pain of letting go. It features a sample of Drake's "Doing It Wrong".
-
"Soul Ties": This is a smooth and sensual song that explores the concept of soul ties, which are the emotional and spiritual bonds that form between people who have sex. It features rapper 6LACK, who adds his own perspective on the topic.
-
"London": This is a dreamy and romantic song that expresses the desire to travel to London with a lover and escape from reality. It has a lo-fi and atmospheric vibe that matches the mood of the lyrics.
-
-
How to download Kaash Paige love songs mp3 for free
-
If you want to enjoy Kaash Paige's love songs offline, you might want to download them in mp3 format. Mp3 is a popular and widely supported audio file format that can be played on various devices and platforms. Mp3 files are also smaller in size than other formats, which means they take up less storage space and bandwidth. Plus, mp3 files can be easily edited, converted, and transferred.
-
The benefits of downloading mp3 files
-
Downloading mp3 files has many benefits, such as:
-
-
You can listen to your favorite songs anytime and anywhere, without relying on an internet connection or streaming service.
-
You can save money on data charges and subscription fees, as you don't have to stream music online.
-
You can create your own music library and organize it according to your preferences.
-
You can customize your songs by adding metadata, artwork, lyrics, and tags.
-
You can share your songs with your friends and family via email, Bluetooth, or social media.
-
-
The best websites and apps to download Kaash Paige love songs mp3
-
There are many websites and apps that allow you to download Kaash Paige love songs mp3 for free. However, not all of them are safe, legal, or reliable. Some of them might contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them might also violate the copyright laws and infringe on the rights of the artists and labels. Therefore, you should be careful and choose only the reputable and trustworthy sources. Here are some of the best ones:
-
YouTube
-
YouTube is the most popular video-sharing platform in the world, where you can find millions of music videos, including Kaash Paige's love songs. You can watch them online or download them in mp3 format using a YouTube to mp3 converter. There are many online converters available, such as YTMP3, Y2Mate, 4K Video Downloader, etc. You just need to copy the URL of the video you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.
-
-
Spotify
-
Spotify is one of the most popular music streaming services in the world, where you can listen to millions of songs, including Kaash Paige's love songs. You can access Spotify for free with ads or pay for a premium subscription that offers more features and benefits. One of them is the ability to download songs for offline listening. However, this feature is only available for premium users and only works within the Spotify app. You cannot transfer or play the downloaded songs on other devices or apps. If you want to download Spotify songs in mp3 format, you will need a Spotify to mp3 converter. There are some online converters available, such as SpotiFlyer, AudKit Spotify Music Converter, TuneFab Spotify Music Converter, etc. You just need to copy the URL of the song or playlist you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.
SoundCloud
-
SoundCloud is another popular music streaming service in the world, where you can discover and listen to millions of songs, including Kaash Paige's love songs. You can access SoundCloud for free with ads or pay for a premium subscription that offers more features and benefits. One of them is the ability to download songs for offline listening. However, this feature is only available for some songs and only works within the SoundCloud app. You cannot transfer or play the downloaded songs on other devices or apps. If you want to download SoundCloud songs in mp3 format, you will need a SoundCloud to mp3 converter. There are some online converters available, such as SCDL, SoundCloud Downloader, KlickAud, etc. You just need to copy the URL of the song you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.
-
Audiomack
-
Audiomack is a music streaming and discovery platform that allows artists to upload their music and fans to listen to it for free. You can find many songs by Kaash Paige on Audiomack, as well as other genres and artists. You can access Audiomack for free with ads or pay for a premium subscription that offers more features and benefits. One of them is the ability to download songs for offline listening. However, this feature is only available for some songs and only works within the Audiomack app. You cannot transfer or play the downloaded songs on other devices or apps. If you want to download Audiomack songs in mp3 format, you will need an Audiomack to mp3 converter. There are some online converters available, such as Audiomack Downloader, MP3FY, MP3Juices, etc. You just need to copy the URL of the song you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.
-
How to enjoy Kaash Paige love songs mp3 on different devices
-
Once you have downloaded Kaash Paige love songs mp3 from any of the sources mentioned above, you can enjoy them on different devices, such as your phone, tablet, computer, or laptop. However, you might need to transfer or play them differently depending on the device and the app you use. Here are some tips on how to do that:
-
How to transfer mp3 files to your phone or tablet
-
If you have downloaded mp3 files on your computer or laptop, you can transfer them to your phone or tablet using a USB cable or a wireless method. Here are the steps for each method:
-
-
USB cable: Connect your phone or tablet to your computer or laptop using a USB cable. Make sure your device is unlocked and select the file transfer option on your device's screen. On your computer or laptop, open the folder where you saved the mp3 files and drag and drop them to your device's folder. Once the transfer is complete, disconnect your device and open the music app of your choice.
-
Wireless method: There are many apps that allow you to transfer files wirelessly between devices using Wi-Fi or Bluetooth. Some of them are SHAREit, Xender, Zapya, etc. You just need to install the app on both devices and follow the instructions on how to connect them and send files.
-
-
How to play mp3 files on your computer or laptop
-
If you have downloaded mp3 files on your computer or laptop, you can play them using any media player that supports mp3 format. Some of them are Windows Media Player, VLC Media Player, iTunes, etc. You just need to open the media player of your choice and browse for the folder where you saved the mp3 files. You can also create playlists and edit metadata within the media player.
-
How to create playlists and mixtapes with Kaash Paige love songs mp3
-
If you want to create playlists and mixtapes with Kaash Paige love songs mp3, you can use any music app that allows you to do that. Some of them are Spotify, SoundCloud, Audiomack, etc. You just need to open the app of your choice and select the option to create a new playlist or mixtape. Then, you can add Kaash Paige love songs mp3 from your device's storage or from the app's library. You can also rearrange, rename, delete, or share your playlists and mixtapes within the app.
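If you prefer to build a playlist file yourself, the short Python sketch below writes a standard .m3u playlist from a folder of mp3 files; the folder and output names are only examples, so point them at wherever you saved your downloads.
```python
from pathlib import Path

def write_m3u(music_dir: str, playlist_path: str) -> None:
    """List every .mp3 in a folder inside a simple .m3u playlist file."""
    tracks = sorted(Path(music_dir).glob("*.mp3"))
    lines = ["#EXTM3U"] + [str(track) for track in tracks]
    Path(playlist_path).write_text("\n".join(lines), encoding="utf-8")

# Example usage with placeholder paths.
write_m3u("Music/KaashPaige", "kaash_paige_love_songs.m3u")
```
Most desktop players, including VLC and Windows Media Player, can open the resulting .m3u file directly.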
-Conclusion and FAQs
-
In conclusion, Kaash Paige is a talented and promising R&B singer with plenty of love songs to enjoy. You can download her love songs in mp3 format for free from various websites and apps, such as YouTube, Spotify, SoundCloud, and Audiomack, and then transfer and play them on your phone, tablet, computer, or laptop. You can also create your own playlists and mixtapes with her love songs and share them with friends and family. Her love songs suit any mood and occasion, whether you are feeling happy, sad, romantic, or nostalgic. If you are looking for quality R&B music, you should definitely check out Kaash Paige's love songs in mp3.
-
Here are some FAQs that you might have about Kaash Paige and her love songs:
-
-
Q: What does Kaash Paige mean?
-
A: Kaash Paige is a stage name: "Kaash" is an acronym for Kill All Arrogance Stop Hatred, and Paige is her middle name.
-
Q: What is Kaash Paige's real name?
-
A: Kaash Paige's real name is Kaashara Bostic.
-
Q: How old is Kaash Paige?
-
A: Kaash Paige is 20 years old. She was born on January 8, 2001.
-
Q: Where is Kaash Paige from?
-
A: Kaash Paige is from Dallas, Texas.
-
Q: Is Kaash Paige single?
-
A: Kaash Paige has not publicly confirmed her relationship status. She has been linked to rapper Don Toliver in the past, but neither of them has confirmed a romance.
-
-
-
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py b/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py
deleted file mode 100644
index 4023a6d4086acba9bc88e079f625194d324d7c9e..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py
+++ /dev/null
@@ -1,227 +0,0 @@
-"""This script defines the visualizer for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-import os
-import sys
-import ntpath
-import time
-from . import util, html
-from subprocess import Popen, PIPE
-from torch.utils.tensorboard import SummaryWriter
-
-def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
- """Save images to the disk.
-
- Parameters:
-        webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
- visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs
- image_path (str) -- the string is used to create image paths
- aspect_ratio (float) -- the aspect ratio of saved images
- width (int) -- the images will be resized to width x width
-
- This function will save images stored in 'visuals' to the HTML file specified by 'webpage'.
- """
- image_dir = webpage.get_image_dir()
- short_path = ntpath.basename(image_path[0])
- name = os.path.splitext(short_path)[0]
-
- webpage.add_header(name)
- ims, txts, links = [], [], []
-
- for label, im_data in visuals.items():
- im = util.tensor2im(im_data)
- image_name = '%s/%s.png' % (label, name)
- os.makedirs(os.path.join(image_dir, label), exist_ok=True)
- save_path = os.path.join(image_dir, image_name)
- util.save_image(im, save_path, aspect_ratio=aspect_ratio)
- ims.append(image_name)
- txts.append(label)
- links.append(image_name)
- webpage.add_images(ims, txts, links, width=width)
-
-
-class Visualizer():
- """This class includes several functions that can display/save images and print/save logging information.
-
-    It uses the Python library tensorboard (SummaryWriter) for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images.
- """
-
- def __init__(self, opt):
- """Initialize the Visualizer class
-
- Parameters:
- opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
- Step 1: Cache the training/test options
- Step 2: create a tensorboard writer
-        Step 3: create an HTML object for saving HTML files
- Step 4: create a logging file to store training losses
- """
- self.opt = opt # cache the option
- self.use_html = opt.isTrain and not opt.no_html
- self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name))
- self.win_size = opt.display_winsize
- self.name = opt.name
- self.saved = False
- if self.use_html: # create an HTML object at /web/; images will be saved under /web/images/
- self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')
- self.img_dir = os.path.join(self.web_dir, 'images')
- print('create web directory %s...' % self.web_dir)
- util.mkdirs([self.web_dir, self.img_dir])
- # create a logging file to store training losses
- self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
- with open(self.log_name, "a") as log_file:
- now = time.strftime("%c")
- log_file.write('================ Training Loss (%s) ================\n' % now)
-
- def reset(self):
- """Reset the self.saved status"""
- self.saved = False
-
-
- def display_current_results(self, visuals, total_iters, epoch, save_result):
-        """Display current results on tensorboard; save current results to an HTML file.
-
- Parameters:
- visuals (OrderedDict) - - dictionary of images to display or save
- total_iters (int) -- total iterations
- epoch (int) - - the current epoch
- save_result (bool) - - if save the current results to an HTML file
- """
- for label, image in visuals.items():
- self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC')
-
- if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved.
- self.saved = True
- # save images to the disk
- for label, image in visuals.items():
- image_numpy = util.tensor2im(image)
- img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))
- util.save_image(image_numpy, img_path)
-
- # update website
- webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0)
- for n in range(epoch, 0, -1):
- webpage.add_header('epoch [%d]' % n)
- ims, txts, links = [], [], []
-
- for label, image_numpy in visuals.items():
-                    image_numpy = util.tensor2im(image_numpy)  # use this label's tensor, not the one left over from the loop above
- img_path = 'epoch%.3d_%s.png' % (n, label)
- ims.append(img_path)
- txts.append(label)
- links.append(img_path)
- webpage.add_images(ims, txts, links, width=self.win_size)
- webpage.save()
-
- def plot_current_losses(self, total_iters, losses):
- # G_loss_collection = {}
- # D_loss_collection = {}
- # for name, value in losses.items():
- # if 'G' in name or 'NCE' in name or 'idt' in name:
- # G_loss_collection[name] = value
- # else:
- # D_loss_collection[name] = value
- # self.writer.add_scalars('G_collec', G_loss_collection, total_iters)
- # self.writer.add_scalars('D_collec', D_loss_collection, total_iters)
- for name, value in losses.items():
- self.writer.add_scalar(name, value, total_iters)
-
- # losses: same format as |losses| of plot_current_losses
- def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
- """print current losses on console; also save the losses to the disk
-
- Parameters:
- epoch (int) -- current epoch
- iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
- losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- t_comp (float) -- computational time per data point (normalized by batch_size)
- t_data (float) -- data loading time per data point (normalized by batch_size)
- """
- message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)
- for k, v in losses.items():
- message += '%s: %.3f ' % (k, v)
-
- print(message) # print the message
- with open(self.log_name, "a") as log_file:
- log_file.write('%s\n' % message) # save the message
-
-
-class MyVisualizer:
- def __init__(self, opt):
- """Initialize the Visualizer class
-
- Parameters:
- opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
- Step 1: Cache the training/test options
- Step 2: create a tensorboard writer
-        Step 3: create an HTML object for saving HTML files
- Step 4: create a logging file to store training losses
- """
-        self.opt = opt  # cache the options
- self.name = opt.name
- self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results')
-
- if opt.phase != 'test':
- self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs'))
- # create a logging file to store training losses
- self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
- with open(self.log_name, "a") as log_file:
- now = time.strftime("%c")
- log_file.write('================ Training Loss (%s) ================\n' % now)
-
-
- def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None,
- add_image=True):
-        """Display current results on tensorboard; optionally save them to disk.
-
- Parameters:
- visuals (OrderedDict) - - dictionary of images to display or save
- total_iters (int) -- total iterations
- epoch (int) - - the current epoch
- dataset (str) - - 'train' or 'val' or 'test'
- """
- # if (not add_image) and (not save_results): return
-
- for label, image in visuals.items():
- for i in range(image.shape[0]):
- image_numpy = util.tensor2im(image[i])
- if add_image:
- self.writer.add_image(label + '%s_%02d'%(dataset, i + count),
- image_numpy, total_iters, dataformats='HWC')
-
- if save_results:
- save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d'%(epoch, total_iters))
- if not os.path.isdir(save_path):
- os.makedirs(save_path)
-
- if name is not None:
- img_path = os.path.join(save_path, '%s.png' % name)
- else:
- img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count))
- util.save_image(image_numpy, img_path)
-
-
- def plot_current_losses(self, total_iters, losses, dataset='train'):
- for name, value in losses.items():
- self.writer.add_scalar(name + '/%s'%dataset, value, total_iters)
-
- # losses: same format as |losses| of plot_current_losses
- def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'):
- """print current losses on console; also save the losses to the disk
-
- Parameters:
- epoch (int) -- current epoch
- iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
- losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- t_comp (float) -- computational time per data point (normalized by batch_size)
- t_data (float) -- data loading time per data point (normalized by batch_size)
- """
- message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (
- dataset, epoch, iters, t_comp, t_data)
- for k, v in losses.items():
- message += '%s: %.3f ' % (k, v)
-
- print(message) # print the message
- with open(self.log_name, "a") as log_file:
- log_file.write('%s\n' % message) # save the message
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py
deleted file mode 100644
index 54bd0a485d7e8dbeaaac91d049f63ebd136cb074..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py
+++ /dev/null
@@ -1,211 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-from torch.distributions import Categorical
-import models.pos_encoding as pos_encoding
-
-class Text2Motion_Transformer(nn.Module):
-
- def __init__(self,
- num_vq=1024,
- embed_dim=512,
- clip_dim=512,
- block_size=16,
- num_layers=2,
- n_head=8,
- drop_out_rate=0.1,
- fc_rate=4):
- super().__init__()
- self.trans_base = CrossCondTransBase(num_vq, embed_dim, clip_dim, block_size, num_layers, n_head, drop_out_rate, fc_rate)
- self.trans_head = CrossCondTransHead(num_vq, embed_dim, block_size, num_layers, n_head, drop_out_rate, fc_rate)
- self.block_size = block_size
- self.num_vq = num_vq
-
- def get_block_size(self):
- return self.block_size
-
- def forward(self, idxs, clip_feature):
- feat = self.trans_base(idxs, clip_feature)
- logits = self.trans_head(feat)
- return logits
-
- def sample(self, clip_feature, if_categorial=False):
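-        # Autoregressive sampling: generate one motion token index per step, conditioned on the
-        # CLIP feature, and stop early once the end token (index == self.num_vq) is drawn.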
- for k in range(self.block_size):
- if k == 0:
- x = []
- else:
- x = xs
- logits = self.forward(x, clip_feature)
- logits = logits[:, -1, :]
- probs = F.softmax(logits, dim=-1)
- if if_categorial:
- dist = Categorical(probs)
- idx = dist.sample()
- if idx == self.num_vq:
- break
- idx = idx.unsqueeze(-1)
- else:
- _, idx = torch.topk(probs, k=1, dim=-1)
- if idx[0] == self.num_vq:
- break
- # append to the sequence and continue
- if k == 0:
- xs = idx
- else:
- xs = torch.cat((xs, idx), dim=1)
-
- if k == self.block_size - 1:
- return xs[:, :-1]
- return xs
-
-class CausalCrossConditionalSelfAttention(nn.Module):
-
- def __init__(self, embed_dim=512, block_size=16, n_head=8, drop_out_rate=0.1):
- super().__init__()
-        assert embed_dim % n_head == 0, "embed_dim must be divisible by n_head"
- # key, query, value projections for all heads
- self.key = nn.Linear(embed_dim, embed_dim)
- self.query = nn.Linear(embed_dim, embed_dim)
- self.value = nn.Linear(embed_dim, embed_dim)
-
- self.attn_drop = nn.Dropout(drop_out_rate)
- self.resid_drop = nn.Dropout(drop_out_rate)
-
- self.proj = nn.Linear(embed_dim, embed_dim)
- # causal mask to ensure that attention is only applied to the left in the input sequence
- self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)).view(1, 1, block_size, block_size))
- self.n_head = n_head
-
- def forward(self, x):
- B, T, C = x.size()
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
- att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
- att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
- y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- return y
-
-class Block(nn.Module):
-
- def __init__(self, embed_dim=512, block_size=16, n_head=8, drop_out_rate=0.1, fc_rate=4):
- super().__init__()
- self.ln1 = nn.LayerNorm(embed_dim)
- self.ln2 = nn.LayerNorm(embed_dim)
- self.attn = CausalCrossConditionalSelfAttention(embed_dim, block_size, n_head, drop_out_rate)
- self.mlp = nn.Sequential(
- nn.Linear(embed_dim, fc_rate * embed_dim),
- nn.GELU(),
- nn.Linear(fc_rate * embed_dim, embed_dim),
- nn.Dropout(drop_out_rate),
- )
-
- def forward(self, x):
- x = x + self.attn(self.ln1(x))
- x = x + self.mlp(self.ln2(x))
- return x
-
-class CrossCondTransBase(nn.Module):
-
- def __init__(self,
- num_vq=1024,
- embed_dim=512,
- clip_dim=512,
- block_size=16,
- num_layers=2,
- n_head=8,
- drop_out_rate=0.1,
- fc_rate=4):
- super().__init__()
- self.tok_emb = nn.Embedding(num_vq + 2, embed_dim)
- self.cond_emb = nn.Linear(clip_dim, embed_dim)
- self.pos_embedding = nn.Embedding(block_size, embed_dim)
- self.drop = nn.Dropout(drop_out_rate)
- # transformer block
- self.blocks = nn.Sequential(*[Block(embed_dim, block_size, n_head, drop_out_rate, fc_rate) for _ in range(num_layers)])
- self.pos_embed = pos_encoding.PositionEmbedding(block_size, embed_dim, 0.0, False)
-
- self.block_size = block_size
-
- self.apply(self._init_weights)
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, idx, clip_feature):
- if len(idx) == 0:
- token_embeddings = self.cond_emb(clip_feature).unsqueeze(1)
- else:
- b, t = idx.size()
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
- # forward the Trans model
- token_embeddings = self.tok_emb(idx)
- token_embeddings = torch.cat([self.cond_emb(clip_feature).unsqueeze(1), token_embeddings], dim=1)
-
- x = self.pos_embed(token_embeddings)
- x = self.blocks(x)
-
- return x
-
-
-class CrossCondTransHead(nn.Module):
-
- def __init__(self,
- num_vq=1024,
- embed_dim=512,
- block_size=16,
- num_layers=2,
- n_head=8,
- drop_out_rate=0.1,
- fc_rate=4):
- super().__init__()
-
- self.blocks = nn.Sequential(*[Block(embed_dim, block_size, n_head, drop_out_rate, fc_rate) for _ in range(num_layers)])
- self.ln_f = nn.LayerNorm(embed_dim)
- self.head = nn.Linear(embed_dim, num_vq + 1, bias=False)
- self.block_size = block_size
-
- self.apply(self._init_weights)
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, x):
- x = self.blocks(x)
- x = self.ln_f(x)
- logits = self.head(x)
- return logits
-
-
-
-
-
-
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile b/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile
deleted file mode 100644
index b1064a04362a0c4372fae351f99ed3bd9f82ff92..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile
+++ /dev/null
@@ -1,23 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-SOURCEDIR = source
-BUILDDIR = build
-
-# Put it first so that "make" without argument is like "make help".
-help:
- @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-clean:
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
- rm -rf ./source/generated/*
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py
deleted file mode 100644
index dbb190dd6cfb938071de77ffecda560c9ddecc85..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py
+++ /dev/null
@@ -1,561 +0,0 @@
-import random
-import subprocess
-import traceback
-from datetime import datetime
-
-from torch.cuda.amp import GradScaler, autocast
-import numpy as np
-import torch.optim
-import torch.utils.data
-import copy
-import logging
-import os
-import re
-import sys
-import torch
-import torch.distributed as dist
-import torch.multiprocessing as mp
-import tqdm
-
-from text_to_speech.utils.commons.ckpt_utils import get_last_checkpoint, get_all_ckpts
-from text_to_speech.utils.commons.ddp_utils import DDP
-from text_to_speech.utils.commons.hparams import hparams
-from text_to_speech.utils.commons.tensor_utils import move_to_cuda
-from text_to_speech.utils.os_utils import remove_file
-
-
-class Tee(object):
- def __init__(self, name, mode):
- self.file = open(name, mode)
- self.stdout = sys.stdout
- sys.stdout = self
-
- def __del__(self):
- sys.stdout = self.stdout
- self.file.close()
-
- def write(self, data):
- self.file.write(data)
- self.stdout.write(data)
-
- def flush(self):
- self.file.flush()
-
-
-class Trainer:
- def __init__(
- self,
- work_dir,
- default_save_path=None,
- accumulate_grad_batches=1,
- max_updates=160000,
- print_nan_grads=False,
- val_check_interval=2000,
- num_sanity_val_steps=5,
- amp=False,
- # tb logger
- log_save_interval=100,
- tb_log_interval=10,
- # checkpoint
- monitor_key='val_loss',
- monitor_mode='min',
- num_ckpt_keep=5,
- save_best=True,
- resume_from_checkpoint=0,
- seed=1234,
- debug=False,
- ):
- os.makedirs(work_dir, exist_ok=True)
- self.work_dir = work_dir
- self.accumulate_grad_batches = accumulate_grad_batches
- self.max_updates = max_updates
- self.num_sanity_val_steps = num_sanity_val_steps
- self.print_nan_grads = print_nan_grads
- self.default_save_path = default_save_path
- self.resume_from_checkpoint = resume_from_checkpoint if resume_from_checkpoint > 0 else None
- self.seed = seed
- self.debug = debug
- # model and optm
- self.task = None
- self.optimizers = []
-
- # trainer state
- self.testing = False
- self.global_step = 0
- self.current_epoch = 0
- self.total_batches = 0
-
- # configure checkpoint
- self.monitor_key = monitor_key
- self.num_ckpt_keep = num_ckpt_keep
- self.save_best = save_best
- self.monitor_op = np.less if monitor_mode == 'min' else np.greater
- self.best_val_results = np.Inf if monitor_mode == 'min' else -np.Inf
- self.mode = 'min'
-
- # allow int, string and gpu list
- self.all_gpu_ids = [
- int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != '']
- self.num_gpus = len(self.all_gpu_ids)
- self.on_gpu = self.num_gpus > 0
- self.root_gpu = 0
- logging.info(f'GPU available: {torch.cuda.is_available()}, GPU used: {self.all_gpu_ids}')
- self.use_ddp = self.num_gpus > 1
- self.proc_rank = 0
- # Tensorboard logging
- self.log_save_interval = log_save_interval
- self.val_check_interval = val_check_interval
- self.tb_log_interval = tb_log_interval
- self.amp = amp
- self.amp_scalar = GradScaler()
-
- def test(self, task_cls):
- self.testing = True
- self.fit(task_cls)
-
- def fit(self, task_cls):
- if len(self.all_gpu_ids) > 1:
- mp.spawn(self.ddp_run, nprocs=self.num_gpus, args=(task_cls, copy.deepcopy(hparams)))
- else:
- self.task = task_cls()
- self.task.trainer = self
- self.run_single_process(self.task)
- return 1
-
- def ddp_run(self, gpu_idx, task_cls, hparams_):
- hparams.update(hparams_)
- self.proc_rank = gpu_idx
- self.init_ddp_connection(self.proc_rank, self.num_gpus)
- if dist.get_rank() != 0 and not self.debug:
- sys.stdout = open(os.devnull, "w")
- sys.stderr = open(os.devnull, "w")
- task = task_cls()
- task.trainer = self
- torch.cuda.set_device(gpu_idx)
- self.root_gpu = gpu_idx
- self.task = task
- self.run_single_process(task)
-
- def run_single_process(self, task):
- """Sanity check a few things before starting actual training.
-
- :param task:
- """
- # build model, optm and load checkpoint
- if self.proc_rank == 0:
- self.save_terminal_logs()
- if not self.testing:
- self.save_codes()
-
- model = task.build_model()
- if model is not None:
- task.model = model
- checkpoint, _ = get_last_checkpoint(self.work_dir, self.resume_from_checkpoint)
- if checkpoint is not None:
- self.restore_weights(checkpoint)
- elif self.on_gpu:
- task.cuda(self.root_gpu)
- if not self.testing:
- self.optimizers = task.configure_optimizers()
-            self.first_epoch = True
- if checkpoint is not None:
- self.restore_opt_state(checkpoint)
- del checkpoint
- # clear cache after restore
- if self.on_gpu:
- torch.cuda.empty_cache()
-
- if self.use_ddp:
- self.task = self.configure_ddp(self.task)
- dist.barrier()
-
- task_ref = self.get_task_ref()
- task_ref.trainer = self
- task_ref.testing = self.testing
- # link up experiment object
- if self.proc_rank == 0:
- task_ref.build_tensorboard(save_dir=self.work_dir, name='tb_logs')
- else:
- os.makedirs('tmp', exist_ok=True)
- task_ref.build_tensorboard(save_dir='tmp', name='tb_tmp')
- self.logger = task_ref.logger
- try:
- if self.testing:
- self.run_evaluation(test=True)
- else:
- self.train()
- except KeyboardInterrupt as e:
- traceback.print_exc()
- task_ref.on_keyboard_interrupt()
-
- ####################
- # valid and test
- ####################
- def run_evaluation(self, test=False):
- eval_results = self.evaluate(self.task, test, tqdm_desc='Valid' if not test else 'test',
- max_batches=hparams['eval_max_batches'])
- if eval_results is not None and 'tb_log' in eval_results:
- tb_log_output = eval_results['tb_log']
- self.log_metrics_to_tb(tb_log_output)
- if self.proc_rank == 0 and not test:
- self.save_checkpoint(epoch=self.current_epoch, logs=eval_results)
-
- def evaluate(self, task, test=False, tqdm_desc='Valid', max_batches=None):
- if max_batches == -1:
- max_batches = None
- # enable eval mode
- task.zero_grad()
- task.eval()
- torch.set_grad_enabled(False)
-
- task_ref = self.get_task_ref()
- if test:
- ret = task_ref.test_start()
- if ret == 'EXIT':
- return
- else:
- task_ref.validation_start()
- outputs = []
- dataloader = task_ref.test_dataloader() if test else task_ref.val_dataloader()
- pbar = tqdm.tqdm(dataloader, desc=tqdm_desc, total=max_batches, dynamic_ncols=True, unit='step',
- disable=self.root_gpu > 0)
- # give model a chance to do something with the outputs (and method defined)
- for batch_idx, batch in enumerate(pbar):
- if batch is None: # pragma: no cover
- continue
- # stop short when on fast_dev_run (sets max_batch=1)
- if max_batches is not None and batch_idx >= max_batches:
- break
-
- # make dataloader_idx arg in validation_step optional
- if self.on_gpu:
- batch = move_to_cuda(batch, self.root_gpu)
- args = [batch, batch_idx]
- if self.use_ddp:
- output = task(*args)
- else:
- if test:
- output = task_ref.test_step(*args)
- else:
- output = task_ref.validation_step(*args)
- # track outputs for collation
- outputs.append(output)
- # give model a chance to do something with the outputs (and method defined)
- if test:
- eval_results = task_ref.test_end(outputs)
- else:
- eval_results = task_ref.validation_end(outputs)
- # enable train mode again
- task.train()
- torch.set_grad_enabled(True)
- return eval_results
-
- ####################
- # train
- ####################
- def train(self):
- task_ref = self.get_task_ref()
- task_ref.on_train_start()
- if self.num_sanity_val_steps > 0:
- # run tiny validation (if validation defined) to make sure program won't crash during val
- self.evaluate(self.task, False, 'Sanity Val', max_batches=self.num_sanity_val_steps)
- # clear cache before training
- if self.on_gpu:
- torch.cuda.empty_cache()
- dataloader = task_ref.train_dataloader()
- epoch = self.current_epoch
- # run all epochs
- while True:
- # set seed for distributed sampler (enables shuffling for each epoch)
- if self.use_ddp and hasattr(dataloader.sampler, 'set_epoch'):
- dataloader.sampler.set_epoch(epoch)
- # update training progress in trainer and model
- task_ref.current_epoch = epoch
- self.current_epoch = epoch
- # total batches includes multiple val checks
- self.batch_loss_value = 0 # accumulated grads
- # before epoch hook
- task_ref.on_epoch_start()
-
- # run epoch
- train_pbar = tqdm.tqdm(dataloader, initial=self.global_step, total=float('inf'),
- dynamic_ncols=True, unit='step', disable=self.root_gpu > 0)
- for batch_idx, batch in enumerate(train_pbar):
-                if self.global_step % self.val_check_interval == 0 and not self.first_epoch:
- self.run_evaluation()
- pbar_metrics, tb_metrics = self.run_training_batch(batch_idx, batch)
- train_pbar.set_postfix(**pbar_metrics)
-                self.first_epoch = False
- # when metrics should be logged
- if (self.global_step + 1) % self.tb_log_interval == 0:
- # logs user requested information to logger
- self.log_metrics_to_tb(tb_metrics)
-
- self.global_step += 1
- task_ref.global_step = self.global_step
- if self.global_step > self.max_updates:
- print("| Training end..")
- break
- # epoch end hook
- task_ref.on_epoch_end()
- epoch += 1
- if self.global_step > self.max_updates:
- break
- task_ref.on_train_end()
-
- def run_training_batch(self, batch_idx, batch):
- if batch is None:
-            return {}, {}  # keep the (progress_bar, tb_log) pair the caller unpacks
- all_progress_bar_metrics = []
- all_log_metrics = []
- task_ref = self.get_task_ref()
- for opt_idx, optimizer in enumerate(self.optimizers):
- if optimizer is None:
- continue
-            # make sure only the gradients of the current optimizer's parameters are calculated
- # in the training step to prevent dangling gradients in multiple-optimizer setup.
- if len(self.optimizers) > 1:
- for param in task_ref.parameters():
- param.requires_grad = False
- for group in optimizer.param_groups:
- for param in group['params']:
- param.requires_grad = True
-
- # forward pass
- with autocast(enabled=self.amp):
- if self.on_gpu:
- batch = move_to_cuda(copy.copy(batch), self.root_gpu)
- args = [batch, batch_idx, opt_idx]
- if self.use_ddp:
- output = self.task(*args)
- else:
- output = task_ref.training_step(*args)
- loss = output['loss']
- if loss is None:
- continue
- progress_bar_metrics = output['progress_bar']
- log_metrics = output['tb_log']
- # accumulate loss
- loss = loss / self.accumulate_grad_batches
-
- # backward pass
- if loss.requires_grad:
- if self.amp:
- self.amp_scalar.scale(loss).backward()
- else:
- loss.backward()
-
- # track progress bar metrics
- all_log_metrics.append(log_metrics)
- all_progress_bar_metrics.append(progress_bar_metrics)
-
- if loss is None:
- continue
-
- # nan grads
- if self.print_nan_grads:
- has_nan_grad = False
- for name, param in task_ref.named_parameters():
- if (param.grad is not None) and torch.isnan(param.grad.float()).any():
- print("| NaN params: ", name, param, param.grad)
- has_nan_grad = True
- if has_nan_grad:
- exit(0)
-
- # gradient update with accumulated gradients
- if (self.global_step + 1) % self.accumulate_grad_batches == 0:
- grad_norm_dict = task_ref.on_before_optimization(opt_idx)
- if grad_norm_dict is not None:
- all_log_metrics[-1].update(grad_norm_dict)
- if self.amp:
- self.amp_scalar.step(optimizer)
- self.amp_scalar.update()
- else:
- optimizer.step()
- optimizer.zero_grad()
- task_ref.on_after_optimization(self.current_epoch, batch_idx, optimizer, opt_idx)
-
- # collapse all metrics into one dict
- all_progress_bar_metrics = {k: v for d in all_progress_bar_metrics for k, v in d.items()}
- all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}
- return all_progress_bar_metrics, all_log_metrics
-
- ####################
- # load and save checkpoint
- ####################
- def restore_weights(self, checkpoint):
- # load model state
- task_ref = self.get_task_ref()
-
- for k, v in checkpoint['state_dict'].items():
- getattr(task_ref, k).load_state_dict(v)
-
- if self.on_gpu:
- task_ref.cuda(self.root_gpu)
- # load training state (affects trainer only)
- self.best_val_results = checkpoint['checkpoint_callback_best']
- self.global_step = checkpoint['global_step']
- self.current_epoch = checkpoint['epoch']
- task_ref.global_step = self.global_step
-
- # wait for all models to restore weights
- if self.use_ddp:
- # wait for all processes to catch up
- dist.barrier()
-
- def restore_opt_state(self, checkpoint):
- if self.testing:
- return
- # restore the optimizers
- optimizer_states = checkpoint['optimizer_states']
- for optimizer, opt_state in zip(self.optimizers, optimizer_states):
- if optimizer is None:
- return
- try:
- optimizer.load_state_dict(opt_state)
- # move optimizer to GPU 1 weight at a time
- if self.on_gpu:
- for state in optimizer.state.values():
- for k, v in state.items():
- if isinstance(v, torch.Tensor):
- state[k] = v.cuda(self.root_gpu)
- except ValueError:
-                print("| WARNING: optimizer parameters do not match!")
- try:
- if dist.is_initialized() and dist.get_rank() > 0:
- return
- except Exception as e:
- print(e)
- return
- did_restore = True
- return did_restore
-
- def save_checkpoint(self, epoch, logs=None):
-        monitor_op = self.monitor_op  # respect monitor_mode ('min' or 'max') instead of hardcoding np.less
- ckpt_path = f'{self.work_dir}/model_ckpt_steps_{self.global_step}.ckpt'
- logging.info(f'Epoch {epoch:05d}@{self.global_step}: saving model to {ckpt_path}')
- self._atomic_save(ckpt_path)
- for old_ckpt in get_all_ckpts(self.work_dir)[self.num_ckpt_keep:]:
- remove_file(old_ckpt)
- logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}')
- current = None
- if logs is not None and self.monitor_key in logs:
- current = logs[self.monitor_key]
- if current is not None and self.save_best:
- if monitor_op(current, self.best_val_results):
- best_filepath = f'{self.work_dir}/model_ckpt_best.pt'
- self.best_val_results = current
- logging.info(
- f'Epoch {epoch:05d}@{self.global_step}: {self.monitor_key} reached {current:0.5f}. '
- f'Saving model to {best_filepath}')
- self._atomic_save(best_filepath)
-
- def _atomic_save(self, filepath):
- checkpoint = self.dump_checkpoint()
- tmp_path = str(filepath) + ".part"
- torch.save(checkpoint, tmp_path, _use_new_zipfile_serialization=False)
- os.replace(tmp_path, filepath)
-
- def dump_checkpoint(self):
- checkpoint = {'epoch': self.current_epoch, 'global_step': self.global_step,
- 'checkpoint_callback_best': self.best_val_results}
- # save optimizers
- optimizer_states = []
- for i, optimizer in enumerate(self.optimizers):
- if optimizer is not None:
- optimizer_states.append(optimizer.state_dict())
-
- checkpoint['optimizer_states'] = optimizer_states
- task_ref = self.get_task_ref()
- checkpoint['state_dict'] = {
- k: v.state_dict() for k, v in task_ref.named_children() if len(list(v.parameters())) > 0}
- return checkpoint
-
- ####################
- # DDP
- ####################
- def configure_ddp(self, task):
- task = DDP(task, device_ids=[self.root_gpu], find_unused_parameters=True)
- random.seed(self.seed)
- np.random.seed(self.seed)
- return task
-
- def init_ddp_connection(self, proc_rank, world_size):
- root_node = '127.0.0.1'
- root_node = self.resolve_root_node_address(root_node)
- os.environ['MASTER_ADDR'] = root_node
- dist.init_process_group('nccl', rank=proc_rank, world_size=world_size)
-
- def resolve_root_node_address(self, root_node):
- if '[' in root_node:
- name = root_node.split('[')[0]
- number = root_node.split(',')[0]
- if '-' in number:
- number = number.split('-')[0]
- number = re.sub('[^0-9]', '', number)
- root_node = name + number
- return root_node
-
- ####################
- # utils
- ####################
- def get_task_ref(self):
- from text_to_speech.utils.commons.base_task import BaseTask
- task: BaseTask = self.task.module if isinstance(self.task, DDP) else self.task
- return task
-
- def log_metrics_to_tb(self, metrics, step=None):
- """Logs the metric dict passed in.
-
- :param metrics:
- """
- # turn all tensors to scalars
- scalar_metrics = self.metrics_to_scalars(metrics)
-
- step = step if step is not None else self.global_step
- # log actual metrics
- if self.proc_rank == 0:
- self.log_metrics(self.logger, scalar_metrics, step=step)
-
- @staticmethod
- def log_metrics(logger, metrics, step=None):
- for k, v in metrics.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- logger.add_scalar(k, v, step)
-
- def metrics_to_scalars(self, metrics):
- new_metrics = {}
- for k, v in metrics.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
-
- if type(v) is dict:
- v = self.metrics_to_scalars(v)
-
- new_metrics[k] = v
-
- return new_metrics
-
- def save_terminal_logs(self):
- t = datetime.now().strftime('%Y%m%d%H%M%S')
- os.makedirs(f'{self.work_dir}/terminal_logs', exist_ok=True)
- Tee(f'{self.work_dir}/terminal_logs/log_{t}.txt', 'w')
-
- def save_codes(self):
- if len(hparams['save_codes']) > 0:
- t = datetime.now().strftime('%Y%m%d%H%M%S')
- code_dir = f'{self.work_dir}/codes/{t}'
- subprocess.check_call(f'mkdir -p "{code_dir}"', shell=True)
- for c in hparams['save_codes']:
- if os.path.exists(c):
- subprocess.check_call(
- f'rsync -aR '
- f'--include="*.py" '
- f'--include="*.yaml" '
- f'--exclude="__pycache__" '
- f'--include="*/" '
- f'--exclude="*" '
- f'"./{c}" "{code_dir}/"',
- shell=True)
- print(f"| Copied codes to {code_dir}.")
diff --git a/spaces/AIGText/GlyphControl/annotator/canny/__init__.py b/spaces/AIGText/GlyphControl/annotator/canny/__init__.py
deleted file mode 100644
index cb0da951dc838ec9dec2131007e036113281800b..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/annotator/canny/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import cv2
-
-
-class CannyDetector:
- def __call__(self, img, low_threshold, high_threshold):
- return cv2.Canny(img, low_threshold, high_threshold)
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js
deleted file mode 100644
index 3aaf79a43af266f73e993e3f5abc2f48856debdf..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import Sequence from './logic/runcommands/sequence/Sequence.js';
-export default Sequence;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts
deleted file mode 100644
index 8116e2f82a5b38a7394908436a6f4aea5339dc07..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import CustomProgress from '../../../plugins/customprogress';
-export default CustomProgress;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py
deleted file mode 100644
index 675f01682ddf5e31b6cc341735378c6f3b242e49..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import abc
-from typing import Dict, List
-
-import numpy as np
-import torch
-from skimage import color
-from skimage.segmentation import mark_boundaries
-
-from . import colors
-
-COLORS, _ = colors.generate_colors(151) # 151 - max classes for semantic segmentation
-
-
-class BaseVisualizer:
- @abc.abstractmethod
- def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None):
- """
- Take a batch, make an image from it and visualize
- """
- raise NotImplementedError()
-
-
-def visualize_mask_and_images(images_dict: Dict[str, np.ndarray], keys: List[str],
- last_without_mask=True, rescale_keys=None, mask_only_first=None,
- black_mask=False) -> np.ndarray:
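-    # Build one horizontal strip: each key in `keys` becomes an RGB panel (rescaled if requested,
-    # multi-channel label maps colorized), with the mask boundary drawn in red on the panels that need it.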
- mask = images_dict['mask'] > 0.5
- result = []
- for i, k in enumerate(keys):
- img = images_dict[k]
- img = np.transpose(img, (1, 2, 0))
-
- if rescale_keys is not None and k in rescale_keys:
- img = img - img.min()
- img /= img.max() + 1e-5
- if len(img.shape) == 2:
- img = np.expand_dims(img, 2)
-
- if img.shape[2] == 1:
- img = np.repeat(img, 3, axis=2)
- elif (img.shape[2] > 3):
- img_classes = img.argmax(2)
- img = color.label2rgb(img_classes, colors=COLORS)
-
- if mask_only_first:
- need_mark_boundaries = i == 0
- else:
- need_mark_boundaries = i < len(keys) - 1 or not last_without_mask
-
- if need_mark_boundaries:
- if black_mask:
- img = img * (1 - mask[0][..., None])
- img = mark_boundaries(img,
- mask[0],
- color=(1., 0., 0.),
- outline_color=(1., 1., 1.),
- mode='thick')
- result.append(img)
- return np.concatenate(result, axis=1)
-
-
-def visualize_mask_and_images_batch(batch: Dict[str, torch.Tensor], keys: List[str], max_items=10,
- last_without_mask=True, rescale_keys=None) -> np.ndarray:
- batch = {k: tens.detach().cpu().numpy() for k, tens in batch.items()
- if k in keys or k == 'mask'}
-
- batch_size = next(iter(batch.values())).shape[0]
- items_to_vis = min(batch_size, max_items)
- result = []
- for i in range(items_to_vis):
- cur_dct = {k: tens[i] for k, tens in batch.items()}
- result.append(visualize_mask_and_images(cur_dct, keys, last_without_mask=last_without_mask,
- rescale_keys=rescale_keys))
- return np.concatenate(result, axis=0)
diff --git a/spaces/AlexWortega/Kandinsky2.0/app.py b/spaces/AlexWortega/Kandinsky2.0/app.py
deleted file mode 100644
index c432b9d2555914993a84662e1966119530f32106..0000000000000000000000000000000000000000
--- a/spaces/AlexWortega/Kandinsky2.0/app.py
+++ /dev/null
@@ -1,215 +0,0 @@
-
-import gradio as gr
-import torch
-from torch import autocast
-from kandinsky2 import get_kandinsky2
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-model = get_kandinsky2(device, task_type='text2img')
-
-
-
-
-def infer(prompt):
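-    # Generate a batch of 4 images at 512x512 for the given prompt with the Kandinsky 2.0 text2img pipeline.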
- images = model.generate_text2img(prompt, batch_size=4, h=512, w=512, num_steps=75, denoised_type='dynamic_threshold', dynamic_threshold_v=99.5, sampler='ddim_sampler', ddim_eta=0.05, guidance_scale=10)
- return images
-
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
- #gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
- }
- #gallery>div>.h-full {
- min-height: 20rem;
- }
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- #advanced-btn {
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 12px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
- #container-advanced-btns{
- display: flex;
- flex-wrap: wrap;
- justify-content: space-between;
- align-items: center;
- }
- .animate-spin {
- animation: spin 1s linear infinite;
- }
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- #share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- }
- #share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
- }
- #share-btn * {
- all: unset;
- }
- .gr-form{
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
- }
- #prompt-container{
- gap: 0;
- }
- #generated_id{
- min-height: 700px
- }
-"""
-block = gr.Blocks(css=css)
-
-examples = [
- [
- 'Красная площадь'
- ],
- [
- 'Thinking man in anime style'
- ],
- [
- 'אבוקדו'
- ],
-]
-
-with block as demo:
- gr.Markdown("""
-
-
-[](https://pytorch.org/) [](https://huggingface.co/sberbank-ai/Kandinsky_2.0)
-
-
-
-## Model architecture:
-
-It is a latent diffusion model with two multilingual text encoders:
-* mCLIP-XLMR 560M parameters
-* mT5-encoder-small 146M parameters
-
-These encoders and multilingual training datasets unveil the real multilingual text-to-image generation experience!
-
-**Kandinsky 2.0** was trained on a large multilingual dataset of around 1B samples, including those used to train the original Kandinsky.
-
-In terms of diffusion architecture Kandinsky 2.0 implements UNet with 1.2B parameters.
-
-**Kandinsky 2.0** architecture overview:
-
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(mobile_collapse=False, equal_height=True):
-
- text = gr.Textbox(
- label="Enter your prompt", show_label=False, max_lines=1
- ).style(
- border=(True, False, True, True),
- rounded=(True, False, False, True),
- container=False,
- )
- btn = gr.Button("Run").style(
- margin=False,
- rounded=(False, True, True, False),
- )
-
- gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="generated_id").style(
- grid=[2], height="auto"
- )
-
- ex = gr.Examples(examples=examples, fn=infer, inputs=[text], outputs=gallery, cache_examples=True)
- ex.dataset.headers = [""]
-
- text.submit(infer, inputs=[text], outputs=gallery)
- btn.click(infer, inputs=[text], outputs=gallery)
-gr.Markdown("""
-
-
-# Authors
-
-+ Arseniy Shakhmatov: [Github](https://github.com/cene555), [Blog](https://t.me/gradientdip)
-+ Anton Razzhigaev: [Github](https://github.com/razzant), [Blog](https://t.me/abstractDL)
-+ Aleksandr Nikolich: [Github](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers)
-+ Vladimir Arkhipkin: [Github](https://github.com/oriBetelgeuse)
-+ Igor Pavlov: [Github](https://github.com/boomb0om)
-+ Andrey Kuznetsov: [Github](https://github.com/kuznetsoffandrey)
-+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
-
- """
- )
-
-demo.queue(max_size=25).launch()
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md
deleted file mode 100644
index f97a9ad09ac507a122de9e93cc80f1d07793ac0a..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md
+++ /dev/null
@@ -1,282 +0,0 @@
-
-
-# Community pipelines
-
-[[open-in-colab]]
-
-> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
-
-**Community** examples consist of both inference and training examples that have been added by the community.
-Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
-If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
-
-| Example | Description | Code Example | Colab | Author |
-|:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------:|
-| CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
-| One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
-| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
-| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
-| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
-| Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
-
-To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline` with the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
-```py
-pipe = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder"
-)
-```
-
-## Example usages
-
-### CLIP Guided Stable Diffusion
-
-CLIP guided stable diffusion can help to generate more realistic images
-by guiding stable diffusion at every denoising step with an additional CLIP model.
-
-The following code requires roughly 12GB of GPU RAM.
-
-```python
-from diffusers import DiffusionPipeline
-from transformers import CLIPImageProcessor, CLIPModel
-import torch
-
-
-feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
-clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
-
-
-guided_pipeline = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- custom_pipeline="clip_guided_stable_diffusion",
- clip_model=clip_model,
- feature_extractor=feature_extractor,
- torch_dtype=torch.float16,
-)
-guided_pipeline.enable_attention_slicing()
-guided_pipeline = guided_pipeline.to("cuda")
-
-prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
-
-generator = torch.Generator(device="cuda").manual_seed(0)
-images = []
-for i in range(4):
- image = guided_pipeline(
- prompt,
- num_inference_steps=50,
- guidance_scale=7.5,
- clip_guidance_scale=100,
- num_cutouts=4,
- use_cutouts=False,
- generator=generator,
- ).images[0]
- images.append(image)
-
-# save images locally
-for i, img in enumerate(images):
- img.save(f"./clip_guided_sd/image_{i}.png")
-```
-
-The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
-Generated images tend to be of higher quality than those produced by Stable Diffusion alone. For example, the script above generates the following images:
-
-.
-
-### One Step Unet
-
-The dummy "one-step-unet" can be run as follows:
-
-```python
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
-pipe()
-```
-
-**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).
-
-### Stable Diffusion Interpolation
-
-The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes.
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- torch_dtype=torch.float16,
- safety_checker=None, # Very important for videos...lots of false positives while interpolating
- custom_pipeline="interpolate_stable_diffusion",
-).to("cuda")
-pipe.enable_attention_slicing()
-
-frame_filepaths = pipe.walk(
- prompts=["a dog", "a cat", "a horse"],
- seeds=[42, 1337, 1234],
- num_interpolation_steps=16,
- output_dir="./dreams",
- batch_size=4,
- height=512,
- width=512,
- guidance_scale=8.5,
- num_inference_steps=50,
-)
-```
-
-The `walk(...)` function returns a list of file paths for the images it saves under the folder defined by `output_dir`. You can use these images to create videos of Stable Diffusion interpolations.
-
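-As a rough sketch of that last step, the snippet below stitches the saved frames into an animated GIF with Pillow; it assumes `frame_filepaths` is the flat list of image paths returned by `pipe.walk(...)` above, which may vary with the community pipeline version.
-
-```python
-from PIL import Image
-
-# Assumes `frame_filepaths` is the list of frame image paths from `pipe.walk(...)` above.
-frames = [Image.open(path) for path in frame_filepaths]
-frames[0].save(
-    "dreams.gif",
-    save_all=True,
-    append_images=frames[1:],
-    duration=125,  # milliseconds per frame (about 8 fps)
-    loop=0,
-)
-```
-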
-> **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
-
-### Stable Diffusion Mega
-
-The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
-
-```python
-#!/usr/bin/env python3
-from diffusers import DiffusionPipeline
-import PIL
-import requests
-from io import BytesIO
-import torch
-
-
-def download_image(url):
- response = requests.get(url)
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
-
-
-pipe = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- custom_pipeline="stable_diffusion_mega",
- torch_dtype=torch.float16,
-)
-pipe.to("cuda")
-pipe.enable_attention_slicing()
-
-
-### Text-to-Image
-
-images = pipe.text2img("An astronaut riding a horse").images
-
-### Image-to-Image
-
-init_image = download_image(
- "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
-)
-
-prompt = "A fantasy landscape, trending on artstation"
-
-images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
-
-### Inpainting
-
-img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
-mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-init_image = download_image(img_url).resize((512, 512))
-mask_image = download_image(mask_url).resize((512, 512))
-
-prompt = "a cat sitting on a bench"
-images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
-```
-
-As shown above, this single pipeline can run "text-to-image", "image-to-image", and "inpainting" from one class.
-
-### Long Prompt Weighting Stable Diffusion
-
-The pipeline lets you input prompts without the 77-token length limit. You can increase the weight of a word with "()" or decrease it with "[]"; explicit weights such as `(word:1.3)` are also supported, as shown in the example below.
-The pipeline also lets you use the main use cases of the stable diffusion pipeline in a single class.
-
-#### pytorch
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
- "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16
-)
-pipe = pipe.to("cuda")
-
-prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
-neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
-
-pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
-```
-
-#### onnxruntime
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- custom_pipeline="lpw_stable_diffusion_onnx",
- revision="onnx",
- provider="CUDAExecutionProvider",
-)
-
-prompt = "a photo of an astronaut riding a horse on mars, best quality"
-neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
-
-pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
-```
-
-If you see the message `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry; it is expected behavior.
-
-### Speech to Image
-
-The following code generates an image from an audio sample using the pre-trained OpenAI Whisper (small) model and Stable Diffusion.
-
-```python
-import torch
-
-import matplotlib.pyplot as plt
-from datasets import load_dataset
-from diffusers import DiffusionPipeline
-from transformers import (
- WhisperForConditionalGeneration,
- WhisperProcessor,
-)
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
-
-audio_sample = ds[3]
-
-text = audio_sample["text"].lower()
-speech_data = audio_sample["audio"]["array"]
-
-model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
-processor = WhisperProcessor.from_pretrained("openai/whisper-small")
-
-diffuser_pipeline = DiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- custom_pipeline="speech_to_image_diffusion",
- speech_model=model,
- speech_processor=processor,
- torch_dtype=torch.float16,
-)
-
-diffuser_pipeline.enable_attention_slicing()
-diffuser_pipeline = diffuser_pipeline.to(device)
-
-output = diffuser_pipeline(speech_data)
-plt.imshow(output.images[0])
-```
-This example produces the following image:
-
-
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md
deleted file mode 100644
index 948127bc09b93839f4717253d64d0a50da6b1c3d..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md
+++ /dev/null
@@ -1,275 +0,0 @@
-
-
-
-
-# Textual-Inversion
-
-[[open-in-colab]]
-
-[textual-inversion](https://arxiv.org/abs/2208.01618) is a technique for capturing new concepts from a small number of example images. It was originally demonstrated with [Latent Diffusion](https://github.com/CompVis/latent-diffusion), but has since been applied to other similar models such as [Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/conceptual/stable_diffusion). The learned concepts can be used to better control the images produced by a text-to-image pipeline. The technique learns new 'words' in the text encoder's embedding space, which can then be used in text prompts for personalized image generation.
-
-
-By using just 3-5 images you can teach new concepts to a model such as Stable Diffusion for personalized image generation (image source).
-
-This guide explains how to train the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with textual-inversion. All of the textual-inversion training scripts used in this guide can be found [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion); please refer to that link if you would like a closer look at how things work internally.
-
-
-
-The [Stable Diffusion Textual Inversion Concepts Library](https://huggingface.co/sd-concepts-library) hosts trained textual-inversion models created by the community. More concepts will be added over time, so it should grow into an increasingly useful resource!
-
-
-
-Before you begin, install the dependencies required for training:
-
-```bash
-pip install diffusers accelerate transformers
-```
-
-Once the dependencies are installed, initialize a [🤗Accelerate](https://github.com/huggingface/accelerate/) environment:
-
-```bash
-accelerate config
-```
-
-To set up a default 🤗Accelerate environment without any extra configuration:
-
-```bash
-accelerate config default
-```
-
-Or, if your environment does not support an interactive shell such as a notebook, you can use:
-
-```py
-from accelerate.utils import write_basic_config
-
-write_basic_config()
-```
-
-Finally, install [xFormers](https://huggingface.co/docs/diffusers/main/en/training/optimization/xformers) to reduce memory usage with memory-efficient attention. Once xFormers is installed, add the `--enable_xformers_memory_efficient_attention` argument to the training script. xFormers is not supported for Flax.
-
-## Uploading the model to the Hub
-
-To store the model on the Hub, add the following argument to the training script:
-
-```bash
---push_to_hub
-```
-
-## Saving and loading checkpoints
-
-It is a good idea to save checkpoints of your model regularly during training. That way, if training is interrupted for any reason, you can resume from a saved checkpoint. Passing the following argument to the training script saves the full training state as a checkpoint in a subfolder of `output_dir` every 500 steps:
-
-```bash
---checkpointing_steps=500
-```
-
-To resume training from a saved checkpoint, pass the following argument to the training script along with the specific checkpoint to resume from:
-
-```bash
---resume_from_checkpoint="checkpoint-1500"
-```
-
-## Fine-tuning
-
-Download the [cat toy dataset](https://huggingface.co/datasets/diffusers/cat_toy_example) to use as the training dataset and store it in a directory. If you want to use your own dataset, take a look at the [Create a dataset for training](https://huggingface.co/docs/diffusers/training/create_dataset) guide.
-
-```py
-from huggingface_hub import snapshot_download
-
-local_dir = "./cat"
-snapshot_download(
- "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes"
-)
-```
-
-Assign the model's repository ID (or the path to a directory containing the model weights) to the `MODEL_NAME` environment variable and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. Then assign the path to the directory containing the images to the `DATA_DIR` environment variable.
-
-Now you can launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py). The script creates the following files and saves them to your repository:
-
-- `learned_embeds.bin`
-- `token_identifier.txt`
-- `type_of_concept.txt`.
-
-
-
-💡 A full training run takes up to one hour on a single V100 GPU. While you wait for training to finish, feel free to check out [how textual-inversion works](https://huggingface.co/docs/diffusers/training/text_inversion#how-it-works) in the section below!
-
-
-
-
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export DATA_DIR="./cat"
-
-accelerate launch textual_inversion.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_data_dir=$DATA_DIR \
- --learnable_property="object" \
- --placeholder_token="<cat-toy>" --initializer_token="toy" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --max_train_steps=3000 \
- --learning_rate=5.0e-04 --scale_lr \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --output_dir="textual_inversion_cat" \
- --push_to_hub
-```
-
-
-
-💡 To improve training quality, you can also consider representing the placeholder token with multiple embedding vectors rather than a single one. This trick can help the model capture the style of more complex images (i.e., the concept) more effectively. To enable training with multiple embedding vectors, pass the following option:
-
-```bash
---num_vectors=5
-```
-
-
-
-
-
-If you have access to TPUs, try the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py) to train the model even faster (it also works on GPUs). With the same configuration, the Flax training script should be at least 70% faster than the PyTorch one! ⚡️
-
-Before you begin, install the Flax-specific dependencies:
-
-```bash
-pip install -U -r requirements_flax.txt
-```
-
-Assign the model's repository ID (or the path to a directory containing the model weights) to the `MODEL_NAME` environment variable and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument.
-
-Then you can launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py):
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export DATA_DIR="./cat"
-
-python textual_inversion_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_data_dir=$DATA_DIR \
- --learnable_property="object" \
- --placeholder_token="<cat-toy>" --initializer_token="toy" \
- --resolution=512 \
- --train_batch_size=1 \
- --max_train_steps=3000 \
- --learning_rate=5.0e-04 --scale_lr \
- --output_dir="textual_inversion_cat" \
- --push_to_hub
-```
-
-
-
-### Intermediate logging
-
-If you want to track the model's training progress, you can save images generated during training. Add the following arguments to the training script to enable intermediate logging:
-
-- `validation_prompt`: the prompt used to generate sample images (defaults to `None`, in which case intermediate logging is disabled)
-- `num_validation_images`: the number of sample images to generate
-- `validation_steps`: the number of training steps between generations of sample images from `validation_prompt`
-
-```bash
---validation_prompt="A backpack"
---num_validation_images=4
---validation_steps=100
-```
-
-## Inference
-
-Once the model is trained, you can use it for inference with the [`StableDiffusionPipeline`].
-
-By default, the textual-inversion script only saves the embedding vectors learned through textual-inversion. These embedding vectors are added to the text encoder's embedding matrix.
-
-
-
-
-
-💡 The community has built a large library of textual-inversion embedding vectors called [sd-concepts-library](https://huggingface.co/sd-concepts-library). Instead of training a textual-inversion embedding from scratch, it is worth checking whether the embedding you are looking for has already been added to that library.
-
-
-
-To load a textual-inversion embedding vector, you first need to load the model that was used to train it. Here we assume the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/docs/diffusers/training/runwayml/stable-diffusion-v1-5) model was used and load it accordingly:
-
-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-```
-
-Next, load the textual-inversion embedding vector with the `TextualInversionLoaderMixin.load_textual_inversion` function. Here we load the embedding from the earlier cat-toy example:
-
-```python
-pipe.load_textual_inversion("sd-concepts-library/cat-toy")
-```
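-
-Under the hood, loading such an embedding essentially amounts to registering the new token with the tokenizer and writing its learned vector into the text encoder's embedding matrix. The snippet below is a rough, simplified sketch of that idea (use `load_textual_inversion` in practice); it assumes a local Diffusers-format `learned_embeds.bin` file that stores a `{token: embedding}` dictionary.
-
-```python
-import torch
-
-# rough illustration only; prefer pipe.load_textual_inversion in practice
-learned_embeds = torch.load("learned_embeds.bin", map_location="cpu")
-token, embedding = next(iter(learned_embeds.items()))
-
-# register the new token and grow the embedding matrix if needed
-if pipe.tokenizer.add_tokens(token) > 0:
-    pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
-
-# write the learned vector into the text encoder's embedding matrix
-token_id = pipe.tokenizer.convert_tokens_to_ids(token)
-token_embedding = pipe.text_encoder.get_input_embeddings().weight
-with torch.no_grad():
-    token_embedding[token_id] = embedding.to(token_embedding.device, token_embedding.dtype)
-```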
-
-Now you can run the pipeline to check that the placeholder token works as expected:
-
-```python
-prompt = "A backpack"
-
-image = pipe(prompt, num_inference_steps=50).images[0]
-image.save("cat-backpack.png")
-```
-
-`TextualInversionLoaderMixin.load_textual_inversion` can load not only text embedding vectors saved in the Diffusers format, but also embedding vectors saved in the [Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) format. To do so, first download an embedding vector from [civitAI](https://civitai.com/models/3036?modelVersionId=8387) and then load it locally:
-
-```python
-pipe.load_textual_inversion("./charturnerv2.pt")
-```
-
-
-
-There is currently no `load_textual_inversion` function for Flax, so after training you need to make sure the textual-inversion embedding vector was saved as part of the model. The model can then be run like any other Flax model:
-
-```python
-import jax
-import numpy as np
-from flax.jax_utils import replicate
-from flax.training.common_utils import shard
-from diffusers import FlaxStableDiffusionPipeline
-
-model_path = "path-to-your-trained-model"
-pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16)
-
-prompt = "A backpack"
-prng_seed = jax.random.PRNGKey(0)
-num_inference_steps = 50
-
-num_samples = jax.device_count()
-prompt = num_samples * [prompt]
-prompt_ids = pipeline.prepare_inputs(prompt)
-
-# shard inputs and rng
-params = replicate(params)
-prng_seed = jax.random.split(prng_seed, jax.device_count())
-prompt_ids = shard(prompt_ids)
-
-images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
-images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
-images[0].save("cat-backpack.png")
-```
-
-
-
-## How it works
-
-
-Architecture overview from the Textual Inversion blog post.
-
-Normally, a text prompt is tokenized into embeddings before being passed to the model. Textual-inversion does something similar, but it learns a new token embedding `v*` from the special token `S*` in the diagram above. The model's output is used to condition the diffusion model, which helps the diffusion model understand a new concept from just a few example images.
-
-To do this, textual-inversion uses a generator model together with noised versions of the training images. The generator tries to predict less-noisy versions of the images, and the token embedding `v*` is optimized based on how well the generator does. If the token embedding successfully captures the new concept, it provides more useful information to the diffusion model and helps it produce sharper images with less noise. This optimization process typically involves thousands of exposures to a variety of prompts and images.
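-
-To make that loop concrete, the following is a highly simplified sketch of a single textual-inversion training step (not the actual training script): only the embedding row of the new placeholder token receives gradient updates while the rest of the model stays frozen. It assumes `text_encoder`, `vae`, `unet`, and `noise_scheduler` come from a Stable Diffusion checkpoint, that the model predicts the added noise (epsilon prediction), that `input_ids` is a tokenized prompt containing the placeholder token, and that `placeholder_token_id` is the id of the newly added token.
-
-```python
-import torch
-import torch.nn.functional as F
-
-# only the token embedding matrix is passed to the optimizer; unet and vae stay frozen
-token_embeds = text_encoder.get_input_embeddings().weight
-optimizer = torch.optim.AdamW([token_embeds], lr=5e-4)
-
-
-def training_step(pixel_values, input_ids):
-    # encode the training image into latents and add noise at a random timestep
-    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
-    noise = torch.randn_like(latents)
-    timesteps = torch.randint(
-        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
-    )
-    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
-    # condition the unet on the prompt containing the placeholder token
-    encoder_hidden_states = text_encoder(input_ids)[0]
-    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-    loss = F.mse_loss(noise_pred, noise)
-    loss.backward()
-
-    # zero out the gradients of every embedding row except the new token's
-    mask = torch.ones(token_embeds.shape[0], dtype=torch.bool, device=token_embeds.device)
-    mask[placeholder_token_id] = False
-    token_embeds.grad[mask] = 0.0
-
-    optimizer.step()
-    optimizer.zero_grad()
-    return loss
-```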
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
deleted file mode 100644
index fa00a0d201af350dcc56584e19a222150f978132..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
+++ /dev/null
@@ -1,621 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import random
-import unittest
-
-import numpy as np
-import torch
-from PIL import Image
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionInpaintPipelineLegacy,
- UNet2DConditionModel,
- UNet2DModel,
- VQModel,
-)
-from diffusers.utils import floats_tensor, load_image, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, load_numpy, preprocess_image, require_torch_gpu
-
-
-enable_full_determinism()
-
-
-class StableDiffusionInpaintLegacyPipelineFastTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- @property
- def dummy_image(self):
- batch_size = 1
- num_channels = 3
- sizes = (32, 32)
-
- image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device)
- return image
-
- @property
- def dummy_uncond_unet(self):
- torch.manual_seed(0)
- model = UNet2DModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=3,
- out_channels=3,
- down_block_types=("DownBlock2D", "AttnDownBlock2D"),
- up_block_types=("AttnUpBlock2D", "UpBlock2D"),
- )
- return model
-
- @property
- def dummy_cond_unet(self):
- torch.manual_seed(0)
- model = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- return model
-
- @property
- def dummy_cond_unet_inpaint(self):
- torch.manual_seed(0)
- model = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=9,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- return model
-
- @property
- def dummy_vq_model(self):
- torch.manual_seed(0)
- model = VQModel(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=3,
- )
- return model
-
- @property
- def dummy_vae(self):
- torch.manual_seed(0)
- model = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- return model
-
- @property
- def dummy_text_encoder(self):
- torch.manual_seed(0)
- config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- return CLIPTextModel(config)
-
- @property
- def dummy_extractor(self):
- def extract(*args, **kwargs):
- class Out:
- def __init__(self):
- self.pixel_values = torch.ones([0])
-
- def to(self, device):
- self.pixel_values.to(device)
- return self
-
- return Out()
-
- return extract
-
- def test_stable_diffusion_inpaint_legacy(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- unet = self.dummy_cond_unet
- scheduler = PNDMScheduler(skip_prk_steps=True)
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
- init_image = Image.fromarray(np.uint8(image)).convert("RGB")
- mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32))
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionInpaintPipelineLegacy(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=self.dummy_extractor,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.Generator(device=device).manual_seed(0)
- output = sd_pipe(
- [prompt],
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- )
-
- image = output.images
-
- generator = torch.Generator(device=device).manual_seed(0)
- image_from_tuple = sd_pipe(
- [prompt],
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- return_dict=False,
- )[0]
-
- image_slice = image[0, -3:, -3:, -1]
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
- assert image.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.4941, 0.5396, 0.4689, 0.6338, 0.5392, 0.4094, 0.5477, 0.5904, 0.5165])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_inpaint_legacy_batched(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- unet = self.dummy_cond_unet
- scheduler = PNDMScheduler(skip_prk_steps=True)
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
- init_image = Image.fromarray(np.uint8(image)).convert("RGB")
- init_images_tens = preprocess_image(init_image, batch_size=2)
- init_masks_tens = init_images_tens + 4
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionInpaintPipelineLegacy(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=self.dummy_extractor,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- generator = torch.Generator(device=device).manual_seed(0)
- images = sd_pipe(
- [prompt] * 2,
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- image=init_images_tens,
- mask_image=init_masks_tens,
- ).images
-
- assert images.shape == (2, 32, 32, 3)
-
- image_slice_0 = images[0, -3:, -3:, -1].flatten()
- image_slice_1 = images[1, -3:, -3:, -1].flatten()
-
- expected_slice_0 = np.array([0.4697, 0.3770, 0.4096, 0.4653, 0.4497, 0.4183, 0.3950, 0.4668, 0.4672])
- expected_slice_1 = np.array([0.4105, 0.4987, 0.5771, 0.4921, 0.4237, 0.5684, 0.5496, 0.4645, 0.5272])
-
- assert np.abs(expected_slice_0 - image_slice_0).max() < 1e-2
- assert np.abs(expected_slice_1 - image_slice_1).max() < 1e-2
-
- def test_stable_diffusion_inpaint_legacy_negative_prompt(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- unet = self.dummy_cond_unet
- scheduler = PNDMScheduler(skip_prk_steps=True)
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
- init_image = Image.fromarray(np.uint8(image)).convert("RGB")
- mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32))
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionInpaintPipelineLegacy(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=self.dummy_extractor,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
- negative_prompt = "french fries"
- generator = torch.Generator(device=device).manual_seed(0)
- output = sd_pipe(
- prompt,
- negative_prompt=negative_prompt,
- generator=generator,
- guidance_scale=6.0,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- )
-
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.4941, 0.5396, 0.4689, 0.6338, 0.5392, 0.4094, 0.5477, 0.5904, 0.5165])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_inpaint_legacy_num_images_per_prompt(self):
- device = "cpu"
- unet = self.dummy_cond_unet
- scheduler = PNDMScheduler(skip_prk_steps=True)
- vae = self.dummy_vae
- bert = self.dummy_text_encoder
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
- init_image = Image.fromarray(np.uint8(image)).convert("RGB")
- mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32))
-
- # make sure here that pndm scheduler skips prk
- sd_pipe = StableDiffusionInpaintPipelineLegacy(
- unet=unet,
- scheduler=scheduler,
- vae=vae,
- text_encoder=bert,
- tokenizer=tokenizer,
- safety_checker=None,
- feature_extractor=self.dummy_extractor,
- )
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
-
- # test num_images_per_prompt=1 (default)
- images = sd_pipe(
- prompt,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- ).images
-
- assert images.shape == (1, 32, 32, 3)
-
- # test num_images_per_prompt=1 (default) for batch of prompts
- batch_size = 2
- images = sd_pipe(
- [prompt] * batch_size,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- ).images
-
- assert images.shape == (batch_size, 32, 32, 3)
-
- # test num_images_per_prompt for single prompt
- num_images_per_prompt = 2
- images = sd_pipe(
- prompt,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- num_images_per_prompt=num_images_per_prompt,
- ).images
-
- assert images.shape == (num_images_per_prompt, 32, 32, 3)
-
- # test num_images_per_prompt for batch of prompts
- batch_size = 2
- images = sd_pipe(
- [prompt] * batch_size,
- num_inference_steps=2,
- output_type="np",
- image=init_image,
- mask_image=mask_image,
- num_images_per_prompt=num_images_per_prompt,
- ).images
-
- assert images.shape == (batch_size * num_images_per_prompt, 32, 32, 3)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusionInpaintLegacyPipelineSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, generator_device="cpu", seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- init_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_image.png"
- )
- mask_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_mask.png"
- )
- inputs = {
- "prompt": "A red cat sitting on a park bench",
- "image": init_image,
- "mask_image": mask_image,
- "generator": generator,
- "num_inference_steps": 3,
- "strength": 0.75,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_inpaint_legacy_pndm(self):
- pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained(
- "CompVis/stable-diffusion-v1-4", safety_checker=None
- )
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- image_slice = image[0, 253:256, 253:256, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.5665, 0.6117, 0.6430, 0.4057, 0.4594, 0.5658, 0.1596, 0.3106, 0.4305])
-
- assert np.abs(expected_slice - image_slice).max() < 3e-3
-
- def test_stable_diffusion_inpaint_legacy_batched(self):
- pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained(
- "CompVis/stable-diffusion-v1-4", safety_checker=None
- )
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- inputs["prompt"] = [inputs["prompt"]] * 2
- inputs["image"] = preprocess_image(inputs["image"], batch_size=2)
-
- mask = inputs["mask_image"].convert("L")
- mask = np.array(mask).astype(np.float32) / 255.0
- mask = torch.from_numpy(1 - mask)
- masks = torch.vstack([mask[None][None]] * 2)
- inputs["mask_image"] = masks
-
- image = pipe(**inputs).images
- assert image.shape == (2, 512, 512, 3)
-
- image_slice_0 = image[0, 253:256, 253:256, -1].flatten()
- image_slice_1 = image[1, 253:256, 253:256, -1].flatten()
-
- expected_slice_0 = np.array(
- [0.52093095, 0.4176447, 0.32752383, 0.6175223, 0.50563973, 0.36470804, 0.65460044, 0.5775188, 0.44332123]
- )
- expected_slice_1 = np.array(
- [0.3592432, 0.4233033, 0.3914635, 0.31014425, 0.3702293, 0.39412856, 0.17526966, 0.2642669, 0.37480092]
- )
-
- assert np.abs(expected_slice_0 - image_slice_0).max() < 3e-3
- assert np.abs(expected_slice_1 - image_slice_1).max() < 3e-3
-
- def test_stable_diffusion_inpaint_legacy_k_lms(self):
- pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained(
- "CompVis/stable-diffusion-v1-4", safety_checker=None
- )
- pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- image_slice = image[0, 253:256, 253:256, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.4534, 0.4467, 0.4329, 0.4329, 0.4339, 0.4220, 0.4244, 0.4332, 0.4426])
-
- assert np.abs(expected_slice - image_slice).max() < 3e-3
-
- def test_stable_diffusion_inpaint_legacy_intermediate_state(self):
- number_of_steps = 0
-
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
- callback_fn.has_been_called = True
- nonlocal number_of_steps
- number_of_steps += 1
- if step == 1:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array([0.5977, 1.5449, 1.0586, -0.3250, 0.7383, -0.0862, 0.4631, -0.2571, -1.1289])
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3
- elif step == 2:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array([0.5190, 1.1621, 0.6885, 0.2424, 0.3337, -0.1617, 0.6914, -0.1957, -0.5474])
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3
-
- callback_fn.has_been_called = False
-
- pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained(
- "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16
- )
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs()
- pipe(**inputs, callback=callback_fn, callback_steps=1)
- assert callback_fn.has_been_called
- assert number_of_steps == 2
-
-
-@nightly
-@require_torch_gpu
-class StableDiffusionInpaintLegacyPipelineNightlyTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- init_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_image.png"
- )
- mask_image = load_image(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint/input_bench_mask.png"
- )
- inputs = {
- "prompt": "A red cat sitting on a park bench",
- "image": init_image,
- "mask_image": mask_image,
- "generator": generator,
- "num_inference_steps": 50,
- "strength": 0.75,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_inpaint_pndm(self):
- sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5")
- sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_pndm.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_inpaint_ddim(self):
- sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5")
- sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_ddim.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_inpaint_lms(self):
- sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5")
- sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_lms.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_inpaint_dpm(self):
- sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5")
- sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- inputs["num_inference_steps"] = 30
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_dpm_multi.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index 5c623eb56836760694b50f3e4e66aa0f1fc069df..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './danet_r50-d8_512x512_40k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css
deleted file mode 100644
index 11f1afddd620f6a0fa198ebd1d627ab6b6afbd75..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css
+++ /dev/null
@@ -1,612 +0,0 @@
-.tabs.svelte-710i53 {
- margin-top: 0
-}
-
-.py-6 {
- padding-top: 2.5rem
-}
-
-.small-button {
- min-width: 0 !important;
- max-width: 171px;
- height: 39.594px;
- align-self: end;
-}
-
-.refresh-button {
- max-width: 4.4em;
- min-width: 2.2em !important;
- height: 39.594px;
- align-self: end;
- line-height: 1em;
- border-radius: 0.5em;
- flex: none;
-}
-
-.refresh-button-small {
- max-width: 2.2em;
-}
-
-.button_nowrap {
- white-space: nowrap;
-}
-
-#slim-column {
- flex: none !important;
- min-width: 0 !important;
-}
-
-.slim-dropdown {
- background-color: transparent !important;
- border: none !important;
- padding: 0 !important;
-}
-
-#download-label, #upload-label {
- min-height: 0
-}
-
-.dark svg {
- fill: white;
-}
-
-.dark a {
- color: white !important;
-}
-
-ol li p, ul li p {
- display: inline-block;
-}
-
-#chat-tab, #default-tab, #notebook-tab, #parameters, #chat-settings, #lora, #training-tab, #model-tab, #session-tab {
- border: 0;
-}
-
-.gradio-container-3-18-0 .prose * h1, h2, h3, h4 {
- color: white;
-}
-
-.gradio-container {
- max-width: 100% !important;
- padding-top: 0 !important;
-}
-
-#extensions {
- margin-top: 5px;
- margin-bottom: 35px;
-}
-
-.extension-tab {
- border: 0 !important;
-}
-
-span.math.inline {
- font-size: 27px;
- vertical-align: baseline !important;
-}
-
-div.svelte-15lo0d8 > *, div.svelte-15lo0d8 > .form > * {
- flex-wrap: nowrap;
-}
-
-.header_bar {
- background-color: #f7f7f7;
- margin-bottom: 19px;
- display: inline !important;
- overflow-x: scroll;
- margin-left: calc(-1 * var(--size-4));
- margin-right: calc(-1 * var(--size-4));
-}
-
-.dark .header_bar {
- border: none !important;
- background-color: #8080802b;
-}
-
-.header_bar button.selected {
- border-radius: 0;
-}
-
-.textbox_default textarea {
- height: calc(100dvh - 271px);
-}
-
-.textbox_default_output textarea {
- height: calc(100dvh - 185px);
-}
-
-.textbox textarea {
- height: calc(100dvh - 241px);
-}
-
-.textbox_logits textarea {
- height: calc(100dvh - 236px);
-}
-
-.textbox_logits_notebook textarea {
- height: calc(100dvh - 292px);
-}
-
-.monospace textarea {
- font-family: monospace;
-}
-
-.textbox_default textarea,
-.textbox_default_output textarea,
-.textbox_logits textarea,
-.textbox_logits_notebook textarea,
-.textbox textarea {
- font-size: 16px !important;
- color: #46464A !important;
-}
-
-.dark textarea {
- color: #efefef !important;
-}
-
-@media screen and (max-width: 711px) {
- .textbox_default textarea {
- height: calc(100dvh - 259px);
- }
-
- div .default-token-counter {
- top: calc( 0.5 * (100dvh - 236px) ) !important;
- }
-
- .transparent-substring {
- display: none;
- }
-
- .hover-menu {
- min-width: 250px !important;
- }
-}
-
-/* Hide the gradio footer */
-footer {
- display: none !important;
-}
-
-button {
- font-size: 14px !important;
-}
-
-.file-saver {
- position: fixed !important;
- top: 50%;
- left: 50%;
- transform: translate(-50%, -50%); /* center horizontally */
- max-width: 500px;
- background-color: var(--input-background-fill);
- border: 2px solid black !important;
- z-index: 1000;
-}
-
-.dark .file-saver {
- border: 2px solid white !important;
-}
-
-.checkboxgroup-table label {
- background: none !important;
- padding: 0 !important;
- border: 0 !important;
-}
-
-.checkboxgroup-table div {
- display: grid !important;
-}
-
-.markdown ul ol {
- font-size: 100% !important;
-}
-
-.pretty_scrollbar::-webkit-scrollbar {
- width: 5px;
-}
-
-.pretty_scrollbar::-webkit-scrollbar-track {
- background: transparent;
-}
-
-.pretty_scrollbar::-webkit-scrollbar-thumb,
-.pretty_scrollbar::-webkit-scrollbar-thumb:hover {
- background: #c5c5d2;
-}
-
-.dark .pretty_scrollbar::-webkit-scrollbar-thumb,
-.dark .pretty_scrollbar::-webkit-scrollbar-thumb:hover {
- background: #374151;
-}
-
-.pretty_scrollbar::-webkit-resizer {
- background: #c5c5d2;
-}
-
-.dark .pretty_scrollbar::-webkit-resizer {
- background: #374151;
-}
-
-audio {
- max-width: 100%;
-}
-
-/* Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui */
-.token-counter {
- position: absolute !important;
- top: calc( 0.5 * (100dvh - 218px) ) !important;
- right: 2px;
- z-index: 100;
- background: var(--input-background-fill) !important;
- min-height: 0 !important;
-}
-
-.default-token-counter {
- top: calc( 0.5 * (100dvh - 248px) ) !important;
-}
-
-.token-counter span {
- padding: 1px;
- box-shadow: 0 0 0 0.3em rgba(192,192,192,0.15), inset 0 0 0.6em rgba(192,192,192,0.075);
- border: 2px solid rgba(192,192,192,0.4) !important;
- border-radius: 0.4em;
-}
-
-.no-background {
- background: var(--background-fill-primary) !important;
- padding: 0px !important;
-}
-
-/*----------------------------------------------
- Chat tab
-----------------------------------------------*/
-.h-\[40vh\], .wrap.svelte-byatnx.svelte-byatnx.svelte-byatnx {
- height: 66.67vh
-}
-
-.gradio-container {
- margin-left: auto !important;
- margin-right: auto !important;
-}
-
-.w-screen {
- width: unset
-}
-
-div.svelte-362y77>*, div.svelte-362y77>.form>* {
- flex-wrap: nowrap
-}
-
-.pending.svelte-1ed2p3z {
- opacity: 1;
-}
-
-.wrap.svelte-6roggh.svelte-6roggh {
- max-height: 92.5%;
-}
-
-/* This is for the microphone button in the whisper extension */
-.sm.svelte-1ipelgc {
- width: 100%;
-}
-
-#chat-tab button#Generate, #chat-tab button#stop {
- width: 89.3438px !important;
-}
-
-#chat-tab button, #notebook-tab button, #default-tab button {
- min-width: 0 !important;
-}
-
-#chat-tab > :first-child, #extensions {
- max-width: 880px;
- margin-left: auto;
- margin-right: auto;
-}
-
-@media screen and (max-width: 688px) {
- #chat-tab {
- padding-left: 0px;
- padding-right: 0px;
- }
-
- .chat-parent {
- height: calc(100dvh - 179px) !important;
- }
-
- .old-ui .chat-parent {
- height: calc(100dvh - 310px) !important;
- }
-}
-
-.chat {
- margin-left: auto;
- margin-right: auto;
- max-width: 880px;
- height: 100%;
- overflow-y: auto;
- padding-right: 15px;
- display: flex;
- flex-direction: column;
- word-break: break-word;
- overflow-wrap: anywhere;
-}
-
-.chat-parent {
- height: calc(100dvh - 181px);
- overflow: auto !important;
-}
-
-.old-ui .chat-parent {
- height: calc(100dvh - 270px);
-}
-
-.chat-parent.bigchat {
- height: calc(100dvh - 181px) !important;
-}
-
-.chat > .messages {
- display: flex;
- flex-direction: column;
-}
-
-.chat .message:last-child {
- margin-bottom: 0px !important;
- padding-bottom: 0px !important;
-}
-
-.message-body li {
- margin-top: 0 !important;
- margin-bottom: 0 !important;
-}
-
-.message-body li > p {
- display: inline !important;
-}
-
-.message-body ul, .message-body ol {
- font-size: 15px !important;
-}
-
-.message-body ul {
- list-style-type: disc !important;
-}
-
-.message-body pre {
- margin-bottom: 1.25em !important;
-}
-
-.message-body code {
- white-space: pre-wrap !important;
- word-wrap: break-word !important;
-}
-
-.message-body :not(pre) > code {
- white-space: normal !important;
-}
-
-#chat-input {
- padding: 0;
- padding-top: 18px;
- background: transparent;
- border: none;
-}
-
-#chat-input textarea:focus {
- box-shadow: none !important;
-}
-
-@media print {
- body {
- visibility: hidden;
- }
-
- .chat {
- visibility: visible;
- position: absolute;
- left: 0;
- top: 0;
- max-width: unset;
- max-height: unset;
- width: 100%;
- overflow-y: visible;
- }
-
- .message {
- break-inside: avoid;
- }
-
- .gradio-container {
- overflow: visible;
- }
-
- .tab-nav {
- display: none !important;
- }
-
- #chat-tab > :first-child {
- max-width: unset;
- }
-}
-
-#show-controls {
- position: absolute;
- height: 100%;
- background-color: var(--background-fill-primary);
- border: 0px;
- border-radius: 0px;
-}
-
-#show-controls label {
- z-index: 1000;
- position: absolute;
- left: calc(100% - 168px);
-}
-
-#typing-container {
- display: none;
- position: absolute;
- background-color: transparent;
- left: -2px;
- padding: var(--block-padding);
-}
-
-.typing {
- position: relative;
-}
-
-.visible-dots #typing-container {
- display: block;
-}
-
-.typing span {
- content: '';
- animation: blink 1.5s infinite;
- animation-fill-mode: both;
- height: 10px;
- width: 10px;
- background: #3b5998;
- position: absolute;
- left:0;
- top:0;
- border-radius: 50%;
-}
-
-.typing .dot1 {
- animation-delay: .2s;
- margin-left: calc(10px * 1.5);
-}
-
-.typing .dot2 {
- animation-delay: .4s;
- margin-left: calc(10px * 3);
-}
-
-@keyframes blink {
- 0% {
- opacity: .1;
- }
- 20% {
- opacity: 1;
- }
- 100% {
- opacity: .1;
- }
-}
-
-#chat-tab .generating {
- display: none !important;
-}
-
-.hover-element {
- position: relative;
- font-size: 24px;
-}
-
-.hover-menu {
- display: none;
- position: absolute;
- bottom: 80%;
- left: 0;
- background-color: var(--background-fill-secondary);
- box-shadow: 0 0 10px rgba(0, 0, 0, 0.5);
- z-index: 10000;
- min-width: 330px;
- flex-direction: column;
-}
-
-.hover-menu button {
- width: 100%;
- background: transparent !important;
- border-radius: 0px !important;
- justify-content: space-between;
- margin: 0 !important;
- height: 36px;
-}
-
-.hover-menu button:not(#clear-history-confirm) {
- border-bottom: 0 !important;
-}
-
-.hover-menu button:not(#clear-history-confirm):last-child {
- border-bottom: var(--button-border-width) solid var(--button-secondary-border-color) !important;
-}
-
-.hover-menu button:hover {
- background: var(--button-secondary-background-fill-hover) !important;
-}
-
-.transparent-substring {
- opacity: 0.333;
-}
-
-#chat-tab:not(.old-ui) #chat-buttons {
- display: none !important;
-}
-
-#gr-hover-container {
- min-width: 0 !important;
- display: flex;
- flex-direction: column-reverse;
- padding-right: 20px;
- padding-bottom: 3px;
- flex-grow: 0 !important;
-}
-
-#generate-stop-container {
- min-width: 0 !important;
- display: flex;
- flex-direction: column-reverse;
- padding-bottom: 3px;
- flex: 0 auto !important;
-}
-
-#chat-input-container {
- min-width: 0 !important;
-}
-
-#chat-input-container > .form {
- background: transparent;
- border: none;
-}
-
-#chat-input-row {
- padding-bottom: 20px;
-}
-
-.old-ui #chat-input-row, #chat-input-row.bigchat {
- padding-bottom: 0px !important;
-}
-
-#chat-col {
- padding-bottom: 115px;
-}
-
-.old-ui #chat-col, #chat-col.bigchat {
- padding-bottom: 95px !important;
-}
-
-.old-ui #chat-buttons #clear-history-confirm {
- order: -1;
-}
-
-.chat ol, .chat ul {
- margin-top: 6px !important;
-}
-
-/*----------------------------------------------
- Past chats menus
-----------------------------------------------*/
-#past-chats-row {
- margin-bottom: calc( -1 * var(--layout-gap) );
-}
-
-#rename-row label {
- margin-top: var(--layout-gap);
-}
-
-/*----------------------------------------------
- Keep dropdown menus above errored components
-----------------------------------------------*/
-.options {
- z-index: 100 !important;
-}
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py
deleted file mode 100644
index c52dda18b41705705b47dd0e995b124048c16fba..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py
+++ /dev/null
@@ -1,358 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd.function import Function, once_differentiable
-
-from annotator.uniformer.mmcv import deprecated_api_warning
-from annotator.uniformer.mmcv.cnn import constant_init, xavier_init
-from annotator.uniformer.mmcv.cnn.bricks.registry import ATTENTION
-from annotator.uniformer.mmcv.runner import BaseModule
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward'])
-
-
-class MultiScaleDeformableAttnFunction(Function):
-
- @staticmethod
- def forward(ctx, value, value_spatial_shapes, value_level_start_index,
- sampling_locations, attention_weights, im2col_step):
- """GPU version of multi-scale deformable attention.
-
- Args:
- value (Tensor): The value has shape
- (bs, num_keys, num_heads, embed_dims//num_heads)
- value_spatial_shapes (Tensor): Spatial shape of
- each feature map, has shape (num_levels, 2),
- last dimension 2 represent (h, w)
- sampling_locations (Tensor): The location of sampling points,
- has shape
- (bs ,num_queries, num_heads, num_levels, num_points, 2),
- the last dimension 2 represent (x, y).
- attention_weights (Tensor): The weight of sampling points used
- when calculate the attention, has shape
- (bs ,num_queries, num_heads, num_levels, num_points),
- im2col_step (Tensor): The step used in image to column.
-
- Returns:
- Tensor: has shape (bs, num_queries, embed_dims)
- """
-
- ctx.im2col_step = im2col_step
- output = ext_module.ms_deform_attn_forward(
- value,
- value_spatial_shapes,
- value_level_start_index,
- sampling_locations,
- attention_weights,
- im2col_step=ctx.im2col_step)
- ctx.save_for_backward(value, value_spatial_shapes,
- value_level_start_index, sampling_locations,
- attention_weights)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- """GPU version of backward function.
-
- Args:
- grad_output (Tensor): Gradient
- of output tensor of forward.
-
- Returns:
- Tuple[Tensor]: Gradient
- of input tensors in forward.
- """
- value, value_spatial_shapes, value_level_start_index,\
- sampling_locations, attention_weights = ctx.saved_tensors
- grad_value = torch.zeros_like(value)
- grad_sampling_loc = torch.zeros_like(sampling_locations)
- grad_attn_weight = torch.zeros_like(attention_weights)
-
- ext_module.ms_deform_attn_backward(
- value,
- value_spatial_shapes,
- value_level_start_index,
- sampling_locations,
- attention_weights,
- grad_output.contiguous(),
- grad_value,
- grad_sampling_loc,
- grad_attn_weight,
- im2col_step=ctx.im2col_step)
-
- return grad_value, None, None, \
- grad_sampling_loc, grad_attn_weight, None
-
-
-def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes,
- sampling_locations, attention_weights):
- """CPU version of multi-scale deformable attention.
-
- Args:
- value (Tensor): The value has shape
- (bs, num_keys, num_heads, embed_dims//num_heads)
- value_spatial_shapes (Tensor): Spatial shape of
- each feature map, has shape (num_levels, 2),
- last dimension 2 represent (h, w)
- sampling_locations (Tensor): The location of sampling points,
- has shape
- (bs ,num_queries, num_heads, num_levels, num_points, 2),
- the last dimension 2 represent (x, y).
- attention_weights (Tensor): The weight of sampling points used
- when calculate the attention, has shape
- (bs ,num_queries, num_heads, num_levels, num_points),
-
- Returns:
- Tensor: has shape (bs, num_queries, embed_dims)
- """
-
- bs, _, num_heads, embed_dims = value.shape
- _, num_queries, num_heads, num_levels, num_points, _ =\
- sampling_locations.shape
- value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
- dim=1)
- sampling_grids = 2 * sampling_locations - 1
- sampling_value_list = []
- for level, (H_, W_) in enumerate(value_spatial_shapes):
- # bs, H_*W_, num_heads, embed_dims ->
- # bs, H_*W_, num_heads*embed_dims ->
- # bs, num_heads*embed_dims, H_*W_ ->
- # bs*num_heads, embed_dims, H_, W_
- value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(
- bs * num_heads, embed_dims, H_, W_)
- # bs, num_queries, num_heads, num_points, 2 ->
- # bs, num_heads, num_queries, num_points, 2 ->
- # bs*num_heads, num_queries, num_points, 2
- sampling_grid_l_ = sampling_grids[:, :, :,
- level].transpose(1, 2).flatten(0, 1)
- # bs*num_heads, embed_dims, num_queries, num_points
- sampling_value_l_ = F.grid_sample(
- value_l_,
- sampling_grid_l_,
- mode='bilinear',
- padding_mode='zeros',
- align_corners=False)
- sampling_value_list.append(sampling_value_l_)
- # (bs, num_queries, num_heads, num_levels, num_points) ->
- # (bs, num_heads, num_queries, num_levels, num_points) ->
- # (bs, num_heads, 1, num_queries, num_levels*num_points)
- attention_weights = attention_weights.transpose(1, 2).reshape(
- bs * num_heads, 1, num_queries, num_levels * num_points)
- output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) *
- attention_weights).sum(-1).view(bs, num_heads * embed_dims,
- num_queries)
- return output.transpose(1, 2).contiguous()
-
-
-@ATTENTION.register_module()
-class MultiScaleDeformableAttention(BaseModule):
- """An attention module used in Deformable-Detr.
-
- `Deformable DETR: Deformable Transformers for End-to-End Object Detection.
- `_.
-
- Args:
- embed_dims (int): The embedding dimension of Attention.
- Default: 256.
- num_heads (int): Parallel attention heads. Default: 8.
- num_levels (int): The number of feature map used in
- Attention. Default: 4.
- num_points (int): The number of sampling points for
- each query in each head. Default: 4.
- im2col_step (int): The step used in image_to_column.
- Default: 64.
- dropout (float): A Dropout layer on `inp_identity`.
- Default: 0.1.
- batch_first (bool): Key, Query and Value are shape of
- (batch, n, embed_dim)
- or (n, batch, embed_dim). Default to False.
- norm_cfg (dict): Config dict for normalization layer.
- Default: None.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- """
-
- def __init__(self,
- embed_dims=256,
- num_heads=8,
- num_levels=4,
- num_points=4,
- im2col_step=64,
- dropout=0.1,
- batch_first=False,
- norm_cfg=None,
- init_cfg=None):
- super().__init__(init_cfg)
- if embed_dims % num_heads != 0:
- raise ValueError(f'embed_dims must be divisible by num_heads, '
- f'but got {embed_dims} and {num_heads}')
- dim_per_head = embed_dims // num_heads
- self.norm_cfg = norm_cfg
- self.dropout = nn.Dropout(dropout)
- self.batch_first = batch_first
-
- # you'd better set dim_per_head to a power of 2
- # which is more efficient in the CUDA implementation
- def _is_power_of_2(n):
- if (not isinstance(n, int)) or (n < 0):
- raise ValueError(
- 'invalid input for _is_power_of_2: {} (type: {})'.format(
- n, type(n)))
- return (n & (n - 1) == 0) and n != 0
-
- if not _is_power_of_2(dim_per_head):
- warnings.warn(
- "You'd better set embed_dims in "
- 'MultiScaleDeformAttention to make '
- 'the dimension of each attention head a power of 2 '
- 'which is more efficient in our CUDA implementation.')
-
- self.im2col_step = im2col_step
- self.embed_dims = embed_dims
- self.num_levels = num_levels
- self.num_heads = num_heads
- self.num_points = num_points
- self.sampling_offsets = nn.Linear(
- embed_dims, num_heads * num_levels * num_points * 2)
- self.attention_weights = nn.Linear(embed_dims,
- num_heads * num_levels * num_points)
- self.value_proj = nn.Linear(embed_dims, embed_dims)
- self.output_proj = nn.Linear(embed_dims, embed_dims)
- self.init_weights()
-
- def init_weights(self):
- """Default initialization for Parameters of Module."""
- constant_init(self.sampling_offsets, 0.)
- thetas = torch.arange(
- self.num_heads,
- dtype=torch.float32) * (2.0 * math.pi / self.num_heads)
- grid_init = torch.stack([thetas.cos(), thetas.sin()], -1)
- grid_init = (grid_init /
- grid_init.abs().max(-1, keepdim=True)[0]).view(
- self.num_heads, 1, 1,
- 2).repeat(1, self.num_levels, self.num_points, 1)
- for i in range(self.num_points):
- grid_init[:, :, i, :] *= i + 1
-
- self.sampling_offsets.bias.data = grid_init.view(-1)
- constant_init(self.attention_weights, val=0., bias=0.)
- xavier_init(self.value_proj, distribution='uniform', bias=0.)
- xavier_init(self.output_proj, distribution='uniform', bias=0.)
- self._is_init = True
-
- @deprecated_api_warning({'residual': 'identity'},
- cls_name='MultiScaleDeformableAttention')
- def forward(self,
- query,
- key=None,
- value=None,
- identity=None,
- query_pos=None,
- key_padding_mask=None,
- reference_points=None,
- spatial_shapes=None,
- level_start_index=None,
- **kwargs):
- """Forward Function of MultiScaleDeformAttention.
-
- Args:
- query (Tensor): Query of Transformer with shape
- (num_query, bs, embed_dims).
- key (Tensor): The key tensor with shape
- `(num_key, bs, embed_dims)`.
- value (Tensor): The value tensor with shape
- `(num_key, bs, embed_dims)`.
- identity (Tensor): The tensor used for addition, with the
- same shape as `query`. Default None. If None,
- `query` will be used.
- query_pos (Tensor): The positional encoding for `query`.
- Default: None.
- key_pos (Tensor): The positional encoding for `key`. Default
- None.
- reference_points (Tensor): The normalized reference
- points with shape (bs, num_query, num_levels, 2),
- all elements is range in [0, 1], top-left (0,0),
- bottom-right (1, 1), including padding area.
- or (N, Length_{query}, num_levels, 4), add
- additional two dimensions is (w, h) to
- form reference boxes.
- key_padding_mask (Tensor): ByteTensor for `query`, with
- shape [bs, num_key].
- spatial_shapes (Tensor): Spatial shape of features in
- different levels. With shape (num_levels, 2),
- last dimension represents (h, w).
- level_start_index (Tensor): The start index of each level.
- A tensor has shape ``(num_levels, )`` and can be represented
- as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...].
-
- Returns:
- Tensor: forwarded results with shape [num_query, bs, embed_dims].
- """
-
- if value is None:
- value = query
-
- if identity is None:
- identity = query
- if query_pos is not None:
- query = query + query_pos
- if not self.batch_first:
- # change to (bs, num_query ,embed_dims)
- query = query.permute(1, 0, 2)
- value = value.permute(1, 0, 2)
-
- bs, num_query, _ = query.shape
- bs, num_value, _ = value.shape
- assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value
-
- value = self.value_proj(value)
- if key_padding_mask is not None:
- value = value.masked_fill(key_padding_mask[..., None], 0.0)
- value = value.view(bs, num_value, self.num_heads, -1)
- sampling_offsets = self.sampling_offsets(query).view(
- bs, num_query, self.num_heads, self.num_levels, self.num_points, 2)
- attention_weights = self.attention_weights(query).view(
- bs, num_query, self.num_heads, self.num_levels * self.num_points)
- attention_weights = attention_weights.softmax(-1)
-
- attention_weights = attention_weights.view(bs, num_query,
- self.num_heads,
- self.num_levels,
- self.num_points)
- if reference_points.shape[-1] == 2:
- offset_normalizer = torch.stack(
- [spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
- sampling_locations = reference_points[:, :, None, :, None, :] \
- + sampling_offsets \
- / offset_normalizer[None, None, None, :, None, :]
- elif reference_points.shape[-1] == 4:
- sampling_locations = reference_points[:, :, None, :, None, :2] \
- + sampling_offsets / self.num_points \
- * reference_points[:, :, None, :, None, 2:] \
- * 0.5
- else:
- raise ValueError(
- f'Last dim of reference_points must be'
- f' 2 or 4, but get {reference_points.shape[-1]} instead.')
- if torch.cuda.is_available() and value.is_cuda:
- output = MultiScaleDeformableAttnFunction.apply(
- value, spatial_shapes, level_start_index, sampling_locations,
- attention_weights, self.im2col_step)
- else:
- output = multi_scale_deformable_attn_pytorch(
- value, spatial_shapes, sampling_locations, attention_weights)
-
- output = self.output_proj(output)
-
- if not self.batch_first:
-            # (num_query, bs, embed_dims)
- output = output.permute(1, 0, 2)
-
- return self.dropout(output) + identity
diff --git a/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md b/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is the version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive, transformer-based language model for music modeling. The model comes in different sizes (300M, 1.5B and 3.3B parameters) and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details** See [our paper][arxiv]
-
-**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates end of songs, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The data sources may lack diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/__init__.py b/spaces/Arnx/MusicGenXvAKN/tests/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/Artrajz/vits-simple-api/utils/utils.py b/spaces/Artrajz/vits-simple-api/utils/utils.py
deleted file mode 100644
index fcca4711767b4c60f49932e00dcd11bbd9bfddea..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/utils/utils.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import logging
-import os
-from json import loads
-from torch import load, FloatTensor
-from numpy import float32
-import librosa
-
-
-class HParams():
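-    """Config container with both attribute- and dict-style access.
-
-    Nested dicts are converted to nested HParams instances.
-    """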
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
-            if isinstance(v, dict):
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
-
-def load_checkpoint(checkpoint_path, model):
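-    """Copy matching weights from `checkpoint_path` into `model`.
-
-    Keys missing from the checkpoint keep the model's current values and are
-    logged, so partially matching checkpoints can still be loaded.
-    """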
- checkpoint_dict = load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict.get('iteration', None)
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-        except KeyError:
- logging.info(f"{k} is not in the checkpoint")
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- if iteration:
- logging.info(f"Loaded checkpoint '{checkpoint_path}' (iteration {iteration})")
- else:
- logging.info(f"Loaded checkpoint '{checkpoint_path}'")
- return
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, 'r', encoding='utf-8') as f:
- data = f.read()
- config = loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def load_audio_to_torch(full_path, target_sampling_rate):
- audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True)
- return FloatTensor(audio.astype(float32))
-
-
-def clean_folder(folder_path):
- for filename in os.listdir(folder_path):
- file_path = os.path.join(folder_path, filename)
-        # If the entry is a regular file, delete it
- if os.path.isfile(file_path):
- os.remove(file_path)
-
-
-# Return True if s is None, an empty string, or a whitespace-only string.
-def check_is_none(s):
- return s is None or (isinstance(s, str) and str(s).isspace()) or str(s) == ""
-
-def save_audio(audio, path):
- with open(path,"wb") as f:
- f.write(audio)
diff --git a/spaces/ArturStepanenko/digitsSpace/README.md b/spaces/ArturStepanenko/digitsSpace/README.md
deleted file mode 100644
index 5b412acfd57dbf88e2592a5618e037031d155218..0000000000000000000000000000000000000000
--- a/spaces/ArturStepanenko/digitsSpace/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: digitsSpace
-emoji: 🐨
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/AsakuraMizu/moe-tts/README.md b/spaces/AsakuraMizu/moe-tts/README.md
deleted file mode 100644
index f32eafaa03c3ac48c1d4989d5429fb6febded0aa..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Moe TTS
-emoji: 😊🎙️
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: skytnt/moe-tts
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py
deleted file mode 100644
index 8d1d499376744954308bdf96f80e5b5a39a24195..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py
+++ /dev/null
@@ -1,526 +0,0 @@
-import logging
-import os.path
-import pathlib
-import re
-import urllib.parse
-import urllib.request
-from typing import List, Optional, Tuple
-
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.utils.misc import HiddenText, display_path, hide_url
-from pip._internal.utils.subprocess import make_command
-from pip._internal.vcs.versioncontrol import (
- AuthInfo,
- RemoteNotFoundError,
- RemoteNotValidError,
- RevOptions,
- VersionControl,
- find_path_to_project_root_from_repo_root,
- vcs,
-)
-
-urlsplit = urllib.parse.urlsplit
-urlunsplit = urllib.parse.urlunsplit
-
-
-logger = logging.getLogger(__name__)
-
-
-GIT_VERSION_REGEX = re.compile(
- r"^git version " # Prefix.
- r"(\d+)" # Major.
- r"\.(\d+)" # Dot, minor.
- r"(?:\.(\d+))?" # Optional dot, patch.
- r".*$" # Suffix, including any pre- and post-release segments we don't care about.
-)
-
-HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$")
-
-# SCP (Secure copy protocol) shorthand. e.g. 'git@example.com:foo/bar.git'
-SCP_REGEX = re.compile(
- r"""^
- # Optional user, e.g. 'git@'
- (\w+@)?
- # Server, e.g. 'github.com'.
- ([^/:]+):
- # The server-side path. e.g. 'user/project.git'. Must start with an
-    # alphanumeric character so as not to be confusable with a Windows path
- # like 'C:/foo/bar' or 'C:\foo\bar'.
- (\w[^:]*)
- $""",
- re.VERBOSE,
-)
-
-
-def looks_like_hash(sha: str) -> bool:
- return bool(HASH_REGEX.match(sha))
-
-
-class Git(VersionControl):
- name = "git"
- dirname = ".git"
- repo_name = "clone"
- schemes = (
- "git+http",
- "git+https",
- "git+ssh",
- "git+git",
- "git+file",
- )
- # Prevent the user's environment variables from interfering with pip:
- # https://github.com/pypa/pip/issues/1130
- unset_environ = ("GIT_DIR", "GIT_WORK_TREE")
- default_arg_rev = "HEAD"
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- return [rev]
-
- def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:
- _, rev_options = self.get_url_rev_options(hide_url(url))
- if not rev_options.rev:
- return False
- if not self.is_commit_id_equal(dest, rev_options.rev):
- # the current commit is different from rev,
-            # which means rev was something other than a commit hash
- return False
- # return False in the rare case rev is both a commit hash
- # and a tag or a branch; we don't want to cache in that case
- # because that branch/tag could point to something else in the future
- is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0])
- return not is_tag_or_branch
-
- def get_git_version(self) -> Tuple[int, ...]:
- version = self.run_command(
- ["version"],
- command_desc="git version",
- show_stdout=False,
- stdout_only=True,
- )
- match = GIT_VERSION_REGEX.match(version)
- if not match:
- logger.warning("Can't parse git version: %s", version)
- return ()
- return tuple(int(c) for c in match.groups())
-
- @classmethod
- def get_current_branch(cls, location: str) -> Optional[str]:
- """
- Return the current branch, or None if HEAD isn't at a branch
- (e.g. detached HEAD).
- """
- # git-symbolic-ref exits with empty stdout if "HEAD" is a detached
- # HEAD rather than a symbolic ref. In addition, the -q causes the
- # command to exit with status code 1 instead of 128 in this case
- # and to suppress the message to stderr.
- args = ["symbolic-ref", "-q", "HEAD"]
- output = cls.run_command(
- args,
- extra_ok_returncodes=(1,),
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- )
- ref = output.strip()
-
- if ref.startswith("refs/heads/"):
- return ref[len("refs/heads/") :]
-
- return None
-
- @classmethod
- def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]:
- """
- Return (sha_or_none, is_branch), where sha_or_none is a commit hash
- if the revision names a remote branch or tag, otherwise None.
-
- Args:
- dest: the repository directory.
- rev: the revision name.
- """
- # Pass rev to pre-filter the list.
- output = cls.run_command(
- ["show-ref", rev],
- cwd=dest,
- show_stdout=False,
- stdout_only=True,
- on_returncode="ignore",
- )
- refs = {}
- # NOTE: We do not use splitlines here since that would split on other
- # unicode separators, which can be maliciously used to install a
- # different revision.
- for line in output.strip().split("\n"):
- line = line.rstrip("\r")
- if not line:
- continue
- try:
- ref_sha, ref_name = line.split(" ", maxsplit=2)
- except ValueError:
- # Include the offending line to simplify troubleshooting if
- # this error ever occurs.
- raise ValueError(f"unexpected show-ref line: {line!r}")
-
- refs[ref_name] = ref_sha
-
- branch_ref = f"refs/remotes/origin/{rev}"
- tag_ref = f"refs/tags/{rev}"
-
- sha = refs.get(branch_ref)
- if sha is not None:
- return (sha, True)
-
- sha = refs.get(tag_ref)
-
- return (sha, False)
-
- @classmethod
- def _should_fetch(cls, dest: str, rev: str) -> bool:
- """
- Return true if rev is a ref or is a commit that we don't have locally.
-
- Branches and tags are not considered in this method because they are
- assumed to be always available locally (which is a normal outcome of
- ``git clone`` and ``git fetch --tags``).
- """
- if rev.startswith("refs/"):
- # Always fetch remote refs.
- return True
-
- if not looks_like_hash(rev):
- # Git fetch would fail with abbreviated commits.
- return False
-
- if cls.has_commit(dest, rev):
- # Don't fetch if we have the commit locally.
- return False
-
- return True
-
- @classmethod
- def resolve_revision(
- cls, dest: str, url: HiddenText, rev_options: RevOptions
- ) -> RevOptions:
- """
- Resolve a revision to a new RevOptions object with the SHA1 of the
- branch, tag, or ref if found.
-
- Args:
- rev_options: a RevOptions object.
- """
- rev = rev_options.arg_rev
- # The arg_rev property's implementation for Git ensures that the
- # rev return value is always non-None.
- assert rev is not None
-
- sha, is_branch = cls.get_revision_sha(dest, rev)
-
- if sha is not None:
- rev_options = rev_options.make_new(sha)
- rev_options.branch_name = rev if is_branch else None
-
- return rev_options
-
- # Do not show a warning for the common case of something that has
- # the form of a Git commit hash.
- if not looks_like_hash(rev):
- logger.warning(
- "Did not find branch or tag '%s', assuming revision or ref.",
- rev,
- )
-
- if not cls._should_fetch(dest, rev):
- return rev_options
-
- # fetch the requested revision
- cls.run_command(
- make_command("fetch", "-q", url, rev_options.to_args()),
- cwd=dest,
- )
- # Change the revision to the SHA of the ref we fetched
- sha = cls.get_revision(dest, rev="FETCH_HEAD")
- rev_options = rev_options.make_new(sha)
-
- return rev_options
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """
- Return whether the current commit hash equals the given name.
-
- Args:
- dest: the repository directory.
- name: a string name.
- """
- if not name:
- # Then avoid an unnecessary subprocess call.
- return False
-
- return cls.get_revision(dest) == name
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- rev_display = rev_options.to_display()
- logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest))
- if verbosity <= 0:
- flags: Tuple[str, ...] = ("--quiet",)
- elif verbosity == 1:
- flags = ()
- else:
- flags = ("--verbose", "--progress")
- if self.get_git_version() >= (2, 17):
- # Git added support for partial clone in 2.17
- # https://git-scm.com/docs/partial-clone
- # Speeds up cloning by functioning without a complete copy of repository
- self.run_command(
- make_command(
- "clone",
- "--filter=blob:none",
- *flags,
- url,
- dest,
- )
- )
- else:
- self.run_command(make_command("clone", *flags, url, dest))
-
- if rev_options.rev:
- # Then a specific revision was requested.
- rev_options = self.resolve_revision(dest, url, rev_options)
- branch_name = getattr(rev_options, "branch_name", None)
- logger.debug("Rev options %s, branch_name %s", rev_options, branch_name)
- if branch_name is None:
- # Only do a checkout if the current commit id doesn't match
- # the requested revision.
- if not self.is_commit_id_equal(dest, rev_options.rev):
- cmd_args = make_command(
- "checkout",
- "-q",
- rev_options.to_args(),
- )
- self.run_command(cmd_args, cwd=dest)
- elif self.get_current_branch(dest) != branch_name:
- # Then a specific branch was requested, and that branch
- # is not yet checked out.
- track_branch = f"origin/{branch_name}"
- cmd_args = [
- "checkout",
- "-b",
- branch_name,
- "--track",
- track_branch,
- ]
- self.run_command(cmd_args, cwd=dest)
- else:
- sha = self.get_revision(dest)
- rev_options = rev_options.make_new(sha)
-
- logger.info("Resolved %s to commit %s", url, rev_options.rev)
-
- #: repo may contain submodules
- self.update_submodules(dest)
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- self.run_command(
- make_command("config", "remote.origin.url", url),
- cwd=dest,
- )
- cmd_args = make_command("checkout", "-q", rev_options.to_args())
- self.run_command(cmd_args, cwd=dest)
-
- self.update_submodules(dest)
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- # First fetch changes from the default remote
- if self.get_git_version() >= (1, 9):
- # fetch tags in addition to everything else
- self.run_command(["fetch", "-q", "--tags"], cwd=dest)
- else:
- self.run_command(["fetch", "-q"], cwd=dest)
- # Then reset to wanted revision (maybe even origin/master)
- rev_options = self.resolve_revision(dest, url, rev_options)
- cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args())
- self.run_command(cmd_args, cwd=dest)
- #: update submodules
- self.update_submodules(dest)
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- """
- Return URL of the first remote encountered.
-
- Raises RemoteNotFoundError if the repository does not have a remote
- url configured.
- """
- # We need to pass 1 for extra_ok_returncodes since the command
- # exits with return code 1 if there are no matching lines.
- stdout = cls.run_command(
- ["config", "--get-regexp", r"remote\..*\.url"],
- extra_ok_returncodes=(1,),
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- )
- remotes = stdout.splitlines()
- try:
- found_remote = remotes[0]
- except IndexError:
- raise RemoteNotFoundError
-
- for remote in remotes:
- if remote.startswith("remote.origin.url "):
- found_remote = remote
- break
- url = found_remote.split(" ")[1]
- return cls._git_remote_to_pip_url(url.strip())
-
- @staticmethod
- def _git_remote_to_pip_url(url: str) -> str:
- """
- Convert a remote url from what git uses to what pip accepts.
-
- There are 3 legal forms **url** may take:
-
- 1. A fully qualified url: ssh://git@example.com/foo/bar.git
- 2. A local project.git folder: /path/to/bare/repository.git
- 3. SCP shorthand for form 1: git@example.com:foo/bar.git
-
- Form 1 is output as-is. Form 2 must be converted to URI and form 3 must
- be converted to form 1.
-
- See the corresponding test test_git_remote_url_to_pip() for examples of
- sample inputs/outputs.
- """
- if re.match(r"\w+://", url):
-            # This is already valid. Pass it through as-is.
- return url
- if os.path.exists(url):
- # A local bare remote (git clone --mirror).
- # Needs a file:// prefix.
- return pathlib.PurePath(url).as_uri()
- scp_match = SCP_REGEX.match(url)
- if scp_match:
- # Add an ssh:// prefix and replace the ':' with a '/'.
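-            # e.g. 'git@example.com:foo/bar.git' -> 'ssh://git@example.com/foo/bar.git'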
- return scp_match.expand(r"ssh://\1\2/\3")
- # Otherwise, bail out.
- raise RemoteNotValidError(url)
-
- @classmethod
- def has_commit(cls, location: str, rev: str) -> bool:
- """
- Check if rev is a commit that is available in the local repository.
- """
- try:
- cls.run_command(
- ["rev-parse", "-q", "--verify", "sha^" + rev],
- cwd=location,
- log_failed_cmd=False,
- )
- except InstallationError:
- return False
- else:
- return True
-
- @classmethod
- def get_revision(cls, location: str, rev: Optional[str] = None) -> str:
- if rev is None:
- rev = "HEAD"
- current_rev = cls.run_command(
- ["rev-parse", rev],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- )
- return current_rev.strip()
-
- @classmethod
- def get_subdirectory(cls, location: str) -> Optional[str]:
- """
- Return the path to Python project root, relative to the repo root.
- Return None if the project root is in the repo root.
- """
- # find the repo root
- git_dir = cls.run_command(
- ["rev-parse", "--git-dir"],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- ).strip()
- if not os.path.isabs(git_dir):
- git_dir = os.path.join(location, git_dir)
- repo_root = os.path.abspath(os.path.join(git_dir, ".."))
- return find_path_to_project_root_from_repo_root(location, repo_root)
-
- @classmethod
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
- """
- Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.
- That's required because although they use SSH they sometimes don't
- work with a ssh:// scheme (e.g. GitHub). But we need a scheme for
- parsing. Hence we remove it again afterwards and return it as a stub.
- """
- # Works around an apparent Git bug
- # (see https://article.gmane.org/gmane.comp.version-control.git/146500)
- scheme, netloc, path, query, fragment = urlsplit(url)
- if scheme.endswith("file"):
- initial_slashes = path[: -len(path.lstrip("/"))]
- newpath = initial_slashes + urllib.request.url2pathname(path).replace(
- "\\", "/"
- ).lstrip("/")
- after_plus = scheme.find("+") + 1
- url = scheme[:after_plus] + urlunsplit(
- (scheme[after_plus:], netloc, newpath, query, fragment),
- )
-
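-        # For example, 'git+git@github.com:user/repo.git' is temporarily
-        # rewritten to 'git+ssh://git@github.com:user/repo.git' so the parent
-        # class can parse it, and the 'ssh://' is then stripped back out of
-        # the returned URL.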
- if "://" not in url:
- assert "file:" not in url
- url = url.replace("git+", "git+ssh://")
- url, rev, user_pass = super().get_url_rev_and_auth(url)
- url = url.replace("ssh://", "")
- else:
- url, rev, user_pass = super().get_url_rev_and_auth(url)
-
- return url, rev, user_pass
-
- @classmethod
- def update_submodules(cls, location: str) -> None:
- if not os.path.exists(os.path.join(location, ".gitmodules")):
- return
- cls.run_command(
- ["submodule", "update", "--init", "--recursive", "-q"],
- cwd=location,
- )
-
- @classmethod
- def get_repository_root(cls, location: str) -> Optional[str]:
- loc = super().get_repository_root(location)
- if loc:
- return loc
- try:
- r = cls.run_command(
- ["rev-parse", "--show-toplevel"],
- cwd=location,
- show_stdout=False,
- stdout_only=True,
- on_returncode="raise",
- log_failed_cmd=False,
- )
- except BadCommand:
- logger.debug(
- "could not determine if %s is under git control "
- "because git is not available",
- location,
- )
- return None
- except InstallationError:
- return None
- return os.path.normpath(r.rstrip("\r\n"))
-
- @staticmethod
- def should_add_vcs_url_prefix(repo_url: str) -> bool:
- """In either https or ssh form, requirements must be prefixed with git+."""
- return True
-
-
-vcs.register(Git)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py
deleted file mode 100644
index 1989dfcd0855d833a75e403f6a5e88725d78022f..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import itertools
-import logging
-from collections import defaultdict
-from enum import Enum
-from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union
-import torch
-from fvcore.common.param_scheduler import CosineParamScheduler, MultiStepParamScheduler
-
-from detectron2.config import CfgNode
-
-from .lr_scheduler import LRMultiplier, WarmupParamScheduler
-
-_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]]
-_GradientClipper = Callable[[_GradientClipperInput], None]
-
-
-class GradientClipType(Enum):
- VALUE = "value"
- NORM = "norm"
-
-
-def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper:
- """
- Creates gradient clipping closure to clip by value or by norm,
- according to the provided config.
- """
- cfg = copy.deepcopy(cfg)
-
- def clip_grad_norm(p: _GradientClipperInput):
- torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE)
-
- def clip_grad_value(p: _GradientClipperInput):
- torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE)
-
- _GRADIENT_CLIP_TYPE_TO_CLIPPER = {
- GradientClipType.VALUE: clip_grad_value,
- GradientClipType.NORM: clip_grad_norm,
- }
- return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)]
-
-
-def _generate_optimizer_class_with_gradient_clipping(
- optimizer: Type[torch.optim.Optimizer],
- *,
- per_param_clipper: Optional[_GradientClipper] = None,
- global_clipper: Optional[_GradientClipper] = None,
-) -> Type[torch.optim.Optimizer]:
- """
- Dynamically creates a new type that inherits the type of a given instance
- and overrides the `step` method to add gradient clipping
- """
- assert (
- per_param_clipper is None or global_clipper is None
- ), "Not allowed to use both per-parameter clipping and global clipping"
-
- def optimizer_wgc_step(self, closure=None):
- if per_param_clipper is not None:
- for group in self.param_groups:
- for p in group["params"]:
- per_param_clipper(p)
- else:
- # global clipper for future use with detr
- # (https://github.com/facebookresearch/detr/pull/287)
- all_params = itertools.chain(*[g["params"] for g in self.param_groups])
- global_clipper(all_params)
- super(type(self), self).step(closure)
-
- OptimizerWithGradientClip = type(
- optimizer.__name__ + "WithGradientClip",
- (optimizer,),
- {"step": optimizer_wgc_step},
- )
- return OptimizerWithGradientClip
-
-
-def maybe_add_gradient_clipping(
- cfg: CfgNode, optimizer: Type[torch.optim.Optimizer]
-) -> Type[torch.optim.Optimizer]:
- """
- If gradient clipping is enabled through config options, wraps the existing
- optimizer type to become a new dynamically created class OptimizerWithGradientClip
- that inherits the given optimizer and overrides the `step` method to
- include gradient clipping.
-
- Args:
- cfg: CfgNode, configuration options
- optimizer: type. A subclass of torch.optim.Optimizer
-
- Return:
- type: either the input `optimizer` (if gradient clipping is disabled), or
- a subclass of it with gradient clipping included in the `step` method.
- """
- if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED:
- return optimizer
- if isinstance(optimizer, torch.optim.Optimizer):
- optimizer_type = type(optimizer)
- else:
- assert issubclass(optimizer, torch.optim.Optimizer), optimizer
- optimizer_type = optimizer
-
- grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS)
- OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping(
- optimizer_type, per_param_clipper=grad_clipper
- )
- if isinstance(optimizer, torch.optim.Optimizer):
- optimizer.__class__ = OptimizerWithGradientClip # a bit hacky, not recommended
- return optimizer
- else:
- return OptimizerWithGradientClip
-
-
-def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer:
- """
- Build an optimizer from config.
- """
- params = get_default_optimizer_params(
- model,
- base_lr=cfg.SOLVER.BASE_LR,
- weight_decay_norm=cfg.SOLVER.WEIGHT_DECAY_NORM,
- bias_lr_factor=cfg.SOLVER.BIAS_LR_FACTOR,
- weight_decay_bias=cfg.SOLVER.WEIGHT_DECAY_BIAS,
- )
- return maybe_add_gradient_clipping(cfg, torch.optim.SGD)(
- params,
- lr=cfg.SOLVER.BASE_LR,
- momentum=cfg.SOLVER.MOMENTUM,
- nesterov=cfg.SOLVER.NESTEROV,
- weight_decay=cfg.SOLVER.WEIGHT_DECAY,
- )
-
-
-def get_default_optimizer_params(
- model: torch.nn.Module,
- base_lr: Optional[float] = None,
- weight_decay: Optional[float] = None,
- weight_decay_norm: Optional[float] = None,
- bias_lr_factor: Optional[float] = 1.0,
- weight_decay_bias: Optional[float] = None,
- overrides: Optional[Dict[str, Dict[str, float]]] = None,
-) -> List[Dict[str, Any]]:
- """
- Get default param list for optimizer, with support for a few types of
- overrides. If no overrides needed, this is equivalent to `model.parameters()`.
-
- Args:
- base_lr: lr for every group by default. Can be omitted to use the one in optimizer.
- weight_decay: weight decay for every group by default. Can be omitted to use the one
- in optimizer.
- weight_decay_norm: override weight decay for params in normalization layers
- bias_lr_factor: multiplier of lr for bias parameters.
- weight_decay_bias: override weight decay for bias parameters
- overrides: if not `None`, provides values for optimizer hyperparameters
- (LR, weight decay) for module parameters with a given name; e.g.
- ``{"embedding": {"lr": 0.01, "weight_decay": 0.1}}`` will set the LR and
- weight decay values for all module parameters named `embedding`.
-
-    For common detection models, ``weight_decay_norm`` is the only option
-    that needs to be set. ``bias_lr_factor`` and ``weight_decay_bias`` are legacy settings
- from Detectron1 that are not found useful.
-
- Example:
- ::
- torch.optim.SGD(get_default_optimizer_params(model, weight_decay_norm=0),
- lr=0.01, weight_decay=1e-4, momentum=0.9)
- """
- if overrides is None:
- overrides = {}
- defaults = {}
- if base_lr is not None:
- defaults["lr"] = base_lr
- if weight_decay is not None:
- defaults["weight_decay"] = weight_decay
- bias_overrides = {}
- if bias_lr_factor is not None and bias_lr_factor != 1.0:
- # NOTE: unlike Detectron v1, we now by default make bias hyperparameters
- # exactly the same as regular weights.
- if base_lr is None:
- raise ValueError("bias_lr_factor requires base_lr")
- bias_overrides["lr"] = base_lr * bias_lr_factor
- if weight_decay_bias is not None:
- bias_overrides["weight_decay"] = weight_decay_bias
- if len(bias_overrides):
- if "bias" in overrides:
- raise ValueError("Conflicting overrides for 'bias'")
- overrides["bias"] = bias_overrides
-
- norm_module_types = (
- torch.nn.BatchNorm1d,
- torch.nn.BatchNorm2d,
- torch.nn.BatchNorm3d,
- torch.nn.SyncBatchNorm,
- # NaiveSyncBatchNorm inherits from BatchNorm2d
- torch.nn.GroupNorm,
- torch.nn.InstanceNorm1d,
- torch.nn.InstanceNorm2d,
- torch.nn.InstanceNorm3d,
- torch.nn.LayerNorm,
- torch.nn.LocalResponseNorm,
- )
- params: List[Dict[str, Any]] = []
- memo: Set[torch.nn.parameter.Parameter] = set()
- for module in model.modules():
- for module_param_name, value in module.named_parameters(recurse=False):
- if not value.requires_grad:
- continue
- # Avoid duplicating parameters
- if value in memo:
- continue
- memo.add(value)
-
- hyperparams = copy.copy(defaults)
- if isinstance(module, norm_module_types) and weight_decay_norm is not None:
- hyperparams["weight_decay"] = weight_decay_norm
- hyperparams.update(overrides.get(module_param_name, {}))
- params.append({"params": [value], **hyperparams})
- return reduce_param_groups(params)
-
-
-def _expand_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
- # Transform parameter groups into per-parameter structure.
- # Later items in `params` can overwrite parameters set in previous items.
- ret = defaultdict(dict)
- for item in params:
- assert "params" in item
- cur_params = {x: y for x, y in item.items() if x != "params"}
- for param in item["params"]:
- ret[param].update({"params": [param], **cur_params})
- return list(ret.values())
-
-
-def reduce_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
- # Reorganize the parameter groups and merge duplicated groups.
- # The number of parameter groups needs to be as small as possible in order
- # to efficiently use the PyTorch multi-tensor optimizer. Therefore instead
- # of using a parameter_group per single parameter, we reorganize the
- # parameter groups and merge duplicated groups. This approach speeds
- # up multi-tensor optimizer significantly.
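-    # For example, [{"params": [p1], "lr": 0.1}, {"params": [p2], "lr": 0.1}]
-    # is collapsed into the single group {"lr": 0.1, "params": [p1, p2]}.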
- params = _expand_param_groups(params)
- groups = defaultdict(list) # re-group all parameter groups by their hyperparams
- for item in params:
- cur_params = tuple((x, y) for x, y in item.items() if x != "params")
- groups[cur_params].extend(item["params"])
- ret = []
- for param_keys, param_values in groups.items():
- cur = {kv[0]: kv[1] for kv in param_keys}
- cur["params"] = param_values
- ret.append(cur)
- return ret
-
-
-def build_lr_scheduler(
- cfg: CfgNode, optimizer: torch.optim.Optimizer
-) -> torch.optim.lr_scheduler._LRScheduler:
- """
- Build a LR scheduler from config.
- """
- name = cfg.SOLVER.LR_SCHEDULER_NAME
-
- if name == "WarmupMultiStepLR":
- steps = [x for x in cfg.SOLVER.STEPS if x <= cfg.SOLVER.MAX_ITER]
- if len(steps) != len(cfg.SOLVER.STEPS):
- logger = logging.getLogger(__name__)
- logger.warning(
- "SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. "
- "These values will be ignored."
- )
- sched = MultiStepParamScheduler(
- values=[cfg.SOLVER.GAMMA ** k for k in range(len(steps) + 1)],
- milestones=steps,
- num_updates=cfg.SOLVER.MAX_ITER,
- )
- elif name == "WarmupCosineLR":
- sched = CosineParamScheduler(1, 0)
- else:
- raise ValueError("Unknown LR scheduler: {}".format(name))
-
- sched = WarmupParamScheduler(
- sched,
- cfg.SOLVER.WARMUP_FACTOR,
- min(cfg.SOLVER.WARMUP_ITERS / cfg.SOLVER.MAX_ITER, 1.0),
- cfg.SOLVER.WARMUP_METHOD,
- )
- return LRMultiplier(optimizer, multiplier=sched, max_iter=cfg.SOLVER.MAX_ITER)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py
deleted file mode 100644
index c1b03e4d1a703a417a83c2805be1ca15a4e458ed..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import os
-import tempfile
-import unittest
-
-from detectron2.utils.events import CommonMetricPrinter, EventStorage, JSONWriter
-
-
-class TestEventWriter(unittest.TestCase):
- def testScalar(self):
- with tempfile.TemporaryDirectory(
- prefix="detectron2_tests"
- ) as dir, EventStorage() as storage:
- json_file = os.path.join(dir, "test.json")
- writer = JSONWriter(json_file)
- for k in range(60):
- storage.put_scalar("key", k, smoothing_hint=False)
- if (k + 1) % 20 == 0:
- writer.write()
- storage.step()
- writer.close()
- with open(json_file) as f:
- data = [json.loads(l) for l in f]
- self.assertTrue([int(k["key"]) for k in data] == [19, 39, 59])
-
- def testScalarMismatchedPeriod(self):
- with tempfile.TemporaryDirectory(
- prefix="detectron2_tests"
- ) as dir, EventStorage() as storage:
- json_file = os.path.join(dir, "test.json")
-
- writer = JSONWriter(json_file)
- for k in range(60):
-                if k % 17 == 0:  # write in a different period
- storage.put_scalar("key2", k, smoothing_hint=False)
- storage.put_scalar("key", k, smoothing_hint=False)
- if (k + 1) % 20 == 0:
- writer.write()
- storage.step()
- writer.close()
- with open(json_file) as f:
- data = [json.loads(l) for l in f]
- self.assertTrue([int(k.get("key2", 0)) for k in data] == [17, 0, 34, 0, 51, 0])
- self.assertTrue([int(k.get("key", 0)) for k in data] == [0, 19, 0, 39, 0, 59])
- self.assertTrue([int(k["iteration"]) for k in data] == [17, 19, 34, 39, 51, 59])
-
- def testPrintETA(self):
- with EventStorage() as s:
- p1 = CommonMetricPrinter(10)
- p2 = CommonMetricPrinter()
-
- s.put_scalar("time", 1.0)
- s.step()
- s.put_scalar("time", 1.0)
- s.step()
-
- with self.assertLogs("detectron2.utils.events") as logs:
- p1.write()
- self.assertIn("eta", logs.output[0])
-
- with self.assertLogs("detectron2.utils.events") as logs:
- p2.write()
- self.assertNotIn("eta", logs.output[0])
diff --git a/spaces/AzulaFire/SparkDebate/README.md b/spaces/AzulaFire/SparkDebate/README.md
deleted file mode 100644
index 45bb5a1ae0f781cdbefe4a3ec9b74c4b12e84db8..0000000000000000000000000000000000000000
--- a/spaces/AzulaFire/SparkDebate/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SparkDebate
-emoji: 🌖
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-# Knowledge-base Q&A is temporarily unavailable because the deployment environment has no GPU (🥺
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Bart92/RVC_HF/demucs/parser.py b/spaces/Bart92/RVC_HF/demucs/parser.py
deleted file mode 100644
index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/demucs/parser.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-from pathlib import Path
-
-
-def get_parser():
- parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.")
- default_raw = None
- default_musdb = None
- if 'DEMUCS_RAW' in os.environ:
- default_raw = Path(os.environ['DEMUCS_RAW'])
- if 'DEMUCS_MUSDB' in os.environ:
- default_musdb = Path(os.environ['DEMUCS_MUSDB'])
- parser.add_argument(
- "--raw",
- type=Path,
- default=default_raw,
- help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.")
- parser.add_argument("--no_raw", action="store_const", const=None, dest="raw")
- parser.add_argument("-m",
- "--musdb",
- type=Path,
- default=default_musdb,
- help="Path to musdb root")
- parser.add_argument("--is_wav", action="store_true",
- help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).")
- parser.add_argument("--metadata", type=Path, default=Path("metadata/"),
- help="Folder where metadata information is stored.")
- parser.add_argument("--wav", type=Path,
- help="Path to a wav dataset. This should contain a 'train' and a 'valid' "
- "subfolder.")
- parser.add_argument("--samplerate", type=int, default=44100)
- parser.add_argument("--audio_channels", type=int, default=2)
- parser.add_argument("--samples",
- default=44100 * 10,
- type=int,
- help="number of samples to feed in")
- parser.add_argument("--data_stride",
- default=44100,
- type=int,
- help="Stride for chunks, shorter = longer epochs")
- parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers")
- parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers")
- parser.add_argument("-d",
- "--device",
- help="Device to train on, default is cuda if available else cpu")
- parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.")
- parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file")
- parser.add_argument("--test", help="Just run the test pipeline + one validation. "
- "This should be a filename relative to the models/ folder.")
- parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, "
- "on a pretrained model. ")
-
- parser.add_argument("--rank", default=0, type=int)
- parser.add_argument("--world_size", default=1, type=int)
- parser.add_argument("--master")
-
- parser.add_argument("--checkpoints",
- type=Path,
- default=Path("checkpoints"),
- help="Folder where to store checkpoints etc")
- parser.add_argument("--evals",
- type=Path,
- default=Path("evals"),
- help="Folder where to store evals and waveforms")
- parser.add_argument("--save",
- action="store_true",
- help="Save estimated for the test set waveforms")
- parser.add_argument("--logs",
- type=Path,
- default=Path("logs"),
- help="Folder where to store logs")
- parser.add_argument("--models",
- type=Path,
- default=Path("models"),
- help="Folder where to store trained models")
- parser.add_argument("-R",
- "--restart",
- action='store_true',
- help='Restart training, ignoring previous run')
-
- parser.add_argument("--seed", type=int, default=42)
- parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs")
- parser.add_argument("-r",
- "--repeat",
- type=int,
- default=2,
- help="Repeat the train set, longer epochs")
- parser.add_argument("-b", "--batch_size", type=int, default=64)
- parser.add_argument("--lr", type=float, default=3e-4)
- parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1")
- parser.add_argument("--init", help="Initialize from a pre-trained model.")
-
- # Augmentation options
- parser.add_argument("--no_augment",
- action="store_false",
- dest="augment",
- default=True,
- help="No basic data augmentation.")
- parser.add_argument("--repitch", type=float, default=0.2,
- help="Probability to do tempo/pitch change")
- parser.add_argument("--max_tempo", type=float, default=12,
- help="Maximum relative tempo change in %% when using repitch.")
-
- parser.add_argument("--remix_group_size",
- type=int,
- default=4,
-                        help="Shuffle sources using groups of this size. Useful to somewhat "
-                             "replicate multi-gpu training "
-                             "on fewer GPUs.")
- parser.add_argument("--shifts",
- type=int,
- default=10,
- help="Number of random shifts used for the shift trick.")
- parser.add_argument("--overlap",
- type=float,
- default=0.25,
- help="Overlap when --split_valid is passed.")
-
- # See model.py for doc
- parser.add_argument("--growth",
- type=float,
- default=2.,
- help="Number of channels between two layers will increase by this factor")
- parser.add_argument("--depth",
- type=int,
- default=6,
- help="Number of layers for the encoder and decoder")
- parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM")
- parser.add_argument("--channels",
- type=int,
- default=64,
- help="Number of channels for the first encoder layer")
- parser.add_argument("--kernel_size",
- type=int,
- default=8,
- help="Kernel size for the (transposed) convolutions")
- parser.add_argument("--conv_stride",
- type=int,
- default=4,
- help="Stride for the (transposed) convolutions")
- parser.add_argument("--context",
- type=int,
- default=3,
- help="Context size for the decoder convolutions "
- "before the transposed convolutions")
- parser.add_argument("--rescale",
- type=float,
- default=0.1,
- help="Initial weight rescale reference")
- parser.add_argument("--no_resample", action="store_false",
- default=True, dest="resample",
- help="No Resampling of the input/output x2")
- parser.add_argument("--no_glu",
- action="store_false",
- default=True,
- dest="glu",
- help="Replace all GLUs by ReLUs")
- parser.add_argument("--no_rewrite",
- action="store_false",
- default=True,
- dest="rewrite",
- help="No 1x1 rewrite convolutions")
- parser.add_argument("--normalize", action="store_true")
- parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True)
-
- # Tasnet options
- parser.add_argument("--tasnet", action="store_true")
- parser.add_argument("--split_valid",
- action="store_true",
- help="Predict chunks by chunks for valid and test. Required for tasnet")
- parser.add_argument("--X", type=int, default=8)
-
- # Other options
- parser.add_argument("--show",
- action="store_true",
- help="Show model architecture, size and exit")
- parser.add_argument("--save_model", action="store_true",
- help="Skip traning, just save final model "
-                        help="Skip training, just save final model "
- parser.add_argument("--save_state",
- help="Skip training, just save state "
- "for the current checkpoint value. You should "
- "provide a model name as argument.")
-
- # Quantization options
- parser.add_argument("--q-min-size", type=float, default=1,
- help="Only quantize layers over this size (in MB)")
- parser.add_argument(
- "--qat", type=int, help="If provided, use QAT training with that many bits.")
-
- parser.add_argument("--diffq", type=float, default=0)
- parser.add_argument(
- "--ms-target", type=float, default=162,
- help="Model size target in MB, when using DiffQ. Best model will be kept "
- "only if it is smaller than this target.")
-
- return parser
-
-
-def get_name(parser, args):
- """
- Return the name of an experiment given the args. Some parameters are ignored,
- for instance --workers, as they do not impact the final result.
- """
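-    # For example, overriding only --batch_size 32 and --mse yields the name
-    # "batch_size=32 mse=True"; if every argument keeps its default value,
-    # the name is simply "default".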
- ignore_args = set([
- "checkpoints",
- "deterministic",
- "eval",
- "evals",
- "eval_cpu",
- "eval_workers",
- "logs",
- "master",
- "rank",
- "restart",
- "save",
- "save_model",
- "save_state",
- "show",
- "workers",
- "world_size",
- ])
- parts = []
- name_args = dict(args.__dict__)
- for name, value in name_args.items():
- if name in ignore_args:
- continue
- if value != parser.get_default(name):
- if isinstance(value, Path):
- parts.append(f"{name}={value.name}")
- else:
- parts.append(f"{name}={value}")
- if parts:
- name = " ".join(parts)
- else:
- name = "default"
- return name
diff --git a/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md b/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md
deleted file mode 100644
index b2d5052820b0a79c844948232766446fa43d2496..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
Camioneros de Europa 3 Hack Mod APK: Cómo descargarlo e instalarlo
-
Si eres un fan de los juegos realistas de conducción de camiones, es posible que hayas oído hablar de Truckers of Europe 3, un popular juego de simulador que te permite experimentar la vida de un camionero en Europa. Pero lo que si quieres disfrutar del juego sin limitaciones o restricciones? Ahí es donde un mod hack APK viene muy bien. En este artículo, le diremos todo lo que necesita saber sobre Camioneros de Europa 3 hack mod APK, incluyendo lo que es, cómo descargar e instalar, y cómo usarlo. ¡Vamos a empezar!
Truckers of Europe 3 es un juego de simulador de conductor de camión realista desarrollado por Wanda Software. El juego cuenta con gráficos que están cerca de los de las computadoras de escritorio, y le permite conducir varios camiones a través de diferentes países europeos. Puede elegir entre diferentes tipos de carga, como contenedores, tuberías de acero, alimentos o cualquier producto útil, y entregarlos a sus destinos. También puede personalizar su camión con diferentes piezas, colores y accesorios. El juego también tiene física realista, efectos meteorológicos, tráfico y ciclos día-noche.
-
Características de los camioneros de Europa 3
-
Algunas de las principales características de Truckers of Europe 3 son:
-
-
Múltiples camiones para elegir, cada uno con sus propias características y rendimiento.
-
Diferentes tipos de carga a transportar, cada uno con su propio peso y tamaño.
-
Varias rutas y lugares para explorar, desde carreteras hasta caminos rurales.
-
Física de conducción y controles realistas, incluyendo volante, pedales, caja de cambios e indicadores.
-
Condiciones climáticas dinámicas y ciclos día-noche que afectan el juego.
-
Apariencia y accesorios de camiones personalizables, como pintura, ruedas, luces, cuernos y más.
-
Radio en el juego con varios géneros musicales y estaciones de noticias.
-
Logros y tablas de clasificación para competir con otros jugadores.
-
-
¿Por qué es posible que desee utilizar un Hack Mod APK
-
Si bien Truckers of Europe 3 es un juego divertido e inmersivo, también tiene algunos inconvenientes que podrían hacer que desee utilizar un mod hack APK. Algunos de estos inconvenientes son:
-
-
El juego requiere mucho espacio de almacenamiento y RAM para funcionar sin problemas.
-
El juego tiene anuncios que pueden interrumpir su juego o consumir sus datos.
-
El juego tiene compras en la aplicación que pueden darle una ventaja sobre otros jugadores o desbloquear más características.
-
El juego puede ser desafiante y frustrante a veces, especialmente cuando tienes que lidiar con el tráfico, accidentes, multas o carga dañada.
-
-
Un hack mod APK es una versión modificada del juego original que puede evitar estos inconvenientes y darle más libertad y disfrute. Un mod hack APK también puede proporcionar algunas características adicionales que no están disponibles en el juego original.
-
¿Qué es Camioneros de Europa 3 Hack Mod APK?
-
Camioneros de Europa 3 hack mod APK es una versión modificada del juego original que puede darle dinero ilimitado, desbloquear todos los camiones y accesorios, eliminar anuncios, y más. Con este hack mod APK, puede jugar el juego sin limitaciones o restricciones. Usted
Sin embargo, antes de decidirse a descargar e instalar camioneros de Europa 3 hack mod APK, también debe ser consciente de los riesgos potenciales y desventajas que vienen con él.
-
Beneficios de los camioneros de Europa 3 Hack Mod APK
-
Algunos de los beneficios de usar camioneros de Europa 3 hack mod APK son:
-
-
Puede obtener dinero ilimitado que puede utilizar para comprar y actualizar cualquier camión o accesorio que desee.
-
Puede desbloquear todos los camiones y accesorios que están bloqueados o requieren dinero real para comprar.
-
Puede eliminar los anuncios que pueden ser molestos o distraer mientras juega el juego.
-
Puedes disfrutar del juego sin ninguna dificultad o frustración, ya que puedes evitar el tráfico, accidentes, multas o carga dañada.
-
-
-
Riesgos de los camioneros de Europa 3 Hack Mod APK
-
Algunos de los riesgos de usar camioneros de Europa 3 hack mod APK son:
-
-
-
Puede exponer su dispositivo a malware o virus que pueden dañar sus datos o sistema.
-
Puede violar los términos y condiciones del juego original y obtener prohibido o suspendido de jugar.
-
Puede perder su progreso o datos si el mod hack APK no es compatible con la última versión del juego o su dispositivo.
-
Puedes arruinar la diversión y el desafío del juego, ya que puedes completar fácilmente todas las misiones y logros sin ningún esfuerzo.
-
Puedes perderte las actualizaciones y nuevas características que los desarrolladores agregan al juego original regularmente.
-
-
Cómo descargar e instalar camioneros de Europa 3 Hack Mod APK
-
Si todavía quieres probar Camioneros de Europa 3 hack mod APK, es necesario seguir algunos pasos para descargar e instalar en su dispositivo. Estos son los pasos:
-
Paso 1: Encontrar una fuente confiable
-
El primer paso es encontrar una fuente confiable que proporciona el enlace de descarga para camioneros de Europa 3 hack mod APK. Puedes buscar en línea sitios web o foros que ofrecen este servicio, pero ten cuidado de no hacer clic en enlaces sospechosos o falsos que puedan dañar tu dispositivo. También puede comprobar las revisiones y calificaciones de otros usuarios que han descargado el mod hack APK de la misma fuente. Una buena fuente debe proporcionar un enlace de descarga seguro y de trabajo, así como una descripción detallada e instrucciones para usar el mod hack APK.
-
Paso 2: Habilitar fuentes desconocidas en su dispositivo
-
-
Paso 3: Descargar el archivo APK
-
El tercer paso es descargar el archivo APK de la fuente que ha elegido. Puede usar su navegador o una aplicación de administrador de descargas para hacer esto. Asegúrate de tener suficiente espacio de almacenamiento en tu dispositivo antes de descargar el archivo. El tamaño del archivo puede variar dependiendo de la fuente y la versión del mod hack APK. Una vez completada la descarga, puedes encontrar el archivo en tu carpeta de descargas o donde sea que lo hayas guardado.
-
Paso 4: Instalar el archivo APK
-
El paso final es instalar el archivo APK en su dispositivo. Para hacer esto, toque en el archivo y siga las instrucciones de instalación. Es posible que vea un mensaje de advertencia de que instalar esta aplicación podría dañar su dispositivo, pero puede ignorarlo si confía en la fuente que está utilizando. También es posible que tenga que permitir algunos permisos para que la aplicación funcione correctamente. Una vez que la instalación se ha completado, puede iniciar la aplicación y disfrutar de los camioneros de Europa 3 hack mod APK.
-
How to Use Truckers of Europe 3 Hack Mod APK
-
Now that you have downloaded and installed the Truckers of Europe 3 hack mod APK, you may be wondering how to use it. Here are some tips and tricks for playing Truckers of Europe 3 with the hack mod APK:
-
Tips and tricks for playing Truckers of Europe 3 with the hack mod APK
-
-
Use the unlimited money feature to buy and upgrade any truck or accessory you want. You can also use it to pay any fines or repair any damage you may suffer while driving.
-
Use the unlock-all-trucks-and-accessories feature to try different trucks and customize them to your preference. You can also switch between trucks without losing your progress or cargo.
-
Use the ad-removal feature to play the game without interruptions or distractions. You can also save data and battery by not loading any ads.
-
-
Use the speed hack feature to drive faster than the normal speed limit. You can also overtake other vehicles and reach your destination sooner. However, be careful not to crash or get caught by the police.
-
Use the free map navigation feature to find the best route to your destination. You can also check traffic and road conditions and avoid delays or detours.
-
-
Conclusion
-
Truckers of Europe 3 is a realistic truck driver simulator game that lets you experience the life of a trucker in Europe. However, if you want to play the game without limitations or restrictions, you can use a hack mod APK that can give you unlimited money, unlock all trucks and accessories, remove ads, and more. In this article, we have explained what the Truckers of Europe 3 hack mod APK is, how to download and install it, and how to use it. We have also provided some tips and tricks for playing Truckers of Europe 3 with the hack mod APK. However, we have also warned you about the possible risks and drawbacks of using a hack mod APK, such as malware, bans, data loss, or loss of fun. Therefore, we advise you to use a hack mod APK at your own risk and discretion. We hope you found this article helpful and informative. Happy trucking!
-
Frequently Asked Questions
-
Here are some frequently asked questions about the Truckers of Europe 3 hack mod APK:
-
-
Is the Truckers of Europe 3 hack mod APK safe to use?
-
The Truckers of Europe 3 hack mod APK is not an official app from the developers of the original game, so its safety is not guaranteed. It may contain malware or viruses that could damage your device or data. It could also violate the terms and conditions of the original game and get you banned or suspended from playing. Therefore, you should use a hack mod APK at your own risk and discretion.
-
How do I update the Truckers of Europe 3 hack mod APK?
-
The hack mod APK does not update through the official store. When the original game is updated, you would have to wait for the source to publish a compatible version and reinstall it, and your progress may not carry over.
-
Can I play the Truckers of Europe 3 hack mod APK online?
-
The Truckers of Europe 3 hack mod APK might not work well with the online mode of the original game. You may run into errors or technical issues when playing online with other players. You may also be detected by the game's anti-cheat system and get banned or suspended from playing. Therefore, we recommend playing the Truckers of Europe 3 hack mod APK offline or in single-player mode.
-
Can I use the Truckers of Europe 3 hack mod APK on iOS devices?
-
The Truckers of Europe 3 hack mod APK is an Android app that can only be installed on Android devices, so you cannot use it on iOS devices such as iPhones or iPads. You may find alternative ways to run a hack mod APK on iOS, such as using an emulator or a jailbreak tool, but these methods are not recommended because they could damage your device or data.
-
Can I use the Truckers of Europe 3 hack mod APK on a PC?
-
The Truckers of Europe 3 hack mod APK is an Android app that can only be installed on Android devices, so you cannot use it on a PC directly. However, you can find ways to run a hack mod APK on a PC, such as using an emulator or a virtual machine. These methods let you run Android apps on a PC by simulating an Android environment, but they are not guaranteed to work well or smoothly.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md b/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md
deleted file mode 100644
index 511352b9c63af7857d3b4d3d912d62622471fc94..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Download Crafting and Building APK - A Free Building Game for Android
-
Do you like building games? Do you want to unleash your creativity and imagination? Do you want to have fun with your friends and family? If you answered yes to any of these questions, then you should try Crafting and Building APK, a new free building game for Android devices. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, why you should play it, how to play it, and what some alternatives are. Let's get started!
-
What is Crafting and Building APK?
-
A brief introduction to the game and its features
-
Crafting and Building APK is a free game for Android devices that lets you build your own constructions, from houses and castles to mines and temples. You can also explore worlds created by other players, interact with villagers and animals, customize your character, and play online with your friends. The game has many features that make it fun and engaging, such as:
Tap the file and allow installation from unknown sources if prompted.
-
Wait for the installation process to finish, then launch the game from the app drawer or home screen.
-
Enjoy playing Crafting and Building APK on your device!
-
-
Why should you play Crafting and Building APK?
-
The benefits of playing this game for different age groups
-
Crafting and Building APK is a game that can be enjoyed by anyone, regardless of age or background. Here are some of the benefits of playing this game for different age groups:
-
-
For children: Playing this game can help children develop their creativity, imagination, problem-solving skills, spatial awareness, hand-eye coordination, and fine motor skills. It can also encourage their curiosity, interest, and passion for learning new things.
-
The fun and creative aspects of the game
-
Crafting and Building APK is a game that can also provide plenty of fun and creative opportunities for players. Here are some of the fun and creative aspects of the game:
-
-
You can build whatever you want, from simple houses and farms to complex cities and monuments. You can also decorate your buildings with furniture, paintings, carpets, and more.
-
You can explore worlds created by other players, discover new places, and admire their creations. You can also interact with villagers and animals, trade with them, or fight them.
-
You can customize your character with different skins, clothes, accessories, and hairstyles. You can also change the appearance of your pets and vehicles.
-
You can play online with your friends and family, chat with them, collaborate with them, or compete with them. You can also join or create your own servers and communities.
-
-
Multiplayer mode and social interaction with other players
-
Crafting and Building APK is a game that you can also enjoy together with other players. Here are some of its multiplayer and social features:
-
-
You can play online with up to 10 players at the same time, in either cooperative or competitive mode. You can also invite your friends to join your game or join theirs.
-
You can chat with other players using text or voice messages. You can also use emojis and stickers to express your emotions and reactions.
-
You can share your creations with other players, rate their creations, comment on them, or follow them. You can also get feedback and suggestions from other players on how to improve your skills and experience.
-
You can join or create your own servers and communities, where you can meet new people, make friends, or find partners. You can also take part in events, contests, challenges, or minigames organized by other players or by yourself.
-
-
How do you play Crafting and Building APK?
-
Basic gameplay and controls
-
Crafting and Building APK is a game that is easy to play and control. Here are the basics of the gameplay and controls:
-
-
To move, use the joystick on the left side of the screen. To look around, swipe on the right side of the screen.
-
To jump, tap the jump button on the right side of the screen. To fly, double-tap the jump button and then use the joystick to control your direction.
-
To select a block type, tap the inventory button on the right side of the screen and then choose from the available blocks. To place a block, tap on the screen where you want to put it. To break a block, tap and hold on the screen where you want to break it.
-
To interact with an object, such as a door, a chest, or an animal, tap on it. To use an item, such as a sword, a bow, or a potion, select it from your inventory and then tap on the screen where you want to use it.
-
-
-
The different modes and options available
-
Crafting and Building APK is a game that offers different modes and options for players to choose from. Here are some of them:
-
-
Creative mode: In this mode, you have unlimited resources and can build whatever you want without restrictions. You can also fly and access all the blocks and items in the game.
-
Survival mode: In this mode, you have to gather resources, craft tools and weapons, and survive the dangers of the world. You also have to watch your health, hunger, and thirst levels, and you can fight enemies such as zombies, spiders, and skeletons.
-
Adventure mode: In this mode, you can explore worlds created by other players, complete quests, and collect rewards. You can also interact with villagers and animals, trade with them, or fight them.
-
Multiplayer mode: In this mode, you can play online with other players, in either cooperative or competitive mode. You can also chat with them, share your creations, or join their servers and communities.
-
Options: In this menu, you can change the game settings, such as sound, graphics, language, and controls. You can also access the help section, where you can find tutorials, tips, and FAQs.
-
-
Some tips and tricks to improve your skills and experience
-
Crafting and Building APK is a game that can be challenging and rewarding at the same time. Here are some tips and tricks to improve your skills and experience:
-
-
Use the crafting table to create new items, such as tools, weapons, armor, and furniture. You can also use the furnace to smelt ores and cook food.
-
Use the map to navigate the world and find your location. You can also use the compass to find your spawn point or your home.
-
-
Use torches to light up your buildings and prevent monsters from spawning. You can also use torches to mark your path or your territory.
-
Use chests to store your items and keep them safe. You can also use signs to label your chests or your buildings.
-
Use ladders to climb up or down walls and towers. You can also use stairs or slabs to create slopes or roofs.
-
Use doors to secure your entrances and exits. You can also use trapdoors to create secret passages or hidden rooms.
-
Use fences to enclose your farms or gardens. You can also use gates to access them.
-
Use buckets to collect water or lava. You can also use buckets to create fountains or pools.
-
Use seeds to grow crops or flowers. You can also use bone meal to speed up their growth.
-
-
What are some alternatives to Crafting and Building APK?
-
A comparison of similar games on the market
-
Crafting and Building APK is not the only building game on the market. There are many other similar games you can try if you are looking for more options. Here are some of them:
-
-
-
| Game | Description | Price |
| --- | --- | --- |
| Minecraft | The most popular building game in the world, where you can create whatever you want with blocks in a sandbox world. You can also play online with millions of players. | $6.99 |
| Roblox | A platform where you can play millions of games created by other users or create your own games with Roblox Studio. You can also customize your avatar and chat with other players. | Free (with in-app purchases) |
| Terraria | A 2D adventure game where you can explore, build, craft, fight, and survive in a randomly generated world. You can also play online with up to 8 players. | $4.99 |
| … | …with animals and plants. You can also visit other islands and play with other players. | Free (with in-app purchases) |
-
-
The pros and cons of each alternative
-
Each of these games has its own pros and cons that you should consider before choosing one. Here are some of them:
-
-
| Game | Pros | Cons |
| --- | --- | --- |
| Minecraft | Huge and loyal fan base; a lot of content and updates; a wealth of mods and plugins; a lot of educational and creative potential | Requires a paid account to play online; can be slow or buggy on some devices; can be too complex or overwhelming for some players; can be addictive or harmful for some players |
| Roblox | A lot of variety and diversity; a lot of user-generated content; many social features and options; many opportunities to learn and earn | A lot of inappropriate or unsafe content; a lot of ads and microtransactions; a lot of hackers and scammers; a lot of technical problems and glitches |
| Terraria | A lot of depth and detail; a lot of exploration and adventure; a lot of customization and personalization; a lot of challenge and replay value | Steep learning curve; limited world size; limited multiplayer mode; limited graphics quality |
| Sandbox 3D | Simple and intuitive interface; colorful and vibrant graphics; relaxing and casual gameplay; friendly and supportive community | Limited set of block types and items; limited game modes and options; limited online features and functions; limited development and support |
-
-
-
The best option for your preferences and needs
-
The best option for your preferences and needs depends on what you are looking for in a building game. Here are some questions you can ask yourself to help you decide:
-
-
Do you want to play online or offline?
-
Do you want to play alone or with others?
-
Do you want to build or explore?
-
Do you want to create or consume?
-
Do you want to learn or just have fun?
-
Do you want to pay or play for free?
-
Do you want more control or more freedom?
-
Do you want more realism or more fantasy?
-
Do you want more simplicity or more complexity?
-
Do you want more quality or more quantity?
-
-
Based on your answers, you can compare the different games and choose the one that suits you best. Of course, you can also try them all and see which one you like most. The choice is yours!
-
Conclusion
-
A summary of the main points and a call to action
Crafting and Building APK is a free building game for Android devices that is easy to play and control, and it offers different modes and options for players to choose from. It is also a game with many fun and creative aspects, such as building whatever you want, exploring the world, customizing your character, and playing online with your friends. However, if you are looking for more alternatives, you can also try other similar games on the market, such as Minecraft, Roblox, Terraria, Sandbox 3D, and Crafty Lands. Each of these games has its own pros and cons that you should consider before choosing one. The best option for your preferences and needs depends on what you are looking for in a building game. We hope this article has helped you learn more about Crafting and Building APK and decide whether you want to download and install it on your device. If you do, we hope you have a lot of fun and creativity with this game. Thank you for reading!
-
Frequently Asked Questions
-
-
-
Is Crafting and Building APK safe to download and install?
-You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
-"T4 small" is sufficient to run this demo.
-
-'''
-
-HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it.
-
-You can check and create your Hugging Face tokens here.
-You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
-
-'''
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def show_warning(warning_text: str) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown(warning_text)
- return demo
-
-
-pipe = InferencePipeline(HF_TOKEN)
-trainer = Trainer(HF_TOKEN)
-
-with gr.Blocks(css='style.css') as demo:
- if os.getenv('IS_SHARED_UI'):
- show_warning(SHARED_UI_WARNING)
- if not torch.cuda.is_available():
- show_warning(CUDA_NOT_AVAILABLE_WARNING)
- if not HF_TOKEN:
- show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
-
- gr.Markdown(TITLE)
- with gr.Tabs():
- with gr.TabItem('Train'):
- create_training_demo(trainer, pipe)
- with gr.TabItem('Test'):
- create_inference_demo(pipe, HF_TOKEN)
- with gr.TabItem('Upload'):
- gr.Markdown('''
- - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed.
- ''')
- create_upload_demo(HF_TOKEN)
-
-demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/Eddycrack864/Applio-Inference/julius/bands.py b/spaces/Eddycrack864/Applio-Inference/julius/bands.py
deleted file mode 100644
index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/julius/bands.py
+++ /dev/null
@@ -1,119 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-"""
-Decomposition of a signal over frequency bands in the waveform domain.
-"""
-from typing import Optional, Sequence
-import torch
-
-from .core import mel_frequencies
-from .lowpass import LowPassFilters
-from .utils import simple_repr
-
-
-class SplitBands(torch.nn.Module):
- """
- Decomposes a signal over the given frequency bands in the waveform domain using
- a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`.
- You can either specify the frequency cutoffs explicitly, or just the number of bands,
- in which case the frequency cutoffs will be spread out evenly in mel scale.
-
- Args:
- sample_rate (float): Sample rate of the input signal in Hz.
- n_bands (int or None): number of bands, when not giving them explicitly with `cutoffs`.
- In that case, the cutoff frequencies will be evenly spaced in mel-space.
- cutoffs (list[float] or None): list of frequency cutoffs in Hz.
- pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
- the output will have the same length as the input.
- zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more information.
- fft (bool or None): See `LowPassFilters` for more info.
-
- ..note::
- The sum of all the bands will always be the input signal.
-
- ..warning::
- Unlike `julius.lowpass.LowPassFilters`, the cutoffs frequencies must be provided in Hz along
- with the sample rate.
-
- Shape:
-
- - Input: `[*, T]`
- - Output: `[B, *, T']`, with `T'=T` if `pad` is True.
- If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1`
-
- >>> bands = SplitBands(sample_rate=128, n_bands=10)
- >>> x = torch.randn(6, 4, 1024)
- >>> list(bands(x).shape)
- [10, 6, 4, 1024]
- """
-
- def __init__(self, sample_rate: float, n_bands: Optional[int] = None,
- cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- super().__init__()
- if (cutoffs is None) + (n_bands is None) != 1:
- raise ValueError("You must provide either n_bands, or cutoffs, but not boths.")
-
- self.sample_rate = sample_rate
- self.n_bands = n_bands
- self._cutoffs = list(cutoffs) if cutoffs is not None else None
- self.pad = pad
- self.zeros = zeros
- self.fft = fft
-
- if cutoffs is None:
- if n_bands is None:
- raise ValueError("You must provide one of n_bands or cutoffs.")
- if not n_bands >= 1:
- raise ValueError(f"n_bands must be greater than one (got {n_bands})")
- cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1]
- else:
- if max(cutoffs) > 0.5 * sample_rate:
- raise ValueError("A cutoff above sample_rate/2 does not make sense.")
- if len(cutoffs) > 0:
- self.lowpass = LowPassFilters(
- [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft)
- else:
- # Here I cannot make both TorchScript and MyPy happy.
- # I miss the good old times, before all this madness was created.
- self.lowpass = None # type: ignore
-
- def forward(self, input):
- if self.lowpass is None:
- return input[None]
- lows = self.lowpass(input)
- low = lows[0]
- bands = [low]
- for low_and_band in lows[1:]:
- # Get a bandpass filter by subtracting lowpasses
- band = low_and_band - low
- bands.append(band)
- low = low_and_band
- # Last band is whatever is left in the signal
- bands.append(input - low)
- return torch.stack(bands)
-
- @property
- def cutoffs(self):
- if self._cutoffs is not None:
- return self._cutoffs
- elif self.lowpass is not None:
- return [c * self.sample_rate for c in self.lowpass.cutoffs]
- else:
- return []
-
- def __repr__(self):
- return simple_repr(self, overrides={"cutoffs": self._cutoffs})
-
-
-def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None,
- cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- """
- Functional version of `SplitBands`, refer to this class for more information.
-
- >>> x = torch.randn(6, 4, 1024)
- >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape)
- [3, 6, 4, 1024]
- """
- return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal)
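The note in the `SplitBands` docstring says the sum of all bands reconstructs the input signal. A minimal sketch of how one might check that property, assuming the `julius` package (or this vendored copy) is importable:

```python
import torch
from julius.bands import split_bands

x = torch.randn(2, 1, 4096)  # [batch, channels, time]
bands = split_bands(x, sample_rate=16000, cutoffs=[500.0, 2000.0, 6000.0])
print(list(bands.shape))  # [4, 2, 1, 4096] -- len(cutoffs) + 1 bands, same length because pad=True
print(torch.allclose(bands.sum(dim=0), x, atol=1e-4))  # the bands sum back to the input
```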
diff --git a/spaces/Egrt/GCycleGAN/utils/__init__.py b/spaces/Egrt/GCycleGAN/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py
deleted file mode 100644
index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_33966KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16, 32)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py
deleted file mode 100644
index a3aff0d1325d3b9e25b5ed095cea28d313f611a0..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py
+++ /dev/null
@@ -1,51 +0,0 @@
-model_poly = dict(
- type='PSENet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPNF',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- fusion_type='concat'),
- bbox_head=dict(
- type='PSEHead',
- in_channels=[256],
- out_channels=7,
- loss=dict(type='PSELoss'),
- postprocessor=dict(type='PSEPostprocessor', text_repr_type='poly')),
- train_cfg=None,
- test_cfg=None)
-
-model_quad = dict(
- type='PSENet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPNF',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- fusion_type='concat'),
- bbox_head=dict(
- type='PSEHead',
- in_channels=[256],
- out_channels=7,
- loss=dict(type='PSELoss'),
- postprocessor=dict(type='PSEPostprocessor', text_repr_type='quad')),
- train_cfg=None,
- test_cfg=None)
diff --git a/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md b/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md
deleted file mode 100644
index fbae6272b776115559559e68e20919222cc53a5a..0000000000000000000000000000000000000000
--- a/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: ID Document Recognition SDK
-emoji: 🚀
-colorFrom: purple
-colorTo: pink
-sdk: docker
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py
deleted file mode 100644
index 55e60f7a136b4e312ae7a19e4b5206316b201cf3..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py
+++ /dev/null
@@ -1,399 +0,0 @@
-#!/usr/bin/env python3
-# Portions Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torchaudio
-import logging
-
-from .models.multimodal_preprocessors import SimpleTokenizer
-from PIL import Image
-from pytorchvideo import transforms as pv_transforms
-from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler
-from pytorchvideo.data.encoded_video import EncodedVideo
-
-from torchvision import transforms
-from torchvision.transforms._transforms_video import NormalizeVideo
-
-DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds
-
-BPE_PATH = "./model/ImageBind/bpe/bpe_simple_vocab_16e6.txt.gz"
-
-
-def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length):
- # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102
- waveform -= waveform.mean()
- fbank = torchaudio.compliance.kaldi.fbank(
- waveform,
- htk_compat=True,
- sample_frequency=sample_rate,
- use_energy=False,
- window_type="hanning",
- num_mel_bins=num_mel_bins,
- dither=0.0,
- frame_length=25,
- frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS,
- )
- # Convert to [mel_bins, num_frames] shape
- fbank = fbank.transpose(0, 1)
- # Pad to target_length
- n_frames = fbank.size(1)
- p = target_length - n_frames
- # if p is too large (say >20%), flash a warning
- if abs(p) / n_frames > 0.2:
- logging.warning(
- "Large gap between audio n_frames(%d) and "
- "target_length (%d). Is the audio_target_length "
- "setting correct?",
- n_frames,
- target_length,
- )
- # cut and pad
- if p > 0:
- fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0)
- elif p < 0:
- fbank = fbank[:, 0:target_length]
- # Convert to [1, mel_bins, num_frames] shape, essentially like a 1
- # channel image
- fbank = fbank.unsqueeze(0)
- return fbank
-
-
-def get_clip_timepoints(clip_sampler, duration):
- # Read out all clips in this video
- all_clips_timepoints = []
- is_last_clip = False
- end = 0.0
- while not is_last_clip:
- start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None)
- all_clips_timepoints.append((start, end))
- return all_clips_timepoints
-
-
-def load_and_transform_vision_data(image_paths, device):
- if image_paths is None:
- return None
-
- image_ouputs = []
- for image_path in image_paths:
- data_transform = transforms.Compose(
- [
- transforms.Resize(
- 224, interpolation=transforms.InterpolationMode.BICUBIC
- ),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- transforms.Normalize(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
- with open(image_path, "rb") as fopen:
- image = Image.open(fopen).convert("RGB")
-
- image = data_transform(image).to(device)
- image_ouputs.append(image)
- return torch.stack(image_ouputs, dim=0)
-
-
-def load_and_transform_vision_data_for_web_demo(image_paths, device):
- if image_paths is None:
- return None
-
- image_ouputs = []
- for image_path in image_paths:
- data_transform = transforms.Compose(
- [
- transforms.Resize(
- (224,224), interpolation=transforms.InterpolationMode.BICUBIC
- ),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- transforms.Normalize(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
- with open(image_path, "rb") as fopen:
- image = Image.open(fopen).convert("RGB")
-
- image = data_transform(image).to(device)
- image_ouputs.append(image)
- return torch.stack(image_ouputs, dim=0)
-
-
-def load_and_transform_thermal_data(thermal_paths, device):
- if thermal_paths is None:
- return None
-
- thermal_ouputs = []
- for thermal_path in thermal_paths:
- data_transform = transforms.Compose(
- [
- transforms.Resize(
- 224, interpolation=transforms.InterpolationMode.BICUBIC
- ),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- ]
- )
- with open(thermal_path, "rb") as fopen:
- thermal = Image.open(fopen).convert("L")
- thermal = data_transform(thermal).to(device)
- thermal_ouputs.append(thermal)
- return torch.stack(thermal_ouputs, dim=0)
-
-
-def load_and_transform_text(text, device):
- if text is None:
- return None
- tokenizer = SimpleTokenizer(bpe_path=BPE_PATH)
- tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text]
- tokens = torch.cat(tokens, dim=0)
- return tokens
-
-
-def load_and_transform_audio_data(
- audio_paths,
- device,
- num_mel_bins=128,
- target_length=204,
- sample_rate=16000,
- clip_duration=2,
- clips_per_video=3,
- mean=-4.268,
- std=9.138,
-):
- if audio_paths is None:
- return None
-
- audio_outputs = []
- clip_sampler = ConstantClipsPerVideoSampler(
- clip_duration=clip_duration, clips_per_video=clips_per_video
- )
-
- for audio_path in audio_paths:
- waveform, sr = torchaudio.load(audio_path)
- if sample_rate != sr:
- waveform = torchaudio.functional.resample(
- waveform, orig_freq=sr, new_freq=sample_rate
- )
- all_clips_timepoints = get_clip_timepoints(
- clip_sampler, waveform.size(1) / sample_rate
- )
- all_clips = []
- for clip_timepoints in all_clips_timepoints:
- waveform_clip = waveform[
- :,
- int(clip_timepoints[0] * sample_rate) : int(
- clip_timepoints[1] * sample_rate
- ),
- ]
- waveform_melspec = waveform2melspec(
- waveform_clip, sample_rate, num_mel_bins, target_length
- )
- all_clips.append(waveform_melspec)
-
- normalize = transforms.Normalize(mean=mean, std=std)
- all_clips = [normalize(ac).to(device) for ac in all_clips]
-
- all_clips = torch.stack(all_clips, dim=0)
- audio_outputs.append(all_clips)
-
- return torch.stack(audio_outputs, dim=0)
-
-
-def get_clip_timepoints(clip_sampler, duration):
- # Read out all clips in this video
- all_clips_timepoints = []
- is_last_clip = False
- end = 0.0
- while not is_last_clip:
- start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None)
- all_clips_timepoints.append((start, end))
- return all_clips_timepoints
-
-
-def crop_boxes(boxes, x_offset, y_offset):
- """
- Perform crop on the bounding boxes given the offsets.
- Args:
- boxes (ndarray or None): bounding boxes to perform crop. The dimension
- is `num boxes` x 4.
- x_offset (int): cropping offset in the x axis.
- y_offset (int): cropping offset in the y axis.
- Returns:
- cropped_boxes (ndarray or None): the cropped boxes with dimension of
- `num boxes` x 4.
- """
- cropped_boxes = boxes.copy()
- cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset
- cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset
-
- return cropped_boxes
-
-
-def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None):
- """
- Perform uniform spatial sampling on the images and corresponding boxes.
- Args:
- images (tensor): images to perform uniform crop. The dimension is
- `num frames` x `channel` x `height` x `width`.
- size (int): size of height and width to crop the images.
- spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width
- is larger than height. Or 0, 1, or 2 for top, center, and bottom
- crop if height is larger than width.
- boxes (ndarray or None): optional. Corresponding boxes to images.
- Dimension is `num boxes` x 4.
- scale_size (int): optional. If not None, resize the images to scale_size before
- performing any crop.
- Returns:
- cropped (tensor): images with dimension of
- `num frames` x `channel` x `size` x `size`.
- cropped_boxes (ndarray or None): the cropped boxes with dimension of
- `num boxes` x 4.
- """
- assert spatial_idx in [0, 1, 2]
- ndim = len(images.shape)
- if ndim == 3:
- images = images.unsqueeze(0)
- height = images.shape[2]
- width = images.shape[3]
-
- if scale_size is not None:
- if width <= height:
- width, height = scale_size, int(height / width * scale_size)
- else:
- width, height = int(width / height * scale_size), scale_size
- images = torch.nn.functional.interpolate(
- images,
- size=(height, width),
- mode="bilinear",
- align_corners=False,
- )
-
- y_offset = int(math.ceil((height - size) / 2))
- x_offset = int(math.ceil((width - size) / 2))
-
- if height > width:
- if spatial_idx == 0:
- y_offset = 0
- elif spatial_idx == 2:
- y_offset = height - size
- else:
- if spatial_idx == 0:
- x_offset = 0
- elif spatial_idx == 2:
- x_offset = width - size
- cropped = images[:, :, y_offset : y_offset + size, x_offset : x_offset + size]
- cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None
- if ndim == 3:
- cropped = cropped.squeeze(0)
- return cropped, cropped_boxes
-
-
-class SpatialCrop(nn.Module):
- """
- Convert the video into 3 smaller clips spatially. Must be used after the
- temporal crops to get spatial crops, and should be used with
- -2 in the spatial crop at the slowfast augmentation stage (so full
- frames are passed in here). Will return a larger list with the
- 3x spatial crops as well.
- """
-
- def __init__(self, crop_size: int = 224, num_crops: int = 3):
- super().__init__()
- self.crop_size = crop_size
- if num_crops == 3:
- self.crops_to_ext = [0, 1, 2]
- self.flipped_crops_to_ext = []
- elif num_crops == 1:
- self.crops_to_ext = [1]
- self.flipped_crops_to_ext = []
- else:
- raise NotImplementedError("Nothing else supported yet")
-
- def forward(self, videos):
- """
- Args:
- videos: A list of C, T, H, W videos.
- Returns:
- videos: A list with 3x the number of elements. Each video converted
- to C, T, H', W' by spatial cropping.
- """
- assert isinstance(videos, list), "Must be a list of videos after temporal crops"
- assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)"
- res = []
- for video in videos:
- for spatial_idx in self.crops_to_ext:
- res.append(uniform_crop(video, self.crop_size, spatial_idx)[0])
- if not self.flipped_crops_to_ext:
- continue
- flipped_video = transforms.functional.hflip(video)
- for spatial_idx in self.flipped_crops_to_ext:
- res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0])
- return res
-
-
-def load_and_transform_video_data(
- video_paths,
- device,
- clip_duration=2,
- clips_per_video=5,
- sample_rate=16000,
-):
- if video_paths is None:
- return None
-
- video_outputs = []
- video_transform = transforms.Compose(
- [
- pv_transforms.ShortSideScale(224),
- NormalizeVideo(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
-
- clip_sampler = ConstantClipsPerVideoSampler(
- clip_duration=clip_duration, clips_per_video=clips_per_video
- )
- frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration)
-
- for video_path in video_paths:
- video = EncodedVideo.from_path(
- video_path,
- decoder="decord",
- decode_audio=False,
- **{"sample_rate": sample_rate},
- )
-
- all_clips_timepoints = get_clip_timepoints(clip_sampler, video.duration)
-
- all_video = []
- for clip_timepoints in all_clips_timepoints:
- # Read the clip, get frames
- clip = video.get_clip(clip_timepoints[0], clip_timepoints[1])
- if clip is None:
- raise ValueError("No clip found")
- video_clip = frame_sampler(clip["video"])
- video_clip = video_clip / 255.0 # since this is float, need 0-1
-
- all_video.append(video_clip)
-
- all_video = [video_transform(clip) for clip in all_video]
- all_video = SpatialCrop(224, num_crops=3)(all_video)
-
- all_video = torch.stack(all_video, dim=0)
- video_outputs.append(all_video)
-
- return torch.stack(video_outputs, dim=0).to(device)
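A minimal sketch of how these helpers are typically called, assuming the module is importable as `data` under this space's layout; the media paths are placeholders:

```python
import torch
from model.ImageBind import data  # import path follows this space's layout; adjust as needed

device = "cuda" if torch.cuda.is_available() else "cpu"

images = data.load_and_transform_vision_data(["example.jpg"], device)  # placeholder path
texts = data.load_and_transform_text(["a photo of a cat"], device)
audio = data.load_and_transform_audio_data(["example.wav"], device)    # placeholder path

# Each loader returns a batched tensor, or None when given None.
for name, t in [("vision", images), ("text", texts), ("audio", audio)]:
    print(name, None if t is None else list(t.shape))
```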
diff --git a/spaces/FantasticGNU/AnomalyGPT/utils/config.py b/spaces/FantasticGNU/AnomalyGPT/utils/config.py
deleted file mode 100644
index b364ee774f8437a9962280f28748e2167a45e732..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/utils/config.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import yaml
-from easydict import EasyDict
-import os
-from .logger import print_log
-
-def log_args_to_file(args, pre='args', logger=None):
- for key, val in args.__dict__.items():
- print_log(f'{pre}.{key} : {val}', logger = logger)
-
-def log_config_to_file(cfg, pre='cfg', logger=None):
- for key, val in cfg.items():
- if isinstance(cfg[key], EasyDict):
- print_log(f'{pre}.{key} = edict()', logger = logger)
- log_config_to_file(cfg[key], pre=pre + '.' + key, logger=logger)
- continue
- print_log(f'{pre}.{key} : {val}', logger = logger)
-
-def merge_new_config(config, new_config):
- for key, val in new_config.items():
- if not isinstance(val, dict):
- if key == '_base_':
- with open(new_config['_base_'], 'r') as f:
- try:
- val = yaml.load(f, Loader=yaml.FullLoader)
- except:
- val = yaml.load(f)
- config[key] = EasyDict()
- merge_new_config(config[key], val)
- else:
- config[key] = val
- continue
- if key not in config:
- config[key] = EasyDict()
- merge_new_config(config[key], val)
- return config
-
-def cfg_from_yaml_file(cfg_file):
- config = EasyDict()
- with open(cfg_file, 'r') as f:
- try:
- new_config = yaml.load(f, Loader=yaml.FullLoader)
- except:
- new_config = yaml.load(f)
- merge_new_config(config=config, new_config=new_config)
- return config
-
-def get_config(args, logger=None):
- if args.resume:
- cfg_path = os.path.join(args.experiment_path, 'config.yaml')
- if not os.path.exists(cfg_path):
- print_log("Failed to resume", logger = logger)
- raise FileNotFoundError()
- print_log(f'Resume yaml from {cfg_path}', logger = logger)
- args.config = cfg_path
- config = cfg_from_yaml_file(args.config)
- if not args.resume and args.local_rank == 0:
- save_experiment_config(args, config, logger)
- return config
-
-def save_experiment_config(args, config, logger = None):
- config_path = os.path.join(args.experiment_path, 'config.yaml')
- os.system('cp %s %s' % (args.config, config_path))
- print_log(f'Copy the Config file from {args.config} to {config_path}',logger = logger )
\ No newline at end of file
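The `cfg_from_yaml_file` / `merge_new_config` pair implements a simple form of config inheritance: when a YAML file contains a `_base_` key, the referenced file is loaded and its contents are merged under `cfg._base_` (not flattened into the top level). A hypothetical sketch, with made-up file names and an import path matching this space's layout:

```python
# base.yaml
#   lr: 0.001
#
# experiment.yaml
#   _base_: base.yaml
#   epochs: 10

from utils.config import cfg_from_yaml_file  # assumed import path for this space

cfg = cfg_from_yaml_file("experiment.yaml")
print(cfg.epochs)     # 10, read from experiment.yaml
print(cfg._base_.lr)  # 0.001, loaded from base.yaml and nested under the `_base_` key
```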
diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html b/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html
deleted file mode 100644
index 39fc073472a2863d4984846ce9dddba635761205..0000000000000000000000000000000000000000
--- a/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html
+++ /dev/null
@@ -1,322 +0,0 @@
-
training the application ... wait
-
-
Result:
-
-
*** This software is EXPERIMENTAL and should only be used for research purposes. Please see a doctor for any diagnostic reasons.
-
-
-
\ No newline at end of file
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py
deleted file mode 100644
index ed9ca9bd4d78341d622acb0bd469339be81530e2..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py
+++ /dev/null
@@ -1,171 +0,0 @@
-# This code is copied from https://github.com/thomasjpfan/pytorch/blob/401ec389db2c9d2978917a6e4d1101b20340d7e7/torch/optim/lr_scheduler.py
-
-
-# This code is under review at PyTorch and is to be merged eventually to make CLR available to all.
-# Tested with pytorch 0.2.0
-
-import numpy as np
-
-
-class CyclicLR(object):
- """Sets the learning rate of each parameter group according to
- cyclical learning rate policy (CLR). The policy cycles the learning
- rate between two boundaries with a constant frequency, as detailed in
- the paper `Cyclical Learning Rates for Training Neural Networks`_.
- The distance between the two boundaries can be scaled on a per-iteration
- or per-cycle basis.
- Cyclical learning rate policy changes the learning rate after every batch.
- `batch_step` should be called after a batch has been used for training.
- To resume training, save `last_batch_iteration` and use it to instantiate `CyclicLR`.
- This class has three built-in policies, as put forth in the paper:
- "triangular":
- A basic triangular cycle w/ no amplitude scaling.
- "triangular2":
- A basic triangular cycle that scales initial amplitude by half each cycle.
- "exp_range":
- A cycle that scales initial amplitude by gamma**(cycle iterations) at each
- cycle iteration.
- This implementation was adapted from the github repo: `bckenstler/CLR`_
- Args:
- optimizer (Optimizer): Wrapped optimizer.
- base_lr (float or list): Initial learning rate which is the
- lower boundary in the cycle for each param group.
- Default: 0.001
- max_lr (float or list): Upper boundaries in the cycle for
- each parameter group. Functionally,
- it defines the cycle amplitude (max_lr - base_lr).
- The lr at any cycle is the sum of base_lr
- and some scaling of the amplitude; therefore
- max_lr may not actually be reached depending on
- scaling function. Default: 0.006
- step_size (int): Number of training iterations per
- half cycle. Authors suggest setting step_size
- 2-8 x training iterations per epoch. Default: 2000
- mode (str): One of {triangular, triangular2, exp_range}.
- Values correspond to policies detailed above.
- If scale_fn is not None, this argument is ignored.
- Default: 'triangular'
- gamma (float): Constant in 'exp_range' scaling function:
- gamma**(cycle iterations)
- Default: 1.0
- scale_fn (function): Custom scaling policy defined by a single
- argument lambda function, where
- 0 <= scale_fn(x) <= 1 for all x >= 0.
- mode parameter is ignored
- Default: None
- scale_mode (str): {'cycle', 'iterations'}.
- Defines whether scale_fn is evaluated on
- cycle number or cycle iterations (training
- iterations since start of cycle).
- Default: 'cycle'
- last_batch_iteration (int): The index of the last batch. Default: -1
- Example:
- >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
- >>> scheduler = CyclicLR(optimizer)
- >>> data_loader = torch.utils.data.DataLoader(...)
- >>> for epoch in range(10):
- >>> for batch in data_loader:
- >>> scheduler.batch_step()
- >>> train_batch(...)
- .. _Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
- .. _bckenstler/CLR: https://github.com/bckenstler/CLR
- """
-
- def __init__(
- self,
- optimizer,
- base_lr=1e-3,
- max_lr=6e-3,
- step_size=2000,
- mode="triangular",
- gamma=1.0,
- scale_fn=None,
- scale_mode="cycle",
- last_batch_iteration=-1,
- ):
-
- # if not isinstance(optimizer, Optimizer):
- # raise TypeError('{} is not an Optimizer'.format(
- # type(optimizer).__name__))
- self.optimizer = optimizer
-
- if isinstance(base_lr, list) or isinstance(base_lr, tuple):
- if len(base_lr) != len(optimizer.param_groups):
- raise ValueError(
- "expected {} base_lr, got {}".format(
- len(optimizer.param_groups), len(base_lr)
- )
- )
- self.base_lrs = list(base_lr)
- else:
- self.base_lrs = [base_lr] * len(optimizer.param_groups)
-
- if isinstance(max_lr, list) or isinstance(max_lr, tuple):
- if len(max_lr) != len(optimizer.param_groups):
- raise ValueError(
- "expected {} max_lr, got {}".format(
- len(optimizer.param_groups), len(max_lr)
- )
- )
- self.max_lrs = list(max_lr)
- else:
- self.max_lrs = [max_lr] * len(optimizer.param_groups)
-
- self.step_size = step_size
-
- if mode not in ["triangular", "triangular2", "exp_range"] and scale_fn is None:
- raise ValueError("mode is invalid and scale_fn is None")
-
- self.mode = mode
- self.gamma = gamma
- self.current_lr = None
-
- if scale_fn is None:
- if self.mode == "triangular":
- self.scale_fn = self._triangular_scale_fn
- self.scale_mode = "cycle"
- elif self.mode == "triangular2":
- self.scale_fn = self._triangular2_scale_fn
- self.scale_mode = "cycle"
- elif self.mode == "exp_range":
- self.scale_fn = self._exp_range_scale_fn
- self.scale_mode = "iterations"
- else:
- self.scale_fn = scale_fn
- self.scale_mode = scale_mode
-
- self.batch_step(last_batch_iteration + 1)
- self.last_batch_iteration = last_batch_iteration
-
- def batch_step(self, batch_iteration=None):
- if batch_iteration is None:
- batch_iteration = self.last_batch_iteration + 1
- self.last_batch_iteration = batch_iteration
- for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
- param_group["lr"] = lr
- self.current_lr = lr
-
- def _triangular_scale_fn(self, x):
- return 1.0
-
- def _triangular2_scale_fn(self, x):
- return 1 / (2.0 ** (x - 1))
-
- def _exp_range_scale_fn(self, x):
- return self.gamma ** (x)
-
- def get_lr(self):
- step_size = float(self.step_size)
- cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size))
- x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1)
-
- lrs = []
- param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs)
- for param_group, base_lr, max_lr in param_lrs:
- base_height = (max_lr - base_lr) * np.maximum(0, (1 - x))
- if self.scale_mode == "cycle":
- lr = base_lr + base_height * self.scale_fn(cycle)
- else:
- lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration)
- lrs.append(lr)
- return lrs
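A minimal runnable sketch of the triangular policy described in the docstring, using a dummy model and random data for illustration; the import path is an assumption based on this space's layout:

```python
import torch
from Waifu2x.utils.cls import CyclicLR  # assumed import path for this vendored copy

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = CyclicLR(optimizer, base_lr=1e-3, max_lr=6e-3, step_size=200, mode="triangular")

for step in range(1000):
    scheduler.batch_step()  # update the learning rate once per batch
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 200 == 0:
        print(step, scheduler.current_lr)  # oscillates between base_lr and max_lr
```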
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py
deleted file mode 100644
index 72d4db963ffd95851b945911b3db9941426583ab..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py
+++ /dev/null
@@ -1,8 +0,0 @@
-_base_ = '../htc/htc_r50_fpn_1x_coco.py'
-
-model = dict(
- backbone=dict(
- type='DetectoRS_ResNet',
- conv_cfg=dict(type='ConvAWS'),
- sac=dict(type='SAC', use_deform=True),
- stage_with_sac=(False, True, True, True)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md
deleted file mode 100644
index 17eaa7d1dea393cbf9b8e3fd44c607b447812e6f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Mixed Precision Training
-
-## Introduction
-
-[OTHERS]
-
-```latex
-@article{micikevicius2017mixed,
- title={Mixed precision training},
- author={Micikevicius, Paulius and Narang, Sharan and Alben, Jonah and Diamos, Gregory and Elsen, Erich and Garcia, David and Ginsburg, Boris and Houston, Michael and Kuchaiev, Oleksii and Venkatesh, Ganesh and others},
- journal={arXiv preprint arXiv:1710.03740},
- year={2017}
-}
-```
-
-## Results and Models
-
-| Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-|:------------:|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:|
-| Faster R-CNN | R-50 | pytorch | 1x | 3.4 | 28.8 | 37.5 | - |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/faster_rcnn_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/faster_rcnn_r50_fpn_fp16_1x_coco/faster_rcnn_r50_fpn_fp16_1x_coco_20200204-d4dc1471.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/faster_rcnn_r50_fpn_fp16_1x_coco/faster_rcnn_r50_fpn_fp16_1x_coco_20200204_143530.log.json) |
-| Mask R-CNN | R-50 | pytorch | 1x | 3.6 | 24.1 | 38.1 | 34.7 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/mask_rcnn_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/mask_rcnn_r50_fpn_fp16_1x_coco/mask_rcnn_r50_fpn_fp16_1x_coco_20200205-59faf7e4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/mask_rcnn_r50_fpn_fp16_1x_coco/mask_rcnn_r50_fpn_fp16_1x_coco_20200205_130539.log.json) |
-| RetinaNet | R-50 | pytorch | 1x | 2.8 | 31.6 | 36.4 | - |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/retinanet_r50_fpn_fp16_1x_coco/retinanet_r50_fpn_fp16_1x_coco_20200702-0dbfb212.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/retinanet_r50_fpn_fp16_1x_coco/retinanet_r50_fpn_fp16_1x_coco_20200702_020127.log.json) |
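The README above benchmarks mmdetection models trained with mixed precision. As a general illustration of the technique only (mmdetection wires fp16 up through its own hooks, so this is not the project's implementation), a plain PyTorch AMP training step looks roughly like this, assuming a CUDA device is available:

```python
# Generic mixed-precision training step with torch.cuda.amp; a sketch of the idea
# behind the fp16 configs above, not mmdetection's own fp16 machinery.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 16, device="cuda")
target = torch.randint(0, 4, (8,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():        # run the forward pass in half precision where safe
    loss = F.cross_entropy(model(x), target)
scaler.scale(loss).backward()          # scale the loss so fp16 gradients don't underflow
scaler.step(optimizer)                 # unscales gradients, skips the step on inf/nan
scaler.update()
```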
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py
deleted file mode 100644
index 3995603a6cee82a7d7cff620cb8bffe14b15b6a1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(stem_channels=128, depth=101))
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py
deleted file mode 100644
index e4152fde6861488acc3595fa25c456bf60f134b9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from collections import OrderedDict, defaultdict
-import json
-import os
-import logging
-from argparse import ArgumentError
-
-from fairseq import options, models
-from fairseq.data import (
- data_utils,
- Dictionary,
- LanguagePairDataset,
- IndexedDataset,
- FairseqDataset,
-)
-from .multitask_data_utils import (
- MultitaskDatasetWrapper,
- MultidatasetEpochBatchIterator,
-)
-
-
-from fairseq.tasks import LegacyFairseqTask, register_task
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("laser")
-class LaserTask(LegacyFairseqTask):
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument(
- "configfile", metavar="PATH", help="dataset configuration file in json"
- )
- parser.add_argument(
- "--weighting-alpha",
- type=float,
- default=None,
- help="alpha for automatic weighting",
- )
- parser.add_argument(
- "--raw-text", action="store_true", help="load raw text dataset"
- )
- parser.add_argument(
- "--left-pad-source",
- default="True",
- type=str,
- metavar="BOOL",
- help="pad the source on the left (default: True)",
- )
- parser.add_argument(
- "--left-pad-target",
- default="False",
- type=str,
- metavar="BOOL",
- help="pad the target on the left (default: False)",
- )
- try:
- parser.add_argument(
- "--max-source-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the source sequence",
- )
- parser.add_argument(
- "--max-target-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the target sequence",
- )
- except ArgumentError:
- # this might have already been defined. Once we transition this to hydra it should be fine to add it here.
- pass
-
- def __init__(self, args, config, src_dictionary, tgt_dictionary, num_tasks):
- super().__init__(args)
- self.config = config
- self.src_dictionary = src_dictionary
- self.tgt_dictionary = tgt_dictionary
- self.num_tasks = num_tasks
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- with open(args.configfile, "r") as f:
- config = json.load(f)
- num_tasks = max(dataset["id"] for dataset in config["train"]) + 1
-
- args.left_pad_source = options.eval_bool(args.left_pad_source)
- args.left_pad_target = options.eval_bool(args.left_pad_target)
-
- src_dictionary = Dictionary.load(config["src_vocab"])
- tgt_dictionary = Dictionary.load(config["tgt_vocab"])
-
- logger.info(
- "| src Dictionary {} : {} types".format(
- config["src_vocab"], len(src_dictionary)
- )
- )
- logger.info(
- "| tgt Dictionary {} : {} types".format(
- config["tgt_vocab"], len(tgt_dictionary)
- )
- )
-
- return cls(args, config, src_dictionary, tgt_dictionary, num_tasks)
-
- # Experimental overriding for backtranslation
- def build_model(self, args):
- model = models.build_model(args, self)
- return model
-
- def dataset(self, split):
- if split not in self.datasets:
- raise KeyError("Dataset not loaded: " + split)
- return self.datasets[split]
-
- def load_dataset(self, split, epoch=1, **kwargs):
- """Load a dataset split."""
-
- def indexed_dataset(path, dictionary):
- if self.args.raw_text:
- raise Exception("Unable to handle raw text.")
- dataset = IndexedDataset(path, fix_lua_indexing=True)
-
- return dataset
-
- pair_datasets = OrderedDict()
-
- if split == "valid":
- self.datasets[split] = pair_datasets
- return
-
- if split not in self.config:
- raise FileNotFoundError(
- "Dataset not found in config file: {}".format(split)
- )
-
- size_by_corpus = defaultdict(int)
- size_sum = 0
- size_sum_with_subsampling = 0
- init_pair_datasets = {}
-
- for dataset_config in self.config[split]:
- src_path = os.path.dirname(dataset_config["src"])
- corpus_name = src_path.split("/")[-2]
- language_pair_name = src_path.split("/")[-1]
- pair_datasets_key = corpus_name + "-" + language_pair_name
-
- logger.info(f"loading... {pair_datasets_key}")
- if "src" in dataset_config:
- src_dataset = indexed_dataset(
- dataset_config["src"], self.src_dictionary
- )
- else:
- src_dataset = None
-
- if "tgt" in dataset_config:
- tgt_dataset = indexed_dataset(
- dataset_config["tgt"], self.tgt_dictionary
- )
- else:
- tgt_dataset = None
-
- dataset = LanguagePairDataset(
- src_dataset,
- src_dataset.sizes,
- self.src_dictionary,
- tgt_dataset,
- tgt_dataset.sizes,
- self.tgt_dictionary,
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- )
-
- if pair_datasets_key in init_pair_datasets:
- logger.warning(
- f"Ignoring already added {pair_datasets_key}. "
- f"Consider using `sample` key in order to upsample."
- )
- else:
- init_pair_datasets[pair_datasets_key] = {
- "dataset": dataset,
- "sample": dataset_config.get("sample", None),
- "id": dataset_config.get("id", None),
- "len": len(dataset),
- }
-
- length_sum = 0
- weighted_freqs_sum = 0
- freq_per_dataset = {}
- vmax = 0
- vmin = 1
- weighted_freq_per_dataset = {}
-
- if self.args.weighting_alpha:
- for key in init_pair_datasets:
- if init_pair_datasets[key]["sample"] is None:
- length_sum += len(init_pair_datasets[key]["dataset"])
-
- for key in init_pair_datasets:
- if init_pair_datasets[key]["sample"] is None:
- val = float(init_pair_datasets[key]["len"]) / length_sum
- freq_per_dataset[key] = val
- weighted_freqs_sum += val ** self.args.weighting_alpha
-
- for key in freq_per_dataset:
- val = (
- freq_per_dataset[key] ** self.args.weighting_alpha
- / weighted_freqs_sum
- )
- vmin = min(vmin, val)
- vmax = max(vmax, val)
- weighted_freq_per_dataset[key] = val
-
- for pair_datasets_key in init_pair_datasets:
- dataset_config = init_pair_datasets[pair_datasets_key]
- dataset = dataset_config["dataset"]
- sample = dataset_config["sample"]
- if sample is None:
- sample = 1.0
-
- if pair_datasets_key in weighted_freq_per_dataset:
- w = vmax / weighted_freq_per_dataset[pair_datasets_key]
- sample = w
-
- sample = round(sample)
-
- initial_sample = sample
- initial_pair_datasets_key = pair_datasets_key
-
- while sample >= 1.0:
- assert (
- pair_datasets_key not in pair_datasets
- ), f"{pair_datasets_key} already in"
- size_sum_with_subsampling += len(dataset)
- pair_datasets[pair_datasets_key] = MultitaskDatasetWrapper(
- dataset, dataset_config.get("id", 0), 1.0, name=pair_datasets_key
- )
- size_sum += len(dataset)
- sample -= 1.0
- pair_datasets_key += "-up"
-
- assert sample < 1e-6, f"sample remains > 0 {pair_datasets_key}"
-
- logger.info(
- f"added pair {initial_pair_datasets_key} length {len(dataset)} new_length = {len(dataset)*initial_sample}"
- )
- size_by_corpus[corpus_name] += len(dataset)
-
- self.datasets[split] = pair_datasets
- logger.info(
- f"Datasets number = {len(self.datasets[split])} size = {size_sum} size_sum_with_subsampling = {size_sum_with_subsampling}"
- )
-
- @property
- def source_dictionary(self):
- return self.src_dictionary
-
- @property
- def target_dictionary(self):
- return self.tgt_dictionary
-
- def get_batch_iterator(
- self,
- dataset,
- max_tokens=None,
- max_sentences=None,
- max_positions=None,
- ignore_invalid_inputs=False,
- required_batch_size_multiple=1,
- seed=1,
- num_shards=1,
- shard_id=0,
- num_workers=0,
- epoch=1,
- data_buffer_size=0,
- disable_iterator_cache=False,
- ):
-
- assert isinstance(dataset, OrderedDict)
- assert len(dataset)
- assert isinstance(dataset[next(iter(dataset))], FairseqDataset)
-
- # initialize the dataset with the correct starting epoch
- for _, dt in dataset.items():
- dt.set_epoch(epoch)
-
- indices = OrderedDict()
- batch_sampler = OrderedDict()
-
- with data_utils.numpy_seed(seed + epoch):
- for key, dt in dataset.items():
- logger.info(f"\t ordered_indices {key}")
- indices[key] = dt.ordered_indices()
-
- # filter examples that are too large
- if max_positions is not None:
- for key, dt in dataset.items():
- logger.info(f"\t filter_by_size {key}")
- indices[key], ignored = dt.filter_indices_by_size(
- indices[key], max_positions
- )
-
- for key, dt in dataset.items():
- logger.info(f"\t batch_by_size {key}")
- batch_sampler[key] = data_utils.batch_by_size(
- indices[key],
- dt.num_tokens,
- max_tokens=max_tokens,
- max_sentences=max_sentences,
- required_batch_size_multiple=required_batch_size_multiple,
- )
-
- epoch_iter = MultidatasetEpochBatchIterator(
- dataset=dataset,
- batch_sampler=batch_sampler,
- seed=seed,
- num_shards=num_shards,
- shard_id=shard_id,
- num_workers=num_workers,
- epoch=epoch,
- )
-
- return epoch_iter
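The task above is driven entirely by the JSON file passed as `configfile`. Sketched from the keys that `setup_task` and `load_dataset` read (`src_vocab`, `tgt_vocab`, and per-split entries with `id`, `src`, `tgt` and an optional `sample` upsampling factor), a hypothetical minimal config could be generated like this; every path and corpus name below is made up:

```python
# Hypothetical shape of the "laser" task config; keys mirror what the code above reads,
# but all paths and corpus names are placeholders.
import json

config = {
    "src_vocab": "data/dict.src.txt",
    "tgt_vocab": "data/dict.tgt.txt",
    "train": [
        {
            "id": 0,                                  # task id; max id + 1 gives num_tasks
            "src": "data/europarl/en-fr/train.src",   # corpus name and language pair are parsed from this path
            "tgt": "data/europarl/en-fr/train.tgt",
            "sample": 2,                              # optional upsampling factor
        },
    ],
}

with open("laser_config.json", "w") as f:
    json.dump(config, f, indent=2)
```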
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md
deleted file mode 100644
index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md
+++ /dev/null
@@ -1,84 +0,0 @@
-# Pretraining RoBERTa using your own data
-
-This tutorial will walk you through pretraining RoBERTa over your own data.
-
-### 1) Preprocess the data
-
-Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training.
-
-We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/)
-to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course
-this dataset is quite small, so the resulting pretrained model will perform
-poorly, but it gives the general idea.
-
-First download the dataset:
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-Next encode it with the GPT-2 BPE:
-```bash
-mkdir -p gpt2_bpe
-wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
-wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
-for SPLIT in train valid test; do \
- python -m examples.roberta.multiprocessing_bpe_encoder \
- --encoder-json gpt2_bpe/encoder.json \
- --vocab-bpe gpt2_bpe/vocab.bpe \
- --inputs wikitext-103-raw/wiki.${SPLIT}.raw \
- --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \
- --keep-empty \
- --workers 60; \
-done
-```
-
-Finally preprocess/binarize the data using the GPT-2 fairseq dictionary:
-```bash
-wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
-fairseq-preprocess \
- --only-source \
- --srcdict gpt2_bpe/dict.txt \
- --trainpref wikitext-103-raw/wiki.train.bpe \
- --validpref wikitext-103-raw/wiki.valid.bpe \
- --testpref wikitext-103-raw/wiki.test.bpe \
- --destdir data-bin/wikitext-103 \
- --workers 60
-```
-
-### 2) Train RoBERTa base
-```bash
-DATA_DIR=data-bin/wikitext-103
-
-fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \
---config-name base task.data=$DATA_DIR
-```
-
-**Note:** You can optionally resume training the released RoBERTa base model by
-adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`.
-
-**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses
-a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to
-further increase the batch size by 16x (`optimization.update_freq`), for a total batch size
-of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need
-to reduce `dataset.batch_size` and increase `optimization.update_freq` to compensate.
-Alternatively, if you have more GPUs you can decrease `optimization.update_freq`
-accordingly to increase training speed.
-
-**Note:** The learning rate and batch size are tightly connected and need to be
-adjusted together. We generally recommend increasing the learning rate as you
-increase the batch size according to the following table (although it's also
-dataset-dependent, so don't rely on these values too heavily):
-
-batch size | peak learning rate
----|---
-256 | 0.0001
-2048 | 0.0005
-8192 | 0.0007
-
-### 3) Load your pretrained model
-```python
-import torch
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data')
-assert isinstance(roberta.model, torch.nn.Module)
-```
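Once loaded, the checkpoint behaves like any other fairseq RoBERTa hub model. Continuing from the snippet above, a brief usage sketch (a model pretrained only on WikiText-103 will give weak features, so the shapes are the interesting part):

```python
# Continues from the loading snippet above; uses the standard fairseq RoBERTa hub API.
roberta.eval()                                  # disable dropout

tokens = roberta.encode('Hello world!')         # GPT-2 BPE encoding plus special tokens
features = roberta.extract_features(tokens)     # last-layer hidden states
print(features.shape)                           # e.g. torch.Size([1, 5, 768]) for a base-sized model
```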
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py
deleted file mode 100644
index 41f50bbd70388ce723f2d316d4e9776bcd6be3c9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py
+++ /dev/null
@@ -1,170 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from examples.speech_recognition.data.replabels import pack_replabels
-from fairseq import utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-@register_criterion("asg_loss")
-class ASGCriterion(FairseqCriterion):
- @staticmethod
- def add_args(parser):
- group = parser.add_argument_group("ASG Loss")
- group.add_argument(
- "--asg-transitions-init",
- help="initial diagonal value of transition matrix",
- type=float,
- default=0.0,
- )
- group.add_argument(
- "--max-replabel", help="maximum # of replabels", type=int, default=2
- )
- group.add_argument(
- "--linseg-updates",
- help="# of training updates to use LinSeg initialization",
- type=int,
- default=0,
- )
- group.add_argument(
- "--hide-linseg-messages",
- help="hide messages about LinSeg initialization",
- action="store_true",
- )
-
- def __init__(
- self,
- task,
- silence_token,
- asg_transitions_init,
- max_replabel,
- linseg_updates,
- hide_linseg_messages,
- ):
- from flashlight.lib.sequence.criterion import ASGLoss, CriterionScaleMode
-
- super().__init__(task)
- self.tgt_dict = task.target_dictionary
- self.eos = self.tgt_dict.eos()
- self.silence = (
- self.tgt_dict.index(silence_token)
- if silence_token in self.tgt_dict
- else None
- )
- self.max_replabel = max_replabel
-
- num_labels = len(self.tgt_dict)
- self.asg = ASGLoss(num_labels, scale_mode=CriterionScaleMode.TARGET_SZ_SQRT)
- self.asg.trans = torch.nn.Parameter(
- asg_transitions_init * torch.eye(num_labels), requires_grad=True
- )
-
- self.linseg_progress = torch.nn.Parameter(
- torch.tensor([0], dtype=torch.int), requires_grad=False
- )
- self.linseg_maximum = linseg_updates
- self.linseg_message_state = "none" if hide_linseg_messages else "start"
-
- @classmethod
- def build_criterion(cls, args, task):
- return cls(
- task,
- args.silence_token,
- args.asg_transitions_init,
- args.max_replabel,
- args.linseg_updates,
- args.hide_linseg_messages,
- )
-
- def linseg_step(self):
- if not self.training:
- return False
- if self.linseg_progress.item() < self.linseg_maximum:
- if self.linseg_message_state == "start":
- print("| using LinSeg to initialize ASG")
- self.linseg_message_state = "finish"
- self.linseg_progress.add_(1)
- return True
- elif self.linseg_message_state == "finish":
- print("| finished LinSeg initialization")
- self.linseg_message_state = "none"
- return False
-
- def replace_eos_with_silence(self, tgt):
- if tgt[-1] != self.eos:
- return tgt
- elif self.silence is None or (len(tgt) > 1 and tgt[-2] == self.silence):
- return tgt[:-1]
- else:
- return tgt[:-1] + [self.silence]
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
-
- net_output = model(**sample["net_input"])
- emissions = net_output["encoder_out"].transpose(0, 1).contiguous()
- B = emissions.size(0)
- T = emissions.size(1)
- device = emissions.device
-
- target = torch.IntTensor(B, T)
- target_size = torch.IntTensor(B)
- using_linseg = self.linseg_step()
-
- for b in range(B):
- initial_target_size = sample["target_lengths"][b].item()
- if initial_target_size == 0:
- raise ValueError("target size cannot be zero")
-
- tgt = sample["target"][b, :initial_target_size].tolist()
- tgt = self.replace_eos_with_silence(tgt)
- tgt = pack_replabels(tgt, self.tgt_dict, self.max_replabel)
- tgt = tgt[:T]
-
- if using_linseg:
- tgt = [tgt[t * len(tgt) // T] for t in range(T)]
-
- target[b][: len(tgt)] = torch.IntTensor(tgt)
- target_size[b] = len(tgt)
-
- loss = self.asg.forward(emissions, target.to(device), target_size.to(device))
-
- if reduce:
- loss = torch.sum(loss)
-
- sample_size = (
- sample["target"].size(0) if self.args.sentence_avg else sample["ntokens"]
- )
- logging_output = {
- "loss": utils.item(loss.data) if reduce else loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- }
- return loss, sample_size, logging_output
-
- @staticmethod
- def aggregate_logging_outputs(logging_outputs):
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
- agg_output = {
- "loss": loss_sum / nsentences,
- "ntokens": ntokens,
- "nsentences": nsentences,
- "sample_size": sample_size,
- }
- return agg_output
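During the LinSeg warm-up above, the label sequence is stretched uniformly across the `T` encoder frames via `tgt[t * len(tgt) // T]`, giving each label an equal share of frames. A tiny standalone illustration with made-up labels:

```python
# Standalone illustration of the LinSeg target expansion used above (labels are made up).
tgt = [7, 3, 9]     # a 3-label target sequence
T = 8               # number of encoder output frames
expanded = [tgt[t * len(tgt) // T] for t in range(T)]
print(expanded)     # [7, 7, 7, 3, 3, 3, 9, 9]
```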
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py
deleted file mode 100644
index 625a29370e90f9d1d7274024afb902ed83a22325..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import FairseqDataset
-
-
-class ConcatSentencesDataset(FairseqDataset):
- def __init__(self, *datasets):
- super().__init__()
- self.datasets = datasets
- assert all(
- len(ds) == len(datasets[0]) for ds in datasets
- ), "datasets must have the same length"
-
- def __getitem__(self, index):
- return torch.cat([ds[index] for ds in self.datasets])
-
- def __len__(self):
- return len(self.datasets[0])
-
- def collater(self, samples):
- return self.datasets[0].collater(samples)
-
- @property
- def sizes(self):
- return sum(ds.sizes for ds in self.datasets)
-
- def num_tokens(self, index):
- return sum(ds.num_tokens(index) for ds in self.datasets)
-
- def size(self, index):
- return sum(ds.size(index) for ds in self.datasets)
-
- def ordered_indices(self):
- return self.datasets[0].ordered_indices()
-
- @property
- def supports_prefetch(self):
- return any(getattr(ds, "supports_prefetch", False) for ds in self.datasets)
-
- def prefetch(self, indices):
- for ds in self.datasets:
- if getattr(ds, "supports_prefetch", False):
- ds.prefetch(indices)
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- for ds in self.datasets:
- if hasattr(ds, "set_epoch"):
- ds.set_epoch(epoch)
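`ConcatSentencesDataset` simply concatenates the i-th item of each wrapped dataset into a single 1-D tensor, delegating collation and ordering to the first dataset. A toy illustration of the `__getitem__` behaviour using plain tensors rather than real fairseq datasets:

```python
# Toy illustration of ConcatSentencesDataset.__getitem__: the i-th items of the wrapped
# datasets are concatenated into one 1-D tensor (toy token ids, not real fairseq data).
import torch

datasets = [
    [torch.tensor([5, 6, 2]), torch.tensor([9, 2])],      # e.g. first sentence per example
    [torch.tensor([7, 8, 8, 2]), torch.tensor([4, 2])],   # e.g. second sentence per example
]

index = 0
item = torch.cat([ds[index] for ds in datasets])
print(item)   # tensor([5, 6, 2, 7, 8, 8, 2])
```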
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh
deleted file mode 100644
index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-rm -rf fsdp_dummy
-mkdir -p fsdp_dummy
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 5 --log-format json --log-interval 1 \
- --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \
- --restore-file x.pt "$@"
-
-# Now we try to load the checkpoint
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 2 --log-format json --log-interval 1 \
- --save-interval-updates 2 --save-dir fsdp_dummy
diff --git a/spaces/Harsimran19/DepthGAN/README.md b/spaces/Harsimran19/DepthGAN/README.md
deleted file mode 100644
index 39ec1ef3be979394ab586810ef9eb5727bb2b21d..0000000000000000000000000000000000000000
--- a/spaces/Harsimran19/DepthGAN/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DepthGAN
-emoji: 🐠
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py
deleted file mode 100644
index 062aecd4280925336ab1d36420d2cd47febf661c..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import glob
-import os
-import numpy as np
-import argparse
-import json
-import torch
-from scipy.io.wavfile import write
-from env import AttrDict
-from meldataset import MAX_WAV_VALUE
-from models import Generator
-
-h = None
-device = None
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "*")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return ""
- return sorted(cp_list)[-1]
-
-
-def inference(a):
- generator = Generator(h).to(device)
-
- state_dict_g = load_checkpoint(a.checkpoint_file, device)
- generator.load_state_dict(state_dict_g["generator"])
-
- filelist = os.listdir(a.input_mels_dir)
-
- os.makedirs(a.output_dir, exist_ok=True)
-
- generator.eval()
- generator.remove_weight_norm()
- with torch.no_grad():
- for i, filname in enumerate(filelist):
- x = np.load(os.path.join(a.input_mels_dir, filname))
- x = torch.FloatTensor(x).to(device)
- y_g_hat = generator(x)
- audio = y_g_hat.squeeze()
- audio = audio * MAX_WAV_VALUE
- audio = audio.cpu().numpy().astype("int16")
-
- output_file = os.path.join(
- a.output_dir, os.path.splitext(filname)[0] + "_generated_e2e.wav"
- )
- write(output_file, h.sampling_rate, audio)
- print(output_file)
-
-
-def main():
- print("Initializing Inference Process..")
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_mels_dir", default="test_mel_files")
- parser.add_argument("--output_dir", default="generated_files_from_mel")
- parser.add_argument("--checkpoint_file", required=True)
- a = parser.parse_args()
-
- config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json")
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- torch.manual_seed(h.seed)
- global device
- if torch.cuda.is_available():
- torch.cuda.manual_seed(h.seed)
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
-
- inference(a)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py
deleted file mode 100644
index a3d331aef019cfd8cf45d6264db88d0fa26e5c0f..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import numpy as np
-import os
-import torch
-import commons
-
-import models
-import utils
-from argparse import ArgumentParser
-from tqdm import tqdm
-from text import text_to_sequence
-
-if __name__ == "__main__":
- parser = ArgumentParser()
- parser.add_argument("-m", "--model_dir", required=True, type=str)
- parser.add_argument("-s", "--mels_dir", required=True, type=str)
- args = parser.parse_args()
- MODEL_DIR = args.model_dir # path to model dir
- SAVE_MELS_DIR = args.mels_dir # path to save generated mels
-
- if not os.path.exists(SAVE_MELS_DIR):
- os.makedirs(SAVE_MELS_DIR)
-
- hps = utils.get_hparams_from_dir(MODEL_DIR)
- symbols = list(hps.data.punc) + list(hps.data.chars)
- checkpoint_path = utils.latest_checkpoint_path(MODEL_DIR)
- cleaner = hps.data.text_cleaners
-
- model = models.FlowGenerator(
- len(symbols) + getattr(hps.data, "add_blank", False),
- out_channels=hps.data.n_mel_channels,
- **hps.model
- ).to("cuda")
-
- utils.load_checkpoint(checkpoint_path, model)
- model.decoder.store_inverse()  # do not calculate Jacobians for fast decoding
- _ = model.eval()
-
- def get_mel(text, fpath):
- if getattr(hps.data, "add_blank", False):
- text_norm = text_to_sequence(text, symbols, cleaner)
- text_norm = commons.intersperse(text_norm, len(symbols))
- else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality
- text = " " + text.strip() + " "
- text_norm = text_to_sequence(text, symbols, cleaner)
-
- sequence = np.array(text_norm)[None, :]
-
- x_tst = torch.from_numpy(sequence).cuda().long()
- x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda()
-
- with torch.no_grad():
- noise_scale = 0.667
- length_scale = 1.0
- (y_gen_tst, *_), *_, (attn_gen, *_) = model(
- x_tst,
- x_tst_lengths,
- gen=True,
- noise_scale=noise_scale,
- length_scale=length_scale,
- )
-
- np.save(os.path.join(SAVE_MELS_DIR, fpath), y_gen_tst.cpu().detach().numpy())
-
- for f in [hps.data.training_files, hps.data.validation_files]:
- file_lines = open(f).read().splitlines()
-
- for line in tqdm(file_lines):
- fname, text = line.split("|")
- fname = os.path.basename(fname).replace(".wav", ".npy")
- get_mel(text, fname)
diff --git a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py b/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py
deleted file mode 100644
index 1206244aa2a004d9f653782de798bfef9e5e726b..0000000000000000000000000000000000000000
--- a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py
+++ /dev/null
@@ -1,555 +0,0 @@
-# %BANNER_BEGIN%
-# ---------------------------------------------------------------------
-# %COPYRIGHT_BEGIN%
-#
-# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL
-#
-# Unpublished Copyright (c) 2020
-# Magic Leap, Inc., All Rights Reserved.
-#
-# NOTICE: All information contained herein is, and remains the property
-# of COMPANY. The intellectual and technical concepts contained herein
-# are proprietary to COMPANY and may be covered by U.S. and Foreign
-# Patents, patents in process, and are protected by trade secret or
-# copyright law. Dissemination of this information or reproduction of
-# this material is strictly forbidden unless prior written permission is
-# obtained from COMPANY. Access to the source code contained herein is
-# hereby forbidden to anyone except current COMPANY employees, managers
-# or contractors who have executed Confidentiality and Non-disclosure
-# agreements explicitly covering such access.
-#
-# The copyright notice above does not evidence any actual or intended
-# publication or disclosure of this source code, which includes
-# information that is confidential and/or proprietary, and is a trade
-# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION,
-# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS
-# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS
-# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND
-# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE
-# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS
-# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE,
-# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART.
-#
-# %COPYRIGHT_END%
-# ----------------------------------------------------------------------
-# %AUTHORS_BEGIN%
-#
-# Originating Authors: Paul-Edouard Sarlin
-# Daniel DeTone
-# Tomasz Malisiewicz
-#
-# %AUTHORS_END%
-# --------------------------------------------------------------------*/
-# %BANNER_END%
-
-from pathlib import Path
-import time
-from collections import OrderedDict
-from threading import Thread
-import numpy as np
-import cv2
-import torch
-import matplotlib.pyplot as plt
-import matplotlib
-matplotlib.use('Agg')
-
-
-class AverageTimer:
- """ Class to help manage printing simple timing of code execution. """
-
- def __init__(self, smoothing=0.3, newline=False):
- self.smoothing = smoothing
- self.newline = newline
- self.times = OrderedDict()
- self.will_print = OrderedDict()
- self.reset()
-
- def reset(self):
- now = time.time()
- self.start = now
- self.last_time = now
- for name in self.will_print:
- self.will_print[name] = False
-
- def update(self, name='default'):
- now = time.time()
- dt = now - self.last_time
- if name in self.times:
- dt = self.smoothing * dt + (1 - self.smoothing) * self.times[name]
- self.times[name] = dt
- self.will_print[name] = True
- self.last_time = now
-
- def print(self, text='Timer'):
- total = 0.
- print('[{}]'.format(text), end=' ')
- for key in self.times:
- val = self.times[key]
- if self.will_print[key]:
- print('%s=%.3f' % (key, val), end=' ')
- total += val
- print('total=%.3f sec {%.1f FPS}' % (total, 1./total), end=' ')
- if self.newline:
- print(flush=True)
- else:
- print(end='\r', flush=True)
- self.reset()
-
-
-class VideoStreamer:
- """ Class to help process image streams. Four types of possible inputs:
- 1.) USB Webcam.
- 2.) An IP camera.
- 3.) A directory of images (files in directory matching 'image_glob').
- 4.) A video file, such as an .mp4 or .avi file.
- """
- def __init__(self, basedir, resize, skip, image_glob, max_length=1000000):
- self._ip_grabbed = False
- self._ip_running = False
- self._ip_camera = False
- self._ip_image = None
- self._ip_index = 0
- self.cap = []
- self.camera = True
- self.video_file = False
- self.listing = []
- self.resize = resize
- self.interp = cv2.INTER_AREA
- self.i = 0
- self.skip = skip
- self.max_length = max_length
- if isinstance(basedir, int) or basedir.isdigit():
- print('==> Processing USB webcam input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(int(basedir))
- self.listing = range(0, self.max_length)
- elif basedir.startswith(('http', 'rtsp')):
- print('==> Processing IP camera input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(basedir)
- self.start_ip_camera_thread()
- self._ip_camera = True
- self.listing = range(0, self.max_length)
- elif Path(basedir).is_dir():
- print('==> Processing image directory input: {}'.format(basedir))
- self.listing = list(Path(basedir).glob(image_glob[0]))
- for j in range(1, len(image_glob)):
- image_path = list(Path(basedir).glob(image_glob[j]))
- self.listing = self.listing + image_path
- self.listing.sort()
- self.listing = self.listing[::self.skip]
- self.max_length = np.min([self.max_length, len(self.listing)])
- if self.max_length == 0:
- raise IOError('No images found (maybe bad \'image_glob\' ?)')
- self.listing = self.listing[:self.max_length]
- self.camera = False
- elif Path(basedir).exists():
- print('==> Processing video input: {}'.format(basedir))
- self.cap = cv2.VideoCapture(basedir)
- self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
- num_frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
- self.listing = range(0, num_frames)
- self.listing = self.listing[::self.skip]
- self.video_file = True
- self.max_length = np.min([self.max_length, len(self.listing)])
- self.listing = self.listing[:self.max_length]
- else:
- raise ValueError('VideoStreamer input \"{}\" not recognized.'.format(basedir))
- if self.camera and not self.cap.isOpened():
- raise IOError('Could not read camera')
-
- def load_image(self, impath):
- """ Read image as grayscale and resize to img_size.
- Inputs
- impath: Path to input image.
- Returns
- grayim: uint8 numpy array sized H x W.
- """
- grayim = cv2.imread(impath, 0)
- if grayim is None:
- raise Exception('Error reading image %s' % impath)
- w, h = grayim.shape[1], grayim.shape[0]
- w_new, h_new = process_resize(w, h, self.resize)
- grayim = cv2.resize(
- grayim, (w_new, h_new), interpolation=self.interp)
- return grayim
-
- def next_frame(self):
- """ Return the next frame, and increment internal counter.
- Returns
- image: Next H x W image.
- status: True or False depending whether image was loaded.
- """
-
- if self.i == self.max_length:
- return (None, False)
- if self.camera:
-
- if self._ip_camera:
- # Wait for first image, making sure we haven't exited
- while self._ip_grabbed is False and self._ip_exited is False:
- time.sleep(.001)
-
- ret, image = self._ip_grabbed, self._ip_image.copy()
- if ret is False:
- self._ip_running = False
- else:
- ret, image = self.cap.read()
- if ret is False:
- print('VideoStreamer: Cannot get image from camera')
- return (None, False)
- w, h = image.shape[1], image.shape[0]
- if self.video_file:
- self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.listing[self.i])
-
- w_new, h_new = process_resize(w, h, self.resize)
- image = cv2.resize(image, (w_new, h_new),
- interpolation=self.interp)
- image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
- else:
- image_file = str(self.listing[self.i])
- image = self.load_image(image_file)
- self.i = self.i + 1
- return (image, True)
-
- def start_ip_camera_thread(self):
- self._ip_thread = Thread(target=self.update_ip_camera, args=())
- self._ip_running = True
- self._ip_thread.start()
- self._ip_exited = False
- return self
-
- def update_ip_camera(self):
- while self._ip_running:
- ret, img = self.cap.read()
- if ret is False:
- self._ip_running = False
- self._ip_exited = True
- self._ip_grabbed = False
- return
-
- self._ip_image = img
- self._ip_grabbed = ret
- self._ip_index += 1
- #print('IPCAMERA THREAD got frame {}'.format(self._ip_index))
-
-
- def cleanup(self):
- self._ip_running = False
-
-# --- PREPROCESSING ---
-
-def process_resize(w, h, resize):
- assert(len(resize) > 0 and len(resize) <= 2)
- if len(resize) == 1 and resize[0] > -1:
- scale = resize[0] / max(h, w)
- w_new, h_new = int(round(w*scale)), int(round(h*scale))
- elif len(resize) == 1 and resize[0] == -1:
- w_new, h_new = w, h
- else: # len(resize) == 2:
- w_new, h_new = resize[0], resize[1]
-
- # Issue warning if resolution is too small or too large.
- if max(w_new, h_new) < 160:
- print('Warning: input resolution is very small, results may vary')
- elif max(w_new, h_new) > 2000:
- print('Warning: input resolution is very large, results may vary')
-
- return w_new, h_new
-
-
-def frame2tensor(frame, device):
- return torch.from_numpy(frame/255.).float()[None, None].to(device)
-
-
-def read_image(path, device, resize, rotation, resize_float):
- image = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
- if image is None:
- return None, None, None
- w, h = image.shape[1], image.shape[0]
- w_new, h_new = process_resize(w, h, resize)
- scales = (float(w) / float(w_new), float(h) / float(h_new))
-
- if resize_float:
- image = cv2.resize(image.astype('float32'), (w_new, h_new))
- else:
- image = cv2.resize(image, (w_new, h_new)).astype('float32')
-
- if rotation != 0:
- image = np.rot90(image, k=rotation)
- if rotation % 2:
- scales = scales[::-1]
-
- inp = frame2tensor(image, device)
- return image, inp, scales
-
-
-# --- GEOMETRY ---
-
-
-def estimate_pose(kpts0, kpts1, K0, K1, thresh, conf=0.99999):
- if len(kpts0) < 5:
- return None
-
- f_mean = np.mean([K0[0, 0], K0[1, 1], K1[0, 0], K1[1, 1]])  # average focal length across both cameras
- norm_thresh = thresh / f_mean
-
- kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None]
- kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None]
-
- E, mask = cv2.findEssentialMat(
- kpts0, kpts1, np.eye(3), threshold=norm_thresh, prob=conf,
- method=cv2.RANSAC)
-
- assert E is not None
-
- best_num_inliers = 0
- ret = None
- for _E in np.split(E, len(E) / 3):
- n, R, t, _ = cv2.recoverPose(
- _E, kpts0, kpts1, np.eye(3), 1e9, mask=mask)
- if n > best_num_inliers:
- best_num_inliers = n
- ret = (R, t[:, 0], mask.ravel() > 0)
- return ret
-
-
-def rotate_intrinsics(K, image_shape, rot):
- """image_shape is the shape of the image after rotation"""
- assert rot <= 3
- h, w = image_shape[:2][::-1 if (rot % 2) else 1]
- fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
- rot = rot % 4
- if rot == 1:
- return np.array([[fy, 0., cy],
- [0., fx, w-1-cx],
- [0., 0., 1.]], dtype=K.dtype)
- elif rot == 2:
- return np.array([[fx, 0., w-1-cx],
- [0., fy, h-1-cy],
- [0., 0., 1.]], dtype=K.dtype)
- else: # if rot == 3:
- return np.array([[fy, 0., h-1-cy],
- [0., fx, cx],
- [0., 0., 1.]], dtype=K.dtype)
-
-
-def rotate_pose_inplane(i_T_w, rot):
- rotation_matrices = [
- np.array([[np.cos(r), -np.sin(r), 0., 0.],
- [np.sin(r), np.cos(r), 0., 0.],
- [0., 0., 1., 0.],
- [0., 0., 0., 1.]], dtype=np.float32)
- for r in [np.deg2rad(d) for d in (0, 270, 180, 90)]
- ]
- return np.dot(rotation_matrices[rot], i_T_w)
-
-
-def scale_intrinsics(K, scales):
- scales = np.diag([1./scales[0], 1./scales[1], 1.])
- return np.dot(scales, K)
-
-
-def to_homogeneous(points):
- return np.concatenate([points, np.ones_like(points[:, :1])], axis=-1)
-
-
-def compute_epipolar_error(kpts0, kpts1, T_0to1, K0, K1):
- kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None]
- kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None]
- kpts0 = to_homogeneous(kpts0)
- kpts1 = to_homogeneous(kpts1)
-
- t0, t1, t2 = T_0to1[:3, 3]
- t_skew = np.array([
- [0, -t2, t1],
- [t2, 0, -t0],
- [-t1, t0, 0]
- ])
- E = t_skew @ T_0to1[:3, :3]
-
- Ep0 = kpts0 @ E.T # N x 3
- p1Ep0 = np.sum(kpts1 * Ep0, -1) # N
- Etp1 = kpts1 @ E # N x 3
- d = p1Ep0**2 * (1.0 / (Ep0[:, 0]**2 + Ep0[:, 1]**2)
- + 1.0 / (Etp1[:, 0]**2 + Etp1[:, 1]**2))
- return d
-
-
-def angle_error_mat(R1, R2):
- cos = (np.trace(np.dot(R1.T, R2)) - 1) / 2
- cos = np.clip(cos, -1., 1.)  # numerical errors can push it out of bounds
- return np.rad2deg(np.abs(np.arccos(cos)))
-
-
-def angle_error_vec(v1, v2):
- n = np.linalg.norm(v1) * np.linalg.norm(v2)
- return np.rad2deg(np.arccos(np.clip(np.dot(v1, v2) / n, -1.0, 1.0)))
-
-
-def compute_pose_error(T_0to1, R, t):
- R_gt = T_0to1[:3, :3]
- t_gt = T_0to1[:3, 3]
- error_t = angle_error_vec(t, t_gt)
- error_t = np.minimum(error_t, 180 - error_t) # ambiguity of E estimation
- error_R = angle_error_mat(R, R_gt)
- return error_t, error_R
-
-
-def pose_auc(errors, thresholds):
- sort_idx = np.argsort(errors)
- errors = np.array(errors.copy())[sort_idx]
- recall = (np.arange(len(errors)) + 1) / len(errors)
- errors = np.r_[0., errors]
- recall = np.r_[0., recall]
- aucs = []
- for t in thresholds:
- last_index = np.searchsorted(errors, t)
- r = np.r_[recall[:last_index], recall[last_index-1]]
- e = np.r_[errors[:last_index], t]
- aucs.append(np.trapz(r, x=e)/t)
- return aucs
-
-
-# --- VISUALIZATION ---
-
-
-def plot_image_pair(imgs, dpi=100, size=6, pad=.5):
- n = len(imgs)
- assert n == 2, 'number of images must be two'
- figsize = (size*n, size*3/4) if size is not None else None
- _, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi)
- for i in range(n):
- ax[i].imshow(imgs[i], cmap=plt.get_cmap('gray'), vmin=0, vmax=255)
- ax[i].get_yaxis().set_ticks([])
- ax[i].get_xaxis().set_ticks([])
- for spine in ax[i].spines.values(): # remove frame
- spine.set_visible(False)
- plt.tight_layout(pad=pad)
-
-
-def plot_keypoints(kpts0, kpts1, color='w', ps=2):
- ax = plt.gcf().axes
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def plot_matches(kpts0, kpts1, color, lw=1.5, ps=4):
- fig = plt.gcf()
- ax = fig.axes
- fig.canvas.draw()
-
- transFigure = fig.transFigure.inverted()
- fkpts0 = transFigure.transform(ax[0].transData.transform(kpts0))
- fkpts1 = transFigure.transform(ax[1].transData.transform(kpts1))
-
- fig.lines = [matplotlib.lines.Line2D(
- (fkpts0[i, 0], fkpts1[i, 0]), (fkpts0[i, 1], fkpts1[i, 1]), zorder=1,
- transform=fig.transFigure, c=color[i], linewidth=lw)
- for i in range(len(kpts0))]
- ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
- ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def make_matching_plot(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
- color, text, path, show_keypoints=False,
- fast_viz=False, opencv_display=False,
- opencv_title='matches', small_text=[]):
-
- if fast_viz:
- make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
- color, text, path, show_keypoints, 10,
- opencv_display, opencv_title, small_text)
- return
-
- plot_image_pair([image0, image1])
- if show_keypoints:
- plot_keypoints(kpts0, kpts1, color='k', ps=4)
- plot_keypoints(kpts0, kpts1, color='w', ps=2)
- plot_matches(mkpts0, mkpts1, color)
-
- fig = plt.gcf()
- txt_color = 'k' if image0[:100, :150].mean() > 200 else 'w'
- fig.text(
- 0.01, 0.99, '\n'.join(text), transform=fig.axes[0].transAxes,
- fontsize=15, va='top', ha='left', color=txt_color)
-
- txt_color = 'k' if image0[-100:, :150].mean() > 200 else 'w'
- fig.text(
- 0.01, 0.01, '\n'.join(small_text), transform=fig.axes[0].transAxes,
- fontsize=5, va='bottom', ha='left', color=txt_color)
-
- plt.savefig(str(path), bbox_inches='tight', pad_inches=0)
- plt.close()
-
-
-def make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0,
- mkpts1, color, text, path=None,
- show_keypoints=False, margin=10,
- opencv_display=False, opencv_title='',
- small_text=[]):
- H0, W0 = image0.shape
- H1, W1 = image1.shape
- H, W = max(H0, H1), W0 + W1 + margin
-
- out = 255*np.ones((H, W), np.uint8)
- out[:H0, :W0] = image0
- out[:H1, W0+margin:] = image1
- out = np.stack([out]*3, -1)
-
- if show_keypoints:
- kpts0, kpts1 = np.round(kpts0).astype(int), np.round(kpts1).astype(int)
- white = (255, 255, 255)
- black = (0, 0, 0)
- for x, y in kpts0:
- cv2.circle(out, (x, y), 2, black, -1, lineType=cv2.LINE_AA)
- cv2.circle(out, (x, y), 1, white, -1, lineType=cv2.LINE_AA)
- for x, y in kpts1:
- cv2.circle(out, (x + margin + W0, y), 2, black, -1,
- lineType=cv2.LINE_AA)
- cv2.circle(out, (x + margin + W0, y), 1, white, -1,
- lineType=cv2.LINE_AA)
-
- mkpts0, mkpts1 = np.round(mkpts0).astype(int), np.round(mkpts1).astype(int)
- color = (np.array(color[:, :3])*255).astype(int)[:, ::-1]
- for (x0, y0), (x1, y1), c in zip(mkpts0, mkpts1, color):
- c = c.tolist()
- cv2.line(out, (x0, y0), (x1 + margin + W0, y1),
- color=c, thickness=1, lineType=cv2.LINE_AA)
- # display line end-points as circles
- cv2.circle(out, (x0, y0), 2, c, -1, lineType=cv2.LINE_AA)
- cv2.circle(out, (x1 + margin + W0, y1), 2, c, -1,
- lineType=cv2.LINE_AA)
-
- # Scale factor for consistent visualization across scales.
- sc = min(H / 640., 2.0)
-
- # Big text.
- Ht = int(30 * sc) # text height
- txt_color_fg = (255, 255, 255)
- txt_color_bg = (0, 0, 0)
- for i, t in enumerate(text):
- cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX,
- 1.0*sc, txt_color_bg, 2, cv2.LINE_AA)
- cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX,
- 1.0*sc, txt_color_fg, 1, cv2.LINE_AA)
-
- # Small text.
- Ht = int(18 * sc) # text height
- for i, t in enumerate(reversed(small_text)):
- cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX,
- 0.5*sc, txt_color_bg, 2, cv2.LINE_AA)
- cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX,
- 0.5*sc, txt_color_fg, 1, cv2.LINE_AA)
-
- if path is not None:
- cv2.imwrite(str(path), out)
-
- if opencv_display:
- cv2.imshow(opencv_title, out)
- cv2.waitKey(1)
-
- return out
-
-
-def error_colormap(x):
- return np.clip(
- np.stack([2-x*2, x*2, np.zeros_like(x), np.ones_like(x)], -1), 0, 1)
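Among the helpers above, `process_resize` accepts three conventions for the `resize` argument: a single positive value rescales the longer side to that many pixels, `[-1]` keeps the original resolution, and two values set width and height explicitly. A small restatement of that arithmetic with example dimensions:

```python
# Restatement of the resize conventions handled by process_resize above,
# shown with example input dimensions.
def resize_dims(w, h, resize):
    if len(resize) == 1 and resize[0] > -1:      # scale the longer side to resize[0]
        scale = resize[0] / max(h, w)
        return int(round(w * scale)), int(round(h * scale))
    if len(resize) == 1 and resize[0] == -1:     # keep the original resolution
        return w, h
    return resize[0], resize[1]                  # explicit (width, height)

print(resize_dims(1920, 1080, [640]))       # (640, 360)
print(resize_dims(1920, 1080, [-1]))        # (1920, 1080)
print(resize_dims(1920, 1080, [640, 480]))  # (640, 480)
```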
diff --git a/spaces/Hazem/roop/roop/utilities.py b/spaces/Hazem/roop/roop/utilities.py
deleted file mode 100644
index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000
--- a/spaces/Hazem/roop/roop/utilities.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import glob
-import mimetypes
-import os
-import platform
-import shutil
-import ssl
-import subprocess
-import urllib
-from pathlib import Path
-from typing import List, Any
-from tqdm import tqdm
-
-import roop.globals
-
-TEMP_FILE = 'temp.mp4'
-TEMP_DIRECTORY = 'temp'
-
-# monkey patch ssl for mac
-if platform.system().lower() == 'darwin':
- ssl._create_default_https_context = ssl._create_unverified_context
-
-
-def run_ffmpeg(args: List[str]) -> bool:
- commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level]
- commands.extend(args)
- try:
- subprocess.check_output(commands, stderr=subprocess.STDOUT)
- return True
- except Exception:
- pass
- return False
-
-
-def detect_fps(target_path: str) -> float:
- command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
- output = subprocess.check_output(command).decode().strip().split('/')
- try:
- numerator, denominator = map(int, output)
- return numerator / denominator
- except Exception:
- pass
- return 30.0
-
-
-def extract_frames(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')])
-
-
-def create_video(target_path: str, fps: float = 30.0) -> None:
- temp_output_path = get_temp_output_path(target_path)
- temp_directory_path = get_temp_directory_path(target_path)
- run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
-
-
-def restore_audio(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
- if not done:
- move_temp(target_path, output_path)
-
-
-def get_temp_frame_paths(target_path: str) -> List[str]:
- temp_directory_path = get_temp_directory_path(target_path)
- return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png')))
-
-
-def get_temp_directory_path(target_path: str) -> str:
- target_name, _ = os.path.splitext(os.path.basename(target_path))
- target_directory_path = os.path.dirname(target_path)
- return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name)
-
-
-def get_temp_output_path(target_path: str) -> str:
- temp_directory_path = get_temp_directory_path(target_path)
- return os.path.join(temp_directory_path, TEMP_FILE)
-
-
-def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any:
- if source_path and target_path:
- source_name, _ = os.path.splitext(os.path.basename(source_path))
- target_name, target_extension = os.path.splitext(os.path.basename(target_path))
- if os.path.isdir(output_path):
- return os.path.join(output_path, source_name + '-' + target_name + target_extension)
- return output_path
-
-
-def create_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- Path(temp_directory_path).mkdir(parents=True, exist_ok=True)
-
-
-def move_temp(target_path: str, output_path: str) -> None:
- temp_output_path = get_temp_output_path(target_path)
- if os.path.isfile(temp_output_path):
- if os.path.isfile(output_path):
- os.remove(output_path)
- shutil.move(temp_output_path, output_path)
-
-
-def clean_temp(target_path: str) -> None:
- temp_directory_path = get_temp_directory_path(target_path)
- parent_directory_path = os.path.dirname(temp_directory_path)
- if not roop.globals.keep_frames and os.path.isdir(temp_directory_path):
- shutil.rmtree(temp_directory_path)
- if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path):
- os.rmdir(parent_directory_path)
-
-
-def has_image_extension(image_path: str) -> bool:
- return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp'))
-
-
-def is_image(image_path: str) -> bool:
- if image_path and os.path.isfile(image_path):
- mimetype, _ = mimetypes.guess_type(image_path)
- return bool(mimetype and mimetype.startswith('image/'))
- return False
-
-
-def is_video(video_path: str) -> bool:
- if video_path and os.path.isfile(video_path):
- mimetype, _ = mimetypes.guess_type(video_path)
- return bool(mimetype and mimetype.startswith('video/'))
- return False
-
-
-def conditional_download(download_directory_path: str, urls: List[str]) -> None:
- if not os.path.exists(download_directory_path):
- os.makedirs(download_directory_path)
- for url in urls:
- download_file_path = os.path.join(download_directory_path, os.path.basename(url))
- if not os.path.exists(download_file_path):
- request = urllib.request.urlopen(url) # type: ignore[attr-defined]
- total = int(request.headers.get('Content-Length', 0))
- with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
- urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size)) # type: ignore[attr-defined]
-
-
-def resolve_relative_path(path: str) -> str:
- return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
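The frame pipeline above shells out to ffmpeg twice: once to split the target video into numbered PNG frames and once to re-encode the processed frames into a new video (audio is restored in a separate pass). A standalone sketch of those two calls; the paths, codec and CRF below are placeholders, and ffmpeg must be available on the PATH:

```python
# Standalone sketch of the two ffmpeg invocations wrapped by extract_frames/create_video
# above; paths, codec and quality settings are placeholders.
import os
import subprocess

target = "input.mp4"
frames_dir = "temp/input"
os.makedirs(frames_dir, exist_ok=True)

# 1) decode every frame of the video into numbered PNGs
subprocess.check_output([
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-i", target, "-pix_fmt", "rgb24", os.path.join(frames_dir, "%04d.png"),
])

# 2) re-encode the (possibly processed) frames back into a video at 30 fps
subprocess.check_output([
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-r", "30", "-i", os.path.join(frames_dir, "%04d.png"),
    "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p", "-y", "temp/output.mp4",
])
```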
diff --git a/spaces/Hina4867/bingo/src/lib/utils.ts b/spaces/Hina4867/bingo/src/lib/utils.ts
deleted file mode 100644
index 07feedb34e356b1b3cf867872f32d47a96ae12fb..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,138 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
- 7
-) // 7-character random string
-
-export function createChunkDecoder() {
- const decoder = new TextDecoder()
- return function (chunk: Uint8Array | undefined): string {
- if (!chunk) return ''
- return decoder.decode(chunk, { stream: true })
- }
-}
-
-export function random (start: number, end: number) {
- return start + Math.ceil(Math.random() * (end - start))
-}
-
-export function randomIP() {
- return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}`
-}
-
-export function parseHeadersFromCurl(content: string) {
- const re = /-H '([^:]+):\s*([^']+)/mg
- const headers: HeadersInit = {}
- content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert cmd-style curl quoting to bash-style curl
- content.replace(re, (_: string, key: string, value: string) => {
- headers[key] = value
- return ''
- })
-
- return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
- const base64Content = btoa(content)
- const contentChunks = base64Content.match(/.{1,4000}/g) || []
- return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
- let base64Content = ''
- ChunkKeys.forEach((key) => {
- base64Content += (cookies[key] || '')
- })
- try {
- return atob(base64Content)
- } catch(e) {
- return ''
- }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
- return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
- const date = new Date(input)
- return date.toLocaleDateString('en-US', {
- month: 'long',
- day: 'numeric',
- year: 'numeric'
- })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
- const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
- return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
- const cookies: { [key: string]: string } = {}
- cookieNames.forEach(cookieName => {
- cookies[cookieName] = parseCookie(cookie, cookieName)
- })
- return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-export const DEFAULT_IP = process.env.BING_IP || randomIP()
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
- return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>) {
- let {
- BING_COOKIE = process.env.BING_COOKIE,
- BING_UA = process.env.BING_UA,
- BING_IP = process.env.BING_IP,
- BING_HEADER = process.env.BING_HEADER,
- } = cookies
-
- if (BING_HEADER) {
- return extraHeadersFromCookie({
- BING_HEADER,
- ...cookies,
- })
- }
-
- const ua = parseUA(BING_UA)
-
- if (!BING_COOKIE) {
-    BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || 'xxx' // on Hugging Face this currently works even without a real Cookie
- }
-
- const parsedCookie = parseCookie(BING_COOKIE, '_U')
- if (!parsedCookie) {
- throw new Error('Invalid Cookie')
- }
- return {
- 'x-forwarded-for': BING_IP || DEFAULT_IP,
- 'Accept-Encoding': 'gzip, deflate, br',
- 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
- 'User-Agent': ua!,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
-    cookie: `_U=${parsedCookie}`,
- }
-}
-
-export class WatchDog {
- private tid = 0
- watch(fn: Function, timeout = 2000) {
- clearTimeout(this.tid)
- this.tid = setTimeout(fn, timeout + Math.random() * 1000)
- }
- reset() {
- clearTimeout(this.tid)
- }
-}
diff --git a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py b/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py
deleted file mode 100644
index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock,
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
-            False, # Set to true if you want to train from scratch, uses ImageNet weights
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
-
- def forward(self, x):
-        if self.channels_last:
-            x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(self, path=None, non_negative=True, **kwargs):
- features = kwargs["features"] if "features" in kwargs else 256
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
-
- def forward(self, x):
- return super().forward(x).squeeze(dim=1)
-
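DPTDepthModel wraps the generic DPT trunk with a monocular-depth head and squeezes the channel dimension on the way out. A minimal inference sketch, assuming the repo root is on the Python path so the midas package above is importable, and that timm can fetch the "vitb_rn50_384" backbone weights:

```python
# Hedged sketch: run the DPT depth model above on a dummy batch.
import torch
from ldm.modules.midas.midas.dpt_depth import DPTDepthModel

model = DPTDepthModel(path=None, backbone="vitb_rn50_384", non_negative=True)
model.eval()

x = torch.randn(1, 3, 384, 384)          # dummy RGB batch at the backbone's native resolution
with torch.no_grad():
    depth = model(x)                      # forward() squeezes the channel dimension
print(depth.shape)                        # expected: torch.Size([1, 384, 384])
```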
diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py
deleted file mode 100644
index 831174de3c3a62f13fa3ff1f172b36c8d2a84c44..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py
+++ /dev/null
@@ -1,308 +0,0 @@
-import os
-import numpy as np
-from PIL import Image, ImageSequence
-import json
-import pandas as pd
-
-import torch
-from torch.utils.data import Dataset
-from torchvision import transforms
-import torchvision.transforms.functional as TF
-
-from celle.utils import replace_outliers
-
-def simple_conversion(seq):
- """Create 26-dim embedding"""
- chars = [
- "-",
- "M",
- "R",
- "H",
- "K",
- "D",
- "E",
- "S",
- "T",
- "N",
- "Q",
- "C",
- "U",
- "G",
- "P",
- "A",
- "V",
- "I",
- "F",
- "Y",
- "W",
- "L",
- "O",
- "X",
- "Z",
- "B",
- "J",
- ]
-
- nums = range(len(chars))
-
- seqs_x = np.zeros(len(seq))
-
- for idx, char in enumerate(seq):
-
- lui = chars.index(char)
-
- seqs_x[idx] = nums[lui]
-
- return torch.tensor([seqs_x]).long()
-
-
-class CellLoader(Dataset):
- """imports mined opencell images with protein sequence"""
-
- def __init__(
- self,
- data_csv=None,
- dataset=None,
- split_key=None,
- resize=600,
- crop_size=600,
- crop_method="random",
- sequence_mode="simple",
- vocab="bert",
- threshold="median",
- text_seq_len=0,
- pad_mode="random",
- ):
- self.data_csv = data_csv
- self.dataset = dataset
- self.image_folders = []
- self.crop_method = crop_method
- self.resize = resize
- self.crop_size = crop_size
- self.sequence_mode = sequence_mode
- self.threshold = threshold
- self.text_seq_len = int(text_seq_len)
- self.vocab = vocab
- self.pad_mode = pad_mode
-
- if self.sequence_mode == "embedding" or self.sequence_mode == "onehot":
-
-
- if self.vocab == "esm1b" or self.vocab == "esm2":
- from esm import Alphabet
-
- self.tokenizer = Alphabet.from_architecture(
- "ESM-1b"
- ).get_batch_converter()
- self.text_seq_len += 2
-
- if data_csv:
-
- data = pd.read_csv(data_csv)
-
- self.parent_path = os.path.dirname(data_csv).split(data_csv)[0]
-
- if split_key == "train":
- self.data = data[data["split"] == "train"]
- elif split_key == "val":
- self.data = data[data["split"] == "val"]
- else:
- self.data = data
-
- self.data = self.data.reset_index(drop=True)
-
-
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(
- self,
- idx,
- get_sequence=True,
- get_images=True,
- ):
- if get_sequence and self.text_seq_len > 0:
-
- protein_vector = self.get_protein_vector(idx)
-
- else:
- protein_vector = torch.zeros((1, 1))
-
- if get_images:
-
- nucleus, target, threshold = self.get_images(idx, self.dataset)
- else:
- nucleus, target, threshold = torch.zeros((3, 1))
-
- data_dict = {
- "nucleus": nucleus.float(),
- "target": target.float(),
- "threshold": threshold.float(),
- "sequence": protein_vector.long(),
- }
-
- return data_dict
-
- def get_protein_vector(self, idx):
-
- if "protein_sequence" not in self.data.columns:
-
- metadata = self.retrieve_metadata(idx)
- protein_sequence = metadata["sequence"]
- else:
- protein_sequence = self.data.iloc[idx]["protein_sequence"]
-
- protein_vector = self.tokenize_sequence(protein_sequence)
-
- return protein_vector
-
- def get_images(self, idx, dataset):
-
- if dataset == "HPA":
-
- nucleus = Image.open(
- os.path.join(
- self.parent_path, self.data.iloc[idx]["nucleus_image_path"]
- )
- )
-
- target = Image.open(
- os.path.join(self.parent_path, self.data.iloc[idx]["target_image_path"])
- )
-
- nucleus = TF.to_tensor(nucleus)[0]
- target = TF.to_tensor(target)[0]
-
- image = torch.stack([nucleus, target], axis=0)
-
- normalize = (0.0655, 0.0650), (0.1732, 0.1208)
-
- elif dataset == "OpenCell":
- image = Image.open(
- os.path.join(self.parent_path, self.data.iloc[idx]["image_path"])
- )
- nucleus, target = [page.copy() for page in ImageSequence.Iterator(image)]
-
- nucleus = replace_outliers(torch.divide(TF.to_tensor(nucleus), 65536))[0]
- target = replace_outliers(torch.divide(TF.to_tensor(target), 65536))[0]
-
- image = torch.stack([nucleus, target], axis=0)
-
- normalize = (
- (0.0272, 0.0244),
- (0.0486, 0.0671),
- )
-
- # # from https://discuss.pytorch.org/t/how-to-apply-same-transform-on-a-pair-of-picture/14914
-
- t_forms = [transforms.Resize(self.resize, antialias=None)]
-
- if self.crop_method == "random":
-
- t_forms.append(transforms.RandomCrop(self.crop_size))
- t_forms.append(transforms.RandomHorizontalFlip(p=0.5))
- t_forms.append(transforms.RandomVerticalFlip(p=0.5))
-
- elif self.crop_method == "center":
-
- t_forms.append(transforms.CenterCrop(self.crop_size))
-
- t_forms.append(transforms.Normalize(normalize[0], normalize[1]))
-
- image = transforms.Compose(t_forms)(image)
-
- nucleus, target = image
-
- nucleus /= torch.abs(nucleus).max()
- target -= target.min()
- target /= target.max()
-
- nucleus = nucleus.unsqueeze(0)
- target = target.unsqueeze(0)
-
- threshold = target
-
- if self.threshold == "mean":
-
- threshold = 1.0 * (threshold > (torch.mean(threshold)))
-
- elif self.threshold == "median":
-
- threshold = 1.0 * (threshold > (torch.median(threshold)))
-
- elif self.threshold == "1090_IQR":
-
- p10 = torch.quantile(threshold, 0.1, None)
- p90 = torch.quantile(threshold, 0.9, None)
- threshold = torch.clip(threshold, p10, p90)
-
- nucleus = torch.nan_to_num(nucleus, 0.0, 1.0, 0.0)
- target = torch.nan_to_num(target, 0.0, 1.0, 0.0)
- threshold = torch.nan_to_num(threshold, 0.0, 1.0, 0.0)
-
- return nucleus, target, threshold
-
- def retrieve_metadata(self, idx):
- with open(
- os.path.join(self.parent_path, self.data.iloc[idx]["metadata_path"])
- ) as f:
- metadata = json.load(f)
-
- return metadata
-
- def tokenize_sequence(self, protein_sequence):
-
- pad_token = 0
-
- if self.sequence_mode == "simple":
- protein_vector = simple_conversion(protein_sequence)
-
- elif self.sequence_mode == "center":
-            protein_sequence = protein_sequence.center(self.text_seq_len, "-")
- protein_vector = simple_conversion(protein_sequence)
-
- elif self.sequence_mode == "alternating":
-            protein_sequence = protein_sequence.center(self.text_seq_len, "-")
-            protein_sequence = protein_sequence[::18]
-            protein_sequence = protein_sequence.center(
-                int(self.text_seq_len / 18) + 1, "-"
- )
- protein_vector = simple_conversion(protein_sequence)
-
-
- elif self.sequence_mode == "embedding":
-
- if self.vocab == "esm1b" or self.vocab == "esm2":
- pad_token = 1
- protein_vector = self.tokenizer([("", protein_sequence)])[-1]
-
- if protein_vector.shape[-1] < self.text_seq_len:
-
- diff = self.text_seq_len - protein_vector.shape[-1]
-
- if self.pad_mode == "end":
- protein_vector = torch.nn.functional.pad(
- protein_vector, (0, diff), "constant", pad_token
- )
- elif self.pad_mode == "random":
- split = diff - np.random.randint(0, diff + 1)
-
- protein_vector = torch.cat(
- [torch.ones(1, split) * 0, protein_vector], dim=1
- )
-
- protein_vector = torch.nn.functional.pad(
- protein_vector, (0, diff - split), "constant", pad_token
- )
-
- elif protein_vector.shape[-1] > self.text_seq_len:
- start_int = np.random.randint(
- 0, protein_vector.shape[-1] - self.text_seq_len
- )
-
- protein_vector = protein_vector[
- :, start_int : start_int + self.text_seq_len
- ]
-
- return protein_vector.long()
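CellLoader returns a dict of nucleus/target/threshold images plus a tokenized protein sequence, so it drops straight into a standard DataLoader. A hedged sketch, assuming the space's own packages (celle, esm optional) are importable and that the CSV is laid out the way the class expects (a "split" column, a "protein_sequence" column, and image paths relative to the CSV's directory); the path and numeric settings below are illustrative:

```python
# Hedged sketch: wiring CellLoader into a PyTorch DataLoader.
from torch.utils.data import DataLoader
from dataloader import CellLoader

train_set = CellLoader(
    data_csv="data/opencell.csv",   # hypothetical CSV
    dataset="OpenCell",
    split_key="train",
    resize=600,
    crop_size=256,
    crop_method="random",
    sequence_mode="simple",         # avoids the ESM tokenizer dependency
    text_seq_len=1000,
)

loader = DataLoader(train_set, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
print(batch["nucleus"].shape, batch["target"].shape, batch["sequence"].shape)
```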
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py
deleted file mode 100644
index f88f8f6929f5b6bdb0db470be9ebedf8fe1f752d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from fairseq.data.encoders import register_bpe
-from fairseq.data.encoders.byte_utils import (
- SPACE,
- SPACE_ESCAPE,
- byte_encode,
- smart_byte_decode,
-)
-
-
-@register_bpe("bytes")
-class Bytes(object):
- def __init__(self, *unused):
- pass
-
- @staticmethod
- def add_args(parser):
- pass
-
- @staticmethod
- def encode(x: str) -> str:
- encoded = byte_encode(x)
- escaped = encoded.replace(SPACE, SPACE_ESCAPE)
- return SPACE.join(list(escaped))
-
- @staticmethod
- def decode(x: str) -> str:
- unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
- return smart_byte_decode(unescaped)
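The byte-level scheme above simply spells a sentence out as escaped, space-separated bytes and reverses the escaping on decode, so the round trip is lossless. A small sketch, assuming fairseq (and its byte_utils helpers) is installed:

```python
# Hedged sketch: round-tripping text through the byte-level "BPE" above.
from fairseq.data.encoders.bytes import Bytes

encoded = Bytes.encode("hello world")   # escaped bytes, one per token, space-separated
decoded = Bytes.decode(encoded)
print(decoded)                          # "hello world"
```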
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/registry.py b/spaces/ICML2022/OFA/fairseq/fairseq/registry.py
deleted file mode 100644
index f3b9406043d75a51d7bf4af5294f82b33a8f9a5e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/registry.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from argparse import Namespace
-
-from typing import Union
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import merge_with_parent
-from hydra.core.config_store import ConfigStore
-from omegaconf import DictConfig
-
-REGISTRIES = {}
-
-
-def setup_registry(registry_name: str, base_class=None, default=None, required=False):
- assert registry_name.startswith("--")
- registry_name = registry_name[2:].replace("-", "_")
-
- REGISTRY = {}
- REGISTRY_CLASS_NAMES = set()
- DATACLASS_REGISTRY = {}
-
- # maintain a registry of all registries
- if registry_name in REGISTRIES:
- return # registry already exists
- REGISTRIES[registry_name] = {
- "registry": REGISTRY,
- "default": default,
- "dataclass_registry": DATACLASS_REGISTRY,
- }
-
- def build_x(cfg: Union[DictConfig, str, Namespace], *extra_args, **extra_kwargs):
- if isinstance(cfg, DictConfig):
- choice = cfg._name
-
- if choice and choice in DATACLASS_REGISTRY:
- dc = DATACLASS_REGISTRY[choice]
- cfg = merge_with_parent(dc(), cfg)
- elif isinstance(cfg, str):
- choice = cfg
- if choice in DATACLASS_REGISTRY:
- cfg = DATACLASS_REGISTRY[choice]()
- else:
- choice = getattr(cfg, registry_name, None)
- if choice in DATACLASS_REGISTRY:
- cfg = DATACLASS_REGISTRY[choice].from_namespace(cfg)
-
- if choice is None:
- if required:
- raise ValueError("{} is required!".format(registry_name))
- return None
-
- cls = REGISTRY[choice]
- if hasattr(cls, "build_" + registry_name):
- builder = getattr(cls, "build_" + registry_name)
- else:
- builder = cls
-
- return builder(cfg, *extra_args, **extra_kwargs)
-
- def register_x(name, dataclass=None):
- def register_x_cls(cls):
- if name in REGISTRY:
- raise ValueError(
- "Cannot register duplicate {} ({})".format(registry_name, name)
- )
- if cls.__name__ in REGISTRY_CLASS_NAMES:
- raise ValueError(
- "Cannot register {} with duplicate class name ({})".format(
- registry_name, cls.__name__
- )
- )
- if base_class is not None and not issubclass(cls, base_class):
- raise ValueError(
- "{} must extend {}".format(cls.__name__, base_class.__name__)
- )
-
- if dataclass is not None and not issubclass(dataclass, FairseqDataclass):
- raise ValueError(
- "Dataclass {} must extend FairseqDataclass".format(dataclass)
- )
-
- cls.__dataclass = dataclass
- if cls.__dataclass is not None:
- DATACLASS_REGISTRY[name] = cls.__dataclass
-
- cs = ConfigStore.instance()
- node = dataclass()
- node._name = name
- cs.store(name=name, group=registry_name, node=node, provider="fairseq")
-
- REGISTRY[name] = cls
-
- return cls
-
- return register_x_cls
-
- return build_x, register_x, REGISTRY, DATACLASS_REGISTRY
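setup_registry() returns a builder, a registration decorator and the underlying dicts; components are then built by name from a DictConfig, a plain string, or an argparse Namespace. A hedged usage sketch with illustrative names ("--my-component", "reverse"):

```python
# Hedged sketch of the registry pattern above: create a registry, register an
# implementation, then build it by name.
from fairseq.registry import setup_registry

build_component, register_component, REGISTRY, _ = setup_registry(
    "--my-component", base_class=None, default=None
)

@register_component("reverse")
class ReverseComponent:
    def __init__(self, cfg):
        self.cfg = cfg

    def apply(self, tokens):
        return tokens[::-1]

component = build_component("reverse")   # string config -> class lookup in REGISTRY
print(list(REGISTRY))                    # ['reverse']
print(component.apply(["a", "b", "c"]))  # ['c', 'b', 'a']
```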
diff --git a/spaces/Ibtehaj10/cheating-detection/face_detections.py b/spaces/Ibtehaj10/cheating-detection/face_detections.py
deleted file mode 100644
index e4f433997c263777b4e63b5db26a9188b98988b0..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/face_detections.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-
-protopath = "deploy.prototxt"
-modelpath = "res10_300x300_ssd_iter_140000.caffemodel"
-detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
-# Only enable it if you are using OpenVino environment
-# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
-# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-def main():
- cap = cv2.VideoCapture('test_video.mp4')
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
-
- while True:
-        ret, frame = cap.read()
-        if not ret:
-            break
-        frame = imutils.resize(frame, width=600)
- total_frames = total_frames + 1
-
- (H, W) = frame.shape[:2]
-
- face_blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0), False, False)
-
- detector.setInput(face_blob)
- face_detections = detector.forward()
-
- for i in np.arange(0, face_detections.shape[2]):
- confidence = face_detections[0, 0, i, 2]
- if confidence > 0.5:
-
- face_box = face_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
- (startX, startY, endX, endY) = face_box.astype("int")
-
- cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2)
-
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- cv2.imshow("Application", frame)
- key = cv2.waitKey(1)
- if key == ord('q'):
- break
-
-    cap.release()
-    cv2.destroyAllWindows()
-
-
-main()
diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
deleted file mode 100644
index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000
--- a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class HarvestF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour over unvoiced (zero) frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
-            fs=self.sampling_rate,
- f0_ceil=self.f0_max,
- f0_floor=self.f0_min,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
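With the sampling-rate arguments corrected above, the predictor can be driven directly from a waveform: Harvest produces a frame-level F0 track, stonemask refines it, and interpolate_f0 fills the unvoiced gaps. A hedged sketch, assuming pyworld and soundfile are available; the wav path and hop length are illustrative:

```python
# Hedged sketch: F0 estimation for one mono utterance with HarvestF0Predictor.
import soundfile as sf
from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor

wav, sr = sf.read("speech.wav")                 # hypothetical mono waveform
predictor = HarvestF0Predictor(hop_length=160, f0_min=50, f0_max=1100, sampling_rate=sr)

f0 = predictor.compute_f0(wav)                  # interpolated F0, one value per hop
f0_uv, vuv = predictor.compute_f0_uv(wav)       # interpolated F0 plus a voiced/unvoiced mask
print(f0.shape, vuv.mean())
```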
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx
deleted file mode 100644
index 59b80d06d6779c4681b9a89fec14d22c0c53872b..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-// Copyright (c) Meta Platforms, Inc. and affiliates.
-// All rights reserved.
-
-// This source code is licensed under the license found in the
-// LICENSE file in the root directory of this source tree.
-
-import { Tensor } from "onnxruntime-web";
-
-export interface modelScaleProps {
- samScale: number;
- height: number;
- width: number;
-}
-
-export interface modelInputProps {
- x: number;
- y: number;
- clickType: number;
-}
-
-export interface modeDataProps {
-  clicks?: Array<modelInputProps>;
- tensor: Tensor;
- modelScale: modelScaleProps;
-}
-
-export interface ToolProps {
- handleMouseMove: (e: any) => void;
-}
diff --git a/spaces/Inthv/NER/README.md b/spaces/Inthv/NER/README.md
deleted file mode 100644
index 7ba4b8ee2e401fa4f56e8e15b2298062b6dfdfaf..0000000000000000000000000000000000000000
--- a/spaces/Inthv/NER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NER
-emoji: 👁
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IoMa/diffusers-gallery/README.md b/spaces/IoMa/diffusers-gallery/README.md
deleted file mode 100644
index ff1cbb6ee8e12c3a15d98730f50873db96260bad..0000000000000000000000000000000000000000
--- a/spaces/IoMa/diffusers-gallery/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Diffusers Gallery
-emoji: 🖼️
-colorFrom: red
-colorTo: green
-sdk: static
-app_port: 8080
-fullWidth: true
-pinned: false
-license: mit
-duplicated_from: huggingface-projects/diffusers-gallery
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/setup.py b/spaces/Jacks2003/3D_Photo_Inpainting/setup.py
deleted file mode 100644
index eddf6368ade3f8877d3eb6148157796c22066958..0000000000000000000000000000000000000000
--- a/spaces/Jacks2003/3D_Photo_Inpainting/setup.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from setuptools import setup
-
-setup(
- name='cynetworkx_workaround',
- version='1.0',
- description='A useful module',
- install_requires=['cynetworkx'], #external packages as dependencies
-)
\ No newline at end of file
diff --git a/spaces/Jason1112/ML-GUI/README.md b/spaces/Jason1112/ML-GUI/README.md
deleted file mode 100644
index dc7ae8c89f0a34231e41f681ef9f7d890fc07cd5..0000000000000000000000000000000000000000
--- a/spaces/Jason1112/ML-GUI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ML GUI
-emoji: 🐠
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JayKen/YSF-External-Testing/app.py b/spaces/JayKen/YSF-External-Testing/app.py
deleted file mode 100644
index 482562e452528720139353aea163fb6820596c95..0000000000000000000000000000000000000000
--- a/spaces/JayKen/YSF-External-Testing/app.py
+++ /dev/null
@@ -1,716 +0,0 @@
-import gradio as gr
-import openai
-from pathlib import Path
-import requests
-import json
-from huggingface_hub import login
-from datasets import load_dataset, Dataset
-import os
-from datetime import date
-from PIL import Image
-from io import BytesIO
-import random
-import html2text
-import re
-from markdown import markdown
-
-myopenaikey = os.getenv('openaiapi')
-openai.api_key = myopenaikey
-os.environ["OPENAI_API_KEY"] = myopenaikey
-login(os.environ.get("HF_TOKEN"))
-
-API_URL_LLAMA2 = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf"
-headers_llama2 = {"Authorization": "Bearer "+ os.environ.get("Vincenzo_key") }
-
-
-def scrape_page_content(my_company, my_company_url, my_company_page):
-
- desc = get_company_info(my_company_url)
- if len(desc) >= 500:
- prompt = "Could you summarize the following Company description in 2-3 sentences: {}".format(desc)
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt=prompt,
- max_tokens=100,
- )
-
- desc = output.choices[0].text
-
- try:
- response = requests.get(my_company_page)
- response.raise_for_status()
- html_conv = html2text.HTML2Text()
- html_conv.ignore_links = True
- html_conv.escape_all = True
- content = html_conv.handle(response.text)
-
- ratio1 = 0.1
- ratio2 = 0.1
- for i in range(0,10):
- if len(content[round(len(content) * ratio1): len(content) - round(len(content) * ratio2)]) < 10000:
- break
-
- if i%2 == 0:
- ratio1 = ratio1 + 0.1
- else:
- ratio2 = ratio2 + 0.1
-
- content = content[round(len(content) * ratio1): len(content) - round(len(content) * ratio2)]
-
- prompt = "Could you summarize the following scraped content from the webpage and write the company value proposition?\n\n{}".format(content)
-
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=250
- )
-
- return desc, completion.choices[0].message['content']
-
- except Exception as e:
- return desc, f"Error while scraping content: {e}"
-
-
-def get_unsplash_images(keywords):
-
- url = "https://api.unsplash.com/search/photos"
-
- params = {
- "query": keywords,
- "client_id": os.environ.get("unsplash"),
- "page": 1,
- "per_page": 6
- }
-
- response = requests.get(url, params=params)
-
- if response.status_code == 200:
- data = response.json()
-
- image_urls = [photo["urls"]["regular"] for photo in data["results"]]
-
-
- image_contents = []
- for x in image_urls:
- response_2 = requests.get(x)
- image_contents.append(Image.open(BytesIO(response_2.content)))
-
- return image_contents
- else:
- return None
-
-
-def extract_linkedIn_profile(link):
- api_endpoint = 'https://nubela.co/proxycurl/api/v2/linkedin'
- api_key = os.environ.get("proxyapi")
- header_dic = {'Authorization': 'Bearer ' + api_key}
- params = {
- 'url': link,
- 'fallback_to_cache': 'on-error',
- 'use_cache': 'if-present',
- 'skills': 'include',
- 'inferred_salary': 'include',
- 'personal_email': 'include',
- 'personal_contact_number': 'include',
- 'twitter_profile_id': 'include',
- 'facebook_profile_id': 'include',
- 'github_profile_id': 'include',
- 'extra': 'include',
- }
- response = requests.get(api_endpoint, params=params, headers=header_dic)
-
- return response
-
-
-def get_linkedin_profile(user):
- result = '''full_name: {}\noccupation: {}\nheadline: {}\nsummary: {}\ncountry: {}\ncity: {}\nlanguages: {}\ngender: {}\nbirth_date: {}\nindustry: {}\ninterests: {}\nskills: {}\n'''.format(user['full_name'], user['occupation'], user['headline'], user['summary'], user['country'], user['city'], ', '.join(user['languages']), user['gender'], user['birth_date'], user['industry'], ', '.join(user['interests']), ', '.join(user['skills']))
-
- if user['experiences']:
- result += '''experiences:\n'''
- for i,x in enumerate(user['experiences']):
- if i == 0:
- mycompany = x['company']
-
- if x["ends_at"] is None:
- ends_at = 'None'
- else:
- ends_at = 'Date'
- result += '''company: {}\ntitle: {}\ndescription: {}\nlocation: {}\n'''.format(x['company'], x['title'], x['description'], x['location'], x['company_linkedin_profile_url'], ends_at)
-
- if user['education']:
- result += '''education:\n'''
- for x in user['education']:
- result += '''field_of_study: {}\ndegree_name: {}\nschool: {}\n'''.format(x['field_of_study'], x['degree_name'], x['school'])
-
-
- companies = []
- for x in user['experiences']:
- if x["ends_at"] is None:
- companies.append((x['title'], x['company'], x['company_linkedin_profile_url']))
-
-
- return result, user['full_name'], companies
-
-
-def get_linkedin_profile_summary(link):
-
- dataset = load_dataset("JayKen/YSF-linkedIn", split="train")
-
- if link in dataset['users']:
- pos = dataset['users'].index(link)
- user_data = json.loads(dataset['info'][pos])
- user, name, companies = get_linkedin_profile(user_data)
-
- else:
- response = extract_linkedIn_profile(link)
- user, name, companies = get_linkedin_profile(response.json())
-
- json_string = json.dumps(response.json())
-
- dataset = Dataset.from_dict({'users': dataset['users']+[link],
- 'info': dataset['info']+[json_string]})
-
- dataset.push_to_hub("JayKen/YSF-linkedIn")
-
- if len(user) >= 5000:
- user = user.split('experiences')[0]
-
- prompt = 'The following is LinkedIn profile of a person: {}.\n\n\nCould you highlight most important personality traits of this person in points. List each trait and then further elaborate why you choose it?'.format(user)
-
- result = openai.Completion.create(engine="text-davinci-003", prompt=prompt, max_tokens=400)
-
- return name, companies, result.choices[0].text
-
-
-def get_aud_profile(link):
-
- global AUD_PRO_SUM
- global AUD_NAME
- global AUD_companies
-
- AUD_NAME, AUD_companies, AUD_PRO_SUM = get_linkedin_profile_summary(link)
-
- job_titles = []
- for x in AUD_companies:
- job_titles.append(x[0]+', '+x[1])
-
- return '\n'.join(job_titles), AUD_PRO_SUM, ''
-
-
-def set_audience_occ(occupation, company_url):
- global AUD_OCC
- global AUD_COM
-
-    url = ''
-    for x in AUD_companies:
- if x[0]+', '+x[1] == occupation:
- AUD_OCC = x[0]
- AUD_COM = x[1]
- url = x[2]
- break
-
- if url:
- company_url = url
-
- if company_url:
- desc = get_company_info(company_url)
- if len(desc) >= 500:
- prompt = "Could you summarize the following Company description in 2-3 sentences: {}".format(desc)
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt=prompt,
- max_tokens=100,
- )
-
- desc = output.choices[0].text
- else:
- desc = "Unable to fetch company description. Please enter the company linkedIn url."
- company_url = ''
-
- return AUD_OCC + '\n' + AUD_COM + '\n' + company_url, desc
-
-
-def get_company_info(link):
-
- dataset = load_dataset("JayKen/ysf-company-profile", split="train")
-
- if link in dataset['url']:
- pos = dataset['url'].index(link)
- company_info = dataset['aboutus'][pos]
- else:
- headers = {'Authorization': 'Bearer ' + os.environ.get("proxyapi")}
- api_endpoint = 'https://nubela.co/proxycurl/api/linkedin/company'
- params = {
- 'url': link,
- }
- response = requests.get(api_endpoint,
- params=params,
- headers=headers)
-
- company_info = response.json()['description']
-
- dataset = Dataset.from_dict({'url': dataset['url']+[link],
- 'aboutus': dataset['aboutus']+[company_info]})
-
- dataset.push_to_hub("JayKen/ysf-company-profile")
-
- return company_info
-
-
-
-def set_language(language):
- global LANG
-
- if language == "English (default)":
- LANG = ''
- else:
- LANG = language
-
-
-def get_subject(brief, cta, relation, language):
-
- set_language(language)
-
- prompt = "Here is the brief about my meeting: {}".format(brief)
- prompt += "\nAfter the meeting I want my {} to: {}".format(relation, cta)
- prompt += "\nCombine this and rewrite it for me. \n\nI will combine and rewrite the brief for you: "
-
- global CTA
- CTA = cta
-
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=225
- )
-
- global SUBJECT
- SUBJECT = completion.choices[0].message['content']
-
- if len(LANG):
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt="Translate the following English text to " + LANG + ": \n\n" + SUBJECT,
- max_tokens=225,
- )
-
- translation = output.choices[0].text
- else:
- translation = ''
-
- return prompt, SUBJECT, translation
-
-
-def get_concern(my_company, my_company_desc, relation, company_desc):
-
- prompt = "I will have a high-stake conversation with {}. {}. \n\nI work for {}. {}.\n\nI will have a presentation with {} from {} who has the following personality: {}\n\nThe presentation subject is: {}.\n\nWhat could be my {}'s, {} and {} concerns regarding only to {}.\n\nGive me 5-10 points".format(AUD_COM, company_desc, my_company, my_company_desc, AUD_NAME, AUD_COM, AUD_PRO_SUM, SUBJECT, relation, AUD_COM, AUD_NAME, CTA)
-
- completion = openai.ChatCompletion.create(
- model="gpt-4",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=500
- )
-
- return prompt, completion.choices[0].message['content']
-
-
-def get_concern_step1(my_company, my_company_desc, relation, company_desc):
-
- prompt = "I will have a high-stake conversation with {}. {}. \n\nI work for {}. {}.\n\nThe presentation subject is: {}.\n\nWhat could be {}'s concerns regarding only to {}.\n\nGive me 5-10 points".format(AUD_COM, company_desc, my_company, my_company_desc, SUBJECT, AUD_COM, CTA)
-
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=500
- )
-
- global CONCERNS1
- CONCERNS1 = prompt + completion.choices[0].message['content']
-
- return prompt, completion.choices[0].message['content']
-
-
-def get_concern_step2(relation):
-
- prompt = "I will have a high-stake conversation with {}, {}, who has the following personality: {}.\n\nFrom the above list of concerns show me only the list of concerns that are relevant to {}. ".format(AUD_NAME, relation, AUD_PRO_SUM, AUD_NAME)
-
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "system", "content": CONCERNS1},
- {"role": "user", "content": prompt}
- ],
- max_tokens=500
- )
-
- if len(LANG):
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'],
- max_tokens=500,
- )
-
- translation = output.choices[0].text
- else:
- translation = ''
-
- return prompt, completion.choices[0].message['content'], translation
-
-
-def get_andrew(relation, select_concern):
-
- prompt = "The main presentation subject I want to deliver to my {} is about {}. My {} is {}, who has the following personality: {}\n\nGive me a list of emotional rewards for {} when addressing {}. Focus on the emotional side. Consider {}'s main concerns:\n\n{}\nSort them from the most relevant to the least one.".format(relation, SUBJECT, relation, AUD_OCC, AUD_PRO_SUM, AUD_NAME, CTA, AUD_NAME, select_concern)
-
- completion = openai.ChatCompletion.create(
- model="gpt-4",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=400
- )
-
- if len(LANG):
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'],
- max_tokens=400,
- )
-
- translation = output.choices[0].text
- else:
- translation = ''
-
- return prompt, completion.choices[0].message['content'], translation
-
-
-def get_andrew_two(my_company, value_prop, relation, aud_com_desc):
-
- prompt = "I work for {} and its value proposition is: {}\n\nI am eager to explore the gains that my {}, {}, who holds the position of {} at {}: {}, can gain in the context of {}\n\nPlease outline the gains.".format(my_company, value_prop, relation, AUD_NAME, AUD_OCC, AUD_COM, aud_com_desc, CTA)
-
- completion = openai.ChatCompletion.create(
- model="gpt-4",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=400
- )
-
- if len(LANG):
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'],
- max_tokens=400,
- )
-
- translation = output.choices[0].text
- else:
- translation = ''
-
- return prompt, completion.choices[0].message['content'], translation
-
-
-def get_andrew_three(relation, s1, s2):
-
- prompt = "I am in the process of engaging my {} who holds the position of {}, to take concrete actions aligned with the presented topic: {}. I would greatly appreciate your expertise in crafting three distinct and compelling narratives that I can keep in mind when meeting my {}- {}, during a meeting. Each narrative must consider all the following points in the same narrative:\n\n1) {}.\n\n2) {}.\n\nKeep each narrative to 3-5 sentences. Don't address the audience. Use the following format Narrative 1: [write narrative]\nNarrative 2: [write narrative]\nNarrative 3: [write narrative].".format(relation, AUD_OCC, SUBJECT, relation, AUD_NAME, s1, s2)
-
- completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": prompt}
- ],
- max_tokens=400
- )
-
- if len(LANG):
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'],
- max_tokens=400,
- )
-
- translation = output.choices[0].text
- else:
- translation = ''
-
- return prompt, completion.choices[0].message['content'], translation
-
-
-def create_slides2(narrative, concerns, wins, gains):
-
-    combined_prompt = "Can you create a business PPT presentation following the Hero Journey steps:\n\nOrdinary World: Describe the customer's current situation or pain point.\nCall to Adventure: Introduce the possibility of a solution (your product/service).\nRefusal of the Call: Address common objections or misconceptions.\nMeeting with the Mentor: Position your company as the guide.\nCrossing the First Threshold: Highlight first steps to engage with your solution.\nTests, Allies, Enemies: Discuss case studies, user stories, and support.\nApproach to the Inmost Cave: Dive into the features and benefits.\nOrdeal: Address main objections and how your solution tackles them.\nReward: Emphasize the value and benefits of your solution.\nThe Road Back: Guide on the next steps or integration.\nResurrection: Reiterate the transformation promised.\nReturn with the Elixir: Call to action.\n\nMy storyline is:{}\n\nSome background information is: {}\n\n{}'s concerns are: {}\n\n{}'s emotional rewards are: {}\n\n{}'s gains are: {}\n\nGenerate all 12 steps with 12 slides. Don't use presentation jargons. Keep the bullet points as succinct phrases. Use the following format when generating slides:\nTitle: \n<3-5 bullet points>\nExplanation: \n".format(narrative, SUBJECT, AUD_NAME, concerns, AUD_NAME, wins, AUD_NAME, gains)
-
- completion = openai.ChatCompletion.create(
- model="gpt-4",
- messages=[
- {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."},
- {"role": "user", "content": combined_prompt}
- ],
- max_tokens=2500
- )
-
- return combined_prompt, completion.choices[0].message['content']
-
-
-def get_profile_element(relation):
-
- prompt = "I have a meeting with my {} to deliver my presenation on/about : {}.".format(relation, SUBJECT)
-
-    prompt += "His/her personality is: {}.\n\nBased on my presentation subject, what are the two most important elements in his/her personality for me to know/consider? Give me one point in 1-2 sentences max.".format(AUD_PRO_SUM)
-
- output = openai.Completion.create(
- model="gpt-3.5-turbo-instruct",
- prompt=prompt,
- max_tokens=150,
- )
-
- return prompt, output.choices[0].text
-
-
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("# Your YSF 😄 Personal Coach! Version 1.3")
- with gr.Tabs():
- with gr.TabItem("Presentation Subject"):
- with gr.Row():
- with gr.Column():
- my_company = gr.Textbox(label="My Company Name")
- my_company_url = gr.Textbox(label="My Company linkedIn url")
- my_company_page = gr.Textbox(label="My company page url")
- with gr.Row():
- with gr.Column():
- button_my_company = gr.Button("Submit", variant="primary")
- with gr.Column():
- output_my_company_desc = gr.components.Textbox(label="My Company description summarised")
- output_my_company_value = gr.components.Textbox(label="My Company Value proposition")
-
- gr.Markdown("## Examples")
- gr.Examples(
- [["Your Speech Factory", "https://www.linkedin.com/company/yourspeechfactory/", "https://www.yourspeechfactory.com/"],
- ["Dell Technologies", "https://www.linkedin.com/company/delltechnologies/", "https://www.dell.com/en-us/dt/storage/powerflex.htm#tab0=0"]],
- [my_company, my_company_url, my_company_page],
- [output_my_company_desc, output_my_company_value],
- scrape_page_content,
- cache_examples=True,
- )
-
- with gr.Row():
- with gr.Column():
- language = gr.Dropdown(["English (default)", "Swedish", "Norwegian", "Italian"], label="Select Translation language")
- input_relation = gr.Textbox(label="Your relationship with audience your presenting to")
- input_brief = gr.Textbox(label="Brief me on the situation and/or meeting focus")
- input_cta = gr.Textbox(label="Call to action")
- with gr.Row():
- with gr.Column():
- button_subject = gr.Button("Submit", variant="primary")
- with gr.Column():
- output_subject_prompt = gr.components.Textbox(label="Input prompt")
- output_subject = gr.components.Textbox(label="Output presentation subject")
- output_subject_translate = gr.components.Textbox(label="Translation")
-
- gr.Markdown("## Examples")
- gr.Examples(
- [["I'm presenting Your Speech Factory as a business case for a potential equity investment", "Make an equity investment of 200 000 EUR in our upcoming fundraising round", "potential investor", "English (default)"],
- ["I will have sales meeting to discuss datacenter strategy with BNY Investment", "agree to a demo running Powerflex in AWS", "Client", "English (default)"]],
- [input_brief, input_cta, input_relation, language],
- [output_subject_prompt, output_subject, output_subject_translate],
- get_subject,
- cache_examples=True,
- )
-
-
- with gr.TabItem("Set Audience"):
- with gr.Row():
- with gr.Column():
- input_audience_linkedin_url = gr.Textbox(label="Audience LinkedIn URL")
- with gr.Row():
- with gr.Column():
- button_audience_linkedin_url = gr.Button("Get Profile Summary", variant="primary")
- with gr.Column():
- output_audience_role = gr.components.Textbox(label="Audience Job titles")
- output_audience_summary = gr.components.Textbox(label="Audience Profile summary")
- output_bullets_translate = gr.components.Textbox(label="Translation")
-
- gr.Markdown("## Examples")
- gr.Examples(
- [["https://www.linkedin.com/in/helge-onsum-47704919/"], ["https://www.linkedin.com/in/sean-traverse-0394a714/"]],
- [input_audience_linkedin_url],
- [output_audience_role, output_audience_summary, output_bullets_translate],
- get_aud_profile,
- cache_examples=True,
- )
-
- with gr.Row():
- with gr.Column():
- input_audience_occ = gr.Textbox(label="Select Audience Occupation")
- aud_company_url = gr.Textbox(label="Enter Company LinkedIn url (optional)")
- with gr.Row():
- with gr.Column():
- button_audience_occ = gr.Button("Set Audience Occupation", variant="primary")
- with gr.Column():
- output_audience_occ = gr.components.Textbox(label="Job Title Set")
- output_company_desc = gr.components.Textbox(label="Audience Company description summarised")
-
-
- with gr.TabItem("Concerns"):
- with gr.Row():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- button_concern = gr.Button("Generate concerns", variant="primary")
- with gr.Column():
- output_concern_prompt = gr.components.Textbox(label="Input Prompt")
- output_concern = gr.components.Textbox(label="Output")
-
-
- with gr.TabItem("2 Step Concerns"):
- with gr.Row():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- button_concern_step1 = gr.Button("Generate concerns step 1", variant="primary")
- with gr.Column():
- output_concern_step1_prompt = gr.components.Textbox(label="Input Prompt")
- output_concern_step1 = gr.components.Textbox(label="Output")
- with gr.Row():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- button_concern_step2 = gr.Button("Generate concerns step 2", variant="primary")
- with gr.Column():
- output_concern_step2_prompt = gr.components.Textbox(label="Input Prompt")
- output_concern_step2 = gr.components.Textbox(label="Output")
- output_concern_step2_translate = gr.components.Textbox(label="Translation")
-
-
- with gr.TabItem("Personal Win"):
- with gr.Row():
- with gr.Column():
- input_concerns_two = gr.Textbox(label="Selected Concerns")
- with gr.Row():
- with gr.Column():
- button_andrew = gr.Button("Generate rewards", variant="primary")
- with gr.Column():
- output_andrew_prompt = gr.components.Textbox(label="Input Prompt")
- output_andrew = gr.components.Textbox(label="Your Output")
- output_wins_translate = gr.components.Textbox(label="Translation")
-
-
- with gr.TabItem("Gains"):
- with gr.Row():
- with gr.Column():
- input_value_prop = gr.Textbox(label="Enter your company value proposition")
- with gr.Row():
- with gr.Column():
- button_andrew_two = gr.Button("Generate gains", variant="primary")
- with gr.Column():
- output_andrew_prompt_two = gr.components.Textbox(label="Input Prompt")
- output_andrew_two = gr.components.Textbox(label="Your Output")
- output_gains_translate = gr.components.Textbox(label="Translation")
-
- with gr.TabItem("Narrative"):
- with gr.Row():
- with gr.Column():
- input_one_three = gr.Textbox(label="Personal Wins")
- input_two_three = gr.Textbox(label="Gains")
- with gr.Row():
- with gr.Column():
- button_andrew_three = gr.Button("Generate Narratives", variant="primary")
- with gr.Column():
- output_andrew_prompt_three = gr.components.Textbox(label="Input Prompt")
- output_andrew_three = gr.components.Textbox(label="Output Narrative")
- output_narrative_translate = gr.components.Textbox(label="Translation")
-
-
- with gr.TabItem("Slides"):
- with gr.Row():
- with gr.Column():
- input_narrative = gr.Textbox(label="Input Narrative")
- with gr.Row():
- with gr.Column():
- button_slides_concerns = gr.Button("Generate slides", variant="primary")
- with gr.Column():
- output_slides_prompt = gr.components.Textbox(label="Input prompt")
- output_slides = gr.components.Textbox(label="Output presentation slides")
-
- with gr.Row():
- with gr.Column():
- input_keyword = gr.Textbox(lines=1, label="Input Keywords")
- with gr.Row():
- with gr.Column():
- button_unsplash = gr.Button("Generate Slide Images", variant="primary")
- with gr.Column():
- output_unsplash = gr.Gallery(label="Found relevant Images", show_label=False,
- elem_id="gallery", columns=[2], rows=[2], object_fit="contain", height="auto")
-
-
- with gr.TabItem("Personality element"):
- with gr.Row():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- button_element = gr.Button("Extract personality", variant="primary")
- with gr.Column():
- output_element_prompt = gr.components.Textbox(label="Input Prompt")
- output_element = gr.components.Textbox(label="Most Important personality highlight")
-
-
-
- button_my_company.click(scrape_page_content, inputs=[my_company, my_company_url, my_company_page],
- outputs=[output_my_company_desc, output_my_company_value])
-
- button_subject.click(get_subject, inputs=[input_brief, input_cta, input_relation, language],
- outputs=[output_subject_prompt, output_subject, output_subject_translate])
-
- button_audience_linkedin_url.click(get_aud_profile, inputs=[input_audience_linkedin_url],
- outputs=[output_audience_role, output_audience_summary, output_bullets_translate])
-
- button_audience_occ.click(set_audience_occ, inputs=[input_audience_occ, aud_company_url],
- outputs=[output_audience_occ, output_company_desc])
-
- button_concern.click(get_concern, inputs=[my_company, output_my_company_desc, input_relation, output_company_desc],
- outputs=[output_concern_prompt, output_concern])
-
- button_concern_step1.click(get_concern_step1, inputs=[my_company, output_my_company_desc, input_relation, output_company_desc],
- outputs=[output_concern_step1_prompt, output_concern_step1])
-
- button_concern_step2.click(get_concern_step2, inputs=[input_relation],
- outputs=[output_concern_step2_prompt, output_concern_step2, output_concern_step2_translate])
-
- button_andrew.click(get_andrew, inputs=[input_relation, input_concerns_two],
- outputs=[output_andrew_prompt, output_andrew, output_wins_translate])
-
- button_andrew_two.click(get_andrew_two, inputs=[my_company, input_value_prop, input_relation, output_company_desc],
- outputs=[output_andrew_prompt_two, output_andrew_two, output_gains_translate])
-
- button_andrew_three.click(get_andrew_three, inputs=[input_relation, input_one_three, input_two_three],
- outputs=[output_andrew_prompt_three, output_andrew_three, output_narrative_translate])
-
- button_slides_concerns.click(create_slides2, inputs=[input_narrative, input_concerns_two, input_one_three, input_two_three],
- outputs=[output_slides_prompt, output_slides])
-
- button_unsplash.click(get_unsplash_images, inputs=[input_keyword], outputs=[output_unsplash])
-
- button_element.click(get_profile_element, inputs=[input_relation], outputs=[output_element_prompt, output_element])
-
-demo.launch()
\ No newline at end of file
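One reusable piece of this app is the trimming loop in scrape_page_content(), which alternately shaves 10% off the head and the tail of the scraped page until the remainder fits a rough character budget for the summarization prompt. A standalone sketch of that logic; the budget default mirrors the hard-coded 10000-character check above, but the helper name itself is hypothetical:

```python
# Hedged sketch of the head/tail trimming strategy used in scrape_page_content().
def trim_to_budget(content: str, budget: int = 10000) -> str:
    ratio1, ratio2 = 0.1, 0.1
    for i in range(10):
        trimmed = content[round(len(content) * ratio1): len(content) - round(len(content) * ratio2)]
        if len(trimmed) < budget:
            return trimmed
        if i % 2 == 0:
            ratio1 += 0.1          # next pass cuts more from the head
        else:
            ratio2 += 0.1          # then more from the tail
    return trimmed

print(len(trim_to_budget("x" * 50000)))
```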
diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py
deleted file mode 100644
index 96b79c9ec78d4fa3f7d32d1de2c6c5c1274f320e..0000000000000000000000000000000000000000
--- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import cv2
-import numpy as np
-from PIL import Image
-import pickle
-import tensorflow as tf
-
-class spiderSVM:
- def __init__(self,url) -> None:
- self.image = url
-
- def predict_image(self):
- # Load the model
- load_extractor = tf.keras.models.load_model("././Model/Spider/resnetxSVM/resnet_EXTRACTOR.h5")
-
- modelpath = "././Model/Spider/resnetxSVM/dataSaved.pkl"
-
- with open(modelpath, 'rb') as file:
- saved_data = pickle.load(file)
- animal_breed = saved_data['class_name']
- model = saved_data['svm_model']
-
- im = Image.open(self.image)
- img = im.convert("RGB")
- img= np.asarray(img)
- image_resized= cv2.resize(img, (224,224))
- features = load_extractor.predict(np.expand_dims(image_resized, axis=0))
-
- reshaped_features = features.reshape(features.shape[0],-1)
- predicted_class = model.predict(reshaped_features)
- pred_prob = model.predict_proba(reshaped_features)
- prediction_probability = pred_prob[0][predicted_class[0]]
-
- output_class= animal_breed[predicted_class[0]]
-
- return [output_class, prediction_probability]
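The wrapper above chains a Keras feature extractor with a pickled scikit-learn SVM: ResNet features are flattened and passed to predict/predict_proba. A hedged usage sketch, assuming the model files exist at the hard-coded paths inside the class; the input image path is illustrative:

```python
# Hedged sketch: classify one image with the spiderSVM wrapper above.
from Control.Spider.con_spider_SVM import spiderSVM

classifier = spiderSVM("samples/spider.jpg")        # hypothetical image path
breed, probability = classifier.predict_image()
print(f"{breed}: {probability:.2%}")
```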
diff --git a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py b/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py
deleted file mode 100644
index 5d3e4e78e97ab27a97c198dfee4df3d0051971f0..0000000000000000000000000000000000000000
--- a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from typing import Optional
-
-
-class ColorPalette:
- """Color Palette Container."""
- all = []
-
- def __init__(
- self,
- c50: str,
- c100: str,
- c200: str,
- c300: str,
- c400: str,
- c500: str,
- c600: str,
- c700: str,
- c800: str,
- c900: str,
- c950: str,
- name: Optional[str] = None,
- ):
- self.c50 = c50
- self.c100 = c100
- self.c200 = c200
- self.c300 = c300
- self.c400 = c400
- self.c500 = c500
- self.c600 = c600
- self.c700 = c700
- self.c800 = c800
- self.c900 = c900
- self.c950 = c950
- self.name = name
- ColorPalette.all.append(self)
-
-
-KALBE_THEME_COLOR = ColorPalette(
- name='kalbe',
- c50='#f2f9e8',
- c100='#dff3c4',
- c200='#c2e78d',
- c300='#9fd862',
- c400='#7fc93f',
- c500='#3F831C',
- c600='#31661a',
- c700='#244c13',
- c800='#18340c',
- c900='#0c1b06',
- c950='#050a02',
-)
\ No newline at end of file
diff --git a/spaces/KashiwaByte/SparkDebate-V2.0/demo.py b/spaces/KashiwaByte/SparkDebate-V2.0/demo.py
deleted file mode 100644
index 4e20e6b848c3a6ceae078d7e81f6d3206cd5651a..0000000000000000000000000000000000000000
--- a/spaces/KashiwaByte/SparkDebate-V2.0/demo.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from utils.API import SparkAPI
-app_id = input("app_id here :")
-api_key = input("api_key here :")
-api_secret = input("api_secret here :")
-bot = SparkAPI(app_id=app_id ,api_key=api_key ,api_secret=api_secret)
-bot.chat_stream()
\ No newline at end of file
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py
deleted file mode 100644
index 17b33b271ec40cfc78db9e96bd54f44dd90ec844..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from utils.display import *
-from utils.dsp import *
-
-
-class WaveRNN(nn.Module) :
- def __init__(self, hidden_size=896, quantisation=256) :
- super(WaveRNN, self).__init__()
-
- self.hidden_size = hidden_size
- self.split_size = hidden_size // 2
-
- # The main matmul
- self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False)
-
- # Output fc layers
- self.O1 = nn.Linear(self.split_size, self.split_size)
- self.O2 = nn.Linear(self.split_size, quantisation)
- self.O3 = nn.Linear(self.split_size, self.split_size)
- self.O4 = nn.Linear(self.split_size, quantisation)
-
- # Input fc layers
- self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False)
- self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False)
-
- # biases for the gates
- self.bias_u = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_r = nn.Parameter(torch.zeros(self.hidden_size))
- self.bias_e = nn.Parameter(torch.zeros(self.hidden_size))
-
- # display num params
- self.num_params()
-
-
- def forward(self, prev_y, prev_hidden, current_coarse) :
-
- # Main matmul - the projection is split 3 ways
- R_hidden = self.R(prev_hidden)
- R_u, R_r, R_e, = torch.split(R_hidden, self.hidden_size, dim=1)
-
- # Project the prev input
- coarse_input_proj = self.I_coarse(prev_y)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project the prev input and current coarse sample
- fine_input = torch.cat([prev_y, current_coarse], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # concatenate for the gates
- I_u = torch.cat([I_coarse_u, I_fine_u], dim=1)
- I_r = torch.cat([I_coarse_r, I_fine_r], dim=1)
- I_e = torch.cat([I_coarse_e, I_fine_e], dim=1)
-
- # Compute all gates for coarse and fine
- u = F.sigmoid(R_u + I_u + self.bias_u)
- r = F.sigmoid(R_r + I_r + self.bias_r)
- e = torch.tanh(r * R_e + I_e + self.bias_e)
- hidden = u * prev_hidden + (1. - u) * e
-
- # Split the hidden state
- hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1)
-
- # Compute outputs
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
-
- return out_coarse, out_fine, hidden
-
-
- def generate(self, seq_len):
- with torch.no_grad():
- # First split up the biases for the gates
- b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size)
- b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size)
- b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size)
-
- # Lists for the two output seqs
- c_outputs, f_outputs = [], []
-
- # Some initial inputs
- out_coarse = torch.LongTensor([0]).cuda()
- out_fine = torch.LongTensor([0]).cuda()
-
-            # We'll need a hidden state
- hidden = self.init_hidden()
-
- # Need a clock for display
- start = time.time()
-
- # Loop for generation
- for i in range(seq_len) :
-
- # Split into two hidden states
- hidden_coarse, hidden_fine = \
- torch.split(hidden, self.split_size, dim=1)
-
- # Scale and concat previous predictions
- out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1.
- out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1.
- prev_outputs = torch.cat([out_coarse, out_fine], dim=1)
-
- # Project input
- coarse_input_proj = self.I_coarse(prev_outputs)
- I_coarse_u, I_coarse_r, I_coarse_e = \
- torch.split(coarse_input_proj, self.split_size, dim=1)
-
- # Project hidden state and split 6 ways
- R_hidden = self.R(hidden)
- R_coarse_u , R_fine_u, \
- R_coarse_r, R_fine_r, \
- R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1)
-
- # Compute the coarse gates
- u = F.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u)
- r = F.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r)
- e = torch.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e)
- hidden_coarse = u * hidden_coarse + (1. - u) * e
-
- # Compute the coarse output
- out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
- posterior = F.softmax(out_coarse, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_coarse = distrib.sample()
- c_outputs.append(out_coarse)
-
- # Project the [prev outputs and predicted coarse sample]
- coarse_pred = out_coarse.float() / 127.5 - 1.
- fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1)
- fine_input_proj = self.I_fine(fine_input)
- I_fine_u, I_fine_r, I_fine_e = \
- torch.split(fine_input_proj, self.split_size, dim=1)
-
- # Compute the fine gates
- u = F.sigmoid(R_fine_u + I_fine_u + b_fine_u)
- r = F.sigmoid(R_fine_r + I_fine_r + b_fine_r)
- e = torch.tanh(r * R_fine_e + I_fine_e + b_fine_e)
- hidden_fine = u * hidden_fine + (1. - u) * e
-
- # Compute the fine output
- out_fine = self.O4(F.relu(self.O3(hidden_fine)))
- posterior = F.softmax(out_fine, dim=1)
- distrib = torch.distributions.Categorical(posterior)
- out_fine = distrib.sample()
- f_outputs.append(out_fine)
-
- # Put the hidden state back together
- hidden = torch.cat([hidden_coarse, hidden_fine], dim=1)
-
- # Display progress
- speed = (i + 1) / (time.time() - start)
- stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed))
-
- coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy()
- fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy()
- output = combine_signal(coarse, fine)
-
- return output, coarse, fine
-
- def init_hidden(self, batch_size=1) :
- return torch.zeros(batch_size, self.hidden_size).cuda()
-
- def num_params(self) :
- parameters = filter(lambda p: p.requires_grad, self.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- print('Trainable Parameters: %.3f million' % parameters)
\ No newline at end of file
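
The deleted WaveRNN above follows the dual-softmax scheme: each 16-bit sample is modelled as an 8-bit coarse part and an 8-bit fine part, which is why the generate loop rescales both with `x / 127.5 - 1.` and finally merges them via `combine_signal`. A minimal sketch of that split/combine convention (these helpers are illustrative stand-ins, not the repo's `utils.dsp` functions):

import numpy as np

def split_signal_sketch(x_16bit):
    # signed 16-bit -> unsigned 16-bit, then top / bottom 8 bits
    unsigned = x_16bit.astype(np.int64) + 2**15
    coarse = unsigned // 256
    fine = unsigned % 256
    return coarse, fine

def combine_signal_sketch(coarse, fine):
    # inverse: rebuild signed 16-bit samples from the two 8-bit streams
    return (coarse * 256 + fine - 2**15).astype(np.int16)

samples = np.array([-32768, -1, 0, 1, 32767], dtype=np.int16)
c, f = split_signal_sketch(samples)
assert np.array_equal(combine_signal_sketch(c, f), samples)

Scaling those 0-255 class indices with `x / 127.5 - 1.` maps them into [-1, 1] before they are fed back into the input projections, matching the lines in `generate` above.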
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py
deleted file mode 100644
index 67bd8879641f8539f329e6ffb94f88d25e417244..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Tuple, Union
-
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmengine.model import BaseModule
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import ConfigType, MultiConfig, OptConfigType
-
-
-@MODELS.register_module()
-class FPN(BaseModule):
- r"""Feature Pyramid Network.
-
- This is an implementation of paper `Feature Pyramid Networks for Object
-    Detection <https://arxiv.org/abs/1612.03144>`_.
-
- Args:
- in_channels (list[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale).
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Defaults to 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Defaults to -1, which means the
- last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Defaults to False.
- If True, it is equivalent to `add_extra_convs='on_input'`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Defaults to False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Defaults to False.
- conv_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- convolution layer. Defaults to None.
- norm_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- normalization layer. Defaults to None.
- act_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- activation layer in ConvModule. Defaults to None.
- upsample_cfg (:obj:`ConfigDict` or dict, optional): Config dict
- for interpolate layer. Defaults to dict(mode='nearest').
- init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \
- dict]): Initialization config dict.
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(
- self,
- in_channels: List[int],
- out_channels: int,
- num_outs: int,
- start_level: int = 0,
- end_level: int = -1,
- add_extra_convs: Union[bool, str] = False,
- relu_before_extra_convs: bool = False,
- no_norm_on_lateral: bool = False,
- conv_cfg: OptConfigType = None,
- norm_cfg: OptConfigType = None,
- act_cfg: OptConfigType = None,
- upsample_cfg: ConfigType = dict(mode='nearest'),
- init_cfg: MultiConfig = dict(
- type='Xavier', layer='Conv2d', distribution='uniform')
- ) -> None:
- super().__init__(init_cfg=init_cfg)
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1 or end_level == self.num_ins - 1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level is not the last level, no extra level is allowed
- self.backbone_end_level = end_level + 1
- assert end_level < self.num_ins
- assert num_outs == end_level - start_level + 1
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- self.add_extra_convs = 'on_input'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- def forward(self, inputs: Tuple[Tensor]) -> tuple:
- """Forward function.
-
- Args:
- inputs (tuple[Tensor]): Features from the upstream network, each
- is a 4D-tensor.
-
- Returns:
- tuple: Feature maps, each is a 4D-tensor.
- """
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- # fix runtime error of "+=" inplace operation in PyTorch 1.10
- laterals[i - 1] = laterals[i - 1] + F.interpolate(
- laterals[i], **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] = laterals[i - 1] + F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
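
A small usage sketch of the FPN neck deleted above, exercising the `add_extra_convs='on_input'` branch that builds the extra stride-2 level from the last backbone feature. Channel and spatial sizes are illustrative, and the import assumes mmdet/mmcv/mmengine are installed as in this repo's vendored copy:

import torch
from mmdet.models.necks import FPN

neck = FPN(
    in_channels=[256, 512, 1024, 2048],   # e.g. ResNet-50 C2-C5
    out_channels=256,
    num_outs=5,                           # P2-P6: one extra level on top
    add_extra_convs='on_input',
).eval()

feats = [torch.rand(1, c, s, s)
         for c, s in zip([256, 512, 1024, 2048], [64, 32, 16, 8])]
outs = neck(feats)
print([tuple(o.shape) for o in outs])
# [(1, 256, 64, 64), (1, 256, 32, 32), (1, 256, 16, 16), (1, 256, 8, 8), (1, 256, 4, 4)]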
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py
deleted file mode 100644
index 1adff10beb024940f9066a407cc76ddb06b27404..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-
-import numpy as np
-from mmengine.dataset import BaseDataset, force_full_init
-
-from mmpretrain.registry import DATASETS
-
-
-@DATASETS.register_module()
-class KFoldDataset:
- """A wrapper of dataset for K-Fold cross-validation.
-
-    K-Fold cross-validation divides all the samples into k groups of almost
-    equal size, called folds. k-1 of the folds are used for training, and the
-    remaining fold is used for validation.
-
- Args:
- dataset (:obj:`mmengine.dataset.BaseDataset` | dict): The dataset to be
- divided
- fold (int): The fold used to do validation. Defaults to 0.
- num_splits (int): The number of all folds. Defaults to 5.
- test_mode (bool): Use the training dataset or validation dataset.
- Defaults to False.
- seed (int, optional): The seed to shuffle the dataset before splitting.
- If None, not shuffle the dataset. Defaults to None.
- """
-
- def __init__(self,
- dataset,
- fold=0,
- num_splits=5,
- test_mode=False,
- seed=None):
- if isinstance(dataset, dict):
- self.dataset = DATASETS.build(dataset)
- # Init the dataset wrapper lazily according to the dataset setting.
- lazy_init = dataset.get('lazy_init', False)
-        elif isinstance(dataset, BaseDataset):
-            self.dataset = dataset
-            # A dataset instance is assumed to be initialized eagerly.
-            lazy_init = False
- else:
- raise TypeError(f'Unsupported dataset type {type(dataset)}.')
-
- self._metainfo = getattr(self.dataset, 'metainfo', {})
- self.fold = fold
- self.num_splits = num_splits
- self.test_mode = test_mode
- self.seed = seed
-
- self._fully_initialized = False
- if not lazy_init:
- self.full_init()
-
- @property
- def metainfo(self) -> dict:
- """Get the meta information of ``self.dataset``.
-
- Returns:
- dict: Meta information of the dataset.
- """
- # Prevent `self._metainfo` from being modified by outside.
- return copy.deepcopy(self._metainfo)
-
- def full_init(self):
- """fully initialize the dataset."""
- if self._fully_initialized:
- return
-
- self.dataset.full_init()
- ori_len = len(self.dataset)
- indices = list(range(ori_len))
- if self.seed is not None:
- rng = np.random.default_rng(self.seed)
- rng.shuffle(indices)
-
- test_start = ori_len * self.fold // self.num_splits
- test_end = ori_len * (self.fold + 1) // self.num_splits
- if self.test_mode:
- indices = indices[test_start:test_end]
- else:
- indices = indices[:test_start] + indices[test_end:]
-
- self._ori_indices = indices
- self.dataset = self.dataset.get_subset(indices)
-
- self._fully_initialized = True
-
- @force_full_init
- def _get_ori_dataset_idx(self, idx: int) -> int:
- """Convert global idx to local index.
-
- Args:
- idx (int): Global index of ``KFoldDataset``.
-
- Returns:
- int: The original index in the whole dataset.
- """
- return self._ori_indices[idx]
-
- @force_full_init
- def get_data_info(self, idx: int) -> dict:
- """Get annotation by index.
-
- Args:
- idx (int): Global index of ``KFoldDataset``.
-
- Returns:
- dict: The idx-th annotation of the datasets.
- """
- return self.dataset.get_data_info(idx)
-
- @force_full_init
- def __len__(self):
- return len(self.dataset)
-
- @force_full_init
- def __getitem__(self, idx):
- return self.dataset[idx]
-
- @force_full_init
- def get_cat_ids(self, idx):
- return self.dataset.get_cat_ids(idx)
-
- @force_full_init
- def get_gt_labels(self):
- return self.dataset.get_gt_labels()
-
- @property
- def CLASSES(self):
- """Return all categories names."""
- return self._metainfo.get('classes', None)
-
- @property
- def class_to_idx(self):
-        """Mapping from class name to class index.
-
- Returns:
- dict: mapping from class name to class index.
- """
-
- return {cat: i for i, cat in enumerate(self.CLASSES)}
-
- def __repr__(self):
- """Print the basic information of the dataset.
-
- Returns:
- str: Formatted string.
- """
- head = 'Dataset ' + self.__class__.__name__
- body = []
- type_ = 'test' if self.test_mode else 'training'
- body.append(f'Type: \t{type_}')
- body.append(f'Seed: \t{self.seed}')
-
- def ordinal(n):
- # Copy from https://codegolf.stackexchange.com/a/74047
- suffix = 'tsnrhtdd'[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]
- return f'{n}{suffix}'
-
- body.append(
- f'Fold: \t{ordinal(self.fold+1)} of {self.num_splits}-fold')
- if self._fully_initialized:
- body.append(f'Number of samples: \t{self.__len__()}')
- else:
- body.append("Haven't been initialized")
-
- if self.CLASSES is not None:
- body.append(f'Number of categories: \t{len(self.CLASSES)}')
- else:
- body.append('The `CLASSES` meta info is not set.')
-
- body.append(
- f'Original dataset type:\t{self.dataset.__class__.__name__}')
-
- lines = [head] + [' ' * 4 + line for line in body]
- return '\n'.join(lines)
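
In config form, the wrapper above is typically instantiated twice with the same `fold`, `num_splits` and `seed`, so the training and validation wrappers see disjoint, reproducible index splits. A minimal sketch (the `CustomDataset` entry is a placeholder for whatever base dataset the config actually registers):

base = dict(type='CustomDataset', data_root='data/my_dataset')  # placeholder base dataset config

train_dataset = dict(
    type='KFoldDataset', dataset=base,
    fold=0, num_splits=5, test_mode=False, seed=42)   # folds 1-4 for training

val_dataset = dict(
    type='KFoldDataset', dataset=base,
    fold=0, num_splits=5, test_mode=True, seed=42)    # fold 0 for validation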
diff --git a/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py b/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py
deleted file mode 100644
index 36ed1f09d3bf23514baf4859e462d28bc49dfd53..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch
-from PIL import Image
-from torchvision import transforms
-from transformers import ProcessorMixin, BatchEncoding
-from transformers.image_processing_utils import BatchFeature
-
-OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
-OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
-
-def make_list_of_images(x):
- if not isinstance(x, list):
- return [x]
- return x
-
-def get_thermal_transform(config):
- config = config.vision_config
- transform = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
- transforms.CenterCrop(224),
- transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD) # assume image
- ]
- )
- return transform
-
-
-def load_and_transform_thermal(thermal_path, transform):
- thermal = Image.open(thermal_path)
- thermal_outputs = transform(thermal)
- return thermal_outputs
-
-class LanguageBindThermalProcessor(ProcessorMixin):
- attributes = []
- tokenizer_class = ("LanguageBindThermalTokenizer")
-
- def __init__(self, config, tokenizer=None, **kwargs):
- super().__init__(**kwargs)
- self.config = config
- self.transform = get_thermal_transform(config)
- self.image_processor = load_and_transform_thermal
- self.tokenizer = tokenizer
-
- def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs):
- if text is None and images is None:
- raise ValueError("You have to specify either text or images. Both cannot be none.")
-
- if text is not None:
- encoding = self.tokenizer(text, max_length=context_length, padding='max_length',
- truncation=True, return_tensors=return_tensors, **kwargs)
-
- if images is not None:
- images = make_list_of_images(images)
- image_features = [self.image_processor(image, self.transform) for image in images]
- image_features = torch.stack(image_features)
-
- if text is not None and images is not None:
- encoding["pixel_values"] = image_features
- return encoding
- elif text is not None:
- return encoding
- else:
- return {"pixel_values": image_features}
-
- def batch_decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
- refer to the docstring of this method for more information.
- """
- return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
-
- def decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
- the docstring of this method for more information.
- """
- return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
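
`get_thermal_transform` only touches `config.vision_config` before building a fixed 224x224 CLIP-style pipeline, so the preprocessing path can be exercised with a bare stand-in config. A sketch, assuming the helpers above are importable; the namespace config and file path are illustrative:

from types import SimpleNamespace

cfg = SimpleNamespace(vision_config=None)        # only the attribute lookup is needed
transform = get_thermal_transform(cfg)
pixel_values = load_and_transform_thermal('example_thermal.png', transform)
print(pixel_values.shape)                        # torch.Size([3, 224, 224]) for an RGB-decoded image

Note that the normalization uses the three-channel OpenAI CLIP statistics, so a single-channel thermal image would need to be converted first (e.g. `Image.open(path).convert('RGB')`) before this transform will run.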
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py
deleted file mode 100644
index e663007dde9a56add1aa540be76cf2f5d81de82f..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py
+++ /dev/null
@@ -1,180 +0,0 @@
-# Credit: https://github.com/VainF/DeepLabV3Plus-Pytorch
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .utils import _SimpleSegmentationModel
-
-
-__all__ = ["DeepLabV3"]
-
-
-class DeepLabV3(_SimpleSegmentationModel):
- """
- Implements DeepLabV3 model from
- `"Rethinking Atrous Convolution for Semantic Image Segmentation"
-    <https://arxiv.org/abs/1706.05587>`_.
-
- Arguments:
- backbone (nn.Module): the network used to compute the features for the model.
- The backbone should return an OrderedDict[Tensor], with the key being
- "out" for the last feature map used, and "aux" if an auxiliary classifier
- is used.
- classifier (nn.Module): module that takes the "out" element returned from
- the backbone and returns a dense prediction.
- aux_classifier (nn.Module, optional): auxiliary classifier used during training
- """
- pass
-
-class DeepLabHeadV3Plus(nn.Module):
- def __init__(self, in_channels, low_level_channels, num_classes, aspp_dilate=[12, 24, 36]):
- super(DeepLabHeadV3Plus, self).__init__()
- self.project = nn.Sequential(
- nn.Conv2d(low_level_channels, 48, 1, bias=False),
- nn.BatchNorm2d(48),
- nn.ReLU(inplace=True),
- )
-
- self.aspp = ASPP(in_channels, aspp_dilate)
-
- self.classifier = nn.Sequential(
- nn.Conv2d(304, 256, 3, padding=1, bias=False),
- nn.BatchNorm2d(256),
- nn.ReLU(inplace=True),
- nn.Conv2d(256, num_classes, 1)
- )
- self._init_weight()
-
- def forward(self, feature):
- low_level_feature = self.project( feature['low_level'] )
- output_feature = self.aspp(feature['out'])
- output_feature = F.interpolate(output_feature, size=low_level_feature.shape[2:], mode='bilinear', align_corners=False)
- return self.classifier( torch.cat( [ low_level_feature, output_feature ], dim=1 ) )
-
- def _init_weight(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight)
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-class DeepLabHead(nn.Module):
- def __init__(self, in_channels, num_classes, aspp_dilate=[12, 24, 36]):
- super(DeepLabHead, self).__init__()
-
- self.classifier = nn.Sequential(
- ASPP(in_channels, aspp_dilate),
- nn.Conv2d(256, 256, 3, padding=1, bias=False),
- nn.BatchNorm2d(256),
- nn.ReLU(inplace=True),
- nn.Conv2d(256, num_classes, 1)
- )
- self._init_weight()
-
- def forward(self, feature):
- return self.classifier( feature['out'] )
-
- def _init_weight(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight)
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-class AtrousSeparableConvolution(nn.Module):
- """ Atrous Separable Convolution
- """
- def __init__(self, in_channels, out_channels, kernel_size,
- stride=1, padding=0, dilation=1, bias=True):
- super(AtrousSeparableConvolution, self).__init__()
- self.body = nn.Sequential(
- # Separable Conv
- nn.Conv2d( in_channels, in_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=bias, groups=in_channels ),
- # PointWise Conv
- nn.Conv2d( in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias),
- )
-
- self._init_weight()
-
- def forward(self, x):
- return self.body(x)
-
- def _init_weight(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight)
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
-class ASPPConv(nn.Sequential):
- def __init__(self, in_channels, out_channels, dilation):
- modules = [
- nn.Conv2d(in_channels, out_channels, 3, padding=dilation, dilation=dilation, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(inplace=True)
- ]
- super(ASPPConv, self).__init__(*modules)
-
-class ASPPPooling(nn.Sequential):
- def __init__(self, in_channels, out_channels):
- super(ASPPPooling, self).__init__(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(in_channels, out_channels, 1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(inplace=True))
-
- def forward(self, x):
- size = x.shape[-2:]
- x = super(ASPPPooling, self).forward(x)
- return F.interpolate(x, size=size, mode='bilinear', align_corners=False)
-
-class ASPP(nn.Module):
- def __init__(self, in_channels, atrous_rates):
- super(ASPP, self).__init__()
- out_channels = 256
- modules = []
- modules.append(nn.Sequential(
- nn.Conv2d(in_channels, out_channels, 1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(inplace=True)))
-
- rate1, rate2, rate3 = tuple(atrous_rates)
- modules.append(ASPPConv(in_channels, out_channels, rate1))
- modules.append(ASPPConv(in_channels, out_channels, rate2))
- modules.append(ASPPConv(in_channels, out_channels, rate3))
- modules.append(ASPPPooling(in_channels, out_channels))
-
- self.convs = nn.ModuleList(modules)
-
- self.project = nn.Sequential(
- nn.Conv2d(5 * out_channels, out_channels, 1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(inplace=True),
- nn.Dropout(0.1),)
-
- def forward(self, x):
- res = []
- for conv in self.convs:
- res.append(conv(x))
- res = torch.cat(res, dim=1)
- return self.project(res)
-
-
-
-def convert_to_separable_conv(module):
- new_module = module
- if isinstance(module, nn.Conv2d) and module.kernel_size[0]>1:
- new_module = AtrousSeparableConvolution(module.in_channels,
- module.out_channels,
- module.kernel_size,
- module.stride,
- module.padding,
- module.dilation,
- module.bias)
- for name, child in module.named_children():
- new_module.add_module(name, convert_to_separable_conv(child))
- return new_module
\ No newline at end of file
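
A quick shape check of the `DeepLabHeadV3Plus` head above: it expects the backbone to hand it a dict with 'low_level' (stride-4) and 'out' (stride-16) feature maps, runs ASPP on 'out', upsamples to the low-level resolution and classifies the 48 + 256 = 304-channel concatenation. Channel counts below are illustrative (ResNet-style), the class is assumed to be importable from the module above, and `.eval()` is used so the 1x1 ASPP pooling branch's BatchNorm does not need batch statistics:

import torch

head = DeepLabHeadV3Plus(in_channels=2048, low_level_channels=256,
                         num_classes=21, aspp_dilate=[12, 24, 36]).eval()

features = {
    'low_level': torch.rand(1, 256, 128, 128),   # stride-4 feature map
    'out':       torch.rand(1, 2048, 32, 32),    # stride-16 feature map
}
with torch.no_grad():
    logits = head(features)
print(logits.shape)   # torch.Size([1, 21, 128, 128]) -- upsampled to the low-level resolution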
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx b/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx
deleted file mode 100644
index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx
+++ /dev/null
@@ -1,45 +0,0 @@
-import React, { useMemo } from 'react'
-import Image from 'next/image'
-import HelpIcon from '@/assets/images/help.svg'
-import { SuggestedResponse } from '@/lib/bots/bing/types'
-import { useBing } from '@/lib/hooks/use-bing'
-import { atom, useAtom } from 'jotai'
-
-type Suggestions = SuggestedResponse[]
-const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text }))
-const suggestionsAtom = atom<Suggestions>([])
-
-type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick<ReturnType<typeof useBing>, 'setInput'> & { suggestions?: Suggestions }
-
-export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) {
- const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom)
- const toggleSuggestions = (() => {
- if (currentSuggestions === helpSuggestions) {
- setSuggestions(suggestions)
- } else {
- setSuggestions(helpSuggestions)
- }
- })
-
- useMemo(() => {
- setSuggestions(suggestions)
- window.scrollBy(0, 2000)
- }, [suggestions.length])
-
- return currentSuggestions?.length ? (
-    {/* content lost in extraction: the rest of this JSX return, plus the diff header and most of the body of another deleted Streamlit app */}
-
- all_chats = chat_history
- all_chat_history_str = '\n'.join(
- [f'{x[0]}: {x[1]}' for x in all_chats])
- st.title(':blue[All chat records]')
- st.text_area('', value=all_chat_history_str, height=250, label_visibility='collapsed')
-if __name__ == '__main__':
- main(pinecone_index_name, chroma_collection_name, persist_directory,
- docsearch_ready, directory_name)
diff --git a/spaces/aabyzov/playground/app.py b/spaces/aabyzov/playground/app.py
deleted file mode 100644
index a4491fa68b763a8a344f905b856e79f8ff7aabf7..0000000000000000000000000000000000000000
--- a/spaces/aabyzov/playground/app.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import streamlit as st
-
-x = st.slider('Select a value')
-st.write(x, 'squared is', x * x)
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py
deleted file mode 100644
index 6d07656d173744426795c81c14c6bcdb4e63a406..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .max_iou_assigner import MaxIoUAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class ApproxMaxIoUAssigner(MaxIoUAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
-    Each proposal will be assigned an integer indicating the ground truth
- index. (semi-positive index: gt label (0-based), -1: background)
-
- - -1: negative sample, no assigned gt
- - semi-positive integer: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_iou_thr (float): IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
- min_pos_iou (float): Minimum iou for a bbox to be considered as a
- positive bbox. Positive samples can have smaller IoU than
- pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool): Whether to assign all bboxes with the same
- highest overlap with some gt to that gt.
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
- `gt_bboxes_ignore` is specified). Negative values mean not
- ignoring any bboxes.
- ignore_wrt_candidates (bool): Whether to compute the iof between
- `bboxes` and `gt_bboxes_ignore`, or the contrary.
-        match_low_quality (bool): Whether to allow low-quality matches. This is
- usually allowed for RPN and single stage detectors, but not allowed
- in the second stage.
- gpu_assign_thr (int): The upper bound of the number of GT for GPU
- assign. When the number of gt is above this threshold, will assign
- on CPU device. Negative values mean not assign on CPU.
- """
-
- def __init__(self,
- pos_iou_thr,
- neg_iou_thr,
- min_pos_iou=.0,
- gt_max_assign_all=True,
- ignore_iof_thr=-1,
- ignore_wrt_candidates=True,
- match_low_quality=True,
- gpu_assign_thr=-1,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_iou_thr = pos_iou_thr
- self.neg_iou_thr = neg_iou_thr
- self.min_pos_iou = min_pos_iou
- self.gt_max_assign_all = gt_max_assign_all
- self.ignore_iof_thr = ignore_iof_thr
- self.ignore_wrt_candidates = ignore_wrt_candidates
- self.gpu_assign_thr = gpu_assign_thr
- self.match_low_quality = match_low_quality
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def assign(self,
- approxs,
- squares,
- approxs_per_octave,
- gt_bboxes,
- gt_bboxes_ignore=None,
- gt_labels=None):
- """Assign gt to approxs.
-
-        This method assigns a gt bbox to each group of approxs (bboxes).
-        Each group of approxs is represented by a base approx (bbox) and
-        will be assigned -1 or a semi-positive number:
-        background_label (-1) means negative sample, and a semi-positive
-        number is the index (0-based) of the assigned gt.
-        The assignment is done in the following steps; the order matters.
-
-        1. assign every bbox to background_label (-1)
-        2. use the max IoU of each group of approxs to assign
-        3. assign proposals whose iou with all gts < neg_iou_thr to background
-        4. for each bbox, if the iou with its nearest gt >= pos_iou_thr,
-           assign it to that bbox
-        5. for each gt bbox, assign its nearest proposals (may be more than
-           one) to itself
-
- Args:
- approxs (Tensor): Bounding boxes to be assigned,
- shape(approxs_per_octave*n, 4).
- squares (Tensor): Base Bounding boxes to be assigned,
- shape(n, 4).
- approxs_per_octave (int): number of approxs per octave
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_squares = squares.size(0)
- num_gts = gt_bboxes.size(0)
-
- if num_squares == 0 or num_gts == 0:
- # No predictions and/or truth, return empty assignment
- overlaps = approxs.new(num_gts, num_squares)
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- return assign_result
-
- # re-organize anchors by approxs_per_octave x num_squares
- approxs = torch.transpose(
- approxs.view(num_squares, approxs_per_octave, 4), 0,
- 1).contiguous().view(-1, 4)
- assign_on_cpu = True if (self.gpu_assign_thr > 0) and (
- num_gts > self.gpu_assign_thr) else False
- # compute overlap and assign gt on CPU when number of GT is large
- if assign_on_cpu:
- device = approxs.device
- approxs = approxs.cpu()
- gt_bboxes = gt_bboxes.cpu()
- if gt_bboxes_ignore is not None:
- gt_bboxes_ignore = gt_bboxes_ignore.cpu()
- if gt_labels is not None:
- gt_labels = gt_labels.cpu()
- all_overlaps = self.iou_calculator(approxs, gt_bboxes)
-
- overlaps, _ = all_overlaps.view(approxs_per_octave, num_squares,
- num_gts).max(dim=0)
- overlaps = torch.transpose(overlaps, 0, 1)
-
- if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
- and gt_bboxes_ignore.numel() > 0 and squares.numel() > 0):
- if self.ignore_wrt_candidates:
- ignore_overlaps = self.iou_calculator(
- squares, gt_bboxes_ignore, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
- else:
- ignore_overlaps = self.iou_calculator(
- gt_bboxes_ignore, squares, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=0)
- overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1
-
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- if assign_on_cpu:
- assign_result.gt_inds = assign_result.gt_inds.to(device)
- assign_result.max_overlaps = assign_result.max_overlaps.to(device)
- if assign_result.labels is not None:
- assign_result.labels = assign_result.labels.to(device)
- return assign_result
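
Stripped of the per-octave grouping (step 2) and the ignore / low-quality handling, the assignment described in the docstring is a max-IoU argmax with thresholds. A simplified pure-torch illustration of steps 1 and 3-5, using the docstring's convention of -1 for background and a 0-based gt index otherwise; this is a sketch of the logic, not a call into the mmdet assigner:

import torch

def max_iou_assign_sketch(overlaps, pos_iou_thr=0.7):
    # overlaps: (num_gts, num_bboxes) IoU matrix. Boxes below pos_iou_thr
    # (including the "ignore" band between the two thresholds) are folded
    # into background in this simplification.
    num_gts, num_bboxes = overlaps.shape
    assigned = overlaps.new_full((num_bboxes,), -1, dtype=torch.long)  # step 1: everything background
    max_iou, argmax_gt = overlaps.max(dim=0)                           # best gt per bbox
    pos = max_iou >= pos_iou_thr                                       # step 4: confident boxes take their best gt
    assigned[pos] = argmax_gt[pos]
    assigned[overlaps.argmax(dim=1)] = torch.arange(num_gts)           # step 5: each gt claims its single best bbox
    return assigned

overlaps = torch.tensor([[0.9, 0.2, 0.4],
                         [0.1, 0.8, 0.1]])
print(max_iou_assign_sketch(overlaps))   # tensor([ 0,  1, -1])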
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py
deleted file mode 100644
index 0c70ce1235cfd978e4d85e5e8f67704b7b993ff4..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py
+++ /dev/null
@@ -1,366 +0,0 @@
-import weakref
-
-from . import interface
-from pyglet.util import debug_print
-from pyglet.media.drivers.base import AbstractAudioDriver, AbstractAudioPlayer, MediaEvent
-from pyglet.media.mediathreads import PlayerWorkerThread
-from pyglet.media.drivers.listener import AbstractListener
-
-_debug = debug_print('debug_media')
-
-
-class OpenALDriver(AbstractAudioDriver):
- def __init__(self, device_name=None):
- super().__init__()
-
- self.device = interface.OpenALDevice(device_name)
- self.context = self.device.create_context()
- self.context.make_current()
-
- self._listener = OpenALListener(self)
-
- self.worker = PlayerWorkerThread()
- self.worker.start()
-
- def __del__(self):
- assert _debug("Delete OpenALDriver")
- self.delete()
-
- def create_audio_player(self, source, player):
- assert self.device is not None, "Device was closed"
- return OpenALAudioPlayer(self, source, player)
-
- def delete(self):
- self.worker.stop()
- self.context = None
-
- def have_version(self, major, minor):
- return (major, minor) <= self.get_version()
-
- def get_version(self):
- assert self.device is not None, "Device was closed"
- return self.device.get_version()
-
- def get_extensions(self):
- assert self.device is not None, "Device was closed"
- return self.device.get_extensions()
-
- def have_extension(self, extension):
- return extension in self.get_extensions()
-
- def get_listener(self):
- return self._listener
-
-
-class OpenALListener(AbstractListener):
- def __init__(self, driver):
- self._driver = weakref.proxy(driver)
- self._al_listener = interface.OpenALListener()
-
- def __del__(self):
- assert _debug("Delete OpenALListener")
-
- def _set_volume(self, volume):
- self._al_listener.gain = volume
- self._volume = volume
-
- def _set_position(self, position):
- self._al_listener.position = position
- self._position = position
-
- def _set_forward_orientation(self, orientation):
- self._al_listener.orientation = orientation + self._up_orientation
- self._forward_orientation = orientation
-
- def _set_up_orientation(self, orientation):
- self._al_listener.orientation = self._forward_orientation + orientation
- self._up_orientation = orientation
-
-
-class OpenALAudioPlayer(AbstractAudioPlayer):
- #: Minimum size of an OpenAL buffer worth bothering with, in bytes
- min_buffer_size = 512
-
- #: Aggregate (desired) buffer size, in seconds
- _ideal_buffer_size = 1.0
-
- def __init__(self, driver, source, player):
- super(OpenALAudioPlayer, self).__init__(source, player)
- self.driver = driver
- self.alsource = driver.context.create_source()
-
- # Cursor positions, like DSound and Pulse drivers, refer to a
- # hypothetical infinite-length buffer. Cursor units are in bytes.
-
- # Cursor position of current (head) AL buffer
- self._buffer_cursor = 0
-
- # Estimated playback cursor position (last seen)
- self._play_cursor = 0
-
- # Cursor position of end of queued AL buffer.
- self._write_cursor = 0
-
- # List of currently queued buffer sizes (in bytes)
- self._buffer_sizes = []
-
- # List of currently queued buffer timestamps
- self._buffer_timestamps = []
-
- # Timestamp at end of last written buffer (timestamp to return in case
- # of underrun)
- self._underrun_timestamp = None
-
- # List of (cursor, MediaEvent)
- self._events = []
-
- # Desired play state (True even if stopped due to underrun)
- self._playing = False
-
- # When clearing, the play cursor can be incorrect
- self._clearing = False
-
- # Up to one audio data may be buffered if too much data was received
- # from the source that could not be written immediately into the
- # buffer. See _refill().
- self._audiodata_buffer = None
-
- self._refill(self.ideal_buffer_size)
-
- def __del__(self):
- self.delete()
-
- def delete(self):
- self.driver.worker.remove(self)
- self.alsource = None
-
- @property
- def ideal_buffer_size(self):
- return int(self._ideal_buffer_size * self.source.audio_format.bytes_per_second)
-
- def play(self):
- assert _debug('OpenALAudioPlayer.play()')
-
- assert self.driver is not None
- assert self.alsource is not None
-
- if not self.alsource.is_playing:
- self.alsource.play()
- self._playing = True
- self._clearing = False
-
- self.driver.worker.add(self)
-
- def stop(self):
- self.driver.worker.remove(self)
- assert _debug('OpenALAudioPlayer.stop()')
- assert self.driver is not None
- assert self.alsource is not None
- self.alsource.pause()
- self._playing = False
-
- def clear(self):
- assert _debug('OpenALAudioPlayer.clear()')
-
- assert self.driver is not None
- assert self.alsource is not None
-
- super().clear()
- self.alsource.stop()
- self._handle_processed_buffers()
- self.alsource.clear()
- self.alsource.byte_offset = 0
- self._playing = False
- self._clearing = True
- self._audiodata_buffer = None
-
- self._buffer_cursor = 0
- self._play_cursor = 0
- self._write_cursor = 0
- del self._events[:]
- del self._buffer_sizes[:]
- del self._buffer_timestamps[:]
-
- def _update_play_cursor(self):
- assert self.driver is not None
- assert self.alsource is not None
-
- self._handle_processed_buffers()
-
- # Update play cursor using buffer cursor + estimate into current buffer
- if self._clearing:
- self._play_cursor = self._buffer_cursor
- else:
- self._play_cursor = self._buffer_cursor + self.alsource.byte_offset
- assert self._check_cursors()
-
- self._dispatch_events()
-
- def _handle_processed_buffers(self):
- processed = self.alsource.unqueue_buffers()
-
- if processed > 0:
- if (len(self._buffer_timestamps) == processed
- and self._buffer_timestamps[-1] is not None):
- assert _debug('OpenALAudioPlayer: Underrun')
- # Underrun, take note of timestamp.
- # We check that the timestamp is not None, because otherwise
- # our source could have been cleared.
- self._underrun_timestamp = self._buffer_timestamps[-1] + \
- self._buffer_sizes[-1] / float(self.source.audio_format.bytes_per_second)
- self._update_buffer_cursor(processed)
-
- return processed
-
- def _update_buffer_cursor(self, processed):
- self._buffer_cursor += sum(self._buffer_sizes[:processed])
- del self._buffer_sizes[:processed]
- del self._buffer_timestamps[:processed]
-
- def _dispatch_events(self):
- while self._events and self._events[0][0] <= self._play_cursor:
- _, event = self._events.pop(0)
- event._sync_dispatch_to_player(self.player)
-
- def _get_write_size(self):
- self._update_play_cursor()
- buffer_size = int(self._write_cursor - self._play_cursor)
-
- # Only write when current buffer size is smaller than ideal
- write_size = max(self.ideal_buffer_size - buffer_size, 0)
-
- assert _debug("Write size {} bytes".format(write_size))
- return write_size
-
- def refill_buffer(self):
- write_size = self._get_write_size()
- if write_size > self.min_buffer_size:
- self._refill(write_size)
- return True
- return False
-
- def _refill(self, write_size):
- assert _debug('_refill', write_size)
-
- while write_size > self.min_buffer_size:
- audio_data = self._get_audiodata()
-
- if audio_data is None:
- break
-
- length = min(write_size, audio_data.length)
- if length == 0:
- assert _debug('Empty AudioData. Discard it.')
-
- else:
- assert _debug('Writing {} bytes'.format(length))
- self._queue_audio_data(audio_data, length)
- write_size -= length
-
- # Check for underrun stopping playback
- if self._playing and not self.alsource.is_playing:
- assert _debug('underrun')
- self.alsource.play()
-
- def _get_audiodata(self):
- if self._audiodata_buffer is None or self._audiodata_buffer.length == 0:
- self._get_new_audiodata()
-
- return self._audiodata_buffer
-
- def _get_new_audiodata(self):
- assert _debug('Getting new audio data buffer.')
- compensation_time = self.get_audio_time_diff()
-        self._audiodata_buffer = self.source.get_audio_data(self.ideal_buffer_size, compensation_time)
-
- if self._audiodata_buffer is not None:
- assert _debug('New audio data available: {} bytes'.format(self._audiodata_buffer.length))
- self._queue_events(self._audiodata_buffer)
- else:
- assert _debug('No audio data left')
- if self._has_underrun():
- assert _debug('Underrun')
- MediaEvent('on_eos').sync_dispatch_to_player(self.player)
-
- def _queue_audio_data(self, audio_data, length):
- buf = self.alsource.get_buffer()
- buf.data(audio_data, self.source.audio_format, length)
- self.alsource.queue_buffer(buf)
- self._update_write_cursor(audio_data, length)
-
- def _update_write_cursor(self, audio_data, length):
- self._write_cursor += length
- self._buffer_sizes.append(length)
- self._buffer_timestamps.append(audio_data.timestamp)
- audio_data.consume(length, self.source.audio_format)
- assert self._check_cursors()
-
- def _queue_events(self, audio_data):
- for event in audio_data.events:
- cursor = self._write_cursor + event.timestamp * \
- self.source.audio_format.bytes_per_second
- self._events.append((cursor, event))
-
- def _has_underrun(self):
- return self.alsource.buffers_queued == 0
-
- def get_time(self):
- # Update first, might remove buffers
- self._update_play_cursor()
-
- if not self._buffer_timestamps:
- timestamp = self._underrun_timestamp
- assert _debug('OpenALAudioPlayer: Return underrun timestamp')
- else:
- timestamp = self._buffer_timestamps[0]
- assert _debug('OpenALAudioPlayer: Buffer timestamp: {}'.format(timestamp))
-
- if timestamp is not None:
- timestamp += ((self._play_cursor - self._buffer_cursor) /
- float(self.source.audio_format.bytes_per_second))
-
- assert _debug('OpenALAudioPlayer: get_time = {}'.format(timestamp))
-
- return timestamp
-
- def _check_cursors(self):
- assert self._play_cursor >= 0
- assert self._buffer_cursor >= 0
- assert self._write_cursor >= 0
- assert self._buffer_cursor <= self._play_cursor
- assert self._play_cursor <= self._write_cursor
- assert _debug('Buffer[{}], Play[{}], Write[{}]'.format(self._buffer_cursor,
- self._play_cursor,
- self._write_cursor))
- return True # Return true so it can be called in an assert (and optimized out)
-
- def set_volume(self, volume):
- self.alsource.gain = volume
-
- def set_position(self, position):
- self.alsource.position = position
-
- def set_min_distance(self, min_distance):
- self.alsource.reference_distance = min_distance
-
- def set_max_distance(self, max_distance):
- self.alsource.max_distance = max_distance
-
- def set_pitch(self, pitch):
- self.alsource.pitch = pitch
-
- def set_cone_orientation(self, cone_orientation):
- self.alsource.direction = cone_orientation
-
- def set_cone_inner_angle(self, cone_inner_angle):
- self.alsource.cone_inner_angle = cone_inner_angle
-
- def set_cone_outer_angle(self, cone_outer_angle):
- self.alsource.cone_outer_angle = cone_outer_angle
-
- def set_cone_outer_gain(self, cone_outer_gain):
- self.alsource.cone_outer_gain = cone_outer_gain
-
- def prefill_audio(self):
- write_size = self._get_write_size()
- self._refill(write_size)
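
The player above tracks three byte cursors into a hypothetical infinite stream: `_write_cursor` grows as buffers are queued, `_buffer_cursor` grows as OpenAL reports buffers processed, and the play position is estimated as `_buffer_cursor + byte_offset`, always keeping buffer <= play <= write. A toy bookkeeping sketch of that invariant (buffer sizes are made up):

# mirrors _update_write_cursor / _update_buffer_cursor / _update_play_cursor
buffer_cursor = play_cursor = write_cursor = 0
queued_sizes = []

def queue(size):
    global write_cursor
    queued_sizes.append(size)
    write_cursor += size

def buffers_processed(n):
    global buffer_cursor
    buffer_cursor += sum(queued_sizes[:n])
    del queued_sizes[:n]

def update_play_cursor(byte_offset):
    global play_cursor
    play_cursor = buffer_cursor + byte_offset
    assert buffer_cursor <= play_cursor <= write_cursor

queue(4096); queue(4096)      # two buffers written
update_play_cursor(1024)      # 1 KiB into the first buffer
buffers_processed(1)          # OpenAL finished the first buffer
update_play_cursor(512)       # 512 bytes into the second buffer
print(buffer_cursor, play_cursor, write_cursor)   # 4096 4608 8192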
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py
deleted file mode 100644
index 5d3a44aa17675f292ed7aff3d1d39667d109456a..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py
+++ /dev/null
@@ -1,701 +0,0 @@
-"""Formatted and unformatted document interfaces used by text layout.
-
-Abstract representation
-=======================
-
-Styled text in pyglet is represented by one of the :py:class:`~pyglet.text.document.AbstractDocument` classes,
-which manage the state representation of text and style independently of how
-it is loaded or rendered.
-
-A document consists of the document text (a Unicode string) and a set of
-named style ranges. For example, consider the following (artificial)
-example::
-
- 0 5 10 15 20
- The cat sat on the mat.
- +++++++ +++++++ "bold"
- ++++++ "italic"
-
-If this example were to be rendered, "The cat" and "the mat" would be in bold,
-and "on the" in italics. Note that the second "the" is both bold and italic.
-
-The document styles recorded for this example would be ``"bold"`` over ranges
-(0-7, 15-22) and ``"italic"`` over range (12-18). Overlapping styles are
-permitted; unlike HTML and other structured markup, the ranges need not be
-nested.
-
-The document has no knowledge of the semantics of ``"bold"`` or ``"italic"``,
-it stores only the style names. The pyglet layout classes give meaning to
-these style names in the way they are rendered; but you are also free to
-invent your own style names (which will be ignored by the layout classes).
-This can be useful to tag areas of interest in a document, or maintain
-references back to the source material.
-
-As well as text, the document can contain arbitrary elements represented by
-:py:class:`~pyglet.text.document.InlineElement`. An inline element behaves
-like a single character in the document, but can be rendered by the application.
-
-Paragraph breaks
-================
-
-Paragraph breaks are marked with a "newline" character (U+000A). The Unicode
-paragraph break (U+2029) can also be used.
-
-Line breaks (U+2028) can be used to force a line break within a paragraph.
-
-See Unicode recommendation UTR #13 for more information:
-http://unicode.org/reports/tr13/tr13-5.html.
-
-Document classes
-================
-
-Any class implementing :py:class:`~pyglet.text.document.AbstractDocument` provides an interface to a
-document model as described above. In theory a structured document such as
-HTML or XML could export this model, though the classes provided by pyglet
-implement only unstructured documents.
-
-The :py:class:`~pyglet.text.document.UnformattedDocument` class assumes any styles set are set over the entire
-document. So, regardless of the range specified when setting a ``"bold"``
-style attribute, for example, the entire document will receive that style.
-
-The :py:class:`~pyglet.text.document.FormattedDocument` class implements the document model directly, using
-the `RunList` class to represent style runs efficiently.
-
-Style attributes
-================
-
-The following character style attribute names are recognised by pyglet:
-
-``font_name``
- Font family name, as given to :py:func:`pyglet.font.load`.
-``font_size``
- Font size, in points.
-``bold``
- Boolean.
-``italic``
- Boolean.
-``underline``
- 4-tuple of ints in range (0, 255) giving RGBA underline color, or None
- (default) for no underline.
-``kerning``
- Additional space to insert between glyphs, in points. Defaults to 0.
-``baseline``
- Offset of glyph baseline from line baseline, in points. Positive values
- give a superscript, negative values give a subscript. Defaults to 0.
-``color``
- 4-tuple of ints in range (0, 255) giving RGBA text color
-``background_color``
- 4-tuple of ints in range (0, 255) giving RGBA text background color; or
- ``None`` for no background fill.
-
-The following paragraph style attribute names are recognised by pyglet. Note
-that paragraph styles are handled no differently from character styles by the
-document: it is the application's responsibility to set the style over an
-entire paragraph, otherwise results are undefined.
-
-``align``
- ``left`` (default), ``center`` or ``right``.
-``indent``
-    Additional horizontal space to insert before the first glyph of the
-    first line of a paragraph, in points.
-``leading``
- Additional space to insert between consecutive lines within a paragraph,
- in points. Defaults to 0.
-``line_spacing``
- Distance between consecutive baselines in a paragraph, in points.
- Defaults to ``None``, which automatically calculates the tightest line
- spacing for each line based on the font ascent and descent.
-``margin_left``
- Left paragraph margin, in pixels.
-``margin_right``
- Right paragraph margin, in pixels.
-``margin_top``
- Margin above paragraph, in pixels.
-``margin_bottom``
- Margin below paragraph, in pixels. Adjacent margins do not collapse.
-``tab_stops``
- List of horizontal tab stops, in pixels, measured from the left edge of
- the text layout. Defaults to the empty list. When the tab stops
- are exhausted, they implicitly continue at 50 pixel intervals.
-``wrap``
- Boolean. If True (the default), text wraps within the width of the layout.
-
-Other attributes can be used to store additional style information within the
-document; it will be ignored by the built-in text classes.
-
-All style attributes (including those not present in a document) default to
-``None`` (including the so-called "boolean" styles listed above). The meaning
-of a ``None`` style is style- and application-dependent.
-
-.. versionadded:: 1.1
-"""
-
-import re
-import sys
-
-from pyglet import event
-from pyglet.text import runlist
-
-_is_pyglet_doc_run = hasattr(sys, "is_pyglet_doc_run") and sys.is_pyglet_doc_run
-
-#: The style attribute takes on multiple values in the document.
-STYLE_INDETERMINATE = 'indeterminate'
-
-
-class InlineElement:
- """Arbitrary inline element positioned within a formatted document.
-
- Elements behave like a single glyph in the document. They are
- measured by their horizontal advance, ascent above the baseline, and
- descent below the baseline.
-
- The pyglet layout classes reserve space in the layout for elements and
- call the element's methods to ensure they are rendered at the
- appropriate position.
-
- If the size of a element (any of the `advance`, `ascent`, or `descent`
- instance variables) is modified it is the application's responsibility to
- trigger a reflow of the appropriate area in the affected layouts. This
- can be done by forcing a style change over the element's position.
-
- :Ivariables:
- `ascent` : int
- Ascent of the element above the baseline, in pixels.
- `descent` : int
- Descent of the element below the baseline, in pixels.
- Typically negative.
- `advance` : int
- Width of the element, in pixels.
-
- """
-
- def __init__(self, ascent, descent, advance):
- self.ascent = ascent
- self.descent = descent
- self.advance = advance
- self._position = None
-
- @property
- def position(self):
- """Position of the element within the document. Read-only.
-
- :type: int
- """
- return self._position
-
- def place(self, layout, x, y, z):
- """Construct an instance of the element at the given coordinates.
-
- Called when the element's position within a layout changes, either
- due to the initial condition, changes in the document or changes in
- the layout size.
-
- It is the responsibility of the element to clip itself against
- the layout boundaries, and position itself appropriately with respect
- to the layout's position and viewport offset.
-
- The `TextLayout.top_state` graphics state implements this transform
- and clipping into window space.
-
- :Parameters:
- `layout` : `pyglet.text.layout.TextLayout`
- The layout the element moved within.
- `x` : int
- Position of the left edge of the element, relative
- to the left edge of the document, in pixels.
- `y` : int
- Position of the baseline, relative to the top edge of the
- document, in pixels. Note that this is typically negative.
-
- """
- raise NotImplementedError('abstract')
-
- def remove(self, layout):
- """Remove this element from a layout.
-
- The counterpart of `place`; called when the element is no longer
- visible in the given layout.
-
- :Parameters:
- `layout` : `pyglet.text.layout.TextLayout`
- The layout the element was removed from.
-
- """
- raise NotImplementedError('abstract')
-
-
-class AbstractDocument(event.EventDispatcher):
- """Abstract document interface used by all :py:mod:`pyglet.text` classes.
-
- This class can be overridden to interface pyglet with a third-party
- document format. It may be easier to implement the document format in
- terms of one of the supplied concrete classes :py:class:`~pyglet.text.document.FormattedDocument` or
- :py:class:`~pyglet.text.document.UnformattedDocument`.
- """
- _previous_paragraph_re = re.compile(u'\n[^\n\u2029]*$')
- _next_paragraph_re = re.compile(u'[\n\u2029]')
-
- def __init__(self, text=''):
- super().__init__()
- self._text = u''
- self._elements = []
- if text:
- self.insert_text(0, text)
-
- @property
- def text(self):
- """Document text.
-
- For efficient incremental updates, use the :py:func:`~pyglet.text.document.AbstractDocument.insert_text` and
- :py:func:`~pyglet.text.document.AbstractDocument.delete_text` methods instead of replacing this property.
-
- :type: str
- """
- return self._text
-
- @text.setter
- def text(self, text):
- if text == self._text:
- return
- self.delete_text(0, len(self._text))
- self.insert_text(0, text)
-
- def get_paragraph_start(self, pos):
- """Get the starting position of a paragraph.
-
- :Parameters:
- `pos` : int
- Character position within paragraph.
-
- :rtype: int
- """
- # Tricky special case where the $ in pattern matches before the
- # \n at the end of the string instead of the end of the string.
- if self._text[:pos + 1].endswith('\n') or self._text[:pos + 1].endswith(u'\u2029'):
- return pos
-
- m = self._previous_paragraph_re.search(self._text, 0, pos + 1)
- if not m:
- return 0
- return m.start() + 1
-
- def get_paragraph_end(self, pos):
- """Get the end position of a paragraph.
-
- :Parameters:
- `pos` : int
- Character position within paragraph.
-
- :rtype: int
- """
- m = self._next_paragraph_re.search(self._text, pos)
- if not m:
- return len(self._text)
- return m.start() + 1
-
- def get_style_runs(self, attribute):
- """Get a style iterator over the given style attribute.
-
- :Parameters:
- `attribute` : str
- Name of style attribute to query.
-
- :rtype: `AbstractRunIterator`
- """
- raise NotImplementedError('abstract')
-
- def get_style(self, attribute, position=0):
- """Get an attribute style at the given position.
-
- :Parameters:
- `attribute` : str
- Name of style attribute to query.
- `position` : int
- Character position of document to query.
-
- :return: The style set for the attribute at the given position.
- """
- raise NotImplementedError('abstract')
-
- def get_style_range(self, attribute, start, end):
- """Get an attribute style over the given range.
-
- If the style varies over the range, `STYLE_INDETERMINATE` is returned.
-
- :Parameters:
- `attribute` : str
- Name of style attribute to query.
- `start` : int
- Starting character position.
- `end` : int
- Ending character position (exclusive).
-
- :return: The style set for the attribute over the given range, or
- `STYLE_INDETERMINATE` if more than one value is set.
- """
- iterable = self.get_style_runs(attribute)
- _, value_end, value = next(iterable.ranges(start, end))
- if value_end < end:
- return STYLE_INDETERMINATE
- else:
- return value
-
- def get_font_runs(self, dpi=None):
- """Get a style iterator over the `pyglet.font.Font` instances used in
- the document.
-
- The font instances are created on-demand by inspection of the
- ``font_name``, ``font_size``, ``bold`` and ``italic`` style
- attributes.
-
- :Parameters:
- `dpi` : float
- Optional resolution to construct fonts at. See
- :py:func:`pyglet.font.load`.
-
- :rtype: `AbstractRunIterator`
- """
- raise NotImplementedError('abstract')
-
- def get_font(self, position, dpi=None):
- """Get the font instance used at the given position.
-
- :see: `get_font_runs`
-
- :Parameters:
- `position` : int
- Character position of document to query.
- `dpi` : float
- Optional resolution to construct fonts at. See
- :py:func:`pyglet.font.load`.
-
- :rtype: `pyglet.font.Font`
- :return: The font at the given position.
- """
- raise NotImplementedError('abstract')
-
- def insert_text(self, start, text, attributes=None):
- """Insert text into the document.
-
- :Parameters:
- `start` : int
- Character insertion point within document.
- `text` : str
- Text to insert.
- `attributes` : dict
- Optional dictionary giving named style attributes of the
- inserted text.
-
- """
- self._insert_text(start, text, attributes)
- self.dispatch_event('on_insert_text', start, text)
-
- def _insert_text(self, start, text, attributes):
- self._text = u''.join((self._text[:start], text, self._text[start:]))
- len_text = len(text)
- for element in self._elements:
- if element._position >= start:
- element._position += len_text
-
- def delete_text(self, start, end):
- """Delete text from the document.
-
- :Parameters:
- `start` : int
- Starting character position to delete from.
- `end` : int
- Ending character position to delete to (exclusive).
-
- """
- self._delete_text(start, end)
- self.dispatch_event('on_delete_text', start, end)
-
- def _delete_text(self, start, end):
- for element in list(self._elements):
- if start <= element._position < end:
- self._elements.remove(element)
- elif element._position >= end: # fix bug 538
- element._position -= (end - start)
-
- self._text = self._text[:start] + self._text[end:]
-
- def insert_element(self, position, element, attributes=None):
- """Insert an element into the document.
-
- See the :py:class:`~pyglet.text.document.InlineElement` class
- documentation for details of usage.
-
- :Parameters:
- `position` : int
- Character insertion point within document.
- `element` : `~pyglet.text.document.InlineElement`
- Element to insert.
- `attributes` : dict
- Optional dictionary giving named style attributes of the
- inserted text.
-
- """
- assert element._position is None, 'Element is already in a document.'
- self.insert_text(position, '\0', attributes)
- element._position = position
- self._elements.append(element)
- self._elements.sort(key=lambda d: d.position)
-
- def get_element(self, position):
- """Get the element at a specified position.
-
- :Parameters:
- `position` : int
- Position in the document of the element.
-
- :rtype: :py:class:`~pyglet.text.document.InlineElement`
- """
- for element in self._elements:
- if element._position == position:
- return element
- raise RuntimeError(f'No element at position {position}')
-
- def set_style(self, start, end, attributes):
- """Set text style of some or all of the document.
-
- :Parameters:
- `start` : int
- Starting character position.
- `end` : int
- Ending character position (exclusive).
- `attributes` : dict
- Dictionary giving named style attributes of the text.
-
- """
- self._set_style(start, end, attributes)
- self.dispatch_event('on_style_text', start, end, attributes)
-
- def _set_style(self, start, end, attributes):
- raise NotImplementedError('abstract')
-
- def set_paragraph_style(self, start, end, attributes):
- """Set the style for a range of paragraphs.
-
- This is a convenience method for `set_style` that aligns the
- character range to the enclosing paragraph(s).
-
- :Parameters:
- `start` : int
- Starting character position.
- `end` : int
- Ending character position (exclusive).
- `attributes` : dict
- Dictionary giving named style attributes of the paragraphs.
-
- """
- start = self.get_paragraph_start(start)
- end = self.get_paragraph_end(end)
- self._set_style(start, end, attributes)
- self.dispatch_event('on_style_text', start, end, attributes)
-
- if _is_pyglet_doc_run:
- def on_insert_text(self, start, text):
- """Text was inserted into the document.
-
- :Parameters:
- `start` : int
- Character insertion point within document.
- `text` : str
- The text that was inserted.
-
- :event:
- """
-
- def on_delete_text(self, start, end):
- """Text was deleted from the document.
-
- :Parameters:
- `start` : int
- Starting character position of deleted text.
- `end` : int
- Ending character position of deleted text (exclusive).
-
- :event:
- """
-
- def on_style_text(self, start, end, attributes):
- """Text character style was modified.
-
- :Parameters:
- `start` : int
- Starting character position of modified text.
- `end` : int
- Ending character position of modified text (exclusive).
- `attributes` : dict
- Dictionary giving updated named style attributes of the
- text.
-
- :event:
- """
-
-
-AbstractDocument.register_event_type('on_insert_text')
-AbstractDocument.register_event_type('on_delete_text')
-AbstractDocument.register_event_type('on_style_text')
-
-
-class UnformattedDocument(AbstractDocument):
- """A document having uniform style over all text.
-
- Changes to the style of text within the document affect the entire
- document. For convenience, the ``position`` parameters of the style
- methods may therefore be omitted.
- """
-
- def __init__(self, text=''):
- super().__init__(text)
- self.styles = {}
-
- def get_style_runs(self, attribute):
- value = self.styles.get(attribute)
- return runlist.ConstRunIterator(len(self.text), value)
-
- def get_style(self, attribute, position=None):
- return self.styles.get(attribute)
-
- def set_style(self, start, end, attributes):
- return super().set_style(0, len(self.text), attributes)
-
- def _set_style(self, start, end, attributes):
- self.styles.update(attributes)
-
- def set_paragraph_style(self, start, end, attributes):
- return super().set_paragraph_style(0, len(self.text), attributes)
-
- def get_font_runs(self, dpi=None):
- ft = self.get_font(dpi=dpi)
- return runlist.ConstRunIterator(len(self.text), ft)
-
- def get_font(self, position=None, dpi=None):
- from pyglet import font
- font_name = self.styles.get('font_name')
- font_size = self.styles.get('font_size')
- bold = self.styles.get('bold', False)
- italic = self.styles.get('italic', False)
- stretch = self.styles.get('stretch', False)
- return font.load(font_name, font_size, bold=bold, italic=italic, stretch=stretch, dpi=dpi)
-
- def get_element_runs(self):
- return runlist.ConstRunIterator(len(self._text), None)
-
-
-class FormattedDocument(AbstractDocument):
- """Simple implementation of a document that maintains text formatting.
-
- Changes to text style are applied according to the description in
- :py:class:`~pyglet.text.document.AbstractDocument`. All styles default to ``None``.
- """
-
- def __init__(self, text=''):
- self._style_runs = {}
- super().__init__(text)
-
- def get_style_runs(self, attribute):
- try:
- return self._style_runs[attribute].get_run_iterator()
- except KeyError:
- return _no_style_range_iterator
-
- def get_style(self, attribute, position=0):
- try:
- return self._style_runs[attribute][position]
- except KeyError:
- return None
-
- def _set_style(self, start, end, attributes):
- for attribute, value in attributes.items():
- try:
- runs = self._style_runs[attribute]
- except KeyError:
- runs = self._style_runs[attribute] = runlist.RunList(0, None)
- runs.insert(0, len(self._text))
- runs.set_run(start, end, value)
-
- def get_font_runs(self, dpi=None):
- return _FontStyleRunsRangeIterator(
- self.get_style_runs('font_name'),
- self.get_style_runs('font_size'),
- self.get_style_runs('bold'),
- self.get_style_runs('italic'),
- self.get_style_runs('stretch'),
- dpi)
-
- def get_font(self, position, dpi=None):
- runs_iter = self.get_font_runs(dpi)
- return runs_iter[position]
-
- def get_element_runs(self):
- return _ElementIterator(self._elements, len(self._text))
-
- def _insert_text(self, start, text, attributes):
- super()._insert_text(start, text, attributes)
-
- len_text = len(text)
- for runs in self._style_runs.values():
- runs.insert(start, len_text)
-
- if attributes is not None:
- for attribute, value in attributes.items():
- try:
- runs = self._style_runs[attribute]
- except KeyError:
- runs = self._style_runs[attribute] = runlist.RunList(0, None)
- runs.insert(0, len(self.text))
- runs.set_run(start, start + len_text, value)
-
- def _delete_text(self, start, end):
- super()._delete_text(start, end)
- for runs in self._style_runs.values():
- runs.delete(start, end)
-
-
-def _iter_elements(elements, length):
- last = 0
- for element in elements:
- p = element.position
- yield last, p, None
- yield p, p + 1, element
- last = p + 1
- yield last, length, None
-
-
-class _ElementIterator(runlist.RunIterator):
- def __init__(self, elements, length):
- self._run_list_iter = _iter_elements(elements, length)
- self.start, self.end, self.value = next(self)
-
-
-class _FontStyleRunsRangeIterator:
- # XXX subclass runlist
- def __init__(self, font_names, font_sizes, bolds, italics, stretch, dpi):
- self.zip_iter = runlist.ZipRunIterator((font_names, font_sizes, bolds, italics, stretch))
- self.dpi = dpi
-
- def ranges(self, start, end):
- from pyglet import font
- for start, end, styles in self.zip_iter.ranges(start, end):
- font_name, font_size, bold, italic, stretch = styles
- ft = font.load(font_name, font_size, bold=bool(bold), italic=bool(italic), stretch=stretch, dpi=self.dpi)
- yield start, end, ft
-
- def __getitem__(self, index):
- from pyglet import font
- font_name, font_size, bold, italic, stretch = self.zip_iter[index]
- return font.load(font_name, font_size, bold=bool(bold), italic=bool(italic), stretch=stretch, dpi=self.dpi)
-
-
-class _NoStyleRangeIterator:
- # XXX subclass runlist
- @staticmethod
- def ranges(start, end):
- yield start, end, None
-
- def __getitem__(self, index):
- return None
-
-
-_no_style_range_iterator = _NoStyleRangeIterator()
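As a quick illustration of the document API described in the docstrings above, the sketch below exercises only calls defined in this module. It assumes a standard pyglet installation; the sample text and style values are arbitrary.

```python
# Minimal sketch of the pyglet.text.document API shown above.
from pyglet.text.document import FormattedDocument, STYLE_INDETERMINATE

doc = FormattedDocument("Hello world")

# set_style dispatches on_style_text; insert_text dispatches on_insert_text.
doc.set_style(0, 5, {"bold": True})
doc.insert_text(len(doc.text), "!", {"italic": True})

print(doc.get_style("bold", 0))  # True

# get_style_range reports STYLE_INDETERMINATE when the value varies over the range.
print(doc.get_style_range("bold", 0, len(doc.text)) is STYLE_INDETERMINATE)  # True
```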
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py
deleted file mode 100644
index 3c105518dce6da67e0db106858a3100ae0029ab5..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-"""Setup Parallel WaveGAN libarary."""
-
-import os
-import pip
-import sys
-
-from distutils.version import LooseVersion
-from setuptools import find_packages
-from setuptools import setup
-
-if LooseVersion(sys.version) < LooseVersion("3.6"):
- raise RuntimeError(
- "parallel-wavegan requires Python>=3.6, "
- "but your Python is {}".format(sys.version)
- )
-if LooseVersion(pip.__version__) < LooseVersion("19"):
- raise RuntimeError(
- "pip>=19.0.0 is required, but your pip is {}. "
- 'Try again after "pip install -U pip"'.format(pip.__version__)
- )
-
-requirements = {
- "install": [
- "torch>=1.0.1",
- "setuptools>=38.5.1",
- "librosa>=0.8.0",
- "soundfile>=0.10.2",
- "tensorboardX>=1.8",
- "matplotlib>=3.1.0",
- "PyYAML>=3.12",
- "tqdm>=4.26.1",
- "kaldiio>=2.14.1",
- "h5py>=2.9.0",
- "yq>=2.10.0",
- "gdown",
- "filelock",
- ],
- "setup": [
- "numpy",
- "pytest-runner",
- ],
- "test": [
- "pytest>=3.3.0",
- "hacking>=4.1.0",
- "flake8-docstrings>=1.3.1",
- "black",
- ],
-}
-entry_points = {
- "console_scripts": [
- "parallel-wavegan-preprocess=parallel_wavegan.bin.preprocess:main",
- "parallel-wavegan-compute-statistics=parallel_wavegan.bin.compute_statistics:main",
- "parallel-wavegan-normalize=parallel_wavegan.bin.normalize:main",
- "parallel-wavegan-train=parallel_wavegan.bin.train:main",
- "parallel-wavegan-decode=parallel_wavegan.bin.decode:main",
- ]
-}
-
-install_requires = requirements["install"]
-setup_requires = requirements["setup"]
-tests_require = requirements["test"]
-extras_require = {
- k: v for k, v in requirements.items() if k not in ["install", "setup"]
-}
-
-dirname = os.path.dirname(__file__)
-setup(
- name="parallel_wavegan",
- version="0.5.3",
- url="http://github.com/kan-bayashi/ParallelWaveGAN",
- author="Tomoki Hayashi",
- author_email="hayashi.tomoki@g.sp.m.is.nagoya-u.ac.jp",
- description="Parallel WaveGAN implementation",
- long_description=open(os.path.join(dirname, "README.md"), encoding="utf-8").read(),
- long_description_content_type="text/markdown",
- license="MIT License",
- packages=find_packages(include=["parallel_wavegan*"]),
- install_requires=install_requires,
- setup_requires=setup_requires,
- tests_require=tests_require,
- extras_require=extras_require,
- entry_points=entry_points,
- classifiers=[
- "Programming Language :: Python :: 3.6",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
- "Programming Language :: Python :: 3.9",
- "Intended Audience :: Science/Research",
- "Operating System :: POSIX :: Linux",
- "License :: OSI Approved :: MIT License",
- "Topic :: Software Development :: Libraries :: Python Modules",
- ],
-)
diff --git a/spaces/alamin655/websurfx/public/templates/search_bar.html b/spaces/alamin655/websurfx/public/templates/search_bar.html
deleted file mode 100644
index 8bb6bd9713a89d7077459c46e0c323dec3fb63e4..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/public/templates/search_bar.html
+++ /dev/null
@@ -1,27 +0,0 @@
-{{>bar this}}
-
- {{#if engineErrorsInfo}}
-
-
- {{#each engineErrorsInfo}}
-
- {{{this.engine}}}
- {{{this.error}}}
-
-
- {{/each}}
-
- {{else}}
-
-
-
- Everything looks good 🙂!!
-
-
- {{/if}}
-
-
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py
deleted file mode 100644
index 44e4309d20dfe3190988905258a4159411a662b3..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-"""
-The cache object API for implementing caches. The default is a thread
-safe in-memory dictionary.
-"""
-from threading import Lock
-
-
-class BaseCache(object):
-
- def get(self, key):
- raise NotImplementedError()
-
- def set(self, key, value, expires=None):
- raise NotImplementedError()
-
- def delete(self, key):
- raise NotImplementedError()
-
- def close(self):
- pass
-
-
-class DictCache(BaseCache):
-
- def __init__(self, init_dict=None):
- self.lock = Lock()
- self.data = init_dict or {}
-
- def get(self, key):
- return self.data.get(key, None)
-
- def set(self, key, value, expires=None):
- with self.lock:
- self.data.update({key: value})
-
- def delete(self, key):
- with self.lock:
- if key in self.data:
- self.data.pop(key)
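The module docstring above describes BaseCache as the interface for implementing caches, with DictCache as the thread-safe in-memory default. A minimal usage sketch, assuming the standalone cachecontrol package layout mirrored by this vendored copy; the key and value are arbitrary:

```python
# Illustrative use of the DictCache defined above.
from cachecontrol.cache import DictCache

cache = DictCache()
cache.set("http://example.com", b"cached response body")
assert cache.get("http://example.com") == b"cached response body"

cache.delete("http://example.com")
assert cache.get("http://example.com") is None  # misses return None
```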
diff --git a/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py b/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py
deleted file mode 100644
index 94daaeebce5604d61999f0b1b354b9a9e299b991..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-
-# from .version import *
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py
deleted file mode 100644
index df61bf8713419f847d7c2ee8c6036797c7b03ef7..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .infinibatch.infinibatch import datasets, iterators
diff --git a/spaces/alphunt/diffdock-alphunt-demo/datasets/__init__.py b/spaces/alphunt/diffdock-alphunt-demo/datasets/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/amanatid/ArxivGPT_Streamlit/faq.py b/spaces/amanatid/ArxivGPT_Streamlit/faq.py
deleted file mode 100644
index 59592c5dd7088da2edc1eedd05a12b04e307735d..0000000000000000000000000000000000000000
--- a/spaces/amanatid/ArxivGPT_Streamlit/faq.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# flake8: noqa
-import streamlit as st
-
-
-def faq():
- st.markdown(
- """
-# FAQ
-## How does ArxivGPT work?
-When you load a PDF, it will be divided into smaller chunks
-and stored in a special type of database called a vector index
-that allows for semantic search and retrieval.
-
-When you ask a question, ArxivGPT will search through the
-pdf chunks and find the most relevant ones using the vector index.
-Then, it will use the powerful language model GPT-4 to generate a final answer.
-
-## Why does ArxivGPT take time to index my document?
-A free OpenAI API key takes time to index the loaded PDF files because it has
-restricted [rate limits](https://platform.openai.com/docs/guides/rate-limits/overview).
-To speed up the process, you can use a paid API key.
-
-
-## How accurate is ArxivGPT?
-In our experience and tests, it seems impressively accurate, but keep in mind that
-GPT-4, like any language model, can make mistakes. Its answers are based on
-semantic search over the most relevant chunks extracted from the PDF files.
-"""
- )
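The FAQ above describes a standard retrieval-augmented flow: the PDF is split into chunks, the chunks go into a vector index, semantic search pulls out the most relevant ones, and GPT-4 writes the answer from them. The sketch below only illustrates the retrieval step and is not the app's actual pipeline; `embed` is a hypothetical embedding function (any model mapping a string to a fixed-length vector) and `chunks` is assumed to be a list of text chunks.

```python
# Illustrative chunk -> embed -> retrieve step, as described in the FAQ above.
import numpy as np

def top_chunks(question, chunks, embed, k=3):
    """Return the k chunks most similar to the question by cosine similarity."""
    q = np.asarray(embed(question), dtype=float)
    scores = []
    for chunk in chunks:
        v = np.asarray(embed(chunk), dtype=float)
        scores.append(float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)))
    best = np.argsort(scores)[::-1][:k]        # indices of the highest scores
    return [chunks[i] for i in best]
```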
diff --git a/spaces/amin2809/rvc-models/config.py b/spaces/amin2809/rvc-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/amin2809/rvc-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set to cuda:x, cpu, or mps; x is the GPU index. Only NVIDIA GPUs / Apple Silicon acceleration are supported
-device = "cuda:0"
-
-# Safe to set True on 9/10/20/30/40-series NVIDIA GPUs; it does not affect quality, and 20-series or newer GPUs get a speedup
-is_half = True
-
-# Default 0 uses all threads; set a number to limit CPU usage
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below, do not modify ##################
-
-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
- "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
-# check `getattr` and try it for compatibility
-def has_mps() -> bool:
- if sys.platform != "darwin":
- return False
- else:
- if not getattr(torch, "has_mps", False):
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
-
-if not torch.cuda.is_available():
- if has_mps():
- print("No supported NVIDIA GPU found, using MPS for inference")
- device = "mps"
- else:
- print("No supported NVIDIA GPU found, using CPU for inference")
- device = "cpu"
- is_half = False
-
-if device not in ["cpu", "mps"]:
- gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
- if "16" in gpu_name or "MX" in gpu_name:
- print("16-series / MX-series GPUs are forced to single precision")
- is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
- n_cpu = cpu_count()
-if is_half:
- # settings for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
-else:
- # settings for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
diff --git a/spaces/anonderpling/repo_uploader/app.py b/spaces/anonderpling/repo_uploader/app.py
deleted file mode 100644
index 04db3848c0947f83b416054e8fb7c40295186378..0000000000000000000000000000000000000000
--- a/spaces/anonderpling/repo_uploader/app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import gradio as gr
-import requests
-import subprocess
-import os
-from huggingface_hub import whoami
-from huggingface_hub import HfApi
-from huggingface_hub import login
-import random
-import time
-
-api=HfApi()
-REPO_TYPES = ["model", "dataset", "space"]
-
-def duplicate(source_url, dst_repo, token, new_name, dst_repo_path, repo_type):
- try:
- _ = whoami(token)
- # ^ this will throw if token is invalid
-
- # make sure the user fills out the other required paths.
- if not dst_repo_path.endswith('/'):
- raise Exception("Your destination path *must* end with a /")
- if not source_url:
- raise Exception("You haven't chosen a file to download!")
- if not dst_repo:
- raise Exception("You haven't chosen a repo to download to")
- login(token=token)
-
- # keep downloads in separate directories, partly in case people download different files with the same name (e.g. `download.zip`); it also lets the original filename be preserved
- dir="/home/user/apps/downloads/"+str(int(time.time()))+str(random.getrandbits(8))+"/"
- subprocess.check_call([r"mkdir","-p",dir])
- subprocess.check_call([r"aria2c","-x16","--split=16",source_url,"--dir="+dir])
- files=os.listdir(dir)
-
- if new_name:
- dst_repo_path=dst_repo_path.strip("/")+"/"+new_name.strip("/")
- else:
- dst_repo_path=dst_repo_path.strip("/")+"/"+files[0]
-
- api.upload_file(
- path_or_fileobj=dir+files[0],
- path_in_repo=dst_repo_path,
- repo_id=dst_repo,
- repo_type=repo_type
- )
-
- # now clean up
- os.remove(dir+files[0])
- os.rmdir(dir)
- match repo_type:
- case "space":
- repo_url=f"https://hf.co/spaces/{dst_repo}"
- case "dataset":
- repo_url=f"https://hf.co/datasets/{dst_repo}"
- case "model":
- repo_url=f"https://hf.co/{dst_repo}"
- return (
- f'<a href="{repo_url}">Find your repo here</a>',
- "sp.jpg",
- )
-
- except Exception as e:
- blames=["grandma","my boss","your boss","God","you","you. It's *all* your fault.","the pope"]
- blameweights=(1,1,1,1,4,2,1)
- excuses=["I blame it all on "+random.choices(blames,weights=blameweights)[0],"It's my fault, sorry.","I did it on purpose.","That file doesn't want to be downloaded.","You nincompoop!"]
- excusesweights=(12,1,1,2,3)
- excuse=random.choices(excuses,weights=excusesweights)[0]
- return (
- f"""
- ### Error 😢😢😢
-
- {e}
-
-
-
- """ + excuse,
- None,
- )
-
-
-interface = gr.Interface(
- fn=duplicate,
- inputs=[
- gr.Textbox(placeholder="Source URL (e.g. civitai.com/api/download/models/4324322534)"),
- gr.Textbox(placeholder="Destination repository (e.g. osanseviero/dst)"),
- gr.Textbox(placeholder="Write access token", type="password"),
- gr.Textbox(placeholder="Post-download name of your file, if you want it changed (e.g. stupidmodel.safetensors)"),
- gr.Textbox(placeholder="Destination for your file within your repo. Don't include the filename (e.g. /models/Stable-diffusion/)"),
- gr.Dropdown(choices=REPO_TYPES, value="model"),
- ],
- outputs=[
- gr.Markdown(label="output"),
- gr.Image(show_label=False),
- ],
- title="Download a file to your repo!",
- description="Download a file to your Hugging Face repository! You need to specify a write token obtained in https://hf.co/settings/tokens. This Space is an experimental demo.",
- article="
",
- allow_flagging="never",
- live=False, # prevents the interface from re-running automatically when an input changes
-)
-interface.launch(enable_queue=True)
diff --git a/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js b/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js
deleted file mode 100644
index 23002013d70aa52189700305aacd93dba6849067..0000000000000000000000000000000000000000
--- a/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js
+++ /dev/null
@@ -1,5 +0,0 @@
-import '@/styles/globals.css'
-
-export default function App({ Component, pageProps }) {
- return <Component {...pageProps} />
-}
diff --git a/spaces/apsys/normflows/README.md b/spaces/apsys/normflows/README.md
deleted file mode 100644
index 824b6c1224adf857741eb61fec9d1f2c336c8c85..0000000000000000000000000000000000000000
--- a/spaces/apsys/normflows/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Normflows
-emoji: 🌖
-colorFrom: red
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ari7thomas/bible.ai/README.md b/spaces/ari7thomas/bible.ai/README.md
deleted file mode 100644
index 19bc778084383d4c258060b92e28b6598c16eab2..0000000000000000000000000000000000000000
--- a/spaces/ari7thomas/bible.ai/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AutoTrain Advanced
-emoji: 🚀
-colorFrom: blue
-colorTo: green
-sdk: docker
-pinned: false
-duplicated_from: autotrain-projects/autotrain-advanced
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py
deleted file mode 100644
index f2e7779c0c109a3ec78f1972ebf1147ec436048a..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py
+++ /dev/null
@@ -1,319 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-import os
-import sys
-import time
-import traceback
-
-import torch
-from torch.utils.data import DataLoader
-from trainer.torch import NoamLR
-from trainer.trainer_utils import get_optimizer
-
-from TTS.encoder.dataset import EncoderDataset
-from TTS.encoder.utils.generic_utils import save_best_model, save_checkpoint, setup_encoder_model
-from TTS.encoder.utils.training import init_training
-from TTS.encoder.utils.visual import plot_embeddings
-from TTS.tts.datasets import load_tts_samples
-from TTS.utils.audio import AudioProcessor
-from TTS.utils.generic_utils import count_parameters, remove_experiment_folder
-from TTS.utils.io import copy_model_files
-from TTS.utils.samplers import PerfectBatchSampler
-from TTS.utils.training import check_update
-
-torch.backends.cudnn.enabled = True
-torch.backends.cudnn.benchmark = True
-torch.manual_seed(54321)
-use_cuda = torch.cuda.is_available()
-num_gpus = torch.cuda.device_count()
-print(" > Using CUDA: ", use_cuda)
-print(" > Number of GPUs: ", num_gpus)
-
-
-def setup_loader(ap: AudioProcessor, is_val: bool = False, verbose: bool = False):
- num_utter_per_class = c.num_utter_per_class if not is_val else c.eval_num_utter_per_class
- num_classes_in_batch = c.num_classes_in_batch if not is_val else c.eval_num_classes_in_batch
-
- dataset = EncoderDataset(
- c,
- ap,
- meta_data_eval if is_val else meta_data_train,
- voice_len=c.voice_len,
- num_utter_per_class=num_utter_per_class,
- num_classes_in_batch=num_classes_in_batch,
- verbose=verbose,
- augmentation_config=c.audio_augmentation if not is_val else None,
- use_torch_spec=c.model_params.get("use_torch_spec", False),
- )
- # get classes list
- classes = dataset.get_class_list()
-
- sampler = PerfectBatchSampler(
- dataset.items,
- classes,
- batch_size=num_classes_in_batch * num_utter_per_class, # total batch size
- num_classes_in_batch=num_classes_in_batch,
- num_gpus=1,
- shuffle=not is_val,
- drop_last=True,
- )
-
- if len(classes) < num_classes_in_batch:
- if is_val:
- raise RuntimeError(
- f"config.eval_num_classes_in_batch ({num_classes_in_batch}) needs to be <= {len(classes)} (the total number of classes in the eval dataset)!"
- )
- raise RuntimeError(
- f"config.num_classes_in_batch ({num_classes_in_batch}) needs to be <= {len(classes)} (the total number of classes in the train dataset)!"
- )
-
- # set the classes to avoid getting a wrong class_id when the numbers of training and eval classes differ
- if is_val:
- dataset.set_classes(train_classes)
-
- loader = DataLoader(
- dataset,
- num_workers=c.num_loader_workers,
- batch_sampler=sampler,
- collate_fn=dataset.collate_fn,
- )
-
- return loader, classes, dataset.get_map_classid_to_classname()
-
-
-def evaluation(model, criterion, data_loader, global_step):
- eval_loss = 0
- for _, data in enumerate(data_loader):
- with torch.no_grad():
- # setup input data
- inputs, labels = data
-
- # group the samples of each class in the batch. the perfect sampler produces [3,2,1,3,2,1]; we need [3,3,2,2,1,1]
- labels = torch.transpose(
- labels.view(c.eval_num_utter_per_class, c.eval_num_classes_in_batch), 0, 1
- ).reshape(labels.shape)
- inputs = torch.transpose(
- inputs.view(c.eval_num_utter_per_class, c.eval_num_classes_in_batch, -1), 0, 1
- ).reshape(inputs.shape)
-
- # dispatch data to GPU
- if use_cuda:
- inputs = inputs.cuda(non_blocking=True)
- labels = labels.cuda(non_blocking=True)
-
- # forward pass model
- outputs = model(inputs)
-
- # loss computation
- loss = criterion(
- outputs.view(c.eval_num_classes_in_batch, outputs.shape[0] // c.eval_num_classes_in_batch, -1), labels
- )
-
- eval_loss += loss.item()
-
- eval_avg_loss = eval_loss / len(data_loader)
- # save stats
- dashboard_logger.eval_stats(global_step, {"loss": eval_avg_loss})
- # plot the last batch in the evaluation
- figures = {
- "UMAP Plot": plot_embeddings(outputs.detach().cpu().numpy(), c.num_classes_in_batch),
- }
- dashboard_logger.eval_figures(global_step, figures)
- return eval_avg_loss
-
-
-def train(model, optimizer, scheduler, criterion, data_loader, eval_data_loader, global_step):
- model.train()
- best_loss = float("inf")
- avg_loader_time = 0
- end_time = time.time()
- for epoch in range(c.epochs):
- tot_loss = 0
- epoch_time = 0
- for _, data in enumerate(data_loader):
- start_time = time.time()
-
- # setup input data
- inputs, labels = data
- # group the samples of each class in the batch. the perfect sampler produces [3,2,1,3,2,1]; we need [3,3,2,2,1,1]
- labels = torch.transpose(labels.view(c.num_utter_per_class, c.num_classes_in_batch), 0, 1).reshape(
- labels.shape
- )
- inputs = torch.transpose(inputs.view(c.num_utter_per_class, c.num_classes_in_batch, -1), 0, 1).reshape(
- inputs.shape
- )
- # ToDo: move it to a unit test
- # labels_converted = torch.transpose(labels.view(c.num_utter_per_class, c.num_classes_in_batch), 0, 1).reshape(labels.shape)
- # inputs_converted = torch.transpose(inputs.view(c.num_utter_per_class, c.num_classes_in_batch, -1), 0, 1).reshape(inputs.shape)
- # idx = 0
- # for j in range(0, c.num_classes_in_batch, 1):
- # for i in range(j, len(labels), c.num_classes_in_batch):
- # if not torch.all(labels[i].eq(labels_converted[idx])) or not torch.all(inputs[i].eq(inputs_converted[idx])):
- # print("Invalid")
- # print(labels)
- # exit()
- # idx += 1
- # labels = labels_converted
- # inputs = inputs_converted
-
- loader_time = time.time() - end_time
- global_step += 1
-
- # setup lr
- if c.lr_decay:
- scheduler.step()
- optimizer.zero_grad()
-
- # dispatch data to GPU
- if use_cuda:
- inputs = inputs.cuda(non_blocking=True)
- labels = labels.cuda(non_blocking=True)
-
- # forward pass model
- outputs = model(inputs)
-
- # loss computation
- loss = criterion(
- outputs.view(c.num_classes_in_batch, outputs.shape[0] // c.num_classes_in_batch, -1), labels
- )
- loss.backward()
- grad_norm, _ = check_update(model, c.grad_clip)
- optimizer.step()
-
- step_time = time.time() - start_time
- epoch_time += step_time
-
- # accumulate the total epoch loss
- tot_loss += loss.item()
-
- # Averaged Loader Time
- num_loader_workers = c.num_loader_workers if c.num_loader_workers > 0 else 1
- avg_loader_time = (
- 1 / num_loader_workers * loader_time + (num_loader_workers - 1) / num_loader_workers * avg_loader_time
- if avg_loader_time != 0
- else loader_time
- )
- current_lr = optimizer.param_groups[0]["lr"]
-
- if global_step % c.steps_plot_stats == 0:
- # Plot Training Epoch Stats
- train_stats = {
- "loss": loss.item(),
- "lr": current_lr,
- "grad_norm": grad_norm,
- "step_time": step_time,
- "avg_loader_time": avg_loader_time,
- }
- dashboard_logger.train_epoch_stats(global_step, train_stats)
- figures = {
- "UMAP Plot": plot_embeddings(outputs.detach().cpu().numpy(), c.num_classes_in_batch),
- }
- dashboard_logger.train_figures(global_step, figures)
-
- if global_step % c.print_step == 0:
- print(
- " | > Step:{} Loss:{:.5f} GradNorm:{:.5f} "
- "StepTime:{:.2f} LoaderTime:{:.2f} AvGLoaderTime:{:.2f} LR:{:.6f}".format(
- global_step, loss.item(), grad_norm, step_time, loader_time, avg_loader_time, current_lr
- ),
- flush=True,
- )
-
- if global_step % c.save_step == 0:
- # save model
- save_checkpoint(model, optimizer, criterion, loss.item(), OUT_PATH, global_step, epoch)
-
- end_time = time.time()
-
- print("")
- print(
- ">>> Epoch:{} AvgLoss: {:.5f} GradNorm:{:.5f} "
- "EpochTime:{:.2f} AvGLoaderTime:{:.2f} ".format(
- epoch, tot_loss / len(data_loader), grad_norm, epoch_time, avg_loader_time
- ),
- flush=True,
- )
- # evaluation
- if c.run_eval:
- model.eval()
- eval_loss = evaluation(model, criterion, eval_data_loader, global_step)
- print("\n\n")
- print("--> EVAL PERFORMANCE")
- print(
- " | > Epoch:{} AvgLoss: {:.5f} ".format(epoch, eval_loss),
- flush=True,
- )
- # save the best checkpoint
- best_loss = save_best_model(model, optimizer, criterion, eval_loss, best_loss, OUT_PATH, global_step, epoch)
- model.train()
-
- return best_loss, global_step
-
-
-def main(args): # pylint: disable=redefined-outer-name
- # pylint: disable=global-variable-undefined
- global meta_data_train
- global meta_data_eval
- global train_classes
-
- ap = AudioProcessor(**c.audio)
- model = setup_encoder_model(c)
-
- optimizer = get_optimizer(c.optimizer, c.optimizer_params, c.lr, model)
-
- # pylint: disable=redefined-outer-name
- meta_data_train, meta_data_eval = load_tts_samples(c.datasets, eval_split=True)
-
- train_data_loader, train_classes, map_classid_to_classname = setup_loader(ap, is_val=False, verbose=True)
- if c.run_eval:
- eval_data_loader, _, _ = setup_loader(ap, is_val=True, verbose=True)
- else:
- eval_data_loader = None
-
- num_classes = len(train_classes)
- criterion = model.get_criterion(c, num_classes)
-
- if c.loss == "softmaxproto" and c.model != "speaker_encoder":
- c.map_classid_to_classname = map_classid_to_classname
- copy_model_files(c, OUT_PATH)
-
- if args.restore_path:
- criterion, args.restore_step = model.load_checkpoint(
- c, args.restore_path, eval=False, use_cuda=use_cuda, criterion=criterion
- )
- print(" > Model restored from step %d" % args.restore_step, flush=True)
- else:
- args.restore_step = 0
-
- if c.lr_decay:
- scheduler = NoamLR(optimizer, warmup_steps=c.warmup_steps, last_epoch=args.restore_step - 1)
- else:
- scheduler = None
-
- num_params = count_parameters(model)
- print("\n > Model has {} parameters".format(num_params), flush=True)
-
- if use_cuda:
- model = model.cuda()
- criterion.cuda()
-
- global_step = args.restore_step
- _, global_step = train(model, optimizer, scheduler, criterion, train_data_loader, eval_data_loader, global_step)
-
-
-if __name__ == "__main__":
- args, c, OUT_PATH, AUDIO_PATH, c_logger, dashboard_logger = init_training()
-
- try:
- main(args)
- except KeyboardInterrupt:
- remove_experiment_folder(OUT_PATH)
- try:
- sys.exit(0)
- except SystemExit:
- os._exit(0) # pylint: disable=protected-access
- except Exception: # pylint: disable=broad-except
- remove_experiment_folder(OUT_PATH)
- traceback.print_exc()
- sys.exit(1)
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md
deleted file mode 100644
index 94508a7f2ecd7d161b16997e415ed4c4935a39f2..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# 🐸💬 TTS LJspeech Recipes
-
-For running the recipes
-
-1. Download the LJSpeech dataset, either manually from [its official website](https://keithito.com/LJ-Speech-Dataset/) or using ```download_ljspeech.sh```.
-2. Go to your desired model folder and run the training.
-
- Running Python files. (Choose the desired GPU ID for your run and set ```CUDA_VISIBLE_DEVICES```)
- ```terminal
- CUDA_VISIBLE_DEVICES="0" python train_modelX.py
- ```
-
- Running bash scripts.
- ```terminal
- bash run.sh
- ```
-
-💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best
-result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.
diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py b/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py
deleted file mode 100644
index 64906e257bd1f521d8fadb93e877ba83da7764ce..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py
+++ /dev/null
@@ -1,210 +0,0 @@
-#!/usr/bin/python
-#-*- coding: utf-8 -*-
-# Video 25 FPS, Audio 16000HZ
-
-import torch
-import numpy
-import time, pdb, argparse, subprocess, os, math, glob
-import cv2
-import python_speech_features
-
-from scipy import signal
-from scipy.io import wavfile
-from SyncNetModel import *
-from shutil import rmtree
-
-
-# ==================== Get OFFSET ====================
-
-def calc_pdist(feat1, feat2, vshift=10):
-
- win_size = vshift*2+1
-
- feat2p = torch.nn.functional.pad(feat2,(0,0,vshift,vshift))
-
- dists = []
-
- for i in range(0,len(feat1)):
-
- dists.append(torch.nn.functional.pairwise_distance(feat1[[i],:].repeat(win_size, 1), feat2p[i:i+win_size,:]))
-
- return dists
-
-# ==================== MAIN DEF ====================
-
-class SyncNetInstance(torch.nn.Module):
-
- def __init__(self, dropout = 0, num_layers_in_fc_layers = 1024):
- super(SyncNetInstance, self).__init__();
-
- self.__S__ = S(num_layers_in_fc_layers = num_layers_in_fc_layers).cuda();
-
- def evaluate(self, opt, videofile):
-
- self.__S__.eval();
-
- # ========== ==========
- # Convert files
- # ========== ==========
-
- if os.path.exists(os.path.join(opt.tmp_dir,opt.reference)):
- rmtree(os.path.join(opt.tmp_dir,opt.reference))
-
- os.makedirs(os.path.join(opt.tmp_dir,opt.reference))
-
- command = ("ffmpeg -loglevel error -y -i %s -threads 1 -f image2 %s" % (videofile,os.path.join(opt.tmp_dir,opt.reference,'%06d.jpg')))
- output = subprocess.call(command, shell=True, stdout=None)
-
- command = ("ffmpeg -loglevel error -y -i %s -async 1 -ac 1 -vn -acodec pcm_s16le -ar 16000 %s" % (videofile,os.path.join(opt.tmp_dir,opt.reference,'audio.wav')))
- output = subprocess.call(command, shell=True, stdout=None)
-
- # ========== ==========
- # Load video
- # ========== ==========
-
- images = []
-
- flist = glob.glob(os.path.join(opt.tmp_dir,opt.reference,'*.jpg'))
- flist.sort()
-
- for fname in flist:
- img_input = cv2.imread(fname)
- img_input = cv2.resize(img_input, (224,224)) #HARD CODED, CHANGE BEFORE RELEASE
- images.append(img_input)
-
- im = numpy.stack(images,axis=3)
- im = numpy.expand_dims(im,axis=0)
- im = numpy.transpose(im,(0,3,4,1,2))
-
- imtv = torch.autograd.Variable(torch.from_numpy(im.astype(float)).float())
-
- # ========== ==========
- # Load audio
- # ========== ==========
-
- sample_rate, audio = wavfile.read(os.path.join(opt.tmp_dir,opt.reference,'audio.wav'))
- mfcc = zip(*python_speech_features.mfcc(audio,sample_rate))
- mfcc = numpy.stack([numpy.array(i) for i in mfcc])
-
- cc = numpy.expand_dims(numpy.expand_dims(mfcc,axis=0),axis=0)
- cct = torch.autograd.Variable(torch.from_numpy(cc.astype(float)).float())
-
- # ========== ==========
- # Check audio and video input length
- # ========== ==========
-
- #if (float(len(audio))/16000) != (float(len(images))/25) :
- # print("WARNING: Audio (%.4fs) and video (%.4fs) lengths are different."%(float(len(audio))/16000,float(len(images))/25))
-
- min_length = min(len(images),math.floor(len(audio)/640))
-
- # ========== ==========
- # Generate video and audio feats
- # ========== ==========
-
- lastframe = min_length-5
- im_feat = []
- cc_feat = []
-
- tS = time.time()
- for i in range(0,lastframe,opt.batch_size):
-
- im_batch = [ imtv[:,:,vframe:vframe+5,:,:] for vframe in range(i,min(lastframe,i+opt.batch_size)) ]
- im_in = torch.cat(im_batch,0)
- im_out = self.__S__.forward_lip(im_in.cuda());
- im_feat.append(im_out.data.cpu())
-
- cc_batch = [ cct[:,:,:,vframe*4:vframe*4+20] for vframe in range(i,min(lastframe,i+opt.batch_size)) ]
- cc_in = torch.cat(cc_batch,0)
- cc_out = self.__S__.forward_aud(cc_in.cuda())
- cc_feat.append(cc_out.data.cpu())
-
- im_feat = torch.cat(im_feat,0)
- cc_feat = torch.cat(cc_feat,0)
-
- # ========== ==========
- # Compute offset
- # ========== ==========
-
- #print('Compute time %.3f sec.' % (time.time()-tS))
-
- dists = calc_pdist(im_feat,cc_feat,vshift=opt.vshift)
- mdist = torch.mean(torch.stack(dists,1),1)
-
- minval, minidx = torch.min(mdist,0)
-
- offset = opt.vshift-minidx
- conf = torch.median(mdist) - minval
-
- fdist = numpy.stack([dist[minidx].numpy() for dist in dists])
- # fdist = numpy.pad(fdist, (3,3), 'constant', constant_values=15)
- fconf = torch.median(mdist).numpy() - fdist
- fconfm = signal.medfilt(fconf,kernel_size=9)
-
- numpy.set_printoptions(formatter={'float': '{: 0.3f}'.format})
- #print('Framewise conf: ')
- #print(fconfm)
- #print('AV offset: \t%d \nMin dist: \t%.3f\nConfidence: \t%.3f' % (offset,minval,conf))
-
- dists_npy = numpy.array([ dist.numpy() for dist in dists ])
- return offset.numpy(), conf.numpy(), minval.numpy()
-
- def extract_feature(self, opt, videofile):
-
- self.__S__.eval();
-
- # ========== ==========
- # Load video
- # ========== ==========
- cap = cv2.VideoCapture(videofile)
-
- frame_num = 1;
- images = []
- while frame_num:
- frame_num += 1
- ret, image = cap.read()
- if ret == 0:
- break
-
- images.append(image)
-
- im = numpy.stack(images,axis=3)
- im = numpy.expand_dims(im,axis=0)
- im = numpy.transpose(im,(0,3,4,1,2))
-
- imtv = torch.autograd.Variable(torch.from_numpy(im.astype(float)).float())
-
- # ========== ==========
- # Generate video feats
- # ========== ==========
-
- lastframe = len(images)-4
- im_feat = []
-
- tS = time.time()
- for i in range(0,lastframe,opt.batch_size):
-
- im_batch = [ imtv[:,:,vframe:vframe+5,:,:] for vframe in range(i,min(lastframe,i+opt.batch_size)) ]
- im_in = torch.cat(im_batch,0)
- im_out = self.__S__.forward_lipfeat(im_in.cuda());
- im_feat.append(im_out.data.cpu())
-
- im_feat = torch.cat(im_feat,0)
-
- # ========== ==========
- # Compute offset
- # ========== ==========
-
- print('Compute time %.3f sec.' % (time.time()-tS))
-
- return im_feat
-
-
- def loadParameters(self, path):
- loaded_state = torch.load(path, map_location=lambda storage, loc: storage);
-
- self_state = self.__S__.state_dict();
-
- for name, param in loaded_state.items():
-
- self_state[name].copy_(param);
diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py b/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py
deleted file mode 100644
index 31f1d6b4851362ae3af405b09309ec38ac884c36..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import random as rand
-
-import numpy
-import pytest
-
-
-def pytest_configure(config):
- config.addinivalue_line("markers", "requires_cuda")
-
-
-@pytest.fixture
-def random():
- rand.seed(42)
- numpy.random.seed(42)
diff --git a/spaces/arxify/RVC-beta-v2-0618/i18n.py b/spaces/arxify/RVC-beta-v2-0618/i18n.py
deleted file mode 100644
index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = locale.getdefaultlocale()[
- 0
- ] # getlocale can't identify the system's language ((None, None))
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "en_US"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- print("Use Language:", self.language)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py
deleted file mode 100644
index 820b57dd71f1666fbaa82589cd92b26ccd8c42d6..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py
+++ /dev/null
@@ -1,169 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-from Crypto.Util.py3compat import bord
-
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- create_string_buffer,
- get_raw_buffer, c_size_t,
- c_uint8_ptr)
-
-_raw_ripemd160_lib = load_pycryptodome_raw_lib(
- "Crypto.Hash._RIPEMD160",
- """
- int ripemd160_init(void **shaState);
- int ripemd160_destroy(void *shaState);
- int ripemd160_update(void *hs,
- const uint8_t *buf,
- size_t len);
- int ripemd160_digest(const void *shaState,
- uint8_t digest[20]);
- int ripemd160_copy(const void *src, void *dst);
- """)
-
-
-class RIPEMD160Hash(object):
- """A RIPEMD-160 hash object.
- Do not instantiate directly.
- Use the :func:`new` function.
-
- :ivar oid: ASN.1 Object ID
- :vartype oid: string
-
- :ivar block_size: the size in bytes of the internal message block,
- input to the compression function
- :vartype block_size: integer
-
- :ivar digest_size: the size in bytes of the resulting hash
- :vartype digest_size: integer
- """
-
- # The size of the resulting hash in bytes.
- digest_size = 20
- # The internal block size of the hash algorithm in bytes.
- block_size = 64
- # ASN.1 Object ID
- oid = "1.3.36.3.2.1"
-
- def __init__(self, data=None):
- state = VoidPointer()
- result = _raw_ripemd160_lib.ripemd160_init(state.address_of())
- if result:
- raise ValueError("Error %d while instantiating RIPEMD160"
- % result)
- self._state = SmartPointer(state.get(),
- _raw_ripemd160_lib.ripemd160_destroy)
- if data:
- self.update(data)
-
- def update(self, data):
- """Continue hashing of a message by consuming the next chunk of data.
-
- Args:
- data (byte string/byte array/memoryview): The next chunk of the message being hashed.
- """
-
- result = _raw_ripemd160_lib.ripemd160_update(self._state.get(),
- c_uint8_ptr(data),
- c_size_t(len(data)))
- if result:
- raise ValueError("Error %d while hashing data with RIPEMD160"
- % result)
-
- def digest(self):
- """Return the **binary** (non-printable) digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Binary form.
- :rtype: byte string
- """
-
- bfr = create_string_buffer(self.digest_size)
- result = _raw_ripemd160_lib.ripemd160_digest(self._state.get(),
- bfr)
- if result:
- raise ValueError("Error %d while computing the RIPEMD160 digest"
- % result)
-
- return get_raw_buffer(bfr)
-
- def hexdigest(self):
- """Return the **printable** digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Hexadecimal encoded.
- :rtype: string
- """
-
- return "".join(["%02x" % bord(x) for x in self.digest()])
-
- def copy(self):
- """Return a copy ("clone") of the hash object.
-
- The copy will have the same internal state as the original hash
- object.
- This can be used to efficiently compute the digests of strings that
- share a common initial substring.
-
- :return: A hash object of the same type
- """
-
- clone = RIPEMD160Hash()
- result = _raw_ripemd160_lib.ripemd160_copy(self._state.get(),
- clone._state.get())
- if result:
- raise ValueError("Error %d while copying ripemd160" % result)
- return clone
-
- def new(self, data=None):
- """Create a fresh RIPEMD-160 hash object."""
-
- return RIPEMD160Hash(data)
-
-
-def new(data=None):
- """Create a new hash object.
-
- :parameter data:
- Optional. The very first chunk of the message to hash.
- It is equivalent to an early call to :meth:`RIPEMD160Hash.update`.
- :type data: byte string/byte array/memoryview
-
- :Return: A :class:`RIPEMD160Hash` hash object
- """
-
- return RIPEMD160Hash().new(data)
-
-# The size of the resulting hash in bytes.
-digest_size = RIPEMD160Hash.digest_size
-
-# The internal block size of the hash algorithm in bytes.
-block_size = RIPEMD160Hash.block_size
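A short usage sketch of the module above. The expected digest for `'message digest'` matches the RIPEMD-160 test vector in the accompanying self-test file.

```python
# Hashing with the RIPEMD160 module above (pycryptodome).
from Crypto.Hash import RIPEMD160

h = RIPEMD160.new(data=b"message ")
h.update(b"digest")                # incremental hashing of the remaining bytes
print(h.hexdigest())               # 5d0689ef49d2fae572b881b123a85ffa21595f36

h2 = h.copy()                      # clone carries the state accumulated so far
assert h2.hexdigest() == h.hexdigest()
```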
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py
deleted file mode 100644
index 153c5700f1a20666e77f1011aa1a8bbec537611c..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# SelfTest/Hash/test_RIPEMD160.py: Self-test for the RIPEMD-160 hash function
-#
-# Written in 2008 by Dwayne C. Litzenberger
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-#"""Self-test suite for Crypto.Hash.RIPEMD160"""
-
-from Crypto.Util.py3compat import *
-
-# This is a list of (expected_result, input[, description]) tuples.
-test_data = [
- # Test vectors downloaded 2008-09-12 from
- # http://homes.esat.kuleuven.be/~bosselae/ripemd160.html
- ('9c1185a5c5e9fc54612808977ee8f548b2258d31', '', "'' (empty string)"),
- ('0bdc9d2d256b3ee9daae347be6f4dc835a467ffe', 'a'),
- ('8eb208f7e05d987a9b044a8e98c6b087f15a0bfc', 'abc'),
- ('5d0689ef49d2fae572b881b123a85ffa21595f36', 'message digest'),
-
- ('f71c27109c692c1b56bbdceb5b9d2865b3708dbc',
- 'abcdefghijklmnopqrstuvwxyz',
- 'a-z'),
-
- ('12a053384a9c0c88e405a06c27dcf49ada62eb2b',
- 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq',
- 'abcdbcd...pnopq'),
-
- ('b0e20b6e3116640286ed3a87a5713079b21f5189',
- 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
- 'A-Z, a-z, 0-9'),
-
- ('9b752e45573d4b39f4dbd3323cab82bf63326bfb',
- '1234567890' * 8,
- "'1234567890' * 8"),
-
- ('52783243c1697bdbe16d37f97f68f08325dc1528',
- 'a' * 10**6,
- '"a" * 10**6'),
-]
-
-def get_tests(config={}):
- from Crypto.Hash import RIPEMD160
- from .common import make_hash_tests
- return make_hash_tests(RIPEMD160, "RIPEMD160", test_data,
- digest_size=20,
- oid="1.3.36.3.2.1")
-
-if __name__ == '__main__':
- import unittest
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py
deleted file mode 100644
index 442ac70c49dbf3e0d3da0d321ce41f70ef2546f8..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py
+++ /dev/null
@@ -1,246 +0,0 @@
-#
-# THIS IS WORK IN PROGRESS
-#
-# The Python Imaging Library
-# $Id$
-#
-# portable compiled font file parser
-#
-# history:
-# 1997-08-19 fl created
-# 2003-09-13 fl fixed loading of unicode fonts
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1997-2003 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import io
-
-from . import FontFile, Image
-from ._binary import i8
-from ._binary import i16be as b16
-from ._binary import i16le as l16
-from ._binary import i32be as b32
-from ._binary import i32le as l32
-
-# --------------------------------------------------------------------
-# declarations
-
-PCF_MAGIC = 0x70636601 # "\x01fcp"
-
-PCF_PROPERTIES = 1 << 0
-PCF_ACCELERATORS = 1 << 1
-PCF_METRICS = 1 << 2
-PCF_BITMAPS = 1 << 3
-PCF_INK_METRICS = 1 << 4
-PCF_BDF_ENCODINGS = 1 << 5
-PCF_SWIDTHS = 1 << 6
-PCF_GLYPH_NAMES = 1 << 7
-PCF_BDF_ACCELERATORS = 1 << 8
-
-BYTES_PER_ROW = [
- lambda bits: ((bits + 7) >> 3),
- lambda bits: ((bits + 15) >> 3) & ~1,
- lambda bits: ((bits + 31) >> 3) & ~3,
- lambda bits: ((bits + 63) >> 3) & ~7,
-]
-
-
-def sz(s, o):
- return s[o : s.index(b"\0", o)]
-
-
-class PcfFontFile(FontFile.FontFile):
- """Font file plugin for the X11 PCF format."""
-
- name = "name"
-
- def __init__(self, fp, charset_encoding="iso8859-1"):
-
- self.charset_encoding = charset_encoding
-
- magic = l32(fp.read(4))
- if magic != PCF_MAGIC:
- raise SyntaxError("not a PCF file")
-
- super().__init__()
-
- count = l32(fp.read(4))
- self.toc = {}
- for i in range(count):
- type = l32(fp.read(4))
- self.toc[type] = l32(fp.read(4)), l32(fp.read(4)), l32(fp.read(4))
-
- self.fp = fp
-
- self.info = self._load_properties()
-
- metrics = self._load_metrics()
- bitmaps = self._load_bitmaps(metrics)
- encoding = self._load_encoding()
-
- #
- # create glyph structure
-
- for ch, ix in enumerate(encoding):
- if ix is not None:
- x, y, l, r, w, a, d, f = metrics[ix]
- glyph = (w, 0), (l, d - y, x + l, d), (0, 0, x, y), bitmaps[ix]
- self.glyph[ch] = glyph
-
- def _getformat(self, tag):
-
- format, size, offset = self.toc[tag]
-
- fp = self.fp
- fp.seek(offset)
-
- format = l32(fp.read(4))
-
- if format & 4:
- i16, i32 = b16, b32
- else:
- i16, i32 = l16, l32
-
- return fp, format, i16, i32
-
- def _load_properties(self):
-
- #
- # font properties
-
- properties = {}
-
- fp, format, i16, i32 = self._getformat(PCF_PROPERTIES)
-
- nprops = i32(fp.read(4))
-
- # read property description
- p = []
- for i in range(nprops):
- p.append((i32(fp.read(4)), i8(fp.read(1)), i32(fp.read(4))))
- if nprops & 3:
- fp.seek(4 - (nprops & 3), io.SEEK_CUR) # pad
-
- data = fp.read(i32(fp.read(4)))
-
- for k, s, v in p:
- k = sz(data, k)
- if s:
- v = sz(data, v)
- properties[k] = v
-
- return properties
-
- def _load_metrics(self):
-
- #
- # font metrics
-
- metrics = []
-
- fp, format, i16, i32 = self._getformat(PCF_METRICS)
-
- append = metrics.append
-
- if (format & 0xFF00) == 0x100:
-
- # "compressed" metrics
- for i in range(i16(fp.read(2))):
- left = i8(fp.read(1)) - 128
- right = i8(fp.read(1)) - 128
- width = i8(fp.read(1)) - 128
- ascent = i8(fp.read(1)) - 128
- descent = i8(fp.read(1)) - 128
- xsize = right - left
- ysize = ascent + descent
- append((xsize, ysize, left, right, width, ascent, descent, 0))
-
- else:
-
- # "jumbo" metrics
- for i in range(i32(fp.read(4))):
- left = i16(fp.read(2))
- right = i16(fp.read(2))
- width = i16(fp.read(2))
- ascent = i16(fp.read(2))
- descent = i16(fp.read(2))
- attributes = i16(fp.read(2))
- xsize = right - left
- ysize = ascent + descent
- append((xsize, ysize, left, right, width, ascent, descent, attributes))
-
- return metrics
-
- def _load_bitmaps(self, metrics):
-
- #
- # bitmap data
-
- bitmaps = []
-
- fp, format, i16, i32 = self._getformat(PCF_BITMAPS)
-
- nbitmaps = i32(fp.read(4))
-
- if nbitmaps != len(metrics):
- raise OSError("Wrong number of bitmaps")
-
- offsets = []
- for i in range(nbitmaps):
- offsets.append(i32(fp.read(4)))
-
- bitmap_sizes = []
- for i in range(4):
- bitmap_sizes.append(i32(fp.read(4)))
-
- # byteorder = format & 4 # non-zero => MSB
- bitorder = format & 8 # non-zero => MSB
- padindex = format & 3
-
- bitmapsize = bitmap_sizes[padindex]
- offsets.append(bitmapsize)
-
- data = fp.read(bitmapsize)
-
- pad = BYTES_PER_ROW[padindex]
- mode = "1;R"
- if bitorder:
- mode = "1"
-
- for i in range(nbitmaps):
- x, y, l, r, w, a, d, f = metrics[i]
- b, e = offsets[i], offsets[i + 1]
- bitmaps.append(Image.frombytes("1", (x, y), data[b:e], "raw", mode, pad(x)))
-
- return bitmaps
-
- def _load_encoding(self):
- fp, format, i16, i32 = self._getformat(PCF_BDF_ENCODINGS)
-
- first_col, last_col = i16(fp.read(2)), i16(fp.read(2))
- first_row, last_row = i16(fp.read(2)), i16(fp.read(2))
-
- i16(fp.read(2)) # default
-
- nencoding = (last_col - first_col + 1) * (last_row - first_row + 1)
-
- # map character code to bitmap index
- encoding = [None] * min(256, nencoding)
-
- encoding_offsets = [i16(fp.read(2)) for _ in range(nencoding)]
-
- for i in range(first_col, len(encoding)):
- try:
- encoding_offset = encoding_offsets[
- ord(bytearray([i]).decode(self.charset_encoding))
- ]
- if encoding_offset != 0xFFFF:
- encoding[i] = encoding_offset
- except UnicodeDecodeError:
- # character is not supported in selected encoding
- pass
-
- return encoding
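
A typical use of the parser above is to convert an X11 PCF font into PIL's `.pil`/`.pbm` bitmap-font format via `FontFile.save()`, after which `ImageFont.load()` can open it. A minimal sketch, not taken from the repository ("font.pcf" is a placeholder path):

```python
from PIL import ImageFont
from PIL.PcfFontFile import PcfFontFile

with open("font.pcf", "rb") as fp:      # placeholder PCF font file
    pcf = PcfFontFile(fp)               # parses TOC, properties, metrics, bitmaps
    pcf.save("font")                    # writes font.pil and font.pbm

font = ImageFont.load("font.pil")       # usable with ImageDraw.Draw.text(..., font=font)
```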
diff --git a/spaces/asalhi85/ArabiToolsDialecRecognition/app.py b/spaces/asalhi85/ArabiToolsDialecRecognition/app.py
deleted file mode 100644
index 18ed16681eaf985664c52e744c88748c64c4c3ae..0000000000000000000000000000000000000000
--- a/spaces/asalhi85/ArabiToolsDialecRecognition/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-# Creating a gradio app using the inference API
-App = gr.Interface.load("models/asalhi85/Dialect_Recognitionv2",
- title="
مثال لتحليل اللهجات - أدوات عربي", description ="
نموذج لغوي عميق خاص في كشف اللهجات وحاليا يدعم اللهجات التالية : اللغة العربية الفصحى الحديثة ، اللهجة النجدية، اللهجة الحجازية، اللهجة الخليجية
",
- allow_flagging=False, examples=[["اتفضل يا خوي تفضل روح ارتاح بغرفتك انا ماني غريب"], ["استاذ عبود هل تعتقد معي ان عدم مجيء هذا النجم الجماهيري الكبير الى هذا المهرجان سيقلل من نجاح هذا الحفل"], ["طب انت مستعدة قولي ايش الحل الاول وانا اروح له وعد شرف اننا اسعى لك واحاول انفذ طلبك بقد ما اقدر"]]
-)
-
-App.launch()
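
The demo above is a thin wrapper around the hosted model, so the same checkpoint can also be queried without Gradio. A minimal sketch using `huggingface_hub`, assuming the model is exposed as a text-classification pipeline on the Inference API (the example sentence is one of those listed above):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()  # optionally pass token=... for higher rate limits
result = client.text_classification(
    "اتفضل يا خوي تفضل روح ارتاح بغرفتك انا ماني غريب",
    model="asalhi85/Dialect_Recognitionv2",
)
print(result)  # list of label/score pairs, one per supported dialect
```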
diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py
deleted file mode 100644
index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000
--- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import io
-import logging
-import time
-from pathlib import Path
-
-import librosa
-import matplotlib.pyplot as plt
-import numpy as np
-import soundfile
-
-from inference import infer_tool
-from inference import slicer
-from inference.infer_tool import Svc
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")
-
-
-
-def main():
- import argparse
-
- parser = argparse.ArgumentParser(description='sovits4 inference')
-
-    # required arguments
-    parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='model path')
-    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='config file path')
-    parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='list of wav file names placed in the raw folder')
-    parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='pitch shift in semitones, positive or negative')
-    parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='name(s) of the target speaker(s) to synthesize')
-
-    # optional arguments
-    parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,
-                        help='automatically predict pitch for speech conversion; do not enable this when converting singing, or the result will be badly off-key')
-    parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='path to the cluster model; any value is fine if no cluster model was trained')
-    parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='weight of the clustering scheme, range 0-1; set to 0 if no cluster model was trained')
-
-    # arguments that normally need no change
-    parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='default -40; use -30 for noisy audio, or -50 to keep breaths in dry vocals')
-    parser.add_argument('-d', '--device', type=str, default=None, help='inference device; None auto-selects cpu or gpu')
-    parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='noise scale; affects articulation and audio quality, somewhat unpredictable')
-    parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='seconds of silence padded around each inference chunk; for unknown reasons artifacts appear at the start and end, and padding a short silent segment removes them')
-    parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='audio output format')
-
- args = parser.parse_args()
-
- svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path)
- infer_tool.mkdir(["raw", "results"])
- clean_names = args.clean_names
- trans = args.trans
- spk_list = args.spk_list
- slice_db = args.slice_db
- wav_format = args.wav_format
- auto_predict_f0 = args.auto_predict_f0
- cluster_infer_ratio = args.cluster_infer_ratio
- noice_scale = args.noice_scale
- pad_seconds = args.pad_seconds
-
- infer_tool.fill_a_to_b(trans, clean_names)
- for clean_name, tran in zip(clean_names, trans):
- raw_audio_path = f"raw/{clean_name}"
- if "." not in raw_audio_path:
- raw_audio_path += ".wav"
- infer_tool.format_wav(raw_audio_path)
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-
- for spk in spk_list:
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-                # pad both ends with silence
- pad_len = int(audio_sr * pad_seconds)
- data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])])
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale
- )
- _audio = out_audio.cpu().numpy()
-
- pad_len = int(svc_model.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- audio.extend(list(_audio))
- key = "auto" if auto_predict_f0 else f"{tran}key"
- cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}"
- res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}'
- soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
-
-if __name__ == '__main__':
- main()
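
The pad-then-trim bookkeeping in the loop above (pad each slice with `pad_seconds` of silence at the input rate, run inference, then drop the same duration at the target rate) is the part that is easiest to get wrong. A self-contained sketch of just that arithmetic, with a stand-in for `svc_model.infer()` and purely illustrative names:

```python
import numpy as np

def pad_infer_trim(data: np.ndarray, src_sr: int, target_sr: int, pad_seconds: float) -> np.ndarray:
    pad_len = int(src_sr * pad_seconds)
    padded = np.concatenate([np.zeros(pad_len), data, np.zeros(pad_len)])
    # stand-in for svc_model.infer(): emit silence of the expected output length
    out_len = int(np.ceil(len(padded) / src_sr * target_sr))
    out = np.zeros(out_len)
    out_pad = int(target_sr * pad_seconds)
    return out[out_pad:-out_pad]        # drop the padded region at the output rate

chunk = np.zeros(16000)                 # 1 s of audio at 16 kHz
print(len(pad_infer_trim(chunk, 16000, 44100, 0.5)))   # 44100, i.e. exactly 1 s at 44.1 kHz
```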
diff --git a/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md b/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md
deleted file mode 100644
index d7d2fcf9f3c7db1ac8e615510fd4916de548b400..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 📚CSV Dataset Analyzer Streamlit
-emoji: 📚
-colorFrom: purple
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Emoji-Short-Codes/README.md b/spaces/awacke1/Emoji-Short-Codes/README.md
deleted file mode 100644
index 28aea7ab434423b31c61676ce7a48375affbff63..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Emoji-Short-Codes/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🇦🇼 Aaron Wacker 💝 Emojis
-emoji: 🇦🇼💝
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.9.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/MusicGenStreamFacebook/app.py b/spaces/awacke1/MusicGenStreamFacebook/app.py
deleted file mode 100644
index 58ef6ed236e7ce584c6a3db46419435266f67473..0000000000000000000000000000000000000000
--- a/spaces/awacke1/MusicGenStreamFacebook/app.py
+++ /dev/null
@@ -1,214 +0,0 @@
-import numpy as np
-import torch
-import gradio as gr
-import spaces
-from queue import Queue
-from threading import Thread
-from typing import Optional
-from transformers import MusicgenForConditionalGeneration, MusicgenProcessor, set_seed
-from transformers.generation.streamers import BaseStreamer
-
-model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
-processor = MusicgenProcessor.from_pretrained("facebook/musicgen-small")
-
-title = "MusicGenStream with Facebook MusicGen-Small Model"
-description = """
-Generate and stream music using https://huggingface.co/facebook/musicgen-small
-"""
-
-article = """
-## How Does It Work?
-MusicGen is an auto-regressive transformer-based model, meaning it generates audio codes (tokens) in a causal fashion.
-At each decoding step, the model generates a new set of audio codes, conditional on the text input and all previous audio codes.
-The generated codes are then decoded to an audio waveform at the frame rate of the [EnCodec model](https://huggingface.co/facebook/encodec_32khz).
-"""
-
-
-class MusicgenStreamer(BaseStreamer):
- def __init__(
- self,
- model: MusicgenForConditionalGeneration,
- device: Optional[str] = None,
- play_steps: Optional[int] = 10,
- stride: Optional[int] = None,
- timeout: Optional[float] = None,
- ):
- """
- Streamer that stores playback-ready audio in a queue, to be used by a downstream application as an iterator. This is
-        useful for applications that benefit from accessing the generated audio in a non-blocking way (e.g. in an interactive
- Gradio demo).
- Parameters:
- model (`MusicgenForConditionalGeneration`):
- The MusicGen model used to generate the audio waveform.
- device (`str`, *optional*):
- The torch device on which to run the computation. If `None`, will default to the device of the model.
- play_steps (`int`, *optional*, defaults to 10):
- The number of generation steps with which to return the generated audio array. Using fewer steps will
- mean the first chunk is ready faster, but will require more codec decoding steps overall. This value
- should be tuned to your device and latency requirements.
- stride (`int`, *optional*):
- The window (stride) between adjacent audio samples. Using a stride between adjacent audio samples reduces
- the hard boundary between them, giving smoother playback. If `None`, will default to a value equivalent to
- play_steps // 6 in the audio space.
- timeout (`int`, *optional*):
- The timeout for the audio queue. If `None`, the queue will block indefinitely. Useful to handle exceptions
- in `.generate()`, when it is called in a separate thread.
- """
- self.decoder = model.decoder
- self.audio_encoder = model.audio_encoder
- self.generation_config = model.generation_config
- self.device = device if device is not None else model.device
-
- # variables used in the streaming process
- self.play_steps = play_steps
- if stride is not None:
- self.stride = stride
- else:
- hop_length = np.prod(self.audio_encoder.config.upsampling_ratios)
- self.stride = hop_length * (play_steps - self.decoder.num_codebooks) // 6
- self.token_cache = None
- self.to_yield = 0
-
-        # variables used in the thread process
- self.audio_queue = Queue()
- self.stop_signal = None
- self.timeout = timeout
-
- def apply_delay_pattern_mask(self, input_ids):
- # build the delay pattern mask for offsetting each codebook prediction by 1 (this behaviour is specific to MusicGen)
- _, decoder_delay_pattern_mask = self.decoder.build_delay_pattern_mask(
- input_ids[:, :1],
- pad_token_id=self.generation_config.decoder_start_token_id,
- max_length=input_ids.shape[-1],
- )
- # apply the pattern mask to the input ids
- input_ids = self.decoder.apply_delay_pattern_mask(input_ids, decoder_delay_pattern_mask)
-
- # revert the pattern delay mask by filtering the pad token id
- input_ids = input_ids[input_ids != self.generation_config.pad_token_id].reshape(
- 1, self.decoder.num_codebooks, -1
- )
-
- # append the frame dimension back to the audio codes
- input_ids = input_ids[None, ...]
-
- # send the input_ids to the correct device
- input_ids = input_ids.to(self.audio_encoder.device)
-
- output_values = self.audio_encoder.decode(
- input_ids,
- audio_scales=[None],
- )
- audio_values = output_values.audio_values[0, 0]
- return audio_values.cpu().float().numpy()
-
- def put(self, value):
- batch_size = value.shape[0] // self.decoder.num_codebooks
- if batch_size > 1:
- raise ValueError("MusicgenStreamer only supports batch size 1")
-
- if self.token_cache is None:
- self.token_cache = value
- else:
- self.token_cache = torch.concatenate([self.token_cache, value[:, None]], dim=-1)
-
- if self.token_cache.shape[-1] % self.play_steps == 0:
- audio_values = self.apply_delay_pattern_mask(self.token_cache)
- self.on_finalized_audio(audio_values[self.to_yield : -self.stride])
- self.to_yield += len(audio_values) - self.to_yield - self.stride
-
- def end(self):
- """Flushes any remaining cache and appends the stop symbol."""
- if self.token_cache is not None:
- audio_values = self.apply_delay_pattern_mask(self.token_cache)
- else:
- audio_values = np.zeros(self.to_yield)
-
- self.on_finalized_audio(audio_values[self.to_yield :], stream_end=True)
-
- def on_finalized_audio(self, audio: np.ndarray, stream_end: bool = False):
- """Put the new audio in the queue. If the stream is ending, also put a stop signal in the queue."""
- self.audio_queue.put(audio, timeout=self.timeout)
- if stream_end:
- self.audio_queue.put(self.stop_signal, timeout=self.timeout)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- value = self.audio_queue.get(timeout=self.timeout)
- if not isinstance(value, np.ndarray) and value == self.stop_signal:
- raise StopIteration()
- else:
- return value
-
-
-sampling_rate = model.audio_encoder.config.sampling_rate
-frame_rate = model.audio_encoder.config.frame_rate
-
-target_dtype = np.int16
-max_range = np.iinfo(target_dtype).max
-
-
-@spaces.GPU
-def generate_audio(text_prompt, audio_length_in_s=10.0, play_steps_in_s=2.0, seed=0):
- max_new_tokens = int(frame_rate * audio_length_in_s)
- play_steps = int(frame_rate * play_steps_in_s)
-
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- if device != model.device:
- model.to(device)
- if device == "cuda:0":
- model.half()
-
- inputs = processor(
- text=text_prompt,
- padding=True,
- return_tensors="pt",
- )
-
- streamer = MusicgenStreamer(model, device=device, play_steps=play_steps)
-
- generation_kwargs = dict(
- **inputs.to(device),
- streamer=streamer,
- max_new_tokens=max_new_tokens,
- )
- thread = Thread(target=model.generate, kwargs=generation_kwargs)
- thread.start()
-
- set_seed(seed)
- for new_audio in streamer:
- print(f"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds")
- new_audio = (new_audio * max_range).astype(np.int16)
- yield (sampling_rate, new_audio)
-
-
-demo = gr.Interface(
- fn=generate_audio,
- inputs=[
- gr.Text(label="Prompt", value="80s pop track with synth and instrumentals"),
- gr.Slider(10, 30, value=15, step=5, label="Audio length in seconds"),
- gr.Slider(0.5, 2.5, value=0.5, step=0.5, label="Streaming interval in seconds", info="Lower = shorter chunks, lower latency, more codec steps"),
- gr.Slider(0, 10, value=5, step=1, label="Seed for random generations"),
- ],
- outputs=[
- gr.Audio(label="Generated Music", streaming=True, autoplay=True)
- ],
- examples = [
- ["Country acoustic guitar fast line dance singer like Kenny Chesney and Garth brooks and Luke Combs and Chris Stapleton. bpm: 100", 30, 0.5, 5],
- ["Electronic Dance track with pulsating bass and high energy synths. bpm: 126", 30, 0.5, 5],
- ["Rap Beats with deep bass and snappy snares. bpm: 80", 30, 0.5, 5],
- ["Lo-Fi track with smooth beats and chill vibes. bpm: 100", 30, 0.5, 5],
- ["Global Groove track with international instruments and dance rhythms. bpm: 128", 30, 0.5, 5],
- ["Relaxing Meditation music with ambient pads and soothing melodies. bpm: 80", 30, 0.5, 5],
- ["Rave Dance track with hard-hitting beats and euphoric synths. bpm: 128", 30, 0.5, 5]
- ],
-
- title=title,
- description=description,
- article=article,
- cache_examples=False,
-)
-
-demo.queue().launch()
\ No newline at end of file
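
For comparison, the same checkpoint can be driven without the custom streamer by generating a whole clip in a single `generate()` call. A minimal non-streaming sketch using the public `transformers` API (output length and file name are illustrative):

```python
import scipy.io.wavfile
from transformers import MusicgenForConditionalGeneration, MusicgenProcessor

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
processor = MusicgenProcessor.from_pretrained("facebook/musicgen-small")

inputs = processor(text=["80s pop track with synth and instrumentals"],
                   padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)          # roughly 5 s of audio
rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=rate, data=audio[0, 0].numpy())
```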
diff --git a/spaces/awacke1/SpeechStoryReadAloud/README.md b/spaces/awacke1/SpeechStoryReadAloud/README.md
deleted file mode 100644
index da1543f87bfdced1b221f99e502dae05c23157bc..0000000000000000000000000000000000000000
--- a/spaces/awacke1/SpeechStoryReadAloud/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🗣️NLP Speech Story Read Aloud📚
-emoji: 🗣️📚💕
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.0.11
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/VizLib-Mahotas/app.py b/spaces/awacke1/VizLib-Mahotas/app.py
deleted file mode 100644
index 8e9303e7a7c92b11ba8095b07f308be43dba003b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VizLib-Mahotas/app.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import streamlit as st
-import mahotas as mh
-import pandas as pd
-import plotly.express as px
-import urllib.request
-from skimage import io
-
-# Define a list of medical conditions
-conditions = [
- {"name": "Depression", "test_for": "Patient Health Questionnaire-9 (PHQ-9)"},
- {"name": "Anxiety", "test_for": "Generalized Anxiety Disorder-7 (GAD-7)"},
- {"name": "Diabetes", "test_for": "Hemoglobin A1C test"},
- {"name": "Hypertension", "test_for": "Blood pressure measurement"},
- {"name": "Asthma", "test_for": "Pulmonary function test"},
- {"name": "Cancer", "test_for": "Biopsy or imaging tests (e.g., CT scan, MRI)"},
- {"name": "Arthritis", "test_for": "X-ray, MRI, or ultrasound"},
- {"name": "Heart disease", "test_for": "Electrocardiogram (ECG)"},
- {"name": "Obesity", "test_for": "Body mass index (BMI)"},
- {"name": "Substance use disorder", "test_for": "Substance Abuse Subtle Screening Inventory (SASSI)"}
-]
-
-# Define a function to process images using Mahotas
-def process_image(image):
- # Convert the image to grayscale
- grayscale_image = mh.colors.rgb2gray(image)
- # Apply a Gaussian filter to the image to reduce noise
- filtered_image = mh.gaussian_filter(grayscale_image, 4)
- # Threshold the image to create a binary image
- binary_image = filtered_image > mh.otsu(filtered_image)
- # Compute the connected components in the binary image
- labels, num_labels = mh.label(binary_image)
- # Compute the size of each connected component
- sizes = mh.labeled.labeled_size(labels)
- # Sort the sizes in descending order
- sorted_sizes = sorted(sizes, reverse=True)
- # Return the top 10 sizes
- return sorted_sizes[:10]
-
-# Define the Streamlit app
-def app():
- # Add a title to the app
- st.title("Mahotas Demo")
-
- # Add a sidebar to the app
- st.sidebar.title("Medical Conditions")
- selected_condition = st.sidebar.selectbox("Select a condition", [c["name"] for c in conditions])
-
- # Get the selected condition
- condition = next(c for c in conditions if c["name"] == selected_condition)
-
- # Display the selected condition
- st.header(condition["name"])
- st.write("Test for:", condition["test_for"])
-
- # Load an example medical image
- if selected_condition == "Depression":
- image_url = "https://i.imgur.com/kPQoD8C.jpg"
- elif selected_condition == "Anxiety":
- image_url = "https://i.imgur.com/ZWyKjJN.jpg"
- elif selected_condition == "Diabetes":
- image_url = "https://i.imgur.com/1gOEMO5.jpg"
- elif selected_condition == "Hypertension":
- image_url = "https://i.imgur.com/BoSUwio.jpg"
- elif selected_condition == "Asthma":
- image_url = "https://i.imgur.com/BLKjzJF.jpg"
- elif selected_condition == "Cancer":
- image_url = "https://i.imgur.com/nq3vV8.jpg"
- elif selected_condition == "Arthritis":
- image_url = "https://i.imgur.com/ffzd6Fo.jpg"
- elif selected_condition == "Heart disease":
- image_url = "https://i.imgur.com/1I7axhd.jpg"
- elif selected_condition == "Obesity":
- image_url = "https://i.imgur.com/nZ1EjJr.jpg"
- else:
- image_url = "https://i.imgur.com/RUBZOWF.jpg"
-
- image = io.imread(image_url)
-
- # Process the image using Mahotas
- sizes = process_image(image)
-
- # Display the top 10 connected component sizes
- df = pd.DataFrame({"Size": sizes})
- st.write(df)
-
- # Create a sunburst chart using Plotly
- fig = px.sunburst(
- df,
- path=["Size"],
- values="Size",
- color="Size",
- color_continuous_scale="blues"
- )
- st.plotly_chart(fig)
-
-st.markdown("""
-# Alternate Image Links Per Condition:
-Depression: https://www.pexels.com/photo/woman-sitting-on-grass-field-while-holding-her-head-7127866/
-Anxiety: https://www.pexels.com/photo/woman-sitting-on-rock-and-looking-at-the-ocean-7119798/
-Diabetes: https://www.pexels.com/photo/man-taking-blood-sugar-test-4050305/
-Hypertension: https://www.pexels.com/photo/woman-measuring-blood-pressure-with-sphygmomanometer-5691686/
-Asthma: https://www.pexels.com/photo/woman-having-asthma-attack-in-park-7127511/
-Cancer: https://www.pexels.com/photo/close-up-of-pink-ribbon-on-cancer-awareness-banner-4219366/
-Arthritis: https://www.pexels.com/photo/man-with-back-pain-lying-on-bed-4050323/
-Heart disease: https://www.pexels.com/photo/woman-touching-chest-during-chest-pain-7127487/
-Obesity: https://www.pexels.com/photo/woman-in-black-pants-lying-on-bed-7127516/
- """)
-
-app()
\ No newline at end of file
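
The image pipeline in `process_image()` above (grayscale, Gaussian blur, Otsu threshold, connected components, component sizes) can be exercised without Streamlit or the remote image URLs. A minimal sketch on a synthetic image; note the cast to `uint8` before `mh.otsu`, which expects an integer-typed image:

```python
import mahotas as mh
import numpy as np

rng = np.random.default_rng(0)
image = (rng.random((128, 128, 3)) * 255).astype(np.uint8)   # synthetic RGB "photo"

gray = mh.colors.rgb2gray(image)
blurred = mh.gaussian_filter(gray, 4)
binary = blurred > mh.otsu(blurred.astype(np.uint8))         # Otsu threshold on uint8
labels, n_components = mh.label(binary)
sizes = mh.labeled.labeled_size(labels)

print(n_components, sorted(sizes, reverse=True)[:10])        # ten largest component sizes
```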
diff --git a/spaces/azamat/twitter_geocoder/README.md b/spaces/azamat/twitter_geocoder/README.md
deleted file mode 100644
index 473226b08bee9cd89efc25f9bd6b352f12848022..0000000000000000000000000000000000000000
--- a/spaces/azamat/twitter_geocoder/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Twitter Geocoder
-emoji: 💩
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py b/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py
deleted file mode 100644
index e9eb6fc59b50654ddbe19ed56ad8c0abd1b8efef..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py
+++ /dev/null
@@ -1,118 +0,0 @@
-"""A HuggingFace-style model configuration."""
-from typing import Dict, Optional, Union
-from transformers import PretrainedConfig
-attn_config_defaults: Dict = {'attn_type': 'multihead_attention', 'attn_pdrop': 0.0, 'attn_impl': 'triton', 'qk_ln': False, 'clip_qkv': None, 'softmax_scale': None, 'prefix_lm': False, 'attn_uses_sequence_id': False, 'alibi': False, 'alibi_bias_max': 8}
-init_config_defaults: Dict = {'name': 'kaiming_normal_', 'fan_mode': 'fan_in', 'init_nonlinearity': 'relu', 'init_div_is_residual': True, 'emb_init_std': None, 'emb_init_uniform_lim': None, 'init_std': None, 'init_gain': 0.0}
-
-class MPTConfig(PretrainedConfig):
- model_type = 'mpt'
-
- def __init__(self, d_model: int=2048, n_heads: int=16, n_layers: int=24, expansion_ratio: int=4, max_seq_len: int=2048, vocab_size: int=50368, resid_pdrop: float=0.0, emb_pdrop: float=0.0, learned_pos_emb: bool=True, attn_config: Dict=attn_config_defaults, init_device: str='cpu', logit_scale: Optional[Union[float, str]]=None, no_bias: bool=False, verbose: int=0, embedding_fraction: float=1.0, norm_type: str='low_precision_layernorm', use_cache: bool=False, init_config: Dict=init_config_defaults, **kwargs):
- """The MPT configuration class.
-
- Args:
- d_model (int): The size of the embedding dimension of the model.
- n_heads (int): The number of attention heads.
- n_layers (int): The number of layers in the model.
- expansion_ratio (int): The ratio of the up/down scale in the MLP.
- max_seq_len (int): The maximum sequence length of the model.
- vocab_size (int): The size of the vocabulary.
- resid_pdrop (float): The dropout probability applied to the attention output before combining with residual.
- emb_pdrop (float): The dropout probability for the embedding layer.
- learned_pos_emb (bool): Whether to use learned positional embeddings
- attn_config (Dict): A dictionary used to configure the model's attention module:
- attn_type (str): type of attention to use. Options: multihead_attention, multiquery_attention
- attn_pdrop (float): The dropout probability for the attention layers.
- attn_impl (str): The attention implementation to use. One of 'torch', 'flash', or 'triton'.
- qk_ln (bool): Whether to apply layer normalization to the queries and keys in the attention layer.
- clip_qkv (Optional[float]): If not None, clip the queries, keys, and values in the attention layer to
- this value.
- softmax_scale (Optional[float]): If not None, scale the softmax in the attention layer by this value. If None,
- use the default scale of ``1/sqrt(d_keys)``.
- prefix_lm (Optional[bool]): Whether the model should operate as a Prefix LM. This requires passing an
- extra `prefix_mask` argument which indicates which tokens belong to the prefix. Tokens in the prefix
- can attend to one another bi-directionally. Tokens outside the prefix use causal attention.
- attn_uses_sequence_id (Optional[bool]): Whether to restrict attention to tokens that have the same sequence_id.
- When the model is in `train` mode, this requires passing an extra `sequence_id` argument which indicates
- which sub-sequence each token belongs to.
- Defaults to ``False`` meaning any provided `sequence_id` will be ignored.
- alibi (bool): Whether to use the alibi bias instead of position embeddings.
- alibi_bias_max (int): The maximum value of the alibi bias.
- init_device (str): The device to use for parameter initialization.
- logit_scale (Optional[Union[float, str]]): If not None, scale the logits by this value.
- no_bias (bool): Whether to use bias in all layers.
- verbose (int): The verbosity level. 0 is silent.
- embedding_fraction (float): The fraction to scale the gradients of the embedding layer by.
- norm_type (str): choose type of norm to use
- multiquery_attention (bool): Whether to use multiquery attention implementation.
- use_cache (bool): Whether or not the model should return the last key/values attentions
- init_config (Dict): A dictionary used to configure the model initialization:
- init_config.name: The parameter initialization scheme to use. Options: 'default_', 'baseline_',
- 'kaiming_uniform_', 'kaiming_normal_', 'neox_init_', 'small_init_', 'xavier_uniform_', or
- 'xavier_normal_'. These mimic the parameter initialization methods in PyTorch.
- init_div_is_residual (Union[int, float, str, bool]): Value to divide initial weights by if ``module._is_residual`` is True.
- emb_init_std (Optional[float]): The standard deviation of the normal distribution used to initialize the embedding layer.
- emb_init_uniform_lim (Optional[Union[Tuple[float, float], float]]): The lower and upper limits of the uniform distribution
- used to initialize the embedding layer. Mutually exclusive with ``emb_init_std``.
- init_std (float): The standard deviation of the normal distribution used to initialize the model,
- if using the baseline_ parameter initialization scheme.
- init_gain (float): The gain to use for parameter initialization with kaiming or xavier initialization schemes.
- fan_mode (str): The fan mode to use for parameter initialization with kaiming initialization schemes.
- init_nonlinearity (str): The nonlinearity to use for parameter initialization with kaiming initialization schemes.
- ---
- See llmfoundry.models.utils.param_init_fns.py for info on other param init config options
- """
- self.d_model = d_model
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.expansion_ratio = expansion_ratio
- self.max_seq_len = max_seq_len
- self.vocab_size = vocab_size
- self.resid_pdrop = resid_pdrop
- self.emb_pdrop = emb_pdrop
- self.learned_pos_emb = learned_pos_emb
- self.attn_config = attn_config
- self.init_device = init_device
- self.logit_scale = logit_scale
- self.no_bias = no_bias
- self.verbose = verbose
- self.embedding_fraction = embedding_fraction
- self.norm_type = norm_type
- self.use_cache = use_cache
- self.init_config = init_config
- if 'name' in kwargs:
- del kwargs['name']
- if 'loss_fn' in kwargs:
- del kwargs['loss_fn']
- super().__init__(**kwargs)
- self._validate_config()
-
- def _set_config_defaults(self, config, config_defaults):
- for (k, v) in config_defaults.items():
- if k not in config:
- config[k] = v
- return config
-
- def _validate_config(self):
- self.attn_config = self._set_config_defaults(self.attn_config, attn_config_defaults)
- self.init_config = self._set_config_defaults(self.init_config, init_config_defaults)
- if self.d_model % self.n_heads != 0:
- raise ValueError('d_model must be divisible by n_heads')
- if any((prob < 0 or prob > 1 for prob in [self.attn_config['attn_pdrop'], self.resid_pdrop, self.emb_pdrop])):
- raise ValueError("self.attn_config['attn_pdrop'], resid_pdrop, emb_pdrop are probabilities and must be between 0 and 1")
- if self.attn_config['attn_impl'] not in ['torch', 'flash', 'triton']:
- raise ValueError(f"Unknown attn_impl={self.attn_config['attn_impl']}")
- if self.attn_config['prefix_lm'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
- raise NotImplementedError('prefix_lm only implemented with torch and triton attention.')
- if self.attn_config['alibi'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
- raise NotImplementedError('alibi only implemented with torch and triton attention.')
- if self.attn_config['attn_uses_sequence_id'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
- raise NotImplementedError('attn_uses_sequence_id only implemented with torch and triton attention.')
- if self.embedding_fraction > 1 or self.embedding_fraction <= 0:
- raise ValueError('model.embedding_fraction must be between 0 (exclusive) and 1 (inclusive)!')
- if isinstance(self.logit_scale, str) and self.logit_scale != 'inv_sqrt_d_model':
- raise ValueError(f"self.logit_scale={self.logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'.")
- if self.init_config.get('name', None) is None:
- raise ValueError(f"self.init_config={self.init_config!r} 'name' needs to be set.")
- if not self.learned_pos_emb and (not self.attn_config['alibi']):
- raise ValueError(f'Positional information must be provided to the model using either learned_pos_emb or alibi.')
\ No newline at end of file
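
A minimal usage sketch (not from the repo) for the configuration class above; `_validate_config()` runs inside `__init__`, so invalid combinations fail immediately. It assumes `MPTConfig` and `attn_config_defaults` are imported from this module:

```python
# Valid configuration: alibi positions with the 'torch' attention implementation.
config = MPTConfig(
    d_model=1024,
    n_heads=16,                 # d_model must be divisible by n_heads
    n_layers=12,
    max_seq_len=2048,
    attn_config={**attn_config_defaults, "attn_impl": "torch", "alibi": True},
    learned_pos_emb=False,      # allowed here because alibi supplies positional information
)
print(config.norm_type, config.attn_config["attn_impl"])

# An invalid combination raises at construction time:
# MPTConfig(d_model=1000, n_heads=16)  -> ValueError: d_model must be divisible by n_heads
```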
diff --git a/spaces/banana-projects/datasets-card-creator/postcss.config.js b/spaces/banana-projects/datasets-card-creator/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/datasets-card-creator/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts
deleted file mode 100644
index 82a24d4dc0d7faf0084295204a80e75415c16b2e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Object for keeping track of time.
- *
- * @see src/core/Clock.js
- */
-export class Clock {
- /**
- * @param autoStart Automatically start the clock.
- */
- constructor(autoStart?: boolean);
-
- /**
- * If set, starts the clock automatically when the first update is called.
- */
- autoStart: boolean;
-
-  /**
-   * When the clock is running, it holds the start time of the clock.
-   * This is counted as the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC.
-   */
-  startTime: number;
-
-  /**
-   * When the clock is running, it holds the time of the previous update.
-   * This is counted as the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC.
-   */
-  oldTime: number;
-
-  /**
-   * When the clock is running, it holds the time elapsed from the start of the clock to the previous update.
-   * This parameter is in seconds with three decimal places.
-   */
-  elapsedTime: number;
-
-  /**
-   * This property keeps track of whether the clock is running or not.
-   */
-  running: boolean;
-
- /**
- * Starts clock.
- */
- start(): void;
-
- /**
- * Stops clock.
- */
- stop(): void;
-
- /**
- * Get the seconds passed since the clock started.
- */
- getElapsedTime(): number;
-
- /**
- * Get the seconds passed since the last call to this method.
- */
- getDelta(): number;
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js
deleted file mode 100644
index c984a853942011a89bcd2add1be38392c751a639..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js
+++ /dev/null
@@ -1,10 +0,0 @@
-export { ArcCurve } from './ArcCurve.js';
-export { CatmullRomCurve3 } from './CatmullRomCurve3.js';
-export { CubicBezierCurve } from './CubicBezierCurve.js';
-export { CubicBezierCurve3 } from './CubicBezierCurve3.js';
-export { EllipseCurve } from './EllipseCurve.js';
-export { LineCurve } from './LineCurve.js';
-export { LineCurve3 } from './LineCurve3.js';
-export { QuadraticBezierCurve } from './QuadraticBezierCurve.js';
-export { QuadraticBezierCurve3 } from './QuadraticBezierCurve3.js';
-export { SplineCurve } from './SplineCurve.js';
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js
deleted file mode 100644
index 649ac6b4c7aceb726a4a568fc0d0663930e0bac5..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js
+++ /dev/null
@@ -1,72 +0,0 @@
-/**
- * @author mrdoob / http://mrdoob.com/
- */
-
-import { LineSegments } from '../objects/LineSegments.js';
-import { VertexColors } from '../constants.js';
-import { LineBasicMaterial } from '../materials/LineBasicMaterial.js';
-import { Float32BufferAttribute } from '../core/BufferAttribute.js';
-import { BufferGeometry } from '../core/BufferGeometry.js';
-import { Color } from '../math/Color.js';
-
-function GridHelper( size, divisions, color1, color2 ) {
-
- size = size || 10;
- divisions = divisions || 10;
- color1 = new Color( color1 !== undefined ? color1 : 0x444444 );
- color2 = new Color( color2 !== undefined ? color2 : 0x888888 );
-
- var center = divisions / 2;
- var step = size / divisions;
- var halfSize = size / 2;
-
- var vertices = [], colors = [];
-
- for ( var i = 0, j = 0, k = - halfSize; i <= divisions; i ++, k += step ) {
-
- vertices.push( - halfSize, 0, k, halfSize, 0, k );
- vertices.push( k, 0, - halfSize, k, 0, halfSize );
-
- var color = i === center ? color1 : color2;
-
- color.toArray( colors, j ); j += 3;
- color.toArray( colors, j ); j += 3;
- color.toArray( colors, j ); j += 3;
- color.toArray( colors, j ); j += 3;
-
- }
-
- var geometry = new BufferGeometry();
- geometry.addAttribute( 'position', new Float32BufferAttribute( vertices, 3 ) );
- geometry.addAttribute( 'color', new Float32BufferAttribute( colors, 3 ) );
-
- var material = new LineBasicMaterial( { vertexColors: VertexColors } );
-
- LineSegments.call( this, geometry, material );
-
-}
-
-GridHelper.prototype = Object.assign( Object.create( LineSegments.prototype ), {
-
- constructor: GridHelper,
-
- copy: function ( source ) {
-
- LineSegments.prototype.copy.call( this, source );
-
- this.geometry.copy( source.geometry );
- this.material.copy( source.material );
-
- return this;
-
- },
-
- clone: function () {
-
- return new this.constructor().copy( this );
-
- }
-
-} );
-
-export { GridHelper };
diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py
deleted file mode 100644
index c32292a4790046adc0db8f67474deb6748a2b2a8..0000000000000000000000000000000000000000
--- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#-*- coding : utf-8-*-
-import base64
-from subprocess import STDOUT
-import streamlit as st
-import pandas as pd
-import camelot as cam # extracting tables from PDFs
-
-st.title("PDF Table Extractor")
-
-input_pdf = st.file_uploader(label = "", type = 'pdf')
-
-page_number = st.text_input("Page number of the table in the PDF, e.g. 3", value = 1)
-background = st.selectbox("Are the table lines hidden?",(False,True),)
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
- base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8')
- f.write(base64.b64decode(base64_pdf))
- f.close()
-
- # read the pdf and parse it using stream
- tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background)
- result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter')
- tables[0].to_excel(result,index=False)
- # for i in range(0,len(tables)):
- # table = tables[i].df
- # sheetname = str(i)
- # table.to_excel(result, sheetname,index=False)
-
- with open('result.xlsx','rb') as f:
-        st.download_button('Extraction finished, click to download!', f,file_name='result.xlsx',mime="application/vnd.ms-excel")
-
- #tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background)
- result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter')
- # for i in range(0,len(tables_all)):
- # table = tables_all[i].df
- # sheetname = str(i)
- # table.to_excel(result_all, sheetname,index=False)
- with open('result_all.xlsx','rb') as f:
-        st.download_button('One-click extraction finished, click to download!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel")
-
-
-row9_spacer1, row9_1, row9_spacer2, row9_2, row9_spacer3 = st.columns((.2, 2.3, .4, 4.4, .2))
-with row9_1:
-    if st.button('Extract single page'):
-        st.write('Extract single page')
-with row9_2:
-    if st.button('Extract all pages'):
-        st.write('Extract all pages')
-
-
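
The core extraction step above is Camelot's `read_pdf`. A minimal sketch of the same flow without the Streamlit UI ("input.pdf" is a placeholder; `process_background` only matters for lattice-style parsing):

```python
import camelot
import pandas as pd

tables = camelot.read_pdf("input.pdf", pages="1", process_background=False)
print(len(tables), "table(s) found")

with pd.ExcelWriter("result.xlsx", engine="xlsxwriter") as writer:
    for i, table in enumerate(tables):
        table.df.to_excel(writer, sheet_name=str(i), index=False)
```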
diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md
deleted file mode 100644
index e16bbc58197c288fa78262ad2b7ead2b798b6ac3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
HD Online Player (moonu tamil movie download dvdrip) - A Guide to Watch the Romantic Drama
-
-
Moonu is a 2012 Tamil romantic drama film written and directed by Aishwarya R. Dhanush. It stars Dhanush and Shruti Haasan as two young lovers who face various challenges and tragedies in their relationship. The film is also known as 3 or Three, and has a famous song called "Why This Kolaveri Di" that became a viral sensation.
-
HD Online Player (moonu tamil movie download dvdrip)
Moonu was a commercial and critical success, winning several awards and nominations, including three National Film Awards. The film is widely praised for its realistic portrayal of love, emotions, and mental health issues. The film is also known for its nonlinear narrative and twist ending.
-
-
If you are a fan of Moonu or want to watch it for the first time, you might be wondering how to download it in dvdrip format. Dvdrip is a popular video file format that can store high-quality video and audio data. It is also compatible with most media players and devices.
-
-
In this article, we will show you how to download Moonu movie in dvdrip format from various sources. We will also give you some tips on how to watch it online or offline with HD online player.
-
-
How to Download Moonu Movie in Dvdrip Format
-
-
There are many websites that offer Moonu movie download in dvdrip format. However, not all of them are safe, legal, or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also have broken links, low-quality files, or annoying ads and pop-ups.
-
-
-
To avoid these risks, you should always use trusted and reputable sources to download Moonu movie in dvdrip format. Here are some of the best options that we recommend:
-
-
-
Isaimini.store: This is a popular entertainment platform that offers movies, music, TV shows, and more. You can download Moonu Tamil movie (2012) to your Isaimini account and watch it online or offline. You can also enjoy other features like playlists, radio stations, music videos, and more.
-
Filmyzon.com: This is a free film streaming website that offers Bollywood movies, Tamil movies, and more. You can watch Moonu movie online or download it in HD quality. You can also find other related movies like Rekka, Race 3, Race 2, etc.
-
Bivinpabe.wixsite.com: This is a website builder that allows you to create your own website for free. You can find Moonu movie download in dvdrip format on this website as a torrent file. You can also view other content like blogs, photos, videos, etc.
-
Trello.com: This is a web-based project management tool that helps you organize your tasks and collaborate with others. You can find Moonu movie download in dvdrip format on this website as a link. You can also find other Bollywood movie downloads, TV show downloads, etc.
-
-
-
How to Watch Moonu Movie Online or Offline with HD Online Player
-
-
After downloading Moonu movie in dvdrip format from any of the sources mentioned above, you can watch it online or offline with HD online player. HD online player is a media player that supports various file formats and provides high-quality video and audio playback. It also has features like subtitles, playlists, screen capture, etc.
-
-
Here are some tips on how to watch Moonu movie online or offline with HD online player:
-
-
-
Online: To watch Moonu movie online in dvdrip format, you need a media player that supports this file format. Some of the best media players that we recommend are VLC Media Player, KMPlayer, PotPlayer, Media Player Classic Home Cinema (MPC-HC), etc. You can download any of these media players from their official websites and install them on your device. Then you can open the dvdrip file with the media player and enjoy the movie online.
-
Offline: To watch Moonu movie offline in dvdrip format, you need to transfer the dvdrip file to your device's storage or external storage device like USB flash drive, SD card, etc. Then you can use any of the media players mentioned above to play the dvdrip file offline.
-
-
-
Conclusion
-
-
Moonu is a Tamil classic that you should not miss if you love romantic dramas. It tells a beautiful story of love and loss that will touch your heart and make you cry. You can download Moonu movie in dvdrip format from various sources and watch it online or offline with HD online player.
-
-
We hope this article has helped you find the best way to enjoy Moonu movie in dvdrip format. If you have any questions or suggestions, please feel free to leave a comment below.
-
Why You Should Watch Moonu Movie
-
-
Moonu movie is not just a typical Tamil romance. It is a film that explores the depth and complexity of love, emotions, and mental health issues. It shows how two people can be deeply connected and yet face various challenges and tragedies in their relationship. It also portrays the importance of family and social support in coping with life's difficulties.
-
-
Moonu movie is a film that will make you feel and think. It has a powerful and realistic story, brilliant and natural performances, catchy and meaningful songs, and stunning and authentic visuals. It is a film that will appeal to all kinds of audiences and preferences.
-
-
Moonu movie is a film that you should watch if you want to experience the magic of Tamil cinema. It is a film that will make you laugh, cry, and fall in love. It is a film that will stay with you long after it ends.
-
-
How to Download Moonu Movie Songs in MP3 Format
-
-
Moonu movie has a wonderful soundtrack that complements the story and mood of the film. The songs are composed by Anirudh Ravichander and sung by various artists like Dhanush, Shruti Haasan, Mohit Chauhan, etc. The songs are catchy, romantic, and emotional.
-
-
If you want to download Moonu movie songs in MP3 format, you can use any of the following sources:
-
-
-
Isaimini.store: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like playlists, radio stations, music videos, and more.
-
Gaana.com: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like lyrics, podcasts, stories, and more.
-
Saavn.com: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like recommendations, charts, originals, and more.
-
Pagalworld.com: This website offers Moonu movie songs in MP3 format for free. You can also download them in various qualities and sizes. You can also find other Tamil songs, ringtones, wallpapers, etc.
-
-
-
Conclusion
-
-
Moonu movie is a masterpiece of Tamil cinema that you should not miss. It is a film that will make you appreciate the beauty and complexity of love and life. You can download Moonu movie in dvdrip format from various sources and watch it online or offline with HD online player. You can also download Moonu movie songs in MP3 format from various sources and listen to them online or offline.
-
-
We hope this article has helped you find the best way to enjoy Moonu movie and its related content. If you have any questions or suggestions, please feel free to leave a comment below.
-
-
-
\ No newline at end of file
diff --git a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py b/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py
deleted file mode 100644
index 338b92516af241c6a07a427371611e35b227eacf..0000000000000000000000000000000000000000
--- a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py
+++ /dev/null
@@ -1,285 +0,0 @@
-""" from https://github.com/jik876/hifi-gan """
-
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from xutils import init_weights, get_padding
-
-LRELU_SLOPE = 0.1
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
- self.num_kernels = len(h.resblock_kernel_sizes)
- self.num_upsamples = len(h.upsample_rates)
- self.conv_pre = weight_norm(Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3))
- resblock = ResBlock1 if h.resblock == '1' else ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(h.upsample_initial_channel//(2**i), h.upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h.upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x):
- x = self.conv_pre(x)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorP(2),
- DiscriminatorP(3),
- DiscriminatorP(5),
- DiscriminatorP(7),
- DiscriminatorP(11),
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=2),
- AvgPool1d(4, 2, padding=2)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i-1](y)
- y_hat = self.meanpools[i-1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss*2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
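
The file above ends with the HiFi-GAN-style building blocks: a `Generator` that upsamples an 80-bin mel spectrogram into a waveform, multi-period and multi-scale discriminators, and the LSGAN-style loss helpers. As a rough orientation, here is a minimal usage sketch; the module name `models` and the hyperparameters in the config namespace are assumptions (they mirror commonly published HiFi-GAN v1 settings), not values taken from this Space.

```python
# Minimal sketch, assuming the classes above are importable from a module
# named `models` and that the config values below (typical HiFi-GAN v1
# settings) match what this Space actually used.
from types import SimpleNamespace

import torch

from models import (Generator, MultiPeriodDiscriminator, MultiScaleDiscriminator,
                    discriminator_loss, feature_loss, generator_loss)

h = SimpleNamespace(
    resblock='1',
    upsample_rates=[8, 8, 2, 2],                 # total upsampling factor 256
    upsample_kernel_sizes=[16, 16, 4, 4],
    upsample_initial_channel=512,
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
)

generator = Generator(h)
mpd = MultiPeriodDiscriminator()
msd = MultiScaleDiscriminator()

mel = torch.randn(2, 80, 32)        # (batch, mel_bins, frames)
y = torch.randn(2, 1, 32 * 256)     # reference waveform of matching length
y_hat = generator(mel)              # -> (2, 1, 8192)

# Discriminator step: score real vs. (detached) generated audio.
y_df_r, y_df_g, _, _ = mpd(y, y_hat.detach())
y_ds_r, y_ds_g, _, _ = msd(y, y_hat.detach())
loss_d = discriminator_loss(y_df_r, y_df_g)[0] + discriminator_loss(y_ds_r, y_ds_g)[0]

# Generator step: adversarial loss plus feature matching on discriminator fmaps.
_, y_df_g, fmap_f_r, fmap_f_g = mpd(y, y_hat)
_, y_ds_g, fmap_s_r, fmap_s_g = msd(y, y_hat)
loss_g = (generator_loss(y_df_g)[0] + generator_loss(y_ds_g)[0]
          + feature_loss(fmap_f_r, fmap_f_g) + feature_loss(fmap_s_r, fmap_s_g))
```

In full training the adversarial, feature-matching and mel-reconstruction terms are usually weighted before being summed; the weights are omitted here for brevity.
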
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py
deleted file mode 100644
index 4248f6c91b641a4ad1d00d0316ee82d701f9152f..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from typing import Dict
-import torch
-from torch.nn import functional as F
-
-from detectron2.structures.boxes import Boxes, BoxMode
-
-from ..structures import (
- DensePoseChartPredictorOutput,
- DensePoseChartResult,
- DensePoseChartResultWithConfidences,
-)
-from . import resample_fine_and_coarse_segm_to_bbox
-from .base import IntTupleBox, make_int_box
-
-
-def resample_uv_tensors_to_bbox(
- u: torch.Tensor,
- v: torch.Tensor,
- labels: torch.Tensor,
- box_xywh_abs: IntTupleBox,
-) -> torch.Tensor:
- """
- Resamples U and V coordinate estimates for the given bounding box
-
- Args:
- u (tensor [1, C, H, W] of float): U coordinates
- v (tensor [1, C, H, W] of float): V coordinates
- labels (tensor [H, W] of long): labels obtained by resampling segmentation
- outputs for the given bounding box
- box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs
- Return:
- Resampled U and V coordinates - a tensor [2, H, W] of float
- """
- x, y, w, h = box_xywh_abs
- w = max(int(w), 1)
- h = max(int(h), 1)
- u_bbox = F.interpolate(u, (h, w), mode="bilinear", align_corners=False)
- v_bbox = F.interpolate(v, (h, w), mode="bilinear", align_corners=False)
- uv = torch.zeros([2, h, w], dtype=torch.float32, device=u.device)
- for part_id in range(1, u_bbox.size(1)):
- uv[0][labels == part_id] = u_bbox[0, part_id][labels == part_id]
- uv[1][labels == part_id] = v_bbox[0, part_id][labels == part_id]
- return uv
-
-
-def resample_uv_to_bbox(
- predictor_output: DensePoseChartPredictorOutput,
- labels: torch.Tensor,
- box_xywh_abs: IntTupleBox,
-) -> torch.Tensor:
- """
- Resamples U and V coordinate estimates for the given bounding box
-
- Args:
- predictor_output (DensePoseChartPredictorOutput): DensePose predictor
- output to be resampled
- labels (tensor [H, W] of long): labels obtained by resampling segmentation
- outputs for the given bounding box
- box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs
- Return:
- Resampled U and V coordinates - a tensor [2, H, W] of float
- """
- return resample_uv_tensors_to_bbox(
- predictor_output.u,
- predictor_output.v,
- labels,
- box_xywh_abs,
- )
-
-
-def densepose_chart_predictor_output_to_result(
- predictor_output: DensePoseChartPredictorOutput, boxes: Boxes
-) -> DensePoseChartResult:
- """
- Convert densepose chart predictor outputs to results
-
- Args:
- predictor_output (DensePoseChartPredictorOutput): DensePose predictor
- output to be converted to results, must contain only 1 output
- boxes (Boxes): bounding box that corresponds to the predictor output,
- must contain only 1 bounding box
- Return:
- DensePose chart-based result (DensePoseChartResult)
- """
- assert len(predictor_output) == 1 and len(boxes) == 1, (
- f"Predictor output to result conversion can operate only on single outputs"
- f", got {len(predictor_output)} predictor outputs and {len(boxes)} boxes"
- )
-
- boxes_xyxy_abs = boxes.tensor.clone()
- boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- box_xywh = make_int_box(boxes_xywh_abs[0])
-
- labels = resample_fine_and_coarse_segm_to_bbox(predictor_output, box_xywh).squeeze(0)
- uv = resample_uv_to_bbox(predictor_output, labels, box_xywh)
- return DensePoseChartResult(labels=labels, uv=uv)
-
-
-def resample_confidences_to_bbox(
- predictor_output: DensePoseChartPredictorOutput,
- labels: torch.Tensor,
- box_xywh_abs: IntTupleBox,
-) -> Dict[str, torch.Tensor]:
- """
- Resamples confidences for the given bounding box
-
- Args:
- predictor_output (DensePoseChartPredictorOutput): DensePose predictor
- output to be resampled
- labels (tensor [H, W] of long): labels obtained by resampling segmentation
- outputs for the given bounding box
- box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs
- Return:
- Resampled confidences - a dict of [H, W] tensors of float
- """
-
- x, y, w, h = box_xywh_abs
- w = max(int(w), 1)
- h = max(int(h), 1)
-
- confidence_names = [
- "sigma_1",
- "sigma_2",
- "kappa_u",
- "kappa_v",
- "fine_segm_confidence",
- "coarse_segm_confidence",
- ]
- confidence_results = {key: None for key in confidence_names}
- confidence_names = [
- key for key in confidence_names if getattr(predictor_output, key) is not None
- ]
- confidence_base = torch.zeros([h, w], dtype=torch.float32, device=predictor_output.u.device)
-
- # assign data from channels that correspond to the labels
- for key in confidence_names:
- resampled_confidence = F.interpolate(
- getattr(predictor_output, key),
- (h, w),
- mode="bilinear",
- align_corners=False,
- )
- result = confidence_base.clone()
- for part_id in range(1, predictor_output.u.size(1)):
- if resampled_confidence.size(1) != predictor_output.u.size(1):
- # confidence is not part-based, don't try to fill it part by part
- continue
- result[labels == part_id] = resampled_confidence[0, part_id][labels == part_id]
-
- if resampled_confidence.size(1) != predictor_output.u.size(1):
- # confidence is not part-based, fill the data with the first channel
- # (targeted for segmentation confidences that have only 1 channel)
- result = resampled_confidence[0, 0]
-
- confidence_results[key] = result
-
- return confidence_results # pyre-ignore[7]
-
-
-def densepose_chart_predictor_output_to_result_with_confidences(
- predictor_output: DensePoseChartPredictorOutput, boxes: Boxes
-) -> DensePoseChartResultWithConfidences:
- """
- Convert densepose chart predictor outputs to results
-
- Args:
- predictor_output (DensePoseChartPredictorOutput): DensePose predictor
- output with confidences to be converted to results, must contain only 1 output
- boxes (Boxes): bounding box that corresponds to the predictor output,
- must contain only 1 bounding box
- Return:
- DensePose chart-based result with confidences (DensePoseChartResultWithConfidences)
- """
- assert len(predictor_output) == 1 and len(boxes) == 1, (
- f"Predictor output to result conversion can operate only on single outputs"
- f", got {len(predictor_output)} predictor outputs and {len(boxes)} boxes"
- )
-
- boxes_xyxy_abs = boxes.tensor.clone()
- boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- box_xywh = make_int_box(boxes_xywh_abs[0])
-
- labels = resample_fine_and_coarse_segm_to_bbox(predictor_output, box_xywh).squeeze(0)
- uv = resample_uv_to_bbox(predictor_output, labels, box_xywh)
- confidences = resample_confidences_to_bbox(predictor_output, labels, box_xywh)
- return DensePoseChartResultWithConfidences(labels=labels, uv=uv, **confidences)
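
For orientation, the sketch below exercises `resample_uv_tensors_to_bbox` on synthetic tensors: part-wise U/V maps predicted on a coarse grid are bilinearly resampled to the bounding-box size and gathered by part label. The shapes and box values are made up for illustration, and the import assumes the DensePose project is installed on the path.

```python
# Synthetic example (shapes and box are illustrative, not from a real model).
import torch

from densepose.converters.chart_output_to_chart_result import (
    resample_uv_tensors_to_bbox,
)

C, H_pred, W_pred = 25, 112, 112        # background + 24 body-part channels
u = torch.rand(1, C, H_pred, W_pred)    # predicted U coordinates per part
v = torch.rand(1, C, H_pred, W_pred)    # predicted V coordinates per part

box_xywh = (10, 20, 64, 48)             # absolute x, y, w, h of one detection
labels = torch.randint(0, C, (48, 64))  # (h, w) part labels inside the box

uv = resample_uv_tensors_to_bbox(u, v, labels, box_xywh)
print(uv.shape)                         # torch.Size([2, 48, 64]) -- stacked U and V
```
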
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py
deleted file mode 100644
index 6d40a6b02905c95911674ac33b8d7bc0f1eda5ec..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import io
-import torch
-from PIL import Image
-
-# Model
-model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True, force_reload=True)
-
-# img = Image.open("zidane.jpg") # PIL image direct open
-
-# Read from bytes as we do in app
-with open("zidane.jpg", "rb") as file:
- img_bytes = file.read()
-img = Image.open(io.BytesIO(img_bytes))
-
-results = model(img, size=640) # includes NMS
-
-print(results.pandas().xyxy[0])
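
The deleted test above is a plain script with no assertions. A hedged sketch of the same check wrapped as a pytest test is shown below; it assumes a `zidane.jpg` sample image sits next to the test file and that network access is available to fetch the hub model.

```python
# Sketch only: the same inference check expressed as a pytest test.
import io
from pathlib import Path

import pytest
import torch
from PIL import Image


@pytest.fixture(scope="module")
def model():
    # Downloading yolov5s from torch hub requires network access.
    return torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)


def test_inference_returns_detections(model):
    img_path = Path(__file__).parent / "zidane.jpg"   # assumed location
    img = Image.open(io.BytesIO(img_path.read_bytes()))

    results = model(img, size=640)  # includes NMS
    df = results.pandas().xyxy[0]

    assert not df.empty  # the sample image should yield at least one detection
    assert {"xmin", "ymin", "xmax", "ymax", "confidence", "name"} <= set(df.columns)
```
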
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py
deleted file mode 100644
index ada19bdb4a2aa74874da4dba5d179ce38201c85d..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py
+++ /dev/null
@@ -1,659 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-Common data processing utilities that are used in a
-typical object detection data pipeline.
-"""
-import logging
-import numpy as np
-from typing import List, Union
-import pycocotools.mask as mask_util
-import torch
-from PIL import Image
-
-from detectron2.structures import (
- BitMasks,
- Boxes,
- BoxMode,
- Instances,
- Keypoints,
- PolygonMasks,
- RotatedBoxes,
- polygons_to_bitmask,
-)
-from detectron2.utils.file_io import PathManager
-
-from . import transforms as T
-from .catalog import MetadataCatalog
-
-__all__ = [
- "SizeMismatchError",
- "convert_image_to_rgb",
- "check_image_size",
- "transform_proposals",
- "transform_instance_annotations",
- "annotations_to_instances",
- "annotations_to_instances_rotated",
- "build_augmentation",
- "build_transform_gen",
- "create_keypoint_hflip_indices",
- "filter_empty_instances",
- "read_image",
-]
-
-
-class SizeMismatchError(ValueError):
- """
- Raised when the loaded image has a different width/height compared with the annotation.
- """
-
-
-# https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601
-_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
-_M_YUV2RGB = [[1.0, 0.0, 1.13983], [1.0, -0.39465, -0.58060], [1.0, 2.03211, 0.0]]
-
-# https://www.exiv2.org/tags.html
-_EXIF_ORIENT = 274 # exif 'Orientation' tag
-
-
-def convert_PIL_to_numpy(image, format):
- """
- Convert PIL image to numpy array of target format.
-
- Args:
- image (PIL.Image): a PIL image
- format (str): the format of output image
-
- Returns:
- (np.ndarray): also see `read_image`
- """
- if format is not None:
- # PIL only supports RGB, so convert to RGB and flip channels over below
- conversion_format = format
- if format in ["BGR", "YUV-BT.601"]:
- conversion_format = "RGB"
- image = image.convert(conversion_format)
- image = np.asarray(image)
- # PIL squeezes out the channel dimension for "L", so make it HWC
- if format == "L":
- image = np.expand_dims(image, -1)
-
- # handle formats not supported by PIL
- elif format == "BGR":
- # flip channels if needed
- image = image[:, :, ::-1]
- elif format == "YUV-BT.601":
- image = image / 255.0
- image = np.dot(image, np.array(_M_RGB2YUV).T)
-
- return image
-
-
-def convert_image_to_rgb(image, format):
- """
- Convert an image from given format to RGB.
-
- Args:
- image (np.ndarray or Tensor): an HWC image
- format (str): the format of input image, also see `read_image`
-
- Returns:
- (np.ndarray): (H,W,3) RGB image in 0-255 range, can be either float or uint8
- """
- if isinstance(image, torch.Tensor):
- image = image.cpu().numpy()
- if format == "BGR":
- image = image[:, :, [2, 1, 0]]
- elif format == "YUV-BT.601":
- image = np.dot(image, np.array(_M_YUV2RGB).T)
- image = image * 255.0
- else:
- if format == "L":
- image = image[:, :, 0]
- image = image.astype(np.uint8)
- image = np.asarray(Image.fromarray(image, mode=format).convert("RGB"))
- return image
-
-
-def _apply_exif_orientation(image):
- """
- Applies the exif orientation correctly.
-
- This code exists per the bug:
- https://github.com/python-pillow/Pillow/issues/3973
- with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
- various methods, especially `tobytes`
-
- Function based on:
- https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
- https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
-
- Args:
- image (PIL.Image): a PIL image
-
- Returns:
- (PIL.Image): the PIL image with exif orientation applied, if applicable
- """
- if not hasattr(image, "getexif"):
- return image
-
- try:
- exif = image.getexif()
- except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
- exif = None
-
- if exif is None:
- return image
-
- orientation = exif.get(_EXIF_ORIENT)
-
- method = {
- 2: Image.FLIP_LEFT_RIGHT,
- 3: Image.ROTATE_180,
- 4: Image.FLIP_TOP_BOTTOM,
- 5: Image.TRANSPOSE,
- 6: Image.ROTATE_270,
- 7: Image.TRANSVERSE,
- 8: Image.ROTATE_90,
- }.get(orientation)
-
- if method is not None:
- return image.transpose(method)
- return image
-
-
-def read_image(file_name, format=None):
- """
- Read an image into the given format.
- Will apply rotation and flipping if the image has such exif information.
-
- Args:
- file_name (str): image file path
- format (str): one of the supported image modes in PIL, or "BGR" or "YUV-BT.601".
-
- Returns:
- image (np.ndarray):
- an HWC image in the given format, which is 0-255, uint8 for
- supported image modes in PIL or "BGR"; float (0-1 for Y) for YUV-BT.601.
- """
- with PathManager.open(file_name, "rb") as f:
- image = Image.open(f)
-
- # work around this bug: https://github.com/python-pillow/Pillow/issues/3973
- image = _apply_exif_orientation(image)
- return convert_PIL_to_numpy(image, format)
-
-
-def check_image_size(dataset_dict, image):
- """
- Raise an error if the image does not match the size specified in the dict.
- """
- if "width" in dataset_dict or "height" in dataset_dict:
- image_wh = (image.shape[1], image.shape[0])
- expected_wh = (dataset_dict["width"], dataset_dict["height"])
- if not image_wh == expected_wh:
- raise SizeMismatchError(
- "Mismatched image shape{}, got {}, expect {}.".format(
- " for image " + dataset_dict["file_name"]
- if "file_name" in dataset_dict
- else "",
- image_wh,
- expected_wh,
- )
- + " Please check the width/height in your annotation."
- )
-
- # To ensure bbox always remap to original image size
- if "width" not in dataset_dict:
- dataset_dict["width"] = image.shape[1]
- if "height" not in dataset_dict:
- dataset_dict["height"] = image.shape[0]
-
-
-def transform_proposals(dataset_dict, image_shape, transforms, *, proposal_topk, min_box_size=0):
- """
- Apply transformations to the proposals in dataset_dict, if any.
-
- Args:
- dataset_dict (dict): a dict read from the dataset, possibly
- contains fields "proposal_boxes", "proposal_objectness_logits", "proposal_bbox_mode"
- image_shape (tuple): height, width
- transforms (TransformList):
- proposal_topk (int): only keep top-K scoring proposals
- min_box_size (int): proposals with either side smaller than this
- threshold are removed
-
- The input dict is modified in-place, with abovementioned keys removed. A new
- key "proposals" will be added. Its value is an `Instances`
- object which contains the transformed proposals in its field
- "proposal_boxes" and "objectness_logits".
- """
- if "proposal_boxes" in dataset_dict:
- # Transform proposal boxes
- boxes = transforms.apply_box(
- BoxMode.convert(
- dataset_dict.pop("proposal_boxes"),
- dataset_dict.pop("proposal_bbox_mode"),
- BoxMode.XYXY_ABS,
- )
- )
- boxes = Boxes(boxes)
- objectness_logits = torch.as_tensor(
- dataset_dict.pop("proposal_objectness_logits").astype("float32")
- )
-
- boxes.clip(image_shape)
- keep = boxes.nonempty(threshold=min_box_size)
- boxes = boxes[keep]
- objectness_logits = objectness_logits[keep]
-
- proposals = Instances(image_shape)
- proposals.proposal_boxes = boxes[:proposal_topk]
- proposals.objectness_logits = objectness_logits[:proposal_topk]
- dataset_dict["proposals"] = proposals
-
-
-def get_bbox(annotation):
- """
- Get bbox from data
- Args:
- annotation (dict): dict of instance annotations for a single instance.
- Returns:
- bbox (ndarray): x1, y1, x2, y2 coordinates
- """
- # bbox is 1d (per-instance bounding box)
- bbox = BoxMode.convert(annotation["bbox"], annotation["bbox_mode"], BoxMode.XYXY_ABS)
- return bbox
-
-
-def transform_instance_annotations(
- annotation, transforms, image_size, *, keypoint_hflip_indices=None
-):
- """
- Apply transforms to box, segmentation and keypoints annotations of a single instance.
-
- It will use `transforms.apply_box` for the box, and
- `transforms.apply_coords` for segmentation polygons & keypoints.
- If you need anything more specially designed for each data structure,
- you'll need to implement your own version of this function or the transforms.
-
- Args:
- annotation (dict): dict of instance annotations for a single instance.
- It will be modified in-place.
- transforms (TransformList or list[Transform]):
- image_size (tuple): the height, width of the transformed image
- keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`.
-
- Returns:
- dict:
- the same input dict with fields "bbox", "segmentation", "keypoints"
- transformed according to `transforms`.
- The "bbox_mode" field will be set to XYXY_ABS.
- """
- if isinstance(transforms, (tuple, list)):
- transforms = T.TransformList(transforms)
- # bbox is 1d (per-instance bounding box)
- bbox = BoxMode.convert(annotation["bbox"], annotation["bbox_mode"], BoxMode.XYXY_ABS)
- # clip transformed bbox to image size
- bbox = transforms.apply_box(np.array([bbox]))[0].clip(min=0)
- annotation["bbox"] = np.minimum(bbox, list(image_size + image_size)[::-1])
- annotation["bbox_mode"] = BoxMode.XYXY_ABS
-
- if "segmentation" in annotation:
- # each instance contains 1 or more polygons
- segm = annotation["segmentation"]
- if isinstance(segm, list):
- # polygons
- polygons = [np.asarray(p).reshape(-1, 2) for p in segm]
- annotation["segmentation"] = [
- p.reshape(-1) for p in transforms.apply_polygons(polygons)
- ]
- elif isinstance(segm, dict):
- # RLE
- mask = mask_util.decode(segm)
- mask = transforms.apply_segmentation(mask)
- assert tuple(mask.shape[:2]) == image_size
- annotation["segmentation"] = mask
- else:
- raise ValueError(
- "Cannot transform segmentation of type '{}'!"
- "Supported types are: polygons as list[list[float] or ndarray],"
- " COCO-style RLE as a dict.".format(type(segm))
- )
-
- if "keypoints" in annotation:
- keypoints = transform_keypoint_annotations(
- annotation["keypoints"], transforms, image_size, keypoint_hflip_indices
- )
- annotation["keypoints"] = keypoints
-
- return annotation
-
-
-def transform_keypoint_annotations(keypoints, transforms, image_size, keypoint_hflip_indices=None):
- """
- Transform keypoint annotations of an image.
- If a keypoint is transformed out of image boundary, it will be marked "unlabeled" (visibility=0)
-
- Args:
- keypoints (list[float]): Nx3 float in Detectron2's Dataset format.
- Each point is represented by (x, y, visibility).
- transforms (TransformList):
- image_size (tuple): the height, width of the transformed image
- keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`.
- When `transforms` includes horizontal flip, will use the index
- mapping to flip keypoints.
- """
- # (N*3,) -> (N, 3)
- keypoints = np.asarray(keypoints, dtype="float64").reshape(-1, 3)
- keypoints_xy = transforms.apply_coords(keypoints[:, :2])
-
- # Set all out-of-boundary points to "unlabeled"
- inside = (keypoints_xy >= np.array([0, 0])) & (keypoints_xy <= np.array(image_size[::-1]))
- inside = inside.all(axis=1)
- keypoints[:, :2] = keypoints_xy
- keypoints[:, 2][~inside] = 0
-
- # This assumes that HorizFlipTransform is the only one that does flip
- do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms.transforms) % 2 == 1
-
- # Alternative way: check if probe points were horizontally flipped.
- # probe = np.asarray([[0.0, 0.0], [image_width, 0.0]])
- # probe_aug = transforms.apply_coords(probe.copy())
- # do_hflip = np.sign(probe[1][0] - probe[0][0]) != np.sign(probe_aug[1][0] - probe_aug[0][0]) # noqa
-
- # If flipped, swap each keypoint with its opposite-handed equivalent
- if do_hflip:
- if keypoint_hflip_indices is None:
- raise ValueError("Cannot flip keypoints without providing flip indices!")
- if len(keypoints) != len(keypoint_hflip_indices):
- raise ValueError(
- "Keypoint data has {} points, but metadata "
- "contains {} points!".format(len(keypoints), len(keypoint_hflip_indices))
- )
- keypoints = keypoints[np.asarray(keypoint_hflip_indices, dtype=np.int32), :]
-
- # Maintain COCO convention that if visibility == 0 (unlabeled), then x, y = 0
- keypoints[keypoints[:, 2] == 0] = 0
- return keypoints
-
-
-def annotations_to_instances(annos, image_size, mask_format="polygon"):
- """
- Create an :class:`Instances` object used by the models,
- from instance annotations in the dataset dict.
-
- Args:
- annos (list[dict]): a list of instance annotations in one image, each
- element for one instance.
- image_size (tuple): height, width
-
- Returns:
- Instances:
- It will contain fields "gt_boxes", "gt_classes",
- "gt_masks", "gt_keypoints", if they can be obtained from `annos`.
- This is the format that builtin models expect.
- """
- boxes = (
- np.stack(
- [BoxMode.convert(obj["bbox"], obj["bbox_mode"], BoxMode.XYXY_ABS) for obj in annos]
- )
- if len(annos)
- else np.zeros((0, 4))
- )
- target = Instances(image_size)
- target.gt_boxes = Boxes(boxes)
-
- classes = [int(obj["category_id"]) for obj in annos]
- classes = torch.tensor(classes, dtype=torch.int64)
- target.gt_classes = classes
-
- if len(annos) and "segmentation" in annos[0]:
- segms = [obj["segmentation"] for obj in annos]
- if mask_format == "polygon":
- try:
- masks = PolygonMasks(segms)
- except ValueError as e:
- raise ValueError(
- "Failed to use mask_format=='polygon' from the given annotations!"
- ) from e
- else:
- assert mask_format == "bitmask", mask_format
- masks = []
- for segm in segms:
- if isinstance(segm, list):
- # polygon
- masks.append(polygons_to_bitmask(segm, *image_size))
- elif isinstance(segm, dict):
- # COCO RLE
- masks.append(mask_util.decode(segm))
- elif isinstance(segm, np.ndarray):
- assert segm.ndim == 2, "Expect segmentation of 2 dimensions, got {}.".format(
- segm.ndim
- )
- # mask array
- masks.append(segm)
- else:
- raise ValueError(
- "Cannot convert segmentation of type '{}' to BitMasks!"
- "Supported types are: polygons as list[list[float] or ndarray],"
- " COCO-style RLE as a dict, or a binary segmentation mask "
- " in a 2D numpy array of shape HxW.".format(type(segm))
- )
- # torch.from_numpy does not support array with negative stride.
- masks = BitMasks(
- torch.stack([torch.from_numpy(np.ascontiguousarray(x)) for x in masks])
- )
- target.gt_masks = masks
-
- if len(annos) and "keypoints" in annos[0]:
- kpts = [obj.get("keypoints", []) for obj in annos]
- target.gt_keypoints = Keypoints(kpts)
-
- return target
-
-
-def annotations_to_instances_rotated(annos, image_size):
- """
- Create an :class:`Instances` object used by the models,
- from instance annotations in the dataset dict.
- Compared to `annotations_to_instances`, this function is for rotated boxes only
-
- Args:
- annos (list[dict]): a list of instance annotations in one image, each
- element for one instance.
- image_size (tuple): height, width
-
- Returns:
- Instances:
- Containing fields "gt_boxes", "gt_classes",
- if they can be obtained from `annos`.
- This is the format that builtin models expect.
- """
- boxes = [obj["bbox"] for obj in annos]
- target = Instances(image_size)
- boxes = target.gt_boxes = RotatedBoxes(boxes)
- boxes.clip(image_size)
-
- classes = [obj["category_id"] for obj in annos]
- classes = torch.tensor(classes, dtype=torch.int64)
- target.gt_classes = classes
-
- return target
-
-
-def filter_empty_instances(
- instances, by_box=True, by_mask=True, box_threshold=1e-5, return_mask=False
-):
- """
- Filter out empty instances in an `Instances` object.
-
- Args:
- instances (Instances):
- by_box (bool): whether to filter out instances with empty boxes
- by_mask (bool): whether to filter out instances with empty masks
- box_threshold (float): minimum width and height to be considered non-empty
- return_mask (bool): whether to return boolean mask of filtered instances
-
- Returns:
- Instances: the filtered instances.
- tensor[bool], optional: boolean mask of filtered instances
- """
- assert by_box or by_mask
- r = []
- if by_box:
- r.append(instances.gt_boxes.nonempty(threshold=box_threshold))
- if instances.has("gt_masks") and by_mask:
- r.append(instances.gt_masks.nonempty())
-
- # TODO: can also filter visible keypoints
-
- if not r:
- return instances
- m = r[0]
- for x in r[1:]:
- m = m & x
- if return_mask:
- return instances[m], m
- return instances[m]
-
-
-def create_keypoint_hflip_indices(dataset_names: Union[str, List[str]]) -> List[int]:
- """
- Args:
- dataset_names: list of dataset names
-
- Returns:
- list[int]: a list of size=#keypoints, storing the
- horizontally-flipped keypoint indices.
- """
- if isinstance(dataset_names, str):
- dataset_names = [dataset_names]
-
- check_metadata_consistency("keypoint_names", dataset_names)
- check_metadata_consistency("keypoint_flip_map", dataset_names)
-
- meta = MetadataCatalog.get(dataset_names[0])
- names = meta.keypoint_names
- # TODO flip -> hflip
- flip_map = dict(meta.keypoint_flip_map)
- flip_map.update({v: k for k, v in flip_map.items()})
- flipped_names = [i if i not in flip_map else flip_map[i] for i in names]
- flip_indices = [names.index(i) for i in flipped_names]
- return flip_indices
-
-
-def get_fed_loss_cls_weights(dataset_names: Union[str, List[str]], freq_weight_power=1.0):
- """
- Get frequency weight for each class sorted by class id.
- We calculate the frequency weight as image_count raised to the power freq_weight_power.
-
- Args:
- dataset_names: list of dataset names
- freq_weight_power: power value
- """
- if isinstance(dataset_names, str):
- dataset_names = [dataset_names]
-
- check_metadata_consistency("class_image_count", dataset_names)
-
- meta = MetadataCatalog.get(dataset_names[0])
- class_freq_meta = meta.class_image_count
- class_freq = torch.tensor(
- [c["image_count"] for c in sorted(class_freq_meta, key=lambda x: x["id"])]
- )
- class_freq_weight = class_freq.float() ** freq_weight_power
- return class_freq_weight
-
-
-def gen_crop_transform_with_instance(crop_size, image_size, instance):
- """
- Generate a CropTransform so that the cropping region contains
- the center of the given instance.
-
- Args:
- crop_size (tuple): h, w in pixels
- image_size (tuple): h, w
- instance (dict): an annotation dict of one instance, in Detectron2's
- dataset format.
- """
- crop_size = np.asarray(crop_size, dtype=np.int32)
- bbox = BoxMode.convert(instance["bbox"], instance["bbox_mode"], BoxMode.XYXY_ABS)
- center_yx = (bbox[1] + bbox[3]) * 0.5, (bbox[0] + bbox[2]) * 0.5
- assert (
- image_size[0] >= center_yx[0] and image_size[1] >= center_yx[1]
- ), "The annotation bounding box is outside of the image!"
- assert (
- image_size[0] >= crop_size[0] and image_size[1] >= crop_size[1]
- ), "Crop size is larger than image size!"
-
- min_yx = np.maximum(np.floor(center_yx).astype(np.int32) - crop_size, 0)
- max_yx = np.maximum(np.asarray(image_size, dtype=np.int32) - crop_size, 0)
- max_yx = np.minimum(max_yx, np.ceil(center_yx).astype(np.int32))
-
- y0 = np.random.randint(min_yx[0], max_yx[0] + 1)
- x0 = np.random.randint(min_yx[1], max_yx[1] + 1)
- return T.CropTransform(x0, y0, crop_size[1], crop_size[0])
-
-
-def check_metadata_consistency(key, dataset_names):
- """
- Check that the datasets have consistent metadata.
-
- Args:
- key (str): a metadata key
- dataset_names (list[str]): a list of dataset names
-
- Raises:
- AttributeError: if the key does not exist in the metadata
- ValueError: if the given datasets do not have the same metadata values defined by key
- """
- if len(dataset_names) == 0:
- return
- logger = logging.getLogger(__name__)
- entries_per_dataset = [getattr(MetadataCatalog.get(d), key) for d in dataset_names]
- for idx, entry in enumerate(entries_per_dataset):
- if entry != entries_per_dataset[0]:
- logger.error(
- "Metadata '{}' for dataset '{}' is '{}'".format(key, dataset_names[idx], str(entry))
- )
- logger.error(
- "Metadata '{}' for dataset '{}' is '{}'".format(
- key, dataset_names[0], str(entries_per_dataset[0])
- )
- )
- raise ValueError("Datasets have different metadata '{}'!".format(key))
-
-
-def build_augmentation(cfg, is_train):
- """
- Create a list of default :class:`Augmentation` from config.
- Now it includes resizing and flipping.
-
- Returns:
- list[Augmentation]
- """
- if is_train:
- min_size = cfg.INPUT.MIN_SIZE_TRAIN
- max_size = cfg.INPUT.MAX_SIZE_TRAIN
- sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
- else:
- min_size = cfg.INPUT.MIN_SIZE_TEST
- max_size = cfg.INPUT.MAX_SIZE_TEST
- sample_style = "choice"
- augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)]
- if is_train and cfg.INPUT.RANDOM_FLIP != "none":
- augmentation.append(
- T.RandomFlip(
- horizontal=cfg.INPUT.RANDOM_FLIP == "horizontal",
- vertical=cfg.INPUT.RANDOM_FLIP == "vertical",
- )
- )
- return augmentation
-
-
-build_transform_gen = build_augmentation
-"""
-Alias for backward-compatibility.
-"""
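
As a quick orientation for the utilities above, here is a hedged usage sketch that resizes a toy annotation and converts it into an `Instances` object. It assumes detectron2 is installed (the functions used are the upstream ones in `detectron2.data.detection_utils`); the box values and image sizes are made up.

```python
# Minimal sketch, assuming detectron2 is installed; values are illustrative.
from detectron2.data import transforms as T
from detectron2.data.detection_utils import (
    annotations_to_instances,
    transform_instance_annotations,
)
from detectron2.structures import BoxMode

orig_h, orig_w = 480, 640
new_h, new_w = 240, 320

anno = {
    "bbox": [100.0, 50.0, 200.0, 150.0],  # x0, y0, x1, y1
    "bbox_mode": BoxMode.XYXY_ABS,
    "category_id": 3,
}

# Resize the image by half; the same transform is applied to the annotation.
tfm = T.ResizeTransform(orig_h, orig_w, new_h, new_w)
anno = transform_instance_annotations(anno, [tfm], (new_h, new_w))
print(anno["bbox"])  # roughly [50., 25., 100., 75.] in XYXY_ABS

instances = annotations_to_instances([anno], (new_h, new_w))
print(instances.gt_boxes, instances.gt_classes)
```
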
diff --git a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py b/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py
deleted file mode 100644
index 2c7011ebb147bbe8663d45262f8e954fbf481952..0000000000000000000000000000000000000000
--- a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# ###########################################################################
-#
-# CLOUDERA APPLIED MACHINE LEARNING PROTOTYPE (AMP)
-# (C) Cloudera, Inc. 2022
-# All rights reserved.
-#
-# Applicable Open Source License: Apache 2.0
-#
-# NOTE: Cloudera open source products are modular software products
-# made up of hundreds of individual components, each of which was
-# individually copyrighted. Each Cloudera open source product is a
-# collective work under U.S. Copyright Law. Your license to use the
-# collective work is as provided in your written agreement with
-# Cloudera. Used apart from the collective work, this file is
-# licensed for your use pursuant to the open source license
-# identified above.
-#
-# This code is provided to you pursuant a written agreement with
-# (i) Cloudera, Inc. or (ii) a third-party authorized to distribute
-# this code. If you do not have a written agreement with Cloudera nor
-# with an authorized and properly licensed third party, you do not
-# have any rights to access nor to use this code.
-#
-# Absent a written agreement with Cloudera, Inc. (“Cloudera”) to the
-# contrary, A) CLOUDERA PROVIDES THIS CODE TO YOU WITHOUT WARRANTIES OF ANY
-# KIND; (B) CLOUDERA DISCLAIMS ANY AND ALL EXPRESS AND IMPLIED
-# WARRANTIES WITH RESPECT TO THIS CODE, INCLUDING BUT NOT LIMITED TO
-# IMPLIED WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY AND
-# FITNESS FOR A PARTICULAR PURPOSE; (C) CLOUDERA IS NOT LIABLE TO YOU,
-# AND WILL NOT DEFEND, INDEMNIFY, NOR HOLD YOU HARMLESS FOR ANY CLAIMS
-# ARISING FROM OR RELATED TO THE CODE; AND (D)WITH RESPECT TO YOUR EXERCISE
-# OF ANY RIGHTS GRANTED TO YOU FOR THE CODE, CLOUDERA IS NOT LIABLE FOR ANY
-# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, PUNITIVE OR
-# CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO, DAMAGES
-# RELATED TO LOST REVENUE, LOST PROFITS, LOSS OF INCOME, LOSS OF
-# BUSINESS ADVANTAGE OR UNAVAILABILITY, OR LOSS OR CORRUPTION OF
-# DATA.
-#
-# ###########################################################################
-
-from apps.data_utils import DATA_PACKET
-from src.style_transfer import StyleTransfer
-from src.style_classification import StyleIntensityClassifier
-from src.content_preservation import ContentPreservationScorer
-
-
-def load_and_cache_HF_models(style_data_packet):
- """
- This utility function is used to download and cache models needed for all style
- attributes in `apps.data_utils.DATA_PACKET`
-
- Args:
- style_data_packet (dict)
- """
-
- for style_data in style_data_packet.values():  # iterate the values, which hold the per-style model paths
- try:
- st = StyleTransfer(model_identifier=style_data.seq2seq_model_path)
- sic = StyleIntensityClassifier(style_data.cls_model_path)
- cps = ContentPreservationScorer(
- cls_model_identifier=style_data.cls_model_path,
- sbert_model_identifier=style_data.sbert_model_path,
- )
-
- del st, sic, cps
- except Exception as e:
- print(e)
-
-if __name__=="__main__":
- load_and_cache_HF_models(DATA_PACKET)
\ No newline at end of file
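
The loop above only works if each DATA_PACKET entry exposes `seq2seq_model_path`, `cls_model_path` and `sbert_model_path`. The sketch below is purely hypothetical and just makes that expected shape concrete; the real definition lives in `apps/data_utils.py` and may well differ.

```python
# Hypothetical shape of DATA_PACKET entries; class name and model ids are placeholders.
from dataclasses import dataclass


@dataclass
class StyleAttributeData:
    seq2seq_model_path: str
    cls_model_path: str
    sbert_model_path: str


HYPOTHETICAL_DATA_PACKET = {
    "subjective-to-neutral": StyleAttributeData(
        seq2seq_model_path="org/placeholder-seq2seq-model",
        cls_model_path="org/placeholder-style-classifier",
        sbert_model_path="sentence-transformers/all-MiniLM-L6-v2",
    ),
}

for style_data in HYPOTHETICAL_DATA_PACKET.values():
    print(style_data.seq2seq_model_path)
```
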
diff --git a/spaces/changlisheng/shangChat/modules/presets.py b/spaces/changlisheng/shangChat/modules/presets.py
deleted file mode 100644
index a6e601700ba70e4e2167345be8540cca78797b00..0000000000000000000000000000000000000000
--- a/spaces/changlisheng/shangChat/modules/presets.py
+++ /dev/null
@@ -1,198 +0,0 @@
-# -*- coding:utf-8 -*-
-import gradio as gr
-from pathlib import Path
-
-# ChatGPT settings
-initial_prompt = "You are a helpful assistant."
-API_HOST = "api.openai.com"
-COMPLETION_URL = "https://api.openai.com/v1/chat/completions"
-BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants"
-USAGE_API_URL="https://api.openai.com/dashboard/billing/usage"
-HISTORY_DIR = Path("history")
-TEMPLATES_DIR = "templates"
-
-# Error messages (the string values are shown to users of this Chinese-language UI)
-standard_error_msg = "☹️发生了错误:" # standard prefix for error messages
-error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # error while fetching the response
-connection_timeout_prompt = "连接超时,无法获取对话。" # connection timed out
-read_timeout_prompt = "读取超时,无法获取对话。" # read timed out
-proxy_error_prompt = "代理错误,无法获取对话。" # proxy error
-ssl_error_prompt = "SSL错误,无法获取对话。" # SSL error
-no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key is not 51 characters long
-no_input_msg = "请输入对话内容。" # no conversation content entered
-
-timeout_streaming = 30 # timeout for streaming conversations
-timeout_all = 200 # timeout for non-streaming conversations
-enable_streaming_option = True # whether to show the checkbox for toggling real-time display of answers
-HIDE_MY_KEY = False # set this to True to hide your API key in the UI
-CONCURRENT_COUNT = 100 # number of users allowed to use the app at the same time
-
-SIM_K = 5
-INDEX_QUERY_TEMPRATURE = 1.0
-
-title = """
-
-
-
-
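
The truncated presets module above mostly defines endpoint URLs, timeouts and error strings. Purely as an illustration of how such constants are typically consumed (not this app's actual client code), a caller could wire them into a chat-completion request as sketched below; the payload follows the standard OpenAI REST API.

```python
# Illustrative only: how the constants defined above (COMPLETION_URL, timeout_all,
# initial_prompt and the error strings) might be consumed if this code sat in the
# same module.
import requests


def fetch_completion(api_key: str, messages: list) -> str:
    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    payload = {"model": "gpt-3.5-turbo", "messages": messages}
    try:
        resp = requests.post(COMPLETION_URL, headers=headers, json=payload, timeout=timeout_all)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    except requests.Timeout:
        return standard_error_msg + read_timeout_prompt
    except requests.RequestException:
        return standard_error_msg + error_retrieve_prompt


history = [{"role": "system", "content": initial_prompt},
           {"role": "user", "content": "Hello!"}]
# print(fetch_completion("sk-...", history))
```
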
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py
deleted file mode 100644
index 6e56fe84a9e0e63b918141bc27d708b2d915563f..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import importlib
-from codecs import IncrementalDecoder
-from collections import Counter
-from functools import lru_cache
-from typing import Counter as TypeCounter, Dict, List, Optional, Tuple
-
-from .assets import FREQUENCIES
-from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES
-from .md import is_suspiciously_successive_range
-from .models import CoherenceMatches
-from .utils import (
- is_accentuated,
- is_latin,
- is_multi_byte_encoding,
- is_unicode_range_secondary,
- unicode_range,
-)
-
-
-def encoding_unicode_range(iana_name: str) -> List[str]:
- """
- Return the unicode ranges associated with a single-byte code page.
- """
- if is_multi_byte_encoding(iana_name):
- raise IOError("Function not supported on multi-byte code page")
-
- decoder = importlib.import_module(
- "encodings.{}".format(iana_name)
- ).IncrementalDecoder
-
- p: IncrementalDecoder = decoder(errors="ignore")
- seen_ranges: Dict[str, int] = {}
- character_count: int = 0
-
- for i in range(0x40, 0xFF):
- chunk: str = p.decode(bytes([i]))
-
- if chunk:
- character_range: Optional[str] = unicode_range(chunk)
-
- if character_range is None:
- continue
-
- if is_unicode_range_secondary(character_range) is False:
- if character_range not in seen_ranges:
- seen_ranges[character_range] = 0
- seen_ranges[character_range] += 1
- character_count += 1
-
- return sorted(
- [
- character_range
- for character_range in seen_ranges
- if seen_ranges[character_range] / character_count >= 0.15
- ]
- )
-
-
-def unicode_range_languages(primary_range: str) -> List[str]:
- """
- Return inferred languages used with a unicode range.
- """
- languages: List[str] = []
-
- for language, characters in FREQUENCIES.items():
- for character in characters:
- if unicode_range(character) == primary_range:
- languages.append(language)
- break
-
- return languages
-
-
-@lru_cache()
-def encoding_languages(iana_name: str) -> List[str]:
- """
- Single-byte encoding language association. Some code pages are heavily linked to particular language(s).
- This function provides that correspondence.
- """
- unicode_ranges: List[str] = encoding_unicode_range(iana_name)
- primary_range: Optional[str] = None
-
- for specified_range in unicode_ranges:
- if "Latin" not in specified_range:
- primary_range = specified_range
- break
-
- if primary_range is None:
- return ["Latin Based"]
-
- return unicode_range_languages(primary_range)
-
-
-@lru_cache()
-def mb_encoding_languages(iana_name: str) -> List[str]:
- """
- Multi-byte encoding language association. Some code pages are heavily linked to particular language(s).
- This function provides that correspondence.
- """
- if (
- iana_name.startswith("shift_")
- or iana_name.startswith("iso2022_jp")
- or iana_name.startswith("euc_j")
- or iana_name == "cp932"
- ):
- return ["Japanese"]
- if iana_name.startswith("gb") or iana_name in ZH_NAMES:
- return ["Chinese"]
- if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:
- return ["Korean"]
-
- return []
-
-
-@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)
-def get_target_features(language: str) -> Tuple[bool, bool]:
- """
- Determine the main aspects of a supported language: whether it contains accents and whether it is pure Latin.
- """
- target_have_accents: bool = False
- target_pure_latin: bool = True
-
- for character in FREQUENCIES[language]:
- if not target_have_accents and is_accentuated(character):
- target_have_accents = True
- if target_pure_latin and is_latin(character) is False:
- target_pure_latin = False
-
- return target_have_accents, target_pure_latin
-
-
-def alphabet_languages(
- characters: List[str], ignore_non_latin: bool = False
-) -> List[str]:
- """
- Return the languages associated with the given characters.
- """
- languages: List[Tuple[str, float]] = []
-
- source_have_accents = any(is_accentuated(character) for character in characters)
-
- for language, language_characters in FREQUENCIES.items():
- target_have_accents, target_pure_latin = get_target_features(language)
-
- if ignore_non_latin and target_pure_latin is False:
- continue
-
- if target_have_accents is False and source_have_accents:
- continue
-
- character_count: int = len(language_characters)
-
- character_match_count: int = len(
- [c for c in language_characters if c in characters]
- )
-
- ratio: float = character_match_count / character_count
-
- if ratio >= 0.2:
- languages.append((language, ratio))
-
- languages = sorted(languages, key=lambda x: x[1], reverse=True)
-
- return [compatible_language[0] for compatible_language in languages]
-
-
-def characters_popularity_compare(
- language: str, ordered_characters: List[str]
-) -> float:
- """
- Determine if an ordered list of characters (ordered by occurrence, from most frequent to rarest) matches a particular language.
- The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit).
- Beware that this function is not strict on the match, in order to ease detection. (Meaning a close match counts as 1.)
- """
- if language not in FREQUENCIES:
- raise ValueError("{} not available".format(language))
-
- character_approved_count: int = 0
- FREQUENCIES_language_set = set(FREQUENCIES[language])
-
- ordered_characters_count: int = len(ordered_characters)
- target_language_characters_count: int = len(FREQUENCIES[language])
-
- large_alphabet: bool = target_language_characters_count > 26
-
- for character, character_rank in zip(
- ordered_characters, range(0, ordered_characters_count)
- ):
- if character not in FREQUENCIES_language_set:
- continue
-
- character_rank_in_language: int = FREQUENCIES[language].index(character)
- expected_projection_ratio: float = (
- target_language_characters_count / ordered_characters_count
- )
- character_rank_projection: int = int(character_rank * expected_projection_ratio)
-
- if (
- large_alphabet is False
- and abs(character_rank_projection - character_rank_in_language) > 4
- ):
- continue
-
- if (
- large_alphabet is True
- and abs(character_rank_projection - character_rank_in_language)
- < target_language_characters_count / 3
- ):
- character_approved_count += 1
- continue
-
- characters_before_source: List[str] = FREQUENCIES[language][
- 0:character_rank_in_language
- ]
- characters_after_source: List[str] = FREQUENCIES[language][
- character_rank_in_language:
- ]
- characters_before: List[str] = ordered_characters[0:character_rank]
- characters_after: List[str] = ordered_characters[character_rank:]
-
- before_match_count: int = len(
- set(characters_before) & set(characters_before_source)
- )
-
- after_match_count: int = len(
- set(characters_after) & set(characters_after_source)
- )
-
- if len(characters_before_source) == 0 and before_match_count <= 4:
- character_approved_count += 1
- continue
-
- if len(characters_after_source) == 0 and after_match_count <= 4:
- character_approved_count += 1
- continue
-
- if (
- before_match_count / len(characters_before_source) >= 0.4
- or after_match_count / len(characters_after_source) >= 0.4
- ):
- character_approved_count += 1
- continue
-
- return character_approved_count / len(ordered_characters)
-
-
-def alpha_unicode_split(decoded_sequence: str) -> List[str]:
- """
- Given a decoded text sequence, return a list of str split by unicode range / alphabet.
- E.g. a text containing English/Latin with a bit of Hebrew will return two items in the resulting list;
- one containing the Latin letters and the other the Hebrew ones.
- """
- layers: Dict[str, str] = {}
-
- for character in decoded_sequence:
- if character.isalpha() is False:
- continue
-
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- continue
-
- layer_target_range: Optional[str] = None
-
- for discovered_range in layers:
- if (
- is_suspiciously_successive_range(discovered_range, character_range)
- is False
- ):
- layer_target_range = discovered_range
- break
-
- if layer_target_range is None:
- layer_target_range = character_range
-
- if layer_target_range not in layers:
- layers[layer_target_range] = character.lower()
- continue
-
- layers[layer_target_range] += character.lower()
-
- return list(layers.values())
-
-
-def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches:
- """
- This function merges results previously produced by the coherence_ratio function.
- The return type is the same as that of coherence_ratio.
- """
- per_language_ratios: Dict[str, List[float]] = {}
- for result in results:
- for sub_result in result:
- language, ratio = sub_result
- if language not in per_language_ratios:
- per_language_ratios[language] = [ratio]
- continue
- per_language_ratios[language].append(ratio)
-
- merge = [
- (
- language,
- round(
- sum(per_language_ratios[language]) / len(per_language_ratios[language]),
- 4,
- ),
- )
- for language in per_language_ratios
- ]
-
- return sorted(merge, key=lambda x: x[1], reverse=True)
-
-
-def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches:
- """
- We shall NOT return "English—" in CoherenceMatches because it is an alternative
- to "English". This function only keeps the best match and removes the em-dash from it.
- """
- index_results: Dict[str, List[float]] = dict()
-
- for result in results:
- language, ratio = result
- no_em_name: str = language.replace("—", "")
-
- if no_em_name not in index_results:
- index_results[no_em_name] = []
-
- index_results[no_em_name].append(ratio)
-
- if any(len(index_results[e]) > 1 for e in index_results):
- filtered_results: CoherenceMatches = []
-
- for language in index_results:
- filtered_results.append((language, max(index_results[language])))
-
- return filtered_results
-
- return results
-
-
-@lru_cache(maxsize=2048)
-def coherence_ratio(
- decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None
-) -> CoherenceMatches:
- """
- Detect ANY language that can be identified in the given sequence. The sequence is analysed in layers.
- A layer = character extraction by alphabet/range.
- """
-
- results: List[Tuple[str, float]] = []
- ignore_non_latin: bool = False
-
- sufficient_match_count: int = 0
-
- lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else []
- if "Latin Based" in lg_inclusion_list:
- ignore_non_latin = True
- lg_inclusion_list.remove("Latin Based")
-
- for layer in alpha_unicode_split(decoded_sequence):
- sequence_frequencies: TypeCounter[str] = Counter(layer)
- most_common = sequence_frequencies.most_common()
-
- character_count: int = sum(o for c, o in most_common)
-
- if character_count <= TOO_SMALL_SEQUENCE:
- continue
-
- popular_character_ordered: List[str] = [c for c, o in most_common]
-
- for language in lg_inclusion_list or alphabet_languages(
- popular_character_ordered, ignore_non_latin
- ):
- ratio: float = characters_popularity_compare(
- language, popular_character_ordered
- )
-
- if ratio < threshold:
- continue
- elif ratio >= 0.8:
- sufficient_match_count += 1
-
- results.append((language, round(ratio, 4)))
-
- if sufficient_match_count >= 3:
- break
-
- return sorted(
- filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True
- )
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py
deleted file mode 100644
index ef1d065ee1328728af04ab61525dad77a73e3d28..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from __future__ import annotations
-
-from abc import ABC, abstractmethod
-from typing import TYPE_CHECKING, Any
-
-import numpy as np
-
-if TYPE_CHECKING:
- import io
-
- from numpy.typing import ArrayLike
-
- from contourpy._contourpy import CoordinateArray, FillReturn, FillType, LineReturn, LineType
-
-
-class Renderer(ABC):
- """Abstract base class for renderers, defining the interface that they must implement."""
-
- def _grid_as_2d(self, x: ArrayLike, y: ArrayLike) -> tuple[CoordinateArray, CoordinateArray]:
- x = np.asarray(x)
- y = np.asarray(y)
- if x.ndim == 1:
- x, y = np.meshgrid(x, y)
- return x, y
-
- @abstractmethod
- def filled(
- self,
- filled: FillReturn,
- fill_type: FillType,
- ax: Any = 0,
- color: str = "C0",
- alpha: float = 0.7,
- ) -> None:
- pass
-
- @abstractmethod
- def grid(
- self,
- x: ArrayLike,
- y: ArrayLike,
- ax: Any = 0,
- color: str = "black",
- alpha: float = 0.1,
- point_color: str | None = None,
- quad_as_tri_alpha: float = 0,
- ) -> None:
- pass
-
- @abstractmethod
- def lines(
- self,
- lines: LineReturn,
- line_type: LineType,
- ax: Any = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- ) -> None:
- pass
-
- @abstractmethod
- def mask(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike | np.ma.MaskedArray[Any, Any],
- ax: Any = 0,
- color: str = "black",
- ) -> None:
- pass
-
- @abstractmethod
- def save(self, filename: str, transparent: bool = False) -> None:
- pass
-
- @abstractmethod
- def save_to_buffer(self) -> io.BytesIO:
- pass
-
- @abstractmethod
- def show(self) -> None:
- pass
-
- @abstractmethod
- def title(self, title: str, ax: Any = 0, color: str | None = None) -> None:
- pass
-
- @abstractmethod
- def z_values(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Any = 0,
- color: str = "green",
- fmt: str = ".1f",
- quad_as_tri: bool = False,
- ) -> None:
- pass
diff --git a/spaces/codedog-ai/codedog-demo/app.py b/spaces/codedog-ai/codedog-demo/app.py
deleted file mode 100644
index 811358b453a012799139f37cf1ea1910fc4502f0..0000000000000000000000000000000000000000
--- a/spaces/codedog-ai/codedog-demo/app.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import gradio as gr
-
-from codedog_demo.callbacks import get_sample_choices, request_pr_review, show_sample
-
-sample_choices = get_sample_choices()
-
-
-text = """# [codedog-ai/codedog #2 - feat(telemetry): :sparkles: collect gpt api cost](https://github.com/codedog-ai/codedog/pull/2) Pull Request Report
-
-*powered by GPT and codedog 0.8.2*
-
-## Execution
-- Start at: 2023-09-07 07:18:18
-- Time usage: 12.72s
-- Openai api tokens: 3506
-- Openai api costs: $0.0460
-
-
-
-
-## PR Summary
-
-### PR Overview
-This PR is a new feature :sparkles:
-
-This PR aims to collect the cost of GPT API calls from the openai callback of langchain. It modifies several functions in the 'codedog/review.py' file to include an additional parameter 'cb.total_cost' in the '_meter_api_call_tokens' function call and updates the value of the 'cost' key in the '_telemetry' dictionary. It also modifies the 'examples/github/github_review.py' file to update the variables 'repository_name_or_id' and 'pull_request_number'.
-
-
-
-### Change Details
-
-| Major Changes | Description |
-|---|---|
-| **[review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-10471033f603ac7fae28b2c7c57040e8732947f0 "codedog/review.py")** | This diff contains the following changes in the file codedog/review.py: - Added a new key "cost" to the dictionary `_telemetry` in the `__init__` function. - Modified the `_single_file_summarize` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. - Modified the `_changelist_summarize` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. - Modified the `_feedback` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. - Modified the `_meter_api_call_tokens` function to include a new parameter `cost` and update the value of the "cost" key in the `_telemetry` dictionary. - No other changes were made in the file. |
-| **[github_review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-78de2b9548d0316c55661aaf9b2222ad80a07012 "examples/github/github_review.py")** | This diff contains changes in the file `github_review.py`. The changes include: - Commenting out the lines that set the variables `repository_name_or_id` and `pull_request_number` to "ClickHouse/ClickHouse" and 49113 respectively. - Adding new lines that set the variables `repository_name_or_id` to "Arcadia822/codedog" and `pull_request_number` to 2. - The function `build_pull_request_event` is called with the updated `repository_name_or_id` and `pull_request_number` variables. |
-
-
-
-
-
-
-Change File List
-
-Modified files:
-- codedog/review.py
-- examples/github/github_review.py
-
-
-
-
-
-
-## Code Review (preview)
-
-*This feature is still under test. Suggestions are given by AI and might be incorrect.*
-
-**[codedog/review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-10471033f603ac7fae28b2c7c57040e8732947f0)**
-
-Based on the code diff, here are my observations and suggestions:
-
-1. Line 44: The code change to add a new key "cost" to the `_telemetry` dictionary seems correct. It allows tracking the cost associated with API calls.
-
-2. Line 113 and 130: The code changes to the `_meter_api_call_tokens` method seem correct. It now accepts an additional parameter `cb.total_cost` to track the cost associated with API calls.
-
-3. Line 144: The code change to pass `cb.total_cost` as the second argument to `_meter_api_call_tokens` method seems correct. It ensures that the cost is properly tracked for API calls made during the feedback process.
-
-4. Line 175: The code change to add the `cost` key to the `TEMPLATE.REPORT_HEADER` format seems correct. It allows displaying the total cost in the generated report.
-
-Overall, the code changes seem correct and aligned with the purpose of tracking API call costs. However, here are a few suggestions for the author:
-
-- It would be helpful to include comments or docstrings explaining the purpose and usage of the `_meter_api_call_tokens` method and its parameters.
-
-- Consider using more descriptive variable names instead of abbreviations like `cb` to improve code readability.
-
-- Ensure that the `cb.total_cost` value passed to `_meter_api_call_tokens` is calculated correctly and represents the actual cost of API calls.
-
-- Consider adding unit tests to verify the correctness of the code changes and to ensure that the cost tracking functionality works as expected.
-
-- Double-check if there are any other places in the codebase where the `cost` value needs to be updated or used.
-
-These suggestions should help improve the clarity, maintainability, and reliability of the code. A small, illustrative sketch of the cost-tracking change follows below.
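-
-For illustration only — a minimal sketch of what the described cost-tracking change might look like. The method name `_meter_api_call_tokens` and the `_telemetry` dictionary come from the diff above; the exact signature, field names, and class shape here are assumptions, not the actual codedog implementation:
-
-```python
-class Review:
-    def __init__(self):
-        # Telemetry now also tracks accumulated cost (assumed structure).
-        self._telemetry = {"tokens": 0, "api_calls": 0, "cost": 0.0}
-
-    def _meter_api_call_tokens(self, tokens: int, cost: float) -> None:
-        """Accumulate token usage and the new cost value (hypothetical sketch)."""
-        self._telemetry["tokens"] += tokens
-        self._telemetry["api_calls"] += 1
-        self._telemetry["cost"] += cost
-
-
-# Call sites would then pass the callback totals, e.g.:
-# review._meter_api_call_tokens(cb.total_tokens, cb.total_cost)
-```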
-
-**[examples/github/github_review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-78de2b9548d0316c55661aaf9b2222ad80a07012)**
-
-Based on the code diff, it seems that the author has made some changes to the `github_review.py` file. Here are my observations and suggestions:
-
-1. The author has commented out the lines that set the `repository_name_or_id` and `pull_request_number` variables for the "ClickHouse/ClickHouse" repository. It appears that the author wants to disable this repository for now. If this change is intentional, it is fine.
-
-2. The author has uncommented the lines that set the `repository_name_or_id` and `pull_request_number` variables for the "Arcadia822/codedog" repository and pull request number 2. If this change is intentional, it is fine.
-
-3. It is important to ensure that the correct repository and pull request number are set for the desired review. Please double-check that the "Arcadia822/codedog" repository and pull request number 2 are the intended targets for the review.
-
-Overall, the code change seems correct, assuming the author's intention is to stop reviewing "ClickHouse/ClickHouse" and instead review pull request number 2 in the "Arcadia822/codedog" repository. A minimal sketch of the change is shown below.
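-
-For illustration only — a minimal sketch of the example-script change described above. The variable names, values, and `build_pull_request_event` come from the diff summary; everything else (imports and surrounding code) is assumed:
-
-```python
-# Previous target, now commented out in the example script:
-# repository_name_or_id = "ClickHouse/ClickHouse"
-# pull_request_number = 49113
-
-# New target used for the demo review:
-repository_name_or_id = "Arcadia822/codedog"
-pull_request_number = 2
-
-# build_pull_request_event is defined/imported elsewhere in the example script.
-event = build_pull_request_event(repository_name_or_id, pull_request_number)
-```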
-
-
-
-
-"""
-
-
-about = """Codedog is used for a while in my team (reviewed about 2000 PRs.).
-Basically it's a service triggered by PR event and comment directly on the PR to give a pre human review.
-
-Comment report includes PR summary and code suggestions. Summary is great and time saving.
-But suggestions are not very valuable now.
-
-CR practice learned from: https://google.github.io/eng-practices/review/reviewer
-"""
-
-
-with gr.Blocks(theme="xiaobaiyuan/theme_brief") as demo:
- gr.Markdown("# Codedog - A pull reqeust review tool")
-
- gr.Markdown(
- """**Codedog is designed to save reviewer's time by providing useful information based on PR context.**
-
-- Github App (Rate limit is low): https://github.com/apps/codedog-assistant
-- Source Code: https://github.com/codedog-ai/codedog
-- Deploy as a service: https://github.com/codedog-ai/codedog/tree/master/examples
-- Feedback or showcase ❤️: https://github.com/codedog-ai/codedog/discussions
-"""
- )
-
- with gr.Tab(label="Try Yourself"):
- with gr.Row():
- custom_pr_url = gr.Textbox(
- max_lines=1,
- value="https://github.com/codedog-ai/codedog/pull/2",
- placeholder="Paste Github PR URL here",
- show_label=False,
- )
- with gr.Row():
- custom_submit = gr.Button(value="Review It")
- with gr.Row():
- with gr.Tab(label="Markdown"):
- custom_content = gr.Markdown(value=text)
- with gr.Tab(label="Raw"):
- custom_content_raw = gr.Textbox(
- value=text, show_label=False, lines=100, max_lines=500
- )
- custom_submit.click(
- request_pr_review,
- inputs=[custom_pr_url],
- outputs=[custom_content, custom_content_raw],
- )
-
- # with gr.Tab(label="Samples"):
- # sample_choice = gr.Radio(choices=sample_choices, type="index", show_label=False)
- # sample_content = gr.Markdown(value="")
-
- # sample_choice.input(
- # show_sample, inputs=[sample_choice], outputs=[sample_content]
- # )
-
- with gr.Tab(label="About"):
- gr.Markdown(value=about)
-
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c
deleted file mode 100644
index 2d4b9547185c4e95fc920869472e48034f2c657f..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c
+++ /dev/null
@@ -1,2101 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-static int FUNC(rbsp_trailing_bits)(CodedBitstreamContext *ctx, RWContext *rw)
-{
- int err;
-
- fixed(1, rbsp_stop_one_bit, 1);
- while (byte_alignment(rw) != 0)
- fixed(1, rbsp_alignment_zero_bit, 0);
-
- return 0;
-}
-
-static int FUNC(nal_unit_header)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawNALUnitHeader *current,
- int expected_nal_unit_type)
-{
- int err;
-
- fixed(1, forbidden_zero_bit, 0);
-
- if (expected_nal_unit_type >= 0)
- u(6, nal_unit_type, expected_nal_unit_type,
- expected_nal_unit_type);
- else
- ub(6, nal_unit_type);
-
- u(6, nuh_layer_id, 0, 62);
- u(3, nuh_temporal_id_plus1, 1, 7);
-
- return 0;
-}
-
-static int FUNC(byte_alignment)(CodedBitstreamContext *ctx, RWContext *rw)
-{
- int err;
-
- fixed(1, alignment_bit_equal_to_one, 1);
- while (byte_alignment(rw) != 0)
- fixed(1, alignment_bit_equal_to_zero, 0);
-
- return 0;
-}
-
-static int FUNC(extension_data)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawExtensionData *current)
-{
- int err;
- size_t k;
-#ifdef READ
- GetBitContext start;
- uint8_t bit;
- start = *rw;
- for (k = 0; cbs_h2645_read_more_rbsp_data(rw); k++)
- skip_bits(rw, 1);
- current->bit_length = k;
- if (k > 0) {
- *rw = start;
- allocate(current->data, (current->bit_length + 7) / 8);
- for (k = 0; k < current->bit_length; k++) {
- xu(1, extension_data, bit, 0, 1, 0);
- current->data[k / 8] |= bit << (7 - k % 8);
- }
- }
-#else
- for (k = 0; k < current->bit_length; k++)
- xu(1, extension_data, current->data[k / 8] >> (7 - k % 8) & 1, 0, 1, 0);
-#endif
- return 0;
-}
-
-static int FUNC(profile_tier_level)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawProfileTierLevel *current,
- int profile_present_flag,
- int max_num_sub_layers_minus1)
-{
- int err, i, j;
-
- if (profile_present_flag) {
- u(2, general_profile_space, 0, 0);
- flag(general_tier_flag);
- ub(5, general_profile_idc);
-
- for (j = 0; j < 32; j++)
- flags(general_profile_compatibility_flag[j], 1, j);
-
- flag(general_progressive_source_flag);
- flag(general_interlaced_source_flag);
- flag(general_non_packed_constraint_flag);
- flag(general_frame_only_constraint_flag);
-
-#define profile_compatible(x) (current->general_profile_idc == (x) || \
- current->general_profile_compatibility_flag[x])
- if (profile_compatible(4) || profile_compatible(5) ||
- profile_compatible(6) || profile_compatible(7) ||
- profile_compatible(8) || profile_compatible(9) ||
- profile_compatible(10) || profile_compatible(11)) {
- flag(general_max_12bit_constraint_flag);
- flag(general_max_10bit_constraint_flag);
- flag(general_max_8bit_constraint_flag);
- flag(general_max_422chroma_constraint_flag);
- flag(general_max_420chroma_constraint_flag);
- flag(general_max_monochrome_constraint_flag);
- flag(general_intra_constraint_flag);
- flag(general_one_picture_only_constraint_flag);
- flag(general_lower_bit_rate_constraint_flag);
-
- if (profile_compatible(5) || profile_compatible(9) ||
- profile_compatible(10) || profile_compatible(11)) {
- flag(general_max_14bit_constraint_flag);
- fixed(24, general_reserved_zero_33bits, 0);
- fixed( 9, general_reserved_zero_33bits, 0);
- } else {
- fixed(24, general_reserved_zero_34bits, 0);
- fixed(10, general_reserved_zero_34bits, 0);
- }
- } else if (profile_compatible(2)) {
- fixed(7, general_reserved_zero_7bits, 0);
- flag(general_one_picture_only_constraint_flag);
- fixed(24, general_reserved_zero_35bits, 0);
- fixed(11, general_reserved_zero_35bits, 0);
- } else {
- fixed(24, general_reserved_zero_43bits, 0);
- fixed(19, general_reserved_zero_43bits, 0);
- }
-
- if (profile_compatible(1) || profile_compatible(2) ||
- profile_compatible(3) || profile_compatible(4) ||
- profile_compatible(5) || profile_compatible(9) ||
- profile_compatible(11)) {
- flag(general_inbld_flag);
- } else {
- fixed(1, general_reserved_zero_bit, 0);
- }
-#undef profile_compatible
- }
-
- ub(8, general_level_idc);
-
- for (i = 0; i < max_num_sub_layers_minus1; i++) {
- flags(sub_layer_profile_present_flag[i], 1, i);
- flags(sub_layer_level_present_flag[i], 1, i);
- }
-
- if (max_num_sub_layers_minus1 > 0) {
- for (i = max_num_sub_layers_minus1; i < 8; i++)
- fixed(2, reserved_zero_2bits, 0);
- }
-
- for (i = 0; i < max_num_sub_layers_minus1; i++) {
- if (current->sub_layer_profile_present_flag[i]) {
- us(2, sub_layer_profile_space[i], 0, 0, 1, i);
- flags(sub_layer_tier_flag[i], 1, i);
- ubs(5, sub_layer_profile_idc[i], 1, i);
-
- for (j = 0; j < 32; j++)
- flags(sub_layer_profile_compatibility_flag[i][j], 2, i, j);
-
- flags(sub_layer_progressive_source_flag[i], 1, i);
- flags(sub_layer_interlaced_source_flag[i], 1, i);
- flags(sub_layer_non_packed_constraint_flag[i], 1, i);
- flags(sub_layer_frame_only_constraint_flag[i], 1, i);
-
-#define profile_compatible(x) (current->sub_layer_profile_idc[i] == (x) || \
- current->sub_layer_profile_compatibility_flag[i][x])
- if (profile_compatible(4) || profile_compatible(5) ||
- profile_compatible(6) || profile_compatible(7) ||
- profile_compatible(8) || profile_compatible(9) ||
- profile_compatible(10) || profile_compatible(11)) {
- flags(sub_layer_max_12bit_constraint_flag[i], 1, i);
- flags(sub_layer_max_10bit_constraint_flag[i], 1, i);
- flags(sub_layer_max_8bit_constraint_flag[i], 1, i);
- flags(sub_layer_max_422chroma_constraint_flag[i], 1, i);
- flags(sub_layer_max_420chroma_constraint_flag[i], 1, i);
- flags(sub_layer_max_monochrome_constraint_flag[i], 1, i);
- flags(sub_layer_intra_constraint_flag[i], 1, i);
- flags(sub_layer_one_picture_only_constraint_flag[i], 1, i);
- flags(sub_layer_lower_bit_rate_constraint_flag[i], 1, i);
-
- if (profile_compatible(5) || profile_compatible(9) ||
- profile_compatible(10) || profile_compatible(11)) {
- flags(sub_layer_max_14bit_constraint_flag[i], 1, i);
- fixed(24, sub_layer_reserved_zero_33bits, 0);
- fixed( 9, sub_layer_reserved_zero_33bits, 0);
- } else {
- fixed(24, sub_layer_reserved_zero_34bits, 0);
- fixed(10, sub_layer_reserved_zero_34bits, 0);
- }
- } else if (profile_compatible(2)) {
- fixed(7, sub_layer_reserved_zero_7bits, 0);
- flags(sub_layer_one_picture_only_constraint_flag[i], 1, i);
-                fixed(24, sub_layer_reserved_zero_35bits, 0);
-                fixed(11, sub_layer_reserved_zero_35bits, 0);
- } else {
- fixed(24, sub_layer_reserved_zero_43bits, 0);
- fixed(19, sub_layer_reserved_zero_43bits, 0);
- }
-
- if (profile_compatible(1) || profile_compatible(2) ||
- profile_compatible(3) || profile_compatible(4) ||
- profile_compatible(5) || profile_compatible(9) ||
- profile_compatible(11)) {
- flags(sub_layer_inbld_flag[i], 1, i);
- } else {
- fixed(1, sub_layer_reserved_zero_bit, 0);
- }
-#undef profile_compatible
- }
- if (current->sub_layer_level_present_flag[i])
- ubs(8, sub_layer_level_idc[i], 1, i);
- }
-
- return 0;
-}
-
-static int FUNC(sub_layer_hrd_parameters)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawHRDParameters *hrd,
- int nal, int sub_layer_id)
-{
- H265RawSubLayerHRDParameters *current;
- int err, i;
-
- if (nal)
- current = &hrd->nal_sub_layer_hrd_parameters[sub_layer_id];
- else
- current = &hrd->vcl_sub_layer_hrd_parameters[sub_layer_id];
-
- for (i = 0; i <= hrd->cpb_cnt_minus1[sub_layer_id]; i++) {
- ues(bit_rate_value_minus1[i], 0, UINT32_MAX - 1, 1, i);
- ues(cpb_size_value_minus1[i], 0, UINT32_MAX - 1, 1, i);
- if (hrd->sub_pic_hrd_params_present_flag) {
- ues(cpb_size_du_value_minus1[i], 0, UINT32_MAX - 1, 1, i);
- ues(bit_rate_du_value_minus1[i], 0, UINT32_MAX - 1, 1, i);
- }
- flags(cbr_flag[i], 1, i);
- }
-
- return 0;
-}
-
-static int FUNC(hrd_parameters)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawHRDParameters *current, int common_inf_present_flag,
- int max_num_sub_layers_minus1)
-{
- int err, i;
-
- if (common_inf_present_flag) {
- flag(nal_hrd_parameters_present_flag);
- flag(vcl_hrd_parameters_present_flag);
-
- if (current->nal_hrd_parameters_present_flag ||
- current->vcl_hrd_parameters_present_flag) {
- flag(sub_pic_hrd_params_present_flag);
- if (current->sub_pic_hrd_params_present_flag) {
- ub(8, tick_divisor_minus2);
- ub(5, du_cpb_removal_delay_increment_length_minus1);
- flag(sub_pic_cpb_params_in_pic_timing_sei_flag);
- ub(5, dpb_output_delay_du_length_minus1);
- }
-
- ub(4, bit_rate_scale);
- ub(4, cpb_size_scale);
- if (current->sub_pic_hrd_params_present_flag)
- ub(4, cpb_size_du_scale);
-
- ub(5, initial_cpb_removal_delay_length_minus1);
- ub(5, au_cpb_removal_delay_length_minus1);
- ub(5, dpb_output_delay_length_minus1);
- } else {
- infer(sub_pic_hrd_params_present_flag, 0);
-
- infer(initial_cpb_removal_delay_length_minus1, 23);
- infer(au_cpb_removal_delay_length_minus1, 23);
- infer(dpb_output_delay_length_minus1, 23);
- }
- }
-
- for (i = 0; i <= max_num_sub_layers_minus1; i++) {
- flags(fixed_pic_rate_general_flag[i], 1, i);
-
- if (!current->fixed_pic_rate_general_flag[i])
- flags(fixed_pic_rate_within_cvs_flag[i], 1, i);
- else
- infer(fixed_pic_rate_within_cvs_flag[i], 1);
-
- if (current->fixed_pic_rate_within_cvs_flag[i]) {
- ues(elemental_duration_in_tc_minus1[i], 0, 2047, 1, i);
- infer(low_delay_hrd_flag[i], 0);
- } else
- flags(low_delay_hrd_flag[i], 1, i);
-
- if (!current->low_delay_hrd_flag[i])
- ues(cpb_cnt_minus1[i], 0, 31, 1, i);
- else
- infer(cpb_cnt_minus1[i], 0);
-
- if (current->nal_hrd_parameters_present_flag)
- CHECK(FUNC(sub_layer_hrd_parameters)(ctx, rw, current, 0, i));
- if (current->vcl_hrd_parameters_present_flag)
- CHECK(FUNC(sub_layer_hrd_parameters)(ctx, rw, current, 1, i));
- }
-
- return 0;
-}
-
-static int FUNC(vui_parameters)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawVUI *current, const H265RawSPS *sps)
-{
- int err;
-
- flag(aspect_ratio_info_present_flag);
- if (current->aspect_ratio_info_present_flag) {
- ub(8, aspect_ratio_idc);
- if (current->aspect_ratio_idc == 255) {
- ub(16, sar_width);
- ub(16, sar_height);
- }
- } else {
- infer(aspect_ratio_idc, 0);
- }
-
- flag(overscan_info_present_flag);
- if (current->overscan_info_present_flag)
- flag(overscan_appropriate_flag);
-
- flag(video_signal_type_present_flag);
- if (current->video_signal_type_present_flag) {
- ub(3, video_format);
- flag(video_full_range_flag);
- flag(colour_description_present_flag);
- if (current->colour_description_present_flag) {
- ub(8, colour_primaries);
- ub(8, transfer_characteristics);
- ub(8, matrix_coefficients);
- } else {
- infer(colour_primaries, 2);
- infer(transfer_characteristics, 2);
- infer(matrix_coefficients, 2);
- }
- } else {
- infer(video_format, 5);
- infer(video_full_range_flag, 0);
- infer(colour_primaries, 2);
- infer(transfer_characteristics, 2);
- infer(matrix_coefficients, 2);
- }
-
- flag(chroma_loc_info_present_flag);
- if (current->chroma_loc_info_present_flag) {
- ue(chroma_sample_loc_type_top_field, 0, 5);
- ue(chroma_sample_loc_type_bottom_field, 0, 5);
- } else {
- infer(chroma_sample_loc_type_top_field, 0);
- infer(chroma_sample_loc_type_bottom_field, 0);
- }
-
- flag(neutral_chroma_indication_flag);
- flag(field_seq_flag);
- flag(frame_field_info_present_flag);
-
- flag(default_display_window_flag);
- if (current->default_display_window_flag) {
- ue(def_disp_win_left_offset, 0, 16384);
- ue(def_disp_win_right_offset, 0, 16384);
- ue(def_disp_win_top_offset, 0, 16384);
- ue(def_disp_win_bottom_offset, 0, 16384);
- }
-
- flag(vui_timing_info_present_flag);
- if (current->vui_timing_info_present_flag) {
- u(32, vui_num_units_in_tick, 1, UINT32_MAX);
- u(32, vui_time_scale, 1, UINT32_MAX);
- flag(vui_poc_proportional_to_timing_flag);
- if (current->vui_poc_proportional_to_timing_flag)
- ue(vui_num_ticks_poc_diff_one_minus1, 0, UINT32_MAX - 1);
-
- flag(vui_hrd_parameters_present_flag);
- if (current->vui_hrd_parameters_present_flag) {
- CHECK(FUNC(hrd_parameters)(ctx, rw, ¤t->hrd_parameters,
- 1, sps->sps_max_sub_layers_minus1));
- }
- }
-
- flag(bitstream_restriction_flag);
- if (current->bitstream_restriction_flag) {
- flag(tiles_fixed_structure_flag);
- flag(motion_vectors_over_pic_boundaries_flag);
- flag(restricted_ref_pic_lists_flag);
- ue(min_spatial_segmentation_idc, 0, 4095);
- ue(max_bytes_per_pic_denom, 0, 16);
- ue(max_bits_per_min_cu_denom, 0, 16);
- ue(log2_max_mv_length_horizontal, 0, 16);
- ue(log2_max_mv_length_vertical, 0, 16);
- } else {
- infer(tiles_fixed_structure_flag, 0);
- infer(motion_vectors_over_pic_boundaries_flag, 1);
- infer(min_spatial_segmentation_idc, 0);
- infer(max_bytes_per_pic_denom, 2);
- infer(max_bits_per_min_cu_denom, 1);
- infer(log2_max_mv_length_horizontal, 15);
- infer(log2_max_mv_length_vertical, 15);
- }
-
- return 0;
-}
-
-static int FUNC(vps)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawVPS *current)
-{
- int err, i, j;
-
- HEADER("Video Parameter Set");
-
- CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_VPS));
-
- ub(4, vps_video_parameter_set_id);
-
- flag(vps_base_layer_internal_flag);
- flag(vps_base_layer_available_flag);
- u(6, vps_max_layers_minus1, 0, HEVC_MAX_LAYERS - 1);
- u(3, vps_max_sub_layers_minus1, 0, HEVC_MAX_SUB_LAYERS - 1);
- flag(vps_temporal_id_nesting_flag);
-
- if (current->vps_max_sub_layers_minus1 == 0 &&
- current->vps_temporal_id_nesting_flag != 1) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: "
- "vps_temporal_id_nesting_flag must be 1 if "
- "vps_max_sub_layers_minus1 is 0.\n");
- return AVERROR_INVALIDDATA;
- }
-
- fixed(16, vps_reserved_0xffff_16bits, 0xffff);
-
- CHECK(FUNC(profile_tier_level)(ctx, rw, ¤t->profile_tier_level,
- 1, current->vps_max_sub_layers_minus1));
-
- flag(vps_sub_layer_ordering_info_present_flag);
- for (i = (current->vps_sub_layer_ordering_info_present_flag ?
- 0 : current->vps_max_sub_layers_minus1);
- i <= current->vps_max_sub_layers_minus1; i++) {
- ues(vps_max_dec_pic_buffering_minus1[i],
- 0, HEVC_MAX_DPB_SIZE - 1, 1, i);
- ues(vps_max_num_reorder_pics[i],
- 0, current->vps_max_dec_pic_buffering_minus1[i], 1, i);
- ues(vps_max_latency_increase_plus1[i],
- 0, UINT32_MAX - 1, 1, i);
- }
- if (!current->vps_sub_layer_ordering_info_present_flag) {
- for (i = 0; i < current->vps_max_sub_layers_minus1; i++) {
- infer(vps_max_dec_pic_buffering_minus1[i],
- current->vps_max_dec_pic_buffering_minus1[current->vps_max_sub_layers_minus1]);
- infer(vps_max_num_reorder_pics[i],
- current->vps_max_num_reorder_pics[current->vps_max_sub_layers_minus1]);
- infer(vps_max_latency_increase_plus1[i],
- current->vps_max_latency_increase_plus1[current->vps_max_sub_layers_minus1]);
- }
- }
-
- u(6, vps_max_layer_id, 0, HEVC_MAX_LAYERS - 1);
- ue(vps_num_layer_sets_minus1, 0, HEVC_MAX_LAYER_SETS - 1);
- for (i = 1; i <= current->vps_num_layer_sets_minus1; i++) {
- for (j = 0; j <= current->vps_max_layer_id; j++)
- flags(layer_id_included_flag[i][j], 2, i, j);
- }
- for (j = 0; j <= current->vps_max_layer_id; j++)
- infer(layer_id_included_flag[0][j], j == 0);
-
- flag(vps_timing_info_present_flag);
- if (current->vps_timing_info_present_flag) {
- u(32, vps_num_units_in_tick, 1, UINT32_MAX);
- u(32, vps_time_scale, 1, UINT32_MAX);
- flag(vps_poc_proportional_to_timing_flag);
- if (current->vps_poc_proportional_to_timing_flag)
- ue(vps_num_ticks_poc_diff_one_minus1, 0, UINT32_MAX - 1);
- ue(vps_num_hrd_parameters, 0, current->vps_num_layer_sets_minus1 + 1);
- for (i = 0; i < current->vps_num_hrd_parameters; i++) {
- ues(hrd_layer_set_idx[i],
- current->vps_base_layer_internal_flag ? 0 : 1,
- current->vps_num_layer_sets_minus1, 1, i);
- if (i > 0)
- flags(cprms_present_flag[i], 1, i);
- else
- infer(cprms_present_flag[0], 1);
-
- CHECK(FUNC(hrd_parameters)(ctx, rw, ¤t->hrd_parameters[i],
- current->cprms_present_flag[i],
- current->vps_max_sub_layers_minus1));
- }
- }
-
- flag(vps_extension_flag);
- if (current->vps_extension_flag)
- CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data));
-
- CHECK(FUNC(rbsp_trailing_bits)(ctx, rw));
-
- return 0;
-}
-
-static int FUNC(st_ref_pic_set)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSTRefPicSet *current, int st_rps_idx,
- const H265RawSPS *sps)
-{
- int err, i, j;
-
- if (st_rps_idx != 0)
- flag(inter_ref_pic_set_prediction_flag);
- else
- infer(inter_ref_pic_set_prediction_flag, 0);
-
- if (current->inter_ref_pic_set_prediction_flag) {
- unsigned int ref_rps_idx, num_delta_pocs, num_ref_pics;
- const H265RawSTRefPicSet *ref;
- int delta_rps, d_poc;
- int ref_delta_poc_s0[HEVC_MAX_REFS], ref_delta_poc_s1[HEVC_MAX_REFS];
- int delta_poc_s0[HEVC_MAX_REFS], delta_poc_s1[HEVC_MAX_REFS];
- uint8_t used_by_curr_pic_s0[HEVC_MAX_REFS],
- used_by_curr_pic_s1[HEVC_MAX_REFS];
-
- if (st_rps_idx == sps->num_short_term_ref_pic_sets)
- ue(delta_idx_minus1, 0, st_rps_idx - 1);
- else
- infer(delta_idx_minus1, 0);
-
- ref_rps_idx = st_rps_idx - (current->delta_idx_minus1 + 1);
- ref = &sps->st_ref_pic_set[ref_rps_idx];
- num_delta_pocs = ref->num_negative_pics + ref->num_positive_pics;
- av_assert0(num_delta_pocs < HEVC_MAX_DPB_SIZE);
-
- flag(delta_rps_sign);
- ue(abs_delta_rps_minus1, 0, INT16_MAX);
- delta_rps = (1 - 2 * current->delta_rps_sign) *
- (current->abs_delta_rps_minus1 + 1);
-
- num_ref_pics = 0;
- for (j = 0; j <= num_delta_pocs; j++) {
- flags(used_by_curr_pic_flag[j], 1, j);
- if (!current->used_by_curr_pic_flag[j])
- flags(use_delta_flag[j], 1, j);
- else
- infer(use_delta_flag[j], 1);
- if (current->use_delta_flag[j])
- ++num_ref_pics;
- }
- if (num_ref_pics >= HEVC_MAX_DPB_SIZE) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: "
- "short-term ref pic set %d "
- "contains too many pictures.\n", st_rps_idx);
- return AVERROR_INVALIDDATA;
- }
-
- // Since the stored form of an RPS here is actually the delta-step
- // form used when inter_ref_pic_set_prediction_flag is not set, we
- // need to reconstruct that here in order to be able to refer to
- // the RPS later (which is required for parsing, because we don't
- // even know what syntax elements appear without it). Therefore,
- // this code takes the delta-step form of the reference set, turns
- // it into the delta-array form, applies the prediction process of
- // 7.4.8, converts the result back to the delta-step form, and
- // stores that as the current set for future use. Note that the
- // inferences here mean that writers using prediction will need
- // to fill in the delta-step values correctly as well - since the
- // whole RPS prediction process is somewhat overly sophisticated,
- // this hopefully forms a useful check for them to ensure their
- // predicted form actually matches what was intended rather than
- // an onerous additional requirement.
-
- d_poc = 0;
- for (i = 0; i < ref->num_negative_pics; i++) {
- d_poc -= ref->delta_poc_s0_minus1[i] + 1;
- ref_delta_poc_s0[i] = d_poc;
- }
- d_poc = 0;
- for (i = 0; i < ref->num_positive_pics; i++) {
- d_poc += ref->delta_poc_s1_minus1[i] + 1;
- ref_delta_poc_s1[i] = d_poc;
- }
-
- i = 0;
- for (j = ref->num_positive_pics - 1; j >= 0; j--) {
- d_poc = ref_delta_poc_s1[j] + delta_rps;
- if (d_poc < 0 && current->use_delta_flag[ref->num_negative_pics + j]) {
- delta_poc_s0[i] = d_poc;
- used_by_curr_pic_s0[i++] =
- current->used_by_curr_pic_flag[ref->num_negative_pics + j];
- }
- }
- if (delta_rps < 0 && current->use_delta_flag[num_delta_pocs]) {
- delta_poc_s0[i] = delta_rps;
- used_by_curr_pic_s0[i++] =
- current->used_by_curr_pic_flag[num_delta_pocs];
- }
- for (j = 0; j < ref->num_negative_pics; j++) {
- d_poc = ref_delta_poc_s0[j] + delta_rps;
- if (d_poc < 0 && current->use_delta_flag[j]) {
- delta_poc_s0[i] = d_poc;
- used_by_curr_pic_s0[i++] = current->used_by_curr_pic_flag[j];
- }
- }
-
- infer(num_negative_pics, i);
- for (i = 0; i < current->num_negative_pics; i++) {
- infer(delta_poc_s0_minus1[i],
- -(delta_poc_s0[i] - (i == 0 ? 0 : delta_poc_s0[i - 1])) - 1);
- infer(used_by_curr_pic_s0_flag[i], used_by_curr_pic_s0[i]);
- }
-
- i = 0;
- for (j = ref->num_negative_pics - 1; j >= 0; j--) {
- d_poc = ref_delta_poc_s0[j] + delta_rps;
- if (d_poc > 0 && current->use_delta_flag[j]) {
- delta_poc_s1[i] = d_poc;
- used_by_curr_pic_s1[i++] = current->used_by_curr_pic_flag[j];
- }
- }
- if (delta_rps > 0 && current->use_delta_flag[num_delta_pocs]) {
- delta_poc_s1[i] = delta_rps;
- used_by_curr_pic_s1[i++] =
- current->used_by_curr_pic_flag[num_delta_pocs];
- }
- for (j = 0; j < ref->num_positive_pics; j++) {
- d_poc = ref_delta_poc_s1[j] + delta_rps;
- if (d_poc > 0 && current->use_delta_flag[ref->num_negative_pics + j]) {
- delta_poc_s1[i] = d_poc;
- used_by_curr_pic_s1[i++] =
- current->used_by_curr_pic_flag[ref->num_negative_pics + j];
- }
- }
-
- infer(num_positive_pics, i);
- for (i = 0; i < current->num_positive_pics; i++) {
- infer(delta_poc_s1_minus1[i],
- delta_poc_s1[i] - (i == 0 ? 0 : delta_poc_s1[i - 1]) - 1);
- infer(used_by_curr_pic_s1_flag[i], used_by_curr_pic_s1[i]);
- }
-
- } else {
- ue(num_negative_pics, 0, 15);
- ue(num_positive_pics, 0, 15 - current->num_negative_pics);
-
- for (i = 0; i < current->num_negative_pics; i++) {
- ues(delta_poc_s0_minus1[i], 0, INT16_MAX, 1, i);
- flags(used_by_curr_pic_s0_flag[i], 1, i);
- }
-
- for (i = 0; i < current->num_positive_pics; i++) {
- ues(delta_poc_s1_minus1[i], 0, INT16_MAX, 1, i);
- flags(used_by_curr_pic_s1_flag[i], 1, i);
- }
- }
-
- return 0;
-}
-
-static int FUNC(scaling_list_data)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawScalingList *current)
-{
- int sizeId, matrixId;
- int err, n, i;
-
- for (sizeId = 0; sizeId < 4; sizeId++) {
- for (matrixId = 0; matrixId < 6; matrixId += (sizeId == 3 ? 3 : 1)) {
- flags(scaling_list_pred_mode_flag[sizeId][matrixId],
- 2, sizeId, matrixId);
- if (!current->scaling_list_pred_mode_flag[sizeId][matrixId]) {
- ues(scaling_list_pred_matrix_id_delta[sizeId][matrixId],
- 0, sizeId == 3 ? matrixId / 3 : matrixId,
- 2, sizeId, matrixId);
- } else {
- n = FFMIN(64, 1 << (4 + (sizeId << 1)));
- if (sizeId > 1) {
- ses(scaling_list_dc_coef_minus8[sizeId - 2][matrixId], -7, +247,
- 2, sizeId - 2, matrixId);
- }
- for (i = 0; i < n; i++) {
- ses(scaling_list_delta_coeff[sizeId][matrixId][i],
- -128, +127, 3, sizeId, matrixId, i);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int FUNC(sps_range_extension)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSPS *current)
-{
- int err;
-
- flag(transform_skip_rotation_enabled_flag);
- flag(transform_skip_context_enabled_flag);
- flag(implicit_rdpcm_enabled_flag);
- flag(explicit_rdpcm_enabled_flag);
- flag(extended_precision_processing_flag);
- flag(intra_smoothing_disabled_flag);
- flag(high_precision_offsets_enabled_flag);
- flag(persistent_rice_adaptation_enabled_flag);
- flag(cabac_bypass_alignment_enabled_flag);
-
- return 0;
-}
-
-static int FUNC(sps_scc_extension)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSPS *current)
-{
- int err, comp, i;
-
- flag(sps_curr_pic_ref_enabled_flag);
-
- flag(palette_mode_enabled_flag);
- if (current->palette_mode_enabled_flag) {
- ue(palette_max_size, 0, 64);
- ue(delta_palette_max_predictor_size, 0, 128);
-
- flag(sps_palette_predictor_initializer_present_flag);
- if (current->sps_palette_predictor_initializer_present_flag) {
- ue(sps_num_palette_predictor_initializer_minus1, 0, 127);
- for (comp = 0; comp < (current->chroma_format_idc ? 3 : 1); comp++) {
- int bit_depth = comp == 0 ? current->bit_depth_luma_minus8 + 8
- : current->bit_depth_chroma_minus8 + 8;
- for (i = 0; i <= current->sps_num_palette_predictor_initializer_minus1; i++)
- ubs(bit_depth, sps_palette_predictor_initializers[comp][i], 2, comp, i);
- }
- }
- }
-
- u(2, motion_vector_resolution_control_idc, 0, 2);
- flag(intra_boundary_filtering_disable_flag);
-
- return 0;
-}
-
-static int FUNC(vui_parameters_default)(CodedBitstreamContext *ctx,
- RWContext *rw, H265RawVUI *current,
- H265RawSPS *sps)
-{
- infer(aspect_ratio_idc, 0);
-
- infer(video_format, 5);
- infer(video_full_range_flag, 0);
- infer(colour_primaries, 2);
- infer(transfer_characteristics, 2);
- infer(matrix_coefficients, 2);
-
- infer(chroma_sample_loc_type_top_field, 0);
- infer(chroma_sample_loc_type_bottom_field, 0);
-
- infer(tiles_fixed_structure_flag, 0);
- infer(motion_vectors_over_pic_boundaries_flag, 1);
- infer(min_spatial_segmentation_idc, 0);
- infer(max_bytes_per_pic_denom, 2);
- infer(max_bits_per_min_cu_denom, 1);
- infer(log2_max_mv_length_horizontal, 15);
- infer(log2_max_mv_length_vertical, 15);
-
- return 0;
-}
-
-static int FUNC(sps)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSPS *current)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawVPS *vps;
- int err, i;
- unsigned int min_cb_log2_size_y, ctb_log2_size_y,
- min_cb_size_y, min_tb_log2_size_y;
-
- HEADER("Sequence Parameter Set");
-
- CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_SPS));
-
- ub(4, sps_video_parameter_set_id);
- h265->active_vps = vps = h265->vps[current->sps_video_parameter_set_id];
-
- u(3, sps_max_sub_layers_minus1, 0, HEVC_MAX_SUB_LAYERS - 1);
- flag(sps_temporal_id_nesting_flag);
- if (vps) {
- if (vps->vps_max_sub_layers_minus1 > current->sps_max_sub_layers_minus1) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: "
- "sps_max_sub_layers_minus1 (%d) must be less than or equal to "
- "vps_max_sub_layers_minus1 (%d).\n",
- vps->vps_max_sub_layers_minus1,
- current->sps_max_sub_layers_minus1);
- return AVERROR_INVALIDDATA;
- }
- if (vps->vps_temporal_id_nesting_flag &&
- !current->sps_temporal_id_nesting_flag) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: "
- "sps_temporal_id_nesting_flag must be 1 if "
- "vps_temporal_id_nesting_flag is 1.\n");
- return AVERROR_INVALIDDATA;
- }
- }
-
- CHECK(FUNC(profile_tier_level)(ctx, rw, ¤t->profile_tier_level,
- 1, current->sps_max_sub_layers_minus1));
-
- ue(sps_seq_parameter_set_id, 0, 15);
-
- ue(chroma_format_idc, 0, 3);
- if (current->chroma_format_idc == 3)
- flag(separate_colour_plane_flag);
- else
- infer(separate_colour_plane_flag, 0);
-
- ue(pic_width_in_luma_samples, 1, HEVC_MAX_WIDTH);
- ue(pic_height_in_luma_samples, 1, HEVC_MAX_HEIGHT);
-
- flag(conformance_window_flag);
- if (current->conformance_window_flag) {
- ue(conf_win_left_offset, 0, current->pic_width_in_luma_samples);
- ue(conf_win_right_offset, 0, current->pic_width_in_luma_samples);
- ue(conf_win_top_offset, 0, current->pic_height_in_luma_samples);
- ue(conf_win_bottom_offset, 0, current->pic_height_in_luma_samples);
- } else {
- infer(conf_win_left_offset, 0);
- infer(conf_win_right_offset, 0);
- infer(conf_win_top_offset, 0);
- infer(conf_win_bottom_offset, 0);
- }
-
- ue(bit_depth_luma_minus8, 0, 8);
- ue(bit_depth_chroma_minus8, 0, 8);
-
- ue(log2_max_pic_order_cnt_lsb_minus4, 0, 12);
-
- flag(sps_sub_layer_ordering_info_present_flag);
- for (i = (current->sps_sub_layer_ordering_info_present_flag ?
- 0 : current->sps_max_sub_layers_minus1);
- i <= current->sps_max_sub_layers_minus1; i++) {
- ues(sps_max_dec_pic_buffering_minus1[i],
- 0, HEVC_MAX_DPB_SIZE - 1, 1, i);
- ues(sps_max_num_reorder_pics[i],
- 0, current->sps_max_dec_pic_buffering_minus1[i], 1, i);
- ues(sps_max_latency_increase_plus1[i],
- 0, UINT32_MAX - 1, 1, i);
- }
- if (!current->sps_sub_layer_ordering_info_present_flag) {
- for (i = 0; i < current->sps_max_sub_layers_minus1; i++) {
- infer(sps_max_dec_pic_buffering_minus1[i],
- current->sps_max_dec_pic_buffering_minus1[current->sps_max_sub_layers_minus1]);
- infer(sps_max_num_reorder_pics[i],
- current->sps_max_num_reorder_pics[current->sps_max_sub_layers_minus1]);
- infer(sps_max_latency_increase_plus1[i],
- current->sps_max_latency_increase_plus1[current->sps_max_sub_layers_minus1]);
- }
- }
-
- ue(log2_min_luma_coding_block_size_minus3, 0, 3);
- min_cb_log2_size_y = current->log2_min_luma_coding_block_size_minus3 + 3;
-
- ue(log2_diff_max_min_luma_coding_block_size, 0, 3);
- ctb_log2_size_y = min_cb_log2_size_y +
- current->log2_diff_max_min_luma_coding_block_size;
-
- min_cb_size_y = 1 << min_cb_log2_size_y;
- if (current->pic_width_in_luma_samples % min_cb_size_y ||
- current->pic_height_in_luma_samples % min_cb_size_y) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid dimensions: %ux%u not divisible "
- "by MinCbSizeY = %u.\n", current->pic_width_in_luma_samples,
- current->pic_height_in_luma_samples, min_cb_size_y);
- return AVERROR_INVALIDDATA;
- }
-
- ue(log2_min_luma_transform_block_size_minus2, 0, min_cb_log2_size_y - 3);
- min_tb_log2_size_y = current->log2_min_luma_transform_block_size_minus2 + 2;
-
- ue(log2_diff_max_min_luma_transform_block_size,
- 0, FFMIN(ctb_log2_size_y, 5) - min_tb_log2_size_y);
-
- ue(max_transform_hierarchy_depth_inter,
- 0, ctb_log2_size_y - min_tb_log2_size_y);
- ue(max_transform_hierarchy_depth_intra,
- 0, ctb_log2_size_y - min_tb_log2_size_y);
-
- flag(scaling_list_enabled_flag);
- if (current->scaling_list_enabled_flag) {
- flag(sps_scaling_list_data_present_flag);
- if (current->sps_scaling_list_data_present_flag)
- CHECK(FUNC(scaling_list_data)(ctx, rw, ¤t->scaling_list));
- } else {
- infer(sps_scaling_list_data_present_flag, 0);
- }
-
- flag(amp_enabled_flag);
- flag(sample_adaptive_offset_enabled_flag);
-
- flag(pcm_enabled_flag);
- if (current->pcm_enabled_flag) {
- u(4, pcm_sample_bit_depth_luma_minus1,
- 0, current->bit_depth_luma_minus8 + 8 - 1);
- u(4, pcm_sample_bit_depth_chroma_minus1,
- 0, current->bit_depth_chroma_minus8 + 8 - 1);
-
- ue(log2_min_pcm_luma_coding_block_size_minus3,
- FFMIN(min_cb_log2_size_y, 5) - 3, FFMIN(ctb_log2_size_y, 5) - 3);
- ue(log2_diff_max_min_pcm_luma_coding_block_size,
- 0, FFMIN(ctb_log2_size_y, 5) - (current->log2_min_pcm_luma_coding_block_size_minus3 + 3));
-
- flag(pcm_loop_filter_disabled_flag);
- }
-
- ue(num_short_term_ref_pic_sets, 0, HEVC_MAX_SHORT_TERM_REF_PIC_SETS);
- for (i = 0; i < current->num_short_term_ref_pic_sets; i++)
- CHECK(FUNC(st_ref_pic_set)(ctx, rw, ¤t->st_ref_pic_set[i], i, current));
-
- flag(long_term_ref_pics_present_flag);
- if (current->long_term_ref_pics_present_flag) {
- ue(num_long_term_ref_pics_sps, 0, HEVC_MAX_LONG_TERM_REF_PICS);
- for (i = 0; i < current->num_long_term_ref_pics_sps; i++) {
- ubs(current->log2_max_pic_order_cnt_lsb_minus4 + 4,
- lt_ref_pic_poc_lsb_sps[i], 1, i);
- flags(used_by_curr_pic_lt_sps_flag[i], 1, i);
- }
- }
-
- flag(sps_temporal_mvp_enabled_flag);
- flag(strong_intra_smoothing_enabled_flag);
-
- flag(vui_parameters_present_flag);
- if (current->vui_parameters_present_flag)
- CHECK(FUNC(vui_parameters)(ctx, rw, ¤t->vui, current));
- else
- CHECK(FUNC(vui_parameters_default)(ctx, rw, ¤t->vui, current));
-
- flag(sps_extension_present_flag);
- if (current->sps_extension_present_flag) {
- flag(sps_range_extension_flag);
- flag(sps_multilayer_extension_flag);
- flag(sps_3d_extension_flag);
- flag(sps_scc_extension_flag);
- ub(4, sps_extension_4bits);
- }
-
- if (current->sps_range_extension_flag)
- CHECK(FUNC(sps_range_extension)(ctx, rw, current));
- if (current->sps_multilayer_extension_flag)
- return AVERROR_PATCHWELCOME;
- if (current->sps_3d_extension_flag)
- return AVERROR_PATCHWELCOME;
- if (current->sps_scc_extension_flag)
- CHECK(FUNC(sps_scc_extension)(ctx, rw, current));
- if (current->sps_extension_4bits)
- CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data));
-
- CHECK(FUNC(rbsp_trailing_bits)(ctx, rw));
-
- return 0;
-}
-
-static int FUNC(pps_range_extension)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawPPS *current)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps = h265->active_sps;
- int err, i;
-
- if (current->transform_skip_enabled_flag)
- ue(log2_max_transform_skip_block_size_minus2, 0, 3);
- flag(cross_component_prediction_enabled_flag);
-
- flag(chroma_qp_offset_list_enabled_flag);
- if (current->chroma_qp_offset_list_enabled_flag) {
- ue(diff_cu_chroma_qp_offset_depth,
- 0, sps->log2_diff_max_min_luma_coding_block_size);
- ue(chroma_qp_offset_list_len_minus1, 0, 5);
- for (i = 0; i <= current->chroma_qp_offset_list_len_minus1; i++) {
- ses(cb_qp_offset_list[i], -12, +12, 1, i);
- ses(cr_qp_offset_list[i], -12, +12, 1, i);
- }
- }
-
- ue(log2_sao_offset_scale_luma, 0, FFMAX(0, sps->bit_depth_luma_minus8 - 2));
- ue(log2_sao_offset_scale_chroma, 0, FFMAX(0, sps->bit_depth_chroma_minus8 - 2));
-
- return 0;
-}
-
-static int FUNC(pps_scc_extension)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawPPS *current)
-{
- int err, comp, i;
-
- flag(pps_curr_pic_ref_enabled_flag);
-
- flag(residual_adaptive_colour_transform_enabled_flag);
- if (current->residual_adaptive_colour_transform_enabled_flag) {
- flag(pps_slice_act_qp_offsets_present_flag);
- se(pps_act_y_qp_offset_plus5, -7, +17);
- se(pps_act_cb_qp_offset_plus5, -7, +17);
- se(pps_act_cr_qp_offset_plus3, -9, +15);
- } else {
- infer(pps_slice_act_qp_offsets_present_flag, 0);
- infer(pps_act_y_qp_offset_plus5, 0);
- infer(pps_act_cb_qp_offset_plus5, 0);
- infer(pps_act_cr_qp_offset_plus3, 0);
- }
-
- flag(pps_palette_predictor_initializer_present_flag);
- if (current->pps_palette_predictor_initializer_present_flag) {
- ue(pps_num_palette_predictor_initializer, 0, 128);
- if (current->pps_num_palette_predictor_initializer > 0) {
- flag(monochrome_palette_flag);
- ue(luma_bit_depth_entry_minus8, 0, 8);
- if (!current->monochrome_palette_flag)
- ue(chroma_bit_depth_entry_minus8, 0, 8);
- for (comp = 0; comp < (current->monochrome_palette_flag ? 1 : 3); comp++) {
- int bit_depth = comp == 0 ? current->luma_bit_depth_entry_minus8 + 8
- : current->chroma_bit_depth_entry_minus8 + 8;
- for (i = 0; i < current->pps_num_palette_predictor_initializer; i++)
- ubs(bit_depth, pps_palette_predictor_initializers[comp][i], 2, comp, i);
- }
- }
- }
-
- return 0;
-}
-
-static int FUNC(pps)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawPPS *current)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps;
- int err, i;
-
- HEADER("Picture Parameter Set");
-
- CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_PPS));
-
- ue(pps_pic_parameter_set_id, 0, 63);
- ue(pps_seq_parameter_set_id, 0, 15);
- sps = h265->sps[current->pps_seq_parameter_set_id];
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n",
- current->pps_seq_parameter_set_id);
- return AVERROR_INVALIDDATA;
- }
- h265->active_sps = sps;
-
- flag(dependent_slice_segments_enabled_flag);
- flag(output_flag_present_flag);
- ub(3, num_extra_slice_header_bits);
- flag(sign_data_hiding_enabled_flag);
- flag(cabac_init_present_flag);
-
- ue(num_ref_idx_l0_default_active_minus1, 0, 14);
- ue(num_ref_idx_l1_default_active_minus1, 0, 14);
-
- se(init_qp_minus26, -(26 + 6 * sps->bit_depth_luma_minus8), +25);
-
- flag(constrained_intra_pred_flag);
- flag(transform_skip_enabled_flag);
- flag(cu_qp_delta_enabled_flag);
- if (current->cu_qp_delta_enabled_flag)
- ue(diff_cu_qp_delta_depth,
- 0, sps->log2_diff_max_min_luma_coding_block_size);
- else
- infer(diff_cu_qp_delta_depth, 0);
-
- se(pps_cb_qp_offset, -12, +12);
- se(pps_cr_qp_offset, -12, +12);
- flag(pps_slice_chroma_qp_offsets_present_flag);
-
- flag(weighted_pred_flag);
- flag(weighted_bipred_flag);
-
- flag(transquant_bypass_enabled_flag);
- flag(tiles_enabled_flag);
- flag(entropy_coding_sync_enabled_flag);
-
- if (current->tiles_enabled_flag) {
- ue(num_tile_columns_minus1, 0, HEVC_MAX_TILE_COLUMNS);
- ue(num_tile_rows_minus1, 0, HEVC_MAX_TILE_ROWS);
- flag(uniform_spacing_flag);
- if (!current->uniform_spacing_flag) {
- for (i = 0; i < current->num_tile_columns_minus1; i++)
- ues(column_width_minus1[i], 0, sps->pic_width_in_luma_samples, 1, i);
- for (i = 0; i < current->num_tile_rows_minus1; i++)
- ues(row_height_minus1[i], 0, sps->pic_height_in_luma_samples, 1, i);
- }
- flag(loop_filter_across_tiles_enabled_flag);
- } else {
- infer(num_tile_columns_minus1, 0);
- infer(num_tile_rows_minus1, 0);
- }
-
- flag(pps_loop_filter_across_slices_enabled_flag);
- flag(deblocking_filter_control_present_flag);
- if (current->deblocking_filter_control_present_flag) {
- flag(deblocking_filter_override_enabled_flag);
- flag(pps_deblocking_filter_disabled_flag);
- if (!current->pps_deblocking_filter_disabled_flag) {
- se(pps_beta_offset_div2, -6, +6);
- se(pps_tc_offset_div2, -6, +6);
- } else {
- infer(pps_beta_offset_div2, 0);
- infer(pps_tc_offset_div2, 0);
- }
- } else {
- infer(deblocking_filter_override_enabled_flag, 0);
- infer(pps_deblocking_filter_disabled_flag, 0);
- infer(pps_beta_offset_div2, 0);
- infer(pps_tc_offset_div2, 0);
- }
-
- flag(pps_scaling_list_data_present_flag);
- if (current->pps_scaling_list_data_present_flag)
- CHECK(FUNC(scaling_list_data)(ctx, rw, ¤t->scaling_list));
-
- flag(lists_modification_present_flag);
-
- ue(log2_parallel_merge_level_minus2,
- 0, (sps->log2_min_luma_coding_block_size_minus3 + 3 +
- sps->log2_diff_max_min_luma_coding_block_size - 2));
-
- flag(slice_segment_header_extension_present_flag);
-
- flag(pps_extension_present_flag);
- if (current->pps_extension_present_flag) {
- flag(pps_range_extension_flag);
- flag(pps_multilayer_extension_flag);
- flag(pps_3d_extension_flag);
- flag(pps_scc_extension_flag);
- ub(4, pps_extension_4bits);
- }
- if (current->pps_range_extension_flag)
- CHECK(FUNC(pps_range_extension)(ctx, rw, current));
- if (current->pps_multilayer_extension_flag)
- return AVERROR_PATCHWELCOME;
- if (current->pps_3d_extension_flag)
- return AVERROR_PATCHWELCOME;
- if (current->pps_scc_extension_flag)
- CHECK(FUNC(pps_scc_extension)(ctx, rw, current));
- if (current->pps_extension_4bits)
- CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data));
-
- CHECK(FUNC(rbsp_trailing_bits)(ctx, rw));
-
- return 0;
-}
-
-static int FUNC(aud)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawAUD *current)
-{
- int err;
-
- HEADER("Access Unit Delimiter");
-
- CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_AUD));
-
- u(3, pic_type, 0, 2);
-
- CHECK(FUNC(rbsp_trailing_bits)(ctx, rw));
-
- return 0;
-}
-
-static int FUNC(ref_pic_lists_modification)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSliceHeader *current,
- unsigned int num_pic_total_curr)
-{
- unsigned int entry_size;
- int err, i;
-
- entry_size = av_log2(num_pic_total_curr - 1) + 1;
-
- flag(ref_pic_list_modification_flag_l0);
- if (current->ref_pic_list_modification_flag_l0) {
- for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++)
- us(entry_size, list_entry_l0[i], 0, num_pic_total_curr - 1, 1, i);
- }
-
- if (current->slice_type == HEVC_SLICE_B) {
- flag(ref_pic_list_modification_flag_l1);
- if (current->ref_pic_list_modification_flag_l1) {
- for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++)
- us(entry_size, list_entry_l1[i], 0, num_pic_total_curr - 1, 1, i);
- }
- }
-
- return 0;
-}
-
-static int FUNC(pred_weight_table)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSliceHeader *current)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps = h265->active_sps;
- int err, i, j;
- int chroma = !sps->separate_colour_plane_flag &&
- sps->chroma_format_idc != 0;
-
- ue(luma_log2_weight_denom, 0, 7);
- if (chroma)
- se(delta_chroma_log2_weight_denom, -7, 7);
- else
- infer(delta_chroma_log2_weight_denom, 0);
-
- for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) {
- if (1 /* is not same POC and same layer_id */)
- flags(luma_weight_l0_flag[i], 1, i);
- else
- infer(luma_weight_l0_flag[i], 0);
- }
- if (chroma) {
- for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) {
- if (1 /* is not same POC and same layer_id */)
- flags(chroma_weight_l0_flag[i], 1, i);
- else
- infer(chroma_weight_l0_flag[i], 0);
- }
- }
-
- for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) {
- if (current->luma_weight_l0_flag[i]) {
- ses(delta_luma_weight_l0[i], -128, +127, 1, i);
- ses(luma_offset_l0[i],
- -(1 << (sps->bit_depth_luma_minus8 + 8 - 1)),
- ((1 << (sps->bit_depth_luma_minus8 + 8 - 1)) - 1), 1, i);
- } else {
- infer(delta_luma_weight_l0[i], 0);
- infer(luma_offset_l0[i], 0);
- }
- if (current->chroma_weight_l0_flag[i]) {
- for (j = 0; j < 2; j++) {
- ses(delta_chroma_weight_l0[i][j], -128, +127, 2, i, j);
- ses(chroma_offset_l0[i][j],
- -(4 << (sps->bit_depth_chroma_minus8 + 8 - 1)),
- ((4 << (sps->bit_depth_chroma_minus8 + 8 - 1)) - 1), 2, i, j);
- }
- } else {
- for (j = 0; j < 2; j++) {
- infer(delta_chroma_weight_l0[i][j], 0);
- infer(chroma_offset_l0[i][j], 0);
- }
- }
- }
-
- if (current->slice_type == HEVC_SLICE_B) {
- for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) {
- if (1 /* RefPicList1[i] is not CurrPic, nor is it in a different layer */)
- flags(luma_weight_l1_flag[i], 1, i);
- else
- infer(luma_weight_l1_flag[i], 0);
- }
- if (chroma) {
- for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) {
- if (1 /* RefPicList1[i] is not CurrPic, nor is it in a different layer */)
- flags(chroma_weight_l1_flag[i], 1, i);
- else
- infer(chroma_weight_l1_flag[i], 0);
- }
- }
-
- for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) {
- if (current->luma_weight_l1_flag[i]) {
- ses(delta_luma_weight_l1[i], -128, +127, 1, i);
- ses(luma_offset_l1[i],
- -(1 << (sps->bit_depth_luma_minus8 + 8 - 1)),
- ((1 << (sps->bit_depth_luma_minus8 + 8 - 1)) - 1), 1, i);
- } else {
- infer(delta_luma_weight_l1[i], 0);
- infer(luma_offset_l1[i], 0);
- }
- if (current->chroma_weight_l1_flag[i]) {
- for (j = 0; j < 2; j++) {
- ses(delta_chroma_weight_l1[i][j], -128, +127, 2, i, j);
- ses(chroma_offset_l1[i][j],
- -(4 << (sps->bit_depth_chroma_minus8 + 8 - 1)),
- ((4 << (sps->bit_depth_chroma_minus8 + 8 - 1)) - 1), 2, i, j);
- }
- } else {
- for (j = 0; j < 2; j++) {
- infer(delta_chroma_weight_l1[i][j], 0);
- infer(chroma_offset_l1[i][j], 0);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int FUNC(slice_segment_header)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSliceHeader *current)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps;
- const H265RawPPS *pps;
- unsigned int min_cb_log2_size_y, ctb_log2_size_y, ctb_size_y;
- unsigned int pic_width_in_ctbs_y, pic_height_in_ctbs_y, pic_size_in_ctbs_y;
- unsigned int num_pic_total_curr = 0;
- int err, i;
-
- HEADER("Slice Segment Header");
-
- CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, -1));
-
- flag(first_slice_segment_in_pic_flag);
-
- if (current->nal_unit_header.nal_unit_type >= HEVC_NAL_BLA_W_LP &&
- current->nal_unit_header.nal_unit_type <= HEVC_NAL_RSV_IRAP_VCL23)
- flag(no_output_of_prior_pics_flag);
-
- ue(slice_pic_parameter_set_id, 0, 63);
-
- pps = h265->pps[current->slice_pic_parameter_set_id];
- if (!pps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "PPS id %d not available.\n",
- current->slice_pic_parameter_set_id);
- return AVERROR_INVALIDDATA;
- }
- h265->active_pps = pps;
-
- sps = h265->sps[pps->pps_seq_parameter_set_id];
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n",
- pps->pps_seq_parameter_set_id);
- return AVERROR_INVALIDDATA;
- }
- h265->active_sps = sps;
-
- min_cb_log2_size_y = sps->log2_min_luma_coding_block_size_minus3 + 3;
- ctb_log2_size_y = min_cb_log2_size_y + sps->log2_diff_max_min_luma_coding_block_size;
- ctb_size_y = 1 << ctb_log2_size_y;
- pic_width_in_ctbs_y =
- (sps->pic_width_in_luma_samples + ctb_size_y - 1) / ctb_size_y;
- pic_height_in_ctbs_y =
- (sps->pic_height_in_luma_samples + ctb_size_y - 1) / ctb_size_y;
- pic_size_in_ctbs_y = pic_width_in_ctbs_y * pic_height_in_ctbs_y;
-
- if (!current->first_slice_segment_in_pic_flag) {
- unsigned int address_size = av_log2(pic_size_in_ctbs_y - 1) + 1;
- if (pps->dependent_slice_segments_enabled_flag)
- flag(dependent_slice_segment_flag);
- else
- infer(dependent_slice_segment_flag, 0);
- u(address_size, slice_segment_address, 0, pic_size_in_ctbs_y - 1);
- } else {
- infer(dependent_slice_segment_flag, 0);
- }
-
- if (!current->dependent_slice_segment_flag) {
- for (i = 0; i < pps->num_extra_slice_header_bits; i++)
- flags(slice_reserved_flag[i], 1, i);
-
- ue(slice_type, 0, 2);
-
- if (pps->output_flag_present_flag)
- flag(pic_output_flag);
-
- if (sps->separate_colour_plane_flag)
- u(2, colour_plane_id, 0, 2);
-
- if (current->nal_unit_header.nal_unit_type != HEVC_NAL_IDR_W_RADL &&
- current->nal_unit_header.nal_unit_type != HEVC_NAL_IDR_N_LP) {
- const H265RawSTRefPicSet *rps;
- int dpb_slots_remaining;
-
- ub(sps->log2_max_pic_order_cnt_lsb_minus4 + 4, slice_pic_order_cnt_lsb);
-
- flag(short_term_ref_pic_set_sps_flag);
- if (!current->short_term_ref_pic_set_sps_flag) {
- CHECK(FUNC(st_ref_pic_set)(ctx, rw, ¤t->short_term_ref_pic_set,
- sps->num_short_term_ref_pic_sets, sps));
- rps = ¤t->short_term_ref_pic_set;
- } else if (sps->num_short_term_ref_pic_sets > 1) {
- unsigned int idx_size = av_log2(sps->num_short_term_ref_pic_sets - 1) + 1;
- u(idx_size, short_term_ref_pic_set_idx,
- 0, sps->num_short_term_ref_pic_sets - 1);
- rps = &sps->st_ref_pic_set[current->short_term_ref_pic_set_idx];
- } else {
- infer(short_term_ref_pic_set_idx, 0);
- rps = &sps->st_ref_pic_set[0];
- }
-
- dpb_slots_remaining = HEVC_MAX_DPB_SIZE - 1 -
- rps->num_negative_pics - rps->num_positive_pics;
- if (pps->pps_curr_pic_ref_enabled_flag &&
- (sps->sample_adaptive_offset_enabled_flag ||
- !pps->pps_deblocking_filter_disabled_flag ||
- pps->deblocking_filter_override_enabled_flag)) {
- // This picture will occupy two DPB slots.
- if (dpb_slots_remaining == 0) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: "
- "short-term ref pic set contains too many pictures "
- "to use with current picture reference enabled.\n");
- return AVERROR_INVALIDDATA;
- }
- --dpb_slots_remaining;
- }
-
- num_pic_total_curr = 0;
- for (i = 0; i < rps->num_negative_pics; i++)
- if (rps->used_by_curr_pic_s0_flag[i])
- ++num_pic_total_curr;
- for (i = 0; i < rps->num_positive_pics; i++)
- if (rps->used_by_curr_pic_s1_flag[i])
- ++num_pic_total_curr;
-
- if (sps->long_term_ref_pics_present_flag) {
- unsigned int idx_size;
-
- if (sps->num_long_term_ref_pics_sps > 0) {
- ue(num_long_term_sps, 0, FFMIN(sps->num_long_term_ref_pics_sps,
- dpb_slots_remaining));
- idx_size = av_log2(sps->num_long_term_ref_pics_sps - 1) + 1;
- dpb_slots_remaining -= current->num_long_term_sps;
- } else {
- infer(num_long_term_sps, 0);
- idx_size = 0;
- }
- ue(num_long_term_pics, 0, dpb_slots_remaining);
-
- for (i = 0; i < current->num_long_term_sps +
- current->num_long_term_pics; i++) {
- if (i < current->num_long_term_sps) {
- if (sps->num_long_term_ref_pics_sps > 1)
- us(idx_size, lt_idx_sps[i],
- 0, sps->num_long_term_ref_pics_sps - 1, 1, i);
- if (sps->used_by_curr_pic_lt_sps_flag[current->lt_idx_sps[i]])
- ++num_pic_total_curr;
- } else {
- ubs(sps->log2_max_pic_order_cnt_lsb_minus4 + 4, poc_lsb_lt[i], 1, i);
- flags(used_by_curr_pic_lt_flag[i], 1, i);
- if (current->used_by_curr_pic_lt_flag[i])
- ++num_pic_total_curr;
- }
- flags(delta_poc_msb_present_flag[i], 1, i);
- if (current->delta_poc_msb_present_flag[i])
- ues(delta_poc_msb_cycle_lt[i], 0, UINT32_MAX - 1, 1, i);
- else
- infer(delta_poc_msb_cycle_lt[i], 0);
- }
- }
-
- if (sps->sps_temporal_mvp_enabled_flag)
- flag(slice_temporal_mvp_enabled_flag);
- else
- infer(slice_temporal_mvp_enabled_flag, 0);
-
- if (pps->pps_curr_pic_ref_enabled_flag)
- ++num_pic_total_curr;
- }
-
- if (sps->sample_adaptive_offset_enabled_flag) {
- flag(slice_sao_luma_flag);
- if (!sps->separate_colour_plane_flag && sps->chroma_format_idc != 0)
- flag(slice_sao_chroma_flag);
- else
- infer(slice_sao_chroma_flag, 0);
- } else {
- infer(slice_sao_luma_flag, 0);
- infer(slice_sao_chroma_flag, 0);
- }
-
- if (current->slice_type == HEVC_SLICE_P ||
- current->slice_type == HEVC_SLICE_B) {
- flag(num_ref_idx_active_override_flag);
- if (current->num_ref_idx_active_override_flag) {
- ue(num_ref_idx_l0_active_minus1, 0, 14);
- if (current->slice_type == HEVC_SLICE_B)
- ue(num_ref_idx_l1_active_minus1, 0, 14);
- else
- infer(num_ref_idx_l1_active_minus1, pps->num_ref_idx_l1_default_active_minus1);
- } else {
- infer(num_ref_idx_l0_active_minus1, pps->num_ref_idx_l0_default_active_minus1);
- infer(num_ref_idx_l1_active_minus1, pps->num_ref_idx_l1_default_active_minus1);
- }
-
- if (pps->lists_modification_present_flag && num_pic_total_curr > 1)
- CHECK(FUNC(ref_pic_lists_modification)(ctx, rw, current,
- num_pic_total_curr));
-
- if (current->slice_type == HEVC_SLICE_B)
- flag(mvd_l1_zero_flag);
- if (pps->cabac_init_present_flag)
- flag(cabac_init_flag);
- else
- infer(cabac_init_flag, 0);
- if (current->slice_temporal_mvp_enabled_flag) {
- if (current->slice_type == HEVC_SLICE_B)
- flag(collocated_from_l0_flag);
- else
- infer(collocated_from_l0_flag, 1);
- if (current->collocated_from_l0_flag) {
- if (current->num_ref_idx_l0_active_minus1 > 0)
- ue(collocated_ref_idx, 0, current->num_ref_idx_l0_active_minus1);
- else
- infer(collocated_ref_idx, 0);
- } else {
- if (current->num_ref_idx_l1_active_minus1 > 0)
- ue(collocated_ref_idx, 0, current->num_ref_idx_l1_active_minus1);
- else
- infer(collocated_ref_idx, 0);
- }
- }
-
- if ((pps->weighted_pred_flag && current->slice_type == HEVC_SLICE_P) ||
- (pps->weighted_bipred_flag && current->slice_type == HEVC_SLICE_B))
- CHECK(FUNC(pred_weight_table)(ctx, rw, current));
-
- ue(five_minus_max_num_merge_cand, 0, 4);
- if (sps->motion_vector_resolution_control_idc == 2)
- flag(use_integer_mv_flag);
- else
- infer(use_integer_mv_flag, sps->motion_vector_resolution_control_idc);
- }
-
- se(slice_qp_delta,
- - 6 * sps->bit_depth_luma_minus8 - (pps->init_qp_minus26 + 26),
- + 51 - (pps->init_qp_minus26 + 26));
- if (pps->pps_slice_chroma_qp_offsets_present_flag) {
- se(slice_cb_qp_offset, -12, +12);
- se(slice_cr_qp_offset, -12, +12);
- } else {
- infer(slice_cb_qp_offset, 0);
- infer(slice_cr_qp_offset, 0);
- }
- if (pps->pps_slice_act_qp_offsets_present_flag) {
- se(slice_act_y_qp_offset,
- -12 - (pps->pps_act_y_qp_offset_plus5 - 5),
- +12 - (pps->pps_act_y_qp_offset_plus5 - 5));
- se(slice_act_cb_qp_offset,
- -12 - (pps->pps_act_cb_qp_offset_plus5 - 5),
- +12 - (pps->pps_act_cb_qp_offset_plus5 - 5));
- se(slice_act_cr_qp_offset,
- -12 - (pps->pps_act_cr_qp_offset_plus3 - 3),
- +12 - (pps->pps_act_cr_qp_offset_plus3 - 3));
- } else {
- infer(slice_act_y_qp_offset, 0);
- infer(slice_act_cb_qp_offset, 0);
- infer(slice_act_cr_qp_offset, 0);
- }
- if (pps->chroma_qp_offset_list_enabled_flag)
- flag(cu_chroma_qp_offset_enabled_flag);
- else
- infer(cu_chroma_qp_offset_enabled_flag, 0);
-
- if (pps->deblocking_filter_override_enabled_flag)
- flag(deblocking_filter_override_flag);
- else
- infer(deblocking_filter_override_flag, 0);
- if (current->deblocking_filter_override_flag) {
- flag(slice_deblocking_filter_disabled_flag);
- if (!current->slice_deblocking_filter_disabled_flag) {
- se(slice_beta_offset_div2, -6, +6);
- se(slice_tc_offset_div2, -6, +6);
- } else {
- infer(slice_beta_offset_div2, pps->pps_beta_offset_div2);
- infer(slice_tc_offset_div2, pps->pps_tc_offset_div2);
- }
- } else {
- infer(slice_deblocking_filter_disabled_flag,
- pps->pps_deblocking_filter_disabled_flag);
- infer(slice_beta_offset_div2, pps->pps_beta_offset_div2);
- infer(slice_tc_offset_div2, pps->pps_tc_offset_div2);
- }
- if (pps->pps_loop_filter_across_slices_enabled_flag &&
- (current->slice_sao_luma_flag || current->slice_sao_chroma_flag ||
- !current->slice_deblocking_filter_disabled_flag))
- flag(slice_loop_filter_across_slices_enabled_flag);
- else
- infer(slice_loop_filter_across_slices_enabled_flag,
- pps->pps_loop_filter_across_slices_enabled_flag);
- }
-
- if (pps->tiles_enabled_flag || pps->entropy_coding_sync_enabled_flag) {
- unsigned int num_entry_point_offsets_limit;
- if (!pps->tiles_enabled_flag && pps->entropy_coding_sync_enabled_flag)
- num_entry_point_offsets_limit = pic_height_in_ctbs_y - 1;
- else if (pps->tiles_enabled_flag && !pps->entropy_coding_sync_enabled_flag)
- num_entry_point_offsets_limit =
- (pps->num_tile_columns_minus1 + 1) * (pps->num_tile_rows_minus1 + 1);
- else
- num_entry_point_offsets_limit =
- (pps->num_tile_columns_minus1 + 1) * pic_height_in_ctbs_y - 1;
- ue(num_entry_point_offsets, 0, num_entry_point_offsets_limit);
-
- if (current->num_entry_point_offsets > HEVC_MAX_ENTRY_POINT_OFFSETS) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Too many entry points: "
- "%"PRIu16".\n", current->num_entry_point_offsets);
- return AVERROR_PATCHWELCOME;
- }
-
- if (current->num_entry_point_offsets > 0) {
- ue(offset_len_minus1, 0, 31);
- for (i = 0; i < current->num_entry_point_offsets; i++)
- ubs(current->offset_len_minus1 + 1, entry_point_offset_minus1[i], 1, i);
- }
- }
-
- if (pps->slice_segment_header_extension_present_flag) {
- ue(slice_segment_header_extension_length, 0, 256);
- for (i = 0; i < current->slice_segment_header_extension_length; i++)
- us(8, slice_segment_header_extension_data_byte[i], 0x00, 0xff, 1, i);
- }
-
- CHECK(FUNC(byte_alignment)(ctx, rw));
-
- return 0;
-}
-
-static int FUNC(sei_buffering_period)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIBufferingPeriod *current, SEIMessageState *sei)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps;
- const H265RawHRDParameters *hrd;
- int err, i, length;
-
-#ifdef READ
- int start_pos, end_pos;
- start_pos = get_bits_count(rw);
-#endif
-
- HEADER("Buffering Period");
-
- ue(bp_seq_parameter_set_id, 0, HEVC_MAX_SPS_COUNT - 1);
-
- sps = h265->sps[current->bp_seq_parameter_set_id];
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n",
- current->bp_seq_parameter_set_id);
- return AVERROR_INVALIDDATA;
- }
- h265->active_sps = sps;
-
- if (!sps->vui_parameters_present_flag ||
- !sps->vui.vui_hrd_parameters_present_flag) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Buffering period SEI requires "
- "HRD parameters to be present in SPS.\n");
- return AVERROR_INVALIDDATA;
- }
- hrd = &sps->vui.hrd_parameters;
- if (!hrd->nal_hrd_parameters_present_flag &&
- !hrd->vcl_hrd_parameters_present_flag) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "Buffering period SEI requires "
- "NAL or VCL HRD parameters to be present.\n");
- return AVERROR_INVALIDDATA;
- }
-
- if (!hrd->sub_pic_hrd_params_present_flag)
- flag(irap_cpb_params_present_flag);
- else
- infer(irap_cpb_params_present_flag, 0);
- if (current->irap_cpb_params_present_flag) {
- length = hrd->au_cpb_removal_delay_length_minus1 + 1;
- ub(length, cpb_delay_offset);
- length = hrd->dpb_output_delay_length_minus1 + 1;
- ub(length, dpb_delay_offset);
- } else {
- infer(cpb_delay_offset, 0);
- infer(dpb_delay_offset, 0);
- }
-
- flag(concatenation_flag);
-
- length = hrd->au_cpb_removal_delay_length_minus1 + 1;
- ub(length, au_cpb_removal_delay_delta_minus1);
-
- if (hrd->nal_hrd_parameters_present_flag) {
- for (i = 0; i <= hrd->cpb_cnt_minus1[0]; i++) {
- length = hrd->initial_cpb_removal_delay_length_minus1 + 1;
-
- ubs(length, nal_initial_cpb_removal_delay[i], 1, i);
- ubs(length, nal_initial_cpb_removal_offset[i], 1, i);
-
- if (hrd->sub_pic_hrd_params_present_flag ||
- current->irap_cpb_params_present_flag) {
- ubs(length, nal_initial_alt_cpb_removal_delay[i], 1, i);
- ubs(length, nal_initial_alt_cpb_removal_offset[i], 1, i);
- }
- }
- }
- if (hrd->vcl_hrd_parameters_present_flag) {
- for (i = 0; i <= hrd->cpb_cnt_minus1[0]; i++) {
- length = hrd->initial_cpb_removal_delay_length_minus1 + 1;
-
- ubs(length, vcl_initial_cpb_removal_delay[i], 1, i);
- ubs(length, vcl_initial_cpb_removal_offset[i], 1, i);
-
- if (hrd->sub_pic_hrd_params_present_flag ||
- current->irap_cpb_params_present_flag) {
- ubs(length, vcl_initial_alt_cpb_removal_delay[i], 1, i);
- ubs(length, vcl_initial_alt_cpb_removal_offset[i], 1, i);
- }
- }
- }
-
-#ifdef READ
- end_pos = get_bits_count(rw);
- if (cbs_h265_payload_extension_present(rw, sei->payload_size,
- end_pos - start_pos))
- flag(use_alt_cpb_params_flag);
- else
- infer(use_alt_cpb_params_flag, 0);
-#else
- // If unknown extension data exists, then use_alt_cpb_params_flag is
- // coded in the bitstream and must be written even if it's 0.
- if (current->use_alt_cpb_params_flag || sei->extension_present) {
- flag(use_alt_cpb_params_flag);
- // Ensure this bit is not the last in the payload by making the
- // more_data_in_payload() check evaluate to true, so it may not
- // be mistaken as something else by decoders.
- sei->extension_present = 1;
- }
-#endif
-
- return 0;
-}
-
-static int FUNC(sei_pic_timing)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIPicTiming *current, SEIMessageState *sei)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps;
- const H265RawHRDParameters *hrd;
- int err, expected_source_scan_type, i, length;
-
- HEADER("Picture Timing");
-
- sps = h265->active_sps;
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR,
- "No active SPS for pic_timing.\n");
- return AVERROR_INVALIDDATA;
- }
-
- expected_source_scan_type = 2 -
- 2 * sps->profile_tier_level.general_interlaced_source_flag -
- sps->profile_tier_level.general_progressive_source_flag;
-
- if (sps->vui.frame_field_info_present_flag) {
- u(4, pic_struct, 0, 12);
- u(2, source_scan_type,
- expected_source_scan_type >= 0 ? expected_source_scan_type : 0,
- expected_source_scan_type >= 0 ? expected_source_scan_type : 2);
- flag(duplicate_flag);
- } else {
- infer(pic_struct, 0);
- infer(source_scan_type,
- expected_source_scan_type >= 0 ? expected_source_scan_type : 2);
- infer(duplicate_flag, 0);
- }
-
- if (sps->vui_parameters_present_flag &&
- sps->vui.vui_hrd_parameters_present_flag)
- hrd = &sps->vui.hrd_parameters;
- else
- hrd = NULL;
- if (hrd && (hrd->nal_hrd_parameters_present_flag ||
- hrd->vcl_hrd_parameters_present_flag)) {
- length = hrd->au_cpb_removal_delay_length_minus1 + 1;
- ub(length, au_cpb_removal_delay_minus1);
-
- length = hrd->dpb_output_delay_length_minus1 + 1;
- ub(length, pic_dpb_output_delay);
-
- if (hrd->sub_pic_hrd_params_present_flag) {
- length = hrd->dpb_output_delay_du_length_minus1 + 1;
- ub(length, pic_dpb_output_du_delay);
- }
-
- if (hrd->sub_pic_hrd_params_present_flag &&
- hrd->sub_pic_cpb_params_in_pic_timing_sei_flag) {
- // Each decoding unit must contain at least one slice segment.
- ue(num_decoding_units_minus1, 0, HEVC_MAX_SLICE_SEGMENTS);
- flag(du_common_cpb_removal_delay_flag);
-
- length = hrd->du_cpb_removal_delay_increment_length_minus1 + 1;
- if (current->du_common_cpb_removal_delay_flag)
- ub(length, du_common_cpb_removal_delay_increment_minus1);
-
- for (i = 0; i <= current->num_decoding_units_minus1; i++) {
- ues(num_nalus_in_du_minus1[i],
- 0, HEVC_MAX_SLICE_SEGMENTS, 1, i);
- if (!current->du_common_cpb_removal_delay_flag &&
- i < current->num_decoding_units_minus1)
- ubs(length, du_cpb_removal_delay_increment_minus1[i], 1, i);
- }
- }
- }
-
- return 0;
-}
-
-static int FUNC(sei_pan_scan_rect)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIPanScanRect *current, SEIMessageState *sei)
-{
- int err, i;
-
- HEADER("Pan-Scan Rectangle");
-
- ue(pan_scan_rect_id, 0, UINT32_MAX - 1);
- flag(pan_scan_rect_cancel_flag);
-
- if (!current->pan_scan_rect_cancel_flag) {
- ue(pan_scan_cnt_minus1, 0, 2);
-
- for (i = 0; i <= current->pan_scan_cnt_minus1; i++) {
- ses(pan_scan_rect_left_offset[i], INT32_MIN + 1, INT32_MAX, 1, i);
- ses(pan_scan_rect_right_offset[i], INT32_MIN + 1, INT32_MAX, 1, i);
- ses(pan_scan_rect_top_offset[i], INT32_MIN + 1, INT32_MAX, 1, i);
- ses(pan_scan_rect_bottom_offset[i], INT32_MIN + 1, INT32_MAX, 1, i);
- }
-
- flag(pan_scan_rect_persistence_flag);
- }
-
- return 0;
-}
-
-static int FUNC(sei_recovery_point)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIRecoveryPoint *current, SEIMessageState *sei)
-{
- int err;
-
- HEADER("Recovery Point");
-
- se(recovery_poc_cnt, -32768, 32767);
-
- flag(exact_match_flag);
- flag(broken_link_flag);
-
- return 0;
-}
-
-static int FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawFilmGrainCharacteristics *current,
- SEIMessageState *state)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps = h265->active_sps;
- int err, c, i, j;
-
- HEADER("Film Grain Characteristics");
-
- flag(film_grain_characteristics_cancel_flag);
- if (!current->film_grain_characteristics_cancel_flag) {
- int filmGrainBitDepth[3];
-
- u(2, film_grain_model_id, 0, 1);
- flag(separate_colour_description_present_flag);
- if (current->separate_colour_description_present_flag) {
- ub(3, film_grain_bit_depth_luma_minus8);
- ub(3, film_grain_bit_depth_chroma_minus8);
- flag(film_grain_full_range_flag);
- ub(8, film_grain_colour_primaries);
- ub(8, film_grain_transfer_characteristics);
- ub(8, film_grain_matrix_coeffs);
- } else {
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR,
- "No active SPS for film_grain_characteristics.\n");
- return AVERROR_INVALIDDATA;
- }
- infer(film_grain_bit_depth_luma_minus8, sps->bit_depth_luma_minus8);
- infer(film_grain_bit_depth_chroma_minus8, sps->bit_depth_chroma_minus8);
- infer(film_grain_full_range_flag, sps->vui.video_full_range_flag);
- infer(film_grain_colour_primaries, sps->vui.colour_primaries);
- infer(film_grain_transfer_characteristics, sps->vui.transfer_characteristics);
- infer(film_grain_matrix_coeffs, sps->vui.matrix_coefficients);
- }
-
- filmGrainBitDepth[0] = current->film_grain_bit_depth_luma_minus8 + 8;
- filmGrainBitDepth[1] =
- filmGrainBitDepth[2] = current->film_grain_bit_depth_chroma_minus8 + 8;
-
- u(2, blending_mode_id, 0, 1);
- ub(4, log2_scale_factor);
- for (c = 0; c < 3; c++)
- flags(comp_model_present_flag[c], 1, c);
- for (c = 0; c < 3; c++) {
- if (current->comp_model_present_flag[c]) {
- ubs(8, num_intensity_intervals_minus1[c], 1, c);
- us(3, num_model_values_minus1[c], 0, 5, 1, c);
- for (i = 0; i <= current->num_intensity_intervals_minus1[c]; i++) {
- ubs(8, intensity_interval_lower_bound[c][i], 2, c, i);
- ubs(8, intensity_interval_upper_bound[c][i], 2, c, i);
- for (j = 0; j <= current->num_model_values_minus1[c]; j++)
- ses(comp_model_value[c][i][j], 0 - current->film_grain_model_id * (1 << (filmGrainBitDepth[c] - 1)),
- ((1 << filmGrainBitDepth[c]) - 1) - current->film_grain_model_id * (1 << (filmGrainBitDepth[c] - 1)),
- 3, c, i, j);
- }
- }
- }
- flag(film_grain_characteristics_persistence_flag);
- }
-
- return 0;
-}
-
-static int FUNC(sei_display_orientation)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIDisplayOrientation *current, SEIMessageState *sei)
-{
- int err;
-
- HEADER("Display Orientation");
-
- flag(display_orientation_cancel_flag);
- if (!current->display_orientation_cancel_flag) {
- flag(hor_flip);
- flag(ver_flip);
- ub(16, anticlockwise_rotation);
- flag(display_orientation_persistence_flag);
- }
-
- return 0;
-}
-
-static int FUNC(sei_active_parameter_sets)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIActiveParameterSets *current, SEIMessageState *sei)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawVPS *vps;
- int err, i;
-
- HEADER("Active Parameter Sets");
-
- u(4, active_video_parameter_set_id, 0, HEVC_MAX_VPS_COUNT);
- vps = h265->vps[current->active_video_parameter_set_id];
- if (!vps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR, "VPS id %d not available for active "
- "parameter sets.\n", current->active_video_parameter_set_id);
- return AVERROR_INVALIDDATA;
- }
- h265->active_vps = vps;
-
- flag(self_contained_cvs_flag);
- flag(no_parameter_set_update_flag);
-
- ue(num_sps_ids_minus1, 0, HEVC_MAX_SPS_COUNT - 1);
- for (i = 0; i <= current->num_sps_ids_minus1; i++)
- ues(active_seq_parameter_set_id[i], 0, HEVC_MAX_SPS_COUNT - 1, 1, i);
-
- for (i = vps->vps_base_layer_internal_flag;
- i <= FFMIN(62, vps->vps_max_layers_minus1); i++) {
- ues(layer_sps_idx[i], 0, current->num_sps_ids_minus1, 1, i);
-
- if (i == 0)
- h265->active_sps = h265->sps[current->active_seq_parameter_set_id[current->layer_sps_idx[0]]];
- }
-
- return 0;
-}
-
-static int FUNC(sei_decoded_picture_hash)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIDecodedPictureHash *current, SEIMessageState *sei)
-{
- CodedBitstreamH265Context *h265 = ctx->priv_data;
- const H265RawSPS *sps = h265->active_sps;
- int err, c, i;
-
- HEADER("Decoded Picture Hash");
-
- if (!sps) {
- av_log(ctx->log_ctx, AV_LOG_ERROR,
- "No active SPS for decoded picture hash.\n");
- return AVERROR_INVALIDDATA;
- }
-
- u(8, hash_type, 0, 2);
-
- for (c = 0; c < (sps->chroma_format_idc == 0 ? 1 : 3); c++) {
- if (current->hash_type == 0) {
- for (i = 0; i < 16; i++)
- us(8, picture_md5[c][i], 0x00, 0xff, 2, c, i);
- } else if (current->hash_type == 1) {
- us(16, picture_crc[c], 0x0000, 0xffff, 1, c);
- } else if (current->hash_type == 2) {
- us(32, picture_checksum[c], 0x00000000, 0xffffffff, 1, c);
- }
- }
-
- return 0;
-}
-
-static int FUNC(sei_time_code)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEITimeCode *current, SEIMessageState *sei)
-{
- int err, i;
-
- HEADER("Time Code");
-
- u(2, num_clock_ts, 1, 3);
-
- for (i = 0; i < current->num_clock_ts; i++) {
- flags(clock_timestamp_flag[i], 1, i);
-
- if (current->clock_timestamp_flag[i]) {
- flags(units_field_based_flag[i], 1, i);
- us(5, counting_type[i], 0, 6, 1, i);
- flags(full_timestamp_flag[i], 1, i);
- flags(discontinuity_flag[i], 1, i);
- flags(cnt_dropped_flag[i], 1, i);
-
- ubs(9, n_frames[i], 1, i);
-
- if (current->full_timestamp_flag[i]) {
- us(6, seconds_value[i], 0, 59, 1, i);
- us(6, minutes_value[i], 0, 59, 1, i);
- us(5, hours_value[i], 0, 23, 1, i);
- } else {
- flags(seconds_flag[i], 1, i);
- if (current->seconds_flag[i]) {
- us(6, seconds_value[i], 0, 59, 1, i);
- flags(minutes_flag[i], 1, i);
- if (current->minutes_flag[i]) {
- us(6, minutes_value[i], 0, 59, 1, i);
- flags(hours_flag[i], 1, i);
- if (current->hours_flag[i])
- us(5, hours_value[i], 0, 23, 1, i);
- }
- }
- }
-
- ubs(5, time_offset_length[i], 1, i);
- if (current->time_offset_length[i] > 0)
- ibs(current->time_offset_length[i], time_offset_value[i], 1, i);
- else
- infer(time_offset_value[i], 0);
- }
- }
-
- return 0;
-}
-
-static int FUNC(sei_alpha_channel_info)
- (CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEIAlphaChannelInfo *current, SEIMessageState *sei)
-{
- int err, length;
-
- HEADER("Alpha Channel Information");
-
- flag(alpha_channel_cancel_flag);
- if (!current->alpha_channel_cancel_flag) {
- ub(3, alpha_channel_use_idc);
- ub(3, alpha_channel_bit_depth_minus8);
- length = current->alpha_channel_bit_depth_minus8 + 9;
- ub(length, alpha_transparent_value);
- ub(length, alpha_opaque_value);
- flag(alpha_channel_incr_flag);
- flag(alpha_channel_clip_flag);
- if (current->alpha_channel_clip_flag)
- flag(alpha_channel_clip_type_flag);
- } else {
- infer(alpha_channel_use_idc, 2);
- infer(alpha_channel_incr_flag, 0);
- infer(alpha_channel_clip_flag, 0);
- }
-
- return 0;
-}
-
-static int FUNC(sei)(CodedBitstreamContext *ctx, RWContext *rw,
- H265RawSEI *current, int prefix)
-{
- int err;
-
- if (prefix)
- HEADER("Prefix Supplemental Enhancement Information");
- else
- HEADER("Suffix Supplemental Enhancement Information");
-
-    CHECK(FUNC(nal_unit_header)(ctx, rw, &current->nal_unit_header,
- prefix ? HEVC_NAL_SEI_PREFIX
- : HEVC_NAL_SEI_SUFFIX));
-
-    CHECK(FUNC_SEI(message_list)(ctx, rw, &current->message_list, prefix));
-
- CHECK(FUNC(rbsp_trailing_bits)(ctx, rw));
-
- return 0;
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md b/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md
deleted file mode 100644
index f8e0b3f7062538163c95c077e16cfc76887a984d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
GTA 5 APK Download 2023: How to Play GTA 5 on Android Devices
-
If you are a fan of Grand Theft Auto V, or GTA 5 for short, you might be wondering if you can play this amazing game on your Android device. After all, GTA 5 is one of the most popular and successful video games of all time, with millions of players around the world. In this article, we will tell you everything you need to know about GTA 5 APK download for Android in 2023, including the official and unofficial ways to enjoy this game on your mobile device. Let's get started!
GTA 5 is an open-world action-adventure game developed by Rockstar Games and released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. The game is set in the fictional state of San Andreas, which is based on Southern California, and follows the lives of three protagonists: Michael, a retired bank robber; Trevor, a psychopathic criminal; and Franklin, a young street hustler. The game allows the player to switch between these characters at any time and explore the vast and diverse world of San Andreas, which includes urban areas, rural landscapes, mountains, deserts, beaches, and more. The game also features a variety of activities, such as driving, shooting, fighting, stealth, racing, flying, parachuting, swimming, diving, hunting, golfing, tennis, yoga, and more. The game also has an online multiplayer mode called GTA Online, where up to 30 players can cooperate or compete in various missions and events.
-
Why is GTA 5 so popular?
-
GTA 5 is widely regarded as one of the best video games ever made, and has received critical acclaim and numerous awards for its gameplay, story, graphics, sound, music, and online features. The game has also broken several records in the gaming industry, such as being the fastest-selling entertainment product in history, earning $800 million in its first day and $1 billion in its first three days. As of December 2020, the game has sold over 140 million copies worldwide and is still one of the most played games online. Some of the reasons why GTA 5 is so popular are:
-
-
It offers a huge and immersive open-world that can be explored freely and dynamically.
-
It has a compelling and humorous story that features three different protagonists with their own personalities and backgrounds.
-
It has a rich and diverse gameplay that caters to different tastes and preferences.
-
It has stunning graphics and realistic physics that create a lifelike and cinematic experience.
-
It has a vibrant and lively online community that provides endless content and fun.
-
-
Can you play GTA 5 on Android devices?
-
The short answer is yes, but not directly. GTA 5 is not officially available on Android or iOS devices, as Rockstar Games has never ported this game to mobile platforms. However, there are some ways to play GTA 5 on your Android device in 2023, either by using a cloud gaming service or by using a fan-made port or emulator. We will explain these methods in detail in the next section.
-
How to download GTA 5 APK for Android in 2023
-
The official way: Use a cloud gaming service
-
Cloud gaming is a technology that allows you to stream games from remote servers to your device, without having to download or install anything. This means that you can play games that are not compatible with your device, as long as you have a stable and fast internet connection. Cloud gaming is also convenient, as you can access your games from anywhere and switch between devices easily.
-
Some of the benefits of cloud gaming are:
-
-
You don't need a powerful device or a lot of storage space to play high-end games.
-
You can play the latest games without having to wait for updates or patches.
-
You can save money on buying new hardware or software.
-
You can enjoy a smooth and lag-free gaming experience, as long as your internet connection is good.
-
-
Some of the best cloud gaming services for GTA 5 are:
-
| Service | Price | Features |
| --- | --- | --- |
| Xbox Cloud Gaming (Beta) | $14.99/month (included in Xbox Game Pass Ultimate) | Stream hundreds of high-quality games from the Xbox Game Pass catalog, including GTA 5. Play on Xbox consoles, PC, and mobile devices via supported browsers or apps. Use an Xbox Wireless Controller, Sony DualShock 4, or touch controls. Play with friends online across devices. Play next-gen games like Microsoft Flight Simulator on your Xbox One and other devices. |
| NVIDIA GeForce NOW | $9.99/month (Priority membership) or free (Standard membership) | Stream games you already own from platforms like Steam, Epic Games Store, Ubisoft Connect, and more, including GTA 5. Play on PC, Mac, Chromebook, NVIDIA Shield TV, Android, and iOS devices via supported browsers or apps. Use a compatible gamepad, mouse and keyboard, or touch controls. Enjoy ray tracing and DLSS features on supported games. Priority members get extended session lengths and priority access to servers. |
| Loudplay | $9.99/month (Basic plan) or $19.99/month (Pro plan) | Stream games you already own from platforms like Steam, Epic Games Store, Ubisoft Connect, and more, including GTA 5. Play on PC, Mac, or Android devices via supported browsers or apps. Use a compatible gamepad, mouse and keyboard, or touch controls. Get access to a powerful Windows desktop with full control over settings and apps. Pro plan offers higher performance and more storage space. |
-
To use any of these cloud gaming services for GTA 5 APK download for Android in 2023, you will need to:
-
gta 5 apk + obb download links for android in 2023: real mobile game or fake?
-descargar gta 5 mobile apk para android 2023 rockstar games
-play gta 5 - grand theft auto on pc with bluestacks
-gta 5 apk 2023 latest version free download for android
-how to install gta 5 on android phone in 2023 without verification
-gta 5 apk mod + data obb full offline for android 2023
-gta 5 mobile apk download for ios iphone ipad in 2023
-gta 5 apk + obb highly compressed download for android in 2023
-gta 5 android fan made game download apk + obb in 2023
-gta 5 apk + obb download for android in 2023 no human verification
-gta 5 mobile apk beta version download for android in 2023
-gta 5 apk + obb download for android in 2023 by rockstar games
-gta 5 apk + obb download for android in 2023 mediafıre link
-gta 5 mobile apk + data download for android in 2023 offline
-gta 5 apk + obb download for android in 2023 real or fake
-gta 5 mobile apk + obb download for android in 2023 update
-gta 5 apk + obb download for android in 2023 gameplay
-gta 5 mobile apk + data download for android in 2023 online
-gta 5 apk + obb download for android in 2023 system requirements
-gta 5 mobile apk + obb download for android in 2023 size
-gta 5 apk + obb download for android in 2023 new features
-gta 5 mobile apk + data download for android in 2023 free
-gta 5 apk + obb download for android in 2023 review
-gta 5 mobile apk + obb download for android in 2023 graphics
-gta 5 apk + obb download for android in 2023 cheats
-gta 5 mobile apk + data download for android in 2023 mod menu
-gta 5 apk + obb download for android in 2023 tutorial
-gta 5 mobile apk + obb download for android in 2023 trailer
-gta 5 apk + obb download for android in 2023 error fix
-gta 5 mobile apk + data download for android in 2023 unlimited money
-
-
Sign up for the service of your choice and choose a subscription plan.
-
Download the app or open the browser on your Android device and log in to your account.
-
Launch GTA 5 from the service's library or from your own game library.
-
Enjoy playing GTA 5 on your Android device!
-
-
The unofficial way: Use a fan-made port or emulator
-
If you don't want to use a cloud gaming service, you can also try to use a fan-made port or emulator to play GTA 5 on your Android device. However, these methods are not recommended, as they are not authorized by Rockstar Games and may have legal, technical, or ethical issues.
-
A fan-made port is a modification of the original game that allows it to run on a different platform than it was intended for. For example, some fans have tried to create PC ports for GTA games that were only released on consoles or handheld devices, such as GTA Advance, Liberty City Stories, Vice City Stories, and Chinatown Wars. However, these projects are often incomplete, unstable, buggy, or incompatible with some devices. They may also violate the intellectual property rights of Rockstar Games and be subject to legal action.
-
An emulator is software that mimics the hardware and software of another device on your device. For example, some emulators let you play PlayStation or Xbox games on your PC or Android device. However, emulators are also problematic for several reasons. First, they may not be able to run GTA 5 smoothly or accurately, as GTA 5 is a very demanding game that requires a lot of processing power and memory. Second, they may require you to download pirated copies of GTA 5, which is illegal and may expose your device to viruses or malware. Third, they may also infringe on the intellectual property rights of Rockstar Games and the console manufacturers and be subject to legal action.
-
Some of the fan-made ports or emulators that claim to run GTA 5 on Android devices are:
-
| Name | Description | Source |
| --- | --- | --- |
| GTA 5 Mobile by Rockstar Games | A fake app that pretends to be an official port of GTA 5 for Android devices, but is actually a scam that asks for money or personal information from unsuspecting users. | |
| GTA 5 APK by GTA5Mobile.Club | A fan-made port of GTA 5 for Android devices that claims to offer the full game experience, but is actually a modified version of GTA San Andreas with some GTA 5 assets and features. | |
| GTA 5 APK by GTA5APK.Net | A fan-made port of GTA 5 for Android devices that claims to offer the full game experience, but is actually a modified version of GTA Vice City with some GTA 5 assets and features. | |
| PPSSPP Emulator | An emulator that allows you to play PSP games on your Android device, including GTA Liberty City Stories and GTA Vice City Stories. However, it cannot run GTA 5, as GTA 5 was never released on PSP. | |
| DamonPS2 Emulator | An emulator that allows you to play PS2 games on your Android device, including GTA III, GTA Vice City, and GTA San Andreas. However, it cannot run GTA 5, as GTA 5 was never released on PS2. | |
-
To use any of these fan-made ports or emulators for GTA 5 APK download for Android in 2023, you will need to:
-
-
Download the app or the emulator from the source link and install it on your Android device.
-
Download the game file or the ROM file from the source link or from another website and copy it to your device's storage.
-
Launch the app or the emulator and select the game file or the ROM file to start playing.
-
Be aware of the risks and drawbacks of using these methods and proceed at your own discretion.
-
-
Conclusion
-
Summary of the main points
-
In this article, we have discussed how to play GTA 5 on Android devices in 2023. We have explained what GTA 5 is and why it is so popular. We have also shown you two ways to download GTA 5 APK for Android in 2023: the official way and the unofficial way. The official way is to use a cloud gaming service, such as Xbox Cloud Gaming, NVIDIA GeForce NOW, or Loudplay. The unofficial way is to use a fan-made port or emulator, such as GTA 5 Mobile by Rockstar Games, GTA 5 APK by GTA5Mobile.Club, GTA 5 APK by GTA5APK.Net, PPSSPP Emulator, or DamonPS2 Emulator. However, we have also warned you about the risks and drawbacks of using these methods, such as legal issues, technical issues, or ethical issues.
-
Call to action and final thoughts
-
We hope that this article has helped you understand how to play GTA 5 on Android devices in 2023. If you are interested in trying out any of these methods, we recommend that you do your own research and follow the instructions carefully. However, we also advise that you respect the intellectual property rights of Rockstar Games and support their work by buying their games legally. If you have any questions or feedback about this article, please feel free to leave a comment below. Thank you for reading and happy gaming!
-
FAQs
-
Is GTA 5 available on Google Play Store?
-
No, GTA 5 is not available on Google Play Store or any other official app store for Android devices. The only way to play GTA 5 on Android devices is to use a cloud gaming service or a fan-made port or emulator.
-
Is GTA 5 free to play on Android devices?
-
No, GTA 5 is not free to play on Android devices. If you use a cloud gaming service, you will need to pay a monthly or yearly subscription fee to access the service and the game. If you use a fan-made port or emulator, you will need to own a legal copy of the game on another platform and download it to your device, which may also incur additional costs.
-
Is GTA 5 safe to play on Android devices?
-
It depends on the method you use. If you use a cloud gaming service, you can play GTA 5 safely on your Android device, as long as you use a reputable and secure service and a reliable internet connection. However, if you use a fan-made port or emulator, you may expose your device and your data to various risks, such as viruses, malware, spyware, phishing, hacking, or legal action. Therefore, we advise that you be careful and cautious when using these methods and protect your device and your data with antivirus software and VPN services.
-
Is GTA 5 compatible with all Android devices?
-
No, GTA 5 is not compatible with all Android devices. If you use a cloud gaming service, you will need to have a device that meets the minimum requirements of the service, such as operating system, browser, app, screen size, resolution, memory, battery, etc. You will also need to have a compatible controller or touch controls to play the game. If you use a fan-made port or emulator, you will need to have a device that can run the app or the emulator smoothly and support the game file or the ROM file. You may also encounter some compatibility issues or errors depending on your device model, brand, or version.
-
Can I play GTA 5 offline on Android devices?
-
No, you cannot play GTA 5 offline on Android devices. If you use a cloud gaming service, you will need to have a constant and stable internet connection to stream the game from the server to your device. If you use a fan-made port or emulator, you will still need to have an internet connection to verify the game file or the ROM file and access some online features of the game.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md b/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md
deleted file mode 100644
index fe73fe61fd4c9cbf70f1bbce30fab3f121c6f98e..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Shadow Fight 2 Special Edition: A Premium Fighting Game for Android
-
If you are a fan of fighting games, you might have heard of Shadow Fight 2, one of the most popular and successful fighting series on mobile. But did you know that there is a special edition of Shadow Fight 2 that offers more features and benefits than the original version? In this article, we will tell you everything you need to know about Shadow Fight 2 Special Edition, and how you can download it for free on your Android device.
-
shadow fight 2 special edition free apk download no mod
Shadow Fight 2 Special Edition is the special paid version of Shadow Fight 2. It was released on 17 August 2017 on Android and on 22 August 2017 on iOS. This version allows players to get gems easily, with most of the game modes rewarding the player an amount of them upon victory. Gems are the premium currency in the game, which can be used to buy and upgrade weapons, armor, skills, and more.
-
Shadow Fight 2 Special Edition also has some exclusive features that are not available in the free version of the game. Here are some of them:
-
Features of Shadow Fight 2 Special Edition
-
No ads
-
One of the most annoying things about the free version of Shadow Fight 2 is the constant ads that pop up every time you finish a fight or open a menu. These ads can interrupt your gameplay and ruin your immersion. In Shadow Fight 2 Special Edition, there are no ads at all. You can enjoy the game without any distractions or interruptions.
-
No energy restoring
-
Another frustrating thing about the free version of Shadow Fight 2 is the energy system. Every time you fight, you lose some energy points. When your energy points run out, you have to wait for them to restore over time, or pay gems to refill them instantly. This can limit your playing time and force you to spend money or watch ads to continue playing. In Shadow Fight 2 Special Edition, there is no energy system at all. You can fight as much as you want, anytime and anywhere you want.
-
New story chapter
-
Shadow Fight 2 has a captivating story that follows the journey of a shadow warrior who seeks to defeat the evil Titan and his army of shadow demons. The free version of the game has six acts, each with a different boss and environment. In Shadow Fight 2 Special Edition, there is a seventh act that reveals the truth behind Sensei's past and his connection to Titan. This act has new enemies, weapons, locations, and challenges that will test your skills and keep you hooked.
-
shadow fight 2 special edition apk free download latest version
-shadow fight 2 special edition full game free download for android
-shadow fight 2 special edition unlocked apk free download
-shadow fight 2 special edition free download without mod apk
-shadow fight 2 special edition apk free download for android offline
-shadow fight 2 special edition hack apk free download
-shadow fight 2 special edition premium apk free download
-shadow fight 2 special edition mod apk free download rexdl
-shadow fight 2 special edition original apk free download
-shadow fight 2 special edition apk free download android 1
-shadow fight 2 special edition unlimited money apk free download
-shadow fight 2 special edition mega mod apk free download
-shadow fight 2 special edition apk obb free download
-shadow fight 2 special edition mod apk free download revdl
-shadow fight 2 special edition apk free download highly compressed
-shadow fight 2 special edition mod apk free download android
-shadow fight 2 special edition all weapons unlocked apk free download
-shadow fight 2 special edition mod menu apk free download
-shadow fight 2 special edition max level apk free download
-shadow fight 2 special edition mod apk free download apkpure
-shadow fight 2 special edition unlimited gems and coins apk free download
-shadow fight 2 special edition mod apk free download happymod
-shadow fight 2 special edition titan mod apk free download
-shadow fight 2 special edition god mode apk free download
-shadow fight 2 special edition no ads apk free download
-shadow fight 2 special edition cheat codes apk free download
-shadow fight 2 special edition pro apk free download
-shadow fight 2 special edition cracked apk free download
-shadow fight 2 special edition unlimited everything apk free download
-shadow fight 2 special edition modded apk free download
-shadow fight 2 special edition hack version apk free download
-shadow fight 2 special edition paid apk free download
-shadow fight 2 special edition mod unlimited energy apk free download
-shadow fight 2 special edition old version apk free download
-shadow fight 2 special edition all bosses unlocked apk free download
-shadow fight 2 special edition no root apk free download
-shadow fight 2 special edition unlimited orbs and gems apk free download
-shadow fight 2 special edition mod money and gems apk free download
-shadow fight 2 special edition sensei story mode unlocked apk free download
-shadow fight 2 special edition one hit kill mod apk free download
-shadow fight 2 special edition all levels unlocked apk free download
-shadow fight 2 special edition no verification required apk free download
-shadow fight 2 special edition unlimited time bomb and enchantment orbs hack modded version game app file for android mobile phone tablet device install play offline without internet connection or wifi data usage required direct link mediafire zippyshare google drive dropbox mega nz apkmirror uptodown apkmody apknite apkpure.com[^1^]
-
Huge arsenal of weapons and armor
-
Shadow Fight 2 has a variety of weapons and armor that you can use to customize your character and enhance your fighting abilities. You can choose from swords, axes, daggers, nunchaku, shuriken, kusarigama, sai, and more. You can also equip yourself with helmets, vests, gloves, boots, rings, amulets, and more. Each weapon and armor has different stats and effects that can affect your speed, damage, defense, critical chance, etc. In Shadow Fight 2 Special Edition, you can get a lot of gems through battles and use them to buy and upgrade your weapons and armor easily. You can also unlock some special weapons and armor that are only available in this version.
-
Simple controls and stunning animations
-
Shadow Fight 2 has simple controls that are designed for touch screen devices. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to perform attacks, kicks, jumps, and special moves. You can also combine different moves to create combos and unleash powerful attacks. Shadow Fight 2 has stunning animations that make the fights look realistic and fluid. The game uses a physics-based combat system that simulates the movements and reactions of real fighters. You can see the shadows of your character and your opponents react to every hit and movement. You can also see the effects of your weapons and skills on the environment, such as sparks, blood, dust, etc.
-
How to download Shadow Fight 2 Special Edition for free?
-
Shadow Fight 2 Special Edition is a premium game that costs $4.99 on Google Play Store and $3.99 on App Store. However, there are some ways to download it for free on your Android device. Here are some of them:
-
Download from Google Play Store
-
The easiest and safest way to download Shadow Fight 2 Special Edition for free is to use Google Play Store. Google Play Store often offers discounts and promotions for various apps and games, including Shadow Fight 2 Special Edition. You can check the store regularly to see if the game is on sale or free for a limited time. You can also use Google Play Points, which are rewards that you can earn by buying or using apps and games on Google Play Store. You can use these points to redeem Shadow Fight 2 Special Edition for free or at a lower price.
-
Download from App Store
-
If you have an iOS device, you can also download Shadow Fight 2 Special Edition for free from App Store. App Store also has discounts and promotions for various apps and games, including Shadow Fight 2 Special Edition. You can check the store regularly to see if the game is on sale or free for a limited time. You can also use Apple Gift Cards, which are cards that you can buy or receive as gifts that contain a specific amount of money that you can use to buy apps and games on App Store. You can use these gift cards to buy Shadow Fight 2 Special Edition for free or at a lower price.
-
Download from third-party websites
-
Another way to download Shadow Fight 2 Special Edition for free is to use third-party websites that offer apk files of the game. Apk files are files that contain the installation package of an app or game. You can download these files from various websites that host them, such as APKPure, APKMirror, APKMonk, etc. However, there are some advantages and disadvantages of using this method.
-
Advantages and disadvantages of third-party websites
-
The main advantage of using third-party websites is that you can get Shadow Fight 2 Special Edition for free without paying anything or waiting for a sale or promotion. You can also get access to some modded versions of the game that have unlimited gems, coins, weapons, armor, etc. However, there are also some disadvantages of using this method. The main disadvantage is that you may expose your device to malware, viruses, spyware, etc. that may harm your device or steal your personal information. Some of these websites may also have fake or outdated apk files that may not work properly or cause errors in your device. Another disadvantage is that you may not get updates or support from the official developers of the game. You may also face some legal issues if you download a pirated version of the game.
-
How to install apk files from third-party websites
-
If you decide to use third-party websites to download Shadow Fight 2 Special Edition for free, you need to follow some steps to install the apk files on your device. Here are the steps:
-
-
Go to the settings of your device and enable the option to install apps from unknown sources. This will allow you to install apps that are not from Google Play Store or App Store.
-
Go to the website that offers the apk file of Shadow Fight 2 Special Edition and download it on your device.
-
Locate the downloaded file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy it.
-
-
Conclusion
-
Shadow Fight 2 Special Edition is a premium fighting game for Android that offers more features and benefits than the original version of Shadow Fight 2. It has no ads, no energy restoring, a new story chapter, a huge arsenal of weapons and armor, simple controls and stunning animations. It costs $4.99 on Google Play Store and $3.99 on App Store, but you can also download it for free using Google Play Store, App Store, or third-party websites. However, you should be careful of the risks and consequences of using third-party websites, such as malware, viruses, legal issues, etc. Shadow Fight 2 Special Edition is a great game for fighting fans who want to enjoy a premium and immersive experience on their Android device.
-
FAQs
-
Here are some frequently asked questions about Shadow Fight 2 Special Edition:
-
-
Q: What is the difference between Shadow Fight 2 and Shadow Fight 2 Special Edition?
-A: Shadow Fight 2 is the free version of the game that has ads, energy system, and six acts. Shadow Fight 2 Special Edition is the paid version of the game that has no ads, no energy system, and seven acts.
-
Q: How can I get gems in Shadow Fight 2 Special Edition?
-A: You can get gems in Shadow Fight 2 Special Edition by winning battles, completing achievements, watching videos, or buying them with real money.
-
Q: How can I unlock new weapons and armor in Shadow Fight 2 Special Edition?
-A: You can unlock new weapons and armor in Shadow Fight 2 Special Edition by buying them with gems or coins, or by defeating bosses and completing challenges.
-
Q: How can I upgrade my weapons and armor in Shadow Fight 2 Special Edition?
-A: You can upgrade your weapons and armor in Shadow Fight 2 Special Edition by spending gems or coins, or by using enchantments and ascension.
-
Q: How can I learn new skills and moves in Shadow Fight 2 Special Edition?
-A: You can learn new skills and moves in Shadow Fight 2 Special Edition by spending skill points, or by using perks and magic.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md
deleted file mode 100644
index 72298243994bee64644a3a25e2df13ed5f30be0a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Naruto to Boruto Shinobi Striker PPSSPP Download Android: How to Play the Ultimate Ninja Game on Your Phone
-
If you are a fan of Naruto, you might have heard of Naruto to Boruto Shinobi Striker, a multiplayer online game that lets you fight with your favorite characters and discover a new gameplay style in the Naruto universe. But did you know that you can also play this game on your Android device using an emulator called PPSSPP? In this article, we will show you how to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android and give you some tips and tricks for playing it.
-
What is Naruto to Boruto Shinobi Striker?
-
Naruto to Boruto Shinobi Striker is a game that was released in 2018 for PlayStation 4, Xbox One, and PC. It is based on the popular manga and anime series Naruto and its sequel Boruto. The game features a new gameplay style that combines action and strategy in thrilling 3D environments. You can choose from over 40 characters from the Naruto series, each with their own unique skills and abilities. You can also create your own custom character and customize their appearance, weapons, and jutsu.
-
naruto to boruto shinobi striker ppsspp download android
The game has four different modes that you can play solo or with your friends online. The first mode is Flag Battle, where you have to capture the enemy's flag while defending your own. The second mode is Barrier Battle, where you have to break through the enemy's barrier and defeat their boss. The third mode is Base Battle, where you have to capture and hold three bases on the map. The fourth mode is Combat Battle, where you have to defeat as many enemies as possible within a time limit.
-
Naruto to Boruto Shinobi Striker is a game that will test your teamwork, strategy, and skills as a shinobi. You can cooperate with your friends to form a four-member team and compete against other teams from around the world. You can also learn from famous ninja masters like Naruto, Sasuke, Kakashi, and more. You can unlock new skills, weapons, costumes, and accessories as you progress in the game. You can also participate in special events and missions that will challenge your abilities.
-
naruto to boruto shinobi striker apk android free download
-naruto to boruto shinobi striker ppsspp iso download for android
-naruto to boruto shinobi striker android game offline
-naruto to boruto shinobi striker ppsspp highly compressed android
-naruto to boruto shinobi striker mobile apk download android
-naruto to boruto shinobi striker ppsspp mod download android
-naruto to boruto shinobi striker android gameplay
-naruto to boruto shinobi striker ppsspp emulator android
-naruto to boruto shinobi striker apk obb download android
-naruto to boruto shinobi striker android game online
-naruto to boruto shinobi striker ppsspp gold download android
-naruto to boruto shinobi striker android graphics settings
-naruto to boruto shinobi striker apk data download android
-naruto to boruto shinobi striker android game review
-naruto to boruto shinobi striker ppsspp cheats android
-naruto to boruto shinobi striker android system requirements
-naruto to boruto shinobi striker apk mod download android
-naruto to boruto shinobi striker android game trailer
-naruto to boruto shinobi striker ppsspp best settings android
-naruto to boruto shinobi striker android controller support
-naruto to boruto shinobi striker apk full version download android
-naruto to boruto shinobi striker android game size
-naruto to boruto shinobi striker ppsspp english patch android
-naruto to boruto shinobi striker android multiplayer mode
-naruto to boruto shinobi striker apk latest version download android
-naruto to boruto shinobi striker android game features
-naruto to boruto shinobi striker ppsspp file download android
-naruto to boruto shinobi striker android voice actors
-naruto to boruto shinobi striker apk unlocked download android
-naruto to boruto shinobi striker android game rating
-naruto to boruto shinobi striker ppsspp save data android
-naruto to boruto shinobi striker andro
-
What is PPSSPP?
-
PPSSPP is a free and open-source emulator for PSP games on Android devices. It lets you enjoy PSP games with enhanced graphics, sound, and performance on your phone or tablet, and it is compatible with many PSP games, including Naruto to Boruto Shinobi Striker.
-
PPSSPP allows you to play PSP games in full HD resolution, with anti-aliasing, anisotropic filtering, and texture scaling. You can also customize the controls, save and load states, and use cheats. You can also connect your device to a TV or a monitor and use a gamepad or a controller for a better gaming experience. You can also play online with other PPSSPP users using the built-in ad hoc network feature.
-
PPSSPP is a great app for Naruto fans who want to play Naruto to Boruto Shinobi Striker on their Android devices. It is easy to use, fast, and reliable. You can download it from the Google Play Store or the official website. You can also check the compatibility list of PPSSPP to see which games work well with the app.
-
How to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android?
-
Now that you know what Naruto to Boruto Shinobi Striker and PPSSPP are, you might be wondering how to download and install them on your Android device. Well, don't worry, we have got you covered. Just follow these simple steps and you will be playing the game in no time.
-
Step 1: Download the PPSSPP app from the Google Play Store or the official website
-
The first step is to download the PPSSPP app on your Android device. You can do this by going to the Google Play Store and searching for PPSSPP or by visiting the official website and downloading the APK file. Once you have downloaded the app, install it on your device by tapping on it and following the instructions.
-
Step 2: Download the Naruto to Boruto Shinobi Striker ISO file from a reliable source
-
The next step is to download the Naruto to Boruto Shinobi Striker ISO file on your device. This is the file that contains the game data and allows you to play it on PPSSPP. You can download it from a reliable source that offers safe and fast downloads. Make sure you download the correct version of the game that matches your region and language. The file size of the game is about 5 GB, so make sure you have enough space on your device before downloading it.
-
Step 3: Extract the ISO file using a file manager app or a zip extractor app
-
After downloading the ISO file, you need to extract it using a file manager app or a zip extractor app. You can use any app that can handle zip files, such as ZArchiver, RAR, or ES File Explorer. To extract the ISO file, locate it in your device storage and tap on it. Then, choose an option to extract it to a folder of your choice. Remember the folder where you extracted the ISO file as you will need it later.
-
Step 4: Launch the PPSSPP app and locate the extracted ISO file in your device storage
-
Now that you have extracted the ISO file, you are ready to play Naruto to Boruto Shinobi Striker on your Android device. To do this, launch the PPSSPP app and tap on the "Games" tab. Then, navigate to the folder where you extracted the ISO file and tap on it. The game will start loading and you will see the title screen.
-
Step 5: Tap on the ISO file and enjoy playing Naruto to Boruto Shinobi Striker on your Android device
-
Congratulations! You have successfully downloaded and installed Naruto to Boruto Shinobi Striker PPSSPP on Android. Now you can enjoy playing this amazing game on your phone or tablet. You can choose from different game modes, characters, and skills in Naruto to Boruto Shinobi Striker. You can also play online with your friends or other players from around the world. Have fun!
-
Tips and tricks for playing Naruto to Boruto Shinobi Striker PPSSPP on Android
-
To make your gaming experience even better, here are some tips and tricks for playing Naruto to Boruto Shinobi Striker PPSSPP on Android.
-
Adjust the settings of the PPSSPP app to optimize the game performance and graphics
-
If you want to improve the game performance and graphics, you can adjust some settings of the PPSSPP app. To do this, tap on the "Settings" tab and then tap on "Graphics". Here you can change the rendering mode, resolution, frame rate, texture filtering, and more. You can also enable or disable some features like immersive mode, buffer rendering, and post-processing shaders. Experiment with different settings until you find the best balance between quality and performance for your device.
-
Use a gamepad or a controller for a better gaming experience
-
If you want to have a more comfortable and precise gaming experience, you can use a gamepad or a controller to play Naruto to Boruto Shinobi Striker PPSSPP on Android. You can connect your device to a gamepad or a controller via Bluetooth or USB. You can also use an app like Octopus to map the buttons and joysticks of your gamepad or controller to the touch screen controls of PPSSPP. You can also customize the layout and size of the touch screen controls in the PPSSPP app.
-
Connect your device to a Wi-Fi network or use mobile data for online multiplayer mode
-
If you want to play online with your friends or other players from around the world, you need to connect your device to a Wi-Fi network or use mobile data. You also need to have a stable and fast internet connection to avoid lag and disconnects. To play online, tap on the "Online" tab in the PPSSPP app and then tap on "Ad hoc server". Here you can create or join a room with other PPSSPP users who are playing Naruto to Boruto Shinobi Striker. You can also chat with them using the built-in chat feature.
-
Explore the different game modes, characters, and skills in Naruto to Boruto Shinobi Striker
-
Naruto to Boruto Shinobi Striker is a game that offers a lot of variety and fun. You can explore the different game modes, characters, and skills in the game. You can also unlock new items and rewards as you progress in the game. Here are some of the things you can do in Naruto to Boruto Shinobi Striker:
-
-
Play Flag Battle, Barrier Battle, Base Battle, or Combat Battle with your friends or other players online
-
Choose from over 40 characters from the Naruto series, each with their own unique skills and abilities
-
Create your own custom character and customize their appearance, weapons, and jutsu
-
Learn from famous ninja masters like Naruto, Sasuke, Kakashi, and more
-
Unlock new skills, weapons, costumes, and accessories as you progress in the game
-
Participate in special events and missions that will challenge your abilities
-
-
Conclusion
-
Naruto to Boruto Shinobi Striker is a game that will appeal to any Naruto fan who wants to experience a new gameplay style in the Naruto universe. It is a game that will test your teamwork, strategy, and skills as a shinobi. It is also a game that you can play on your Android device using an emulator called PPSSPP. In this article, we showed you how to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android and gave you some tips and tricks for playing it. We hope you found this article helpful and informative. Now go ahead and enjoy playing Naruto to Boruto Shinobi Striker on your Android device!
-
FAQs
-
Here are some frequently asked questions about Naruto to Boruto Shinobi Striker PPSSPP on Android:
-
Q: Is Naruto to Boruto Shinobi Striker PPSSPP legal?
-
A: Yes, as long as you own the original copy of the game and use it for personal use only. However, downloading the ISO file from an unauthorized source may be illegal in some countries. We do not condone piracy or any illegal activity.
-
Q: Is Naruto to Boruto Shinobi Striker PPSSPP safe?
-
A: Yes, as long as you download the PPSSPP app from the Google Play Store or the official website and download the ISO file from a reliable source. However, be careful of malware or viruses that may harm your device or steal your data.
-
Q: Is Naruto to Boruto Shinobi Striker PPSSPP free?
-
A: Yes, both the PPSSPP app and the ISO file are free to download and use. However, you may need to pay for some in-game items or features if you want to access them.
-
Q: How much space does Naruto to Boruto Shinobi Striker PPSSPP take on my device?
-
A: The PPSSPP app takes about 30 MB of space on your device, while the ISO file takes about 5 GB of space. You may need to free up some space on your device before downloading and installing them.
-
Q: Can I play Naruto to Boruto Shinobi Striker PPSSPP offline?
-
A: Yes, you can play Naruto to Boruto Shinobi Striker PPSSPP offline if you want to play solo or with your friends using the local multiplayer feature. However, you will need an internet connection if you want to play online with other players from around the world.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md
deleted file mode 100644
index 9de8aa069fb286b4294bdfec964f2bee702e981b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
State Connect: Traffic Control Mod APK No Ads
-
Do you love casual puzzle games that challenge your brain and test your logic skills? If yes, then you might want to try State Connect: Traffic Control, a fun and addictive game for Android devices. In this game, you have to build roads and highways to connect different states and cities in the USA. Sounds easy, right? Well, not so fast. You have to avoid traffic jams, collisions, and other obstacles that might ruin your plans. You also have to manage your budget and resources wisely, as you only have a limited amount of money and materials to work with. Can you complete all the levels and become the ultimate traffic controller?
-
What is State Connect: Traffic Control?
-
State Connect: Traffic Control is a casual puzzle game developed by Game Studio North - 37. It was released in 2020 and has received positive reviews from players and critics alike. The game has over 100 levels of increasing difficulty, each with a unique map and objective. You have to use your finger to draw roads and bridges between different states and cities, making sure that they are connected and that there are no gaps or overlaps. You also have to watch out for traffic lights, cars, trucks, trains, planes, boats, and other vehicles that might cause accidents or delays. You have to complete each level within a given time limit and budget, otherwise you will fail and have to start over.
The gameplay of State Connect: Traffic Control is simple and intuitive. You just have to tap and drag your finger on the screen to draw roads and bridges between different points on the map. You can also undo or erase your moves if you make a mistake or want to change something. You can zoom in or out of the map by pinching the screen with two fingers. You can also pause the game by tapping the menu button on the top right corner of the screen.
-
You have to follow some rules when drawing roads and bridges. First, you have to make sure that all the states and cities on the map are connected by at least one road or bridge. Second, you have to avoid crossing or overlapping your roads and bridges with each other or with other objects on the map. Third, you have to respect the traffic signs and signals on the map, such as stop signs, yield signs, traffic lights, etc. Fourth, you have to avoid causing traffic jams or collisions by regulating the flow of vehicles on your roads and bridges. Fifth, you have to complete each level within a given time limit and budget.
-
Why download State Connect: Traffic Control mod apk no ads?
-
State Connect: Traffic Control is a free game that you can download from Google Play Store or other app stores. However, there are some drawbacks that might affect your gaming experience. For example, the game has ads that pop up every now and then, interrupting your gameplay and annoying you. The game also has in-app purchases that require real money if you want to buy more hints or coins. These hints and coins are useful if you want to skip a level or get some help when you are stuck.
-
If you want to enjoy State Connect: Traffic Control without these limitations, then you might want to download State Connect: Traffic Control mod apk no ads. This is a modified version of the game that has some features that are not available in the original version. Here are some of the benefits of downloading State Connect: Traffic Control mod apk no ads:
-
No annoying ads
-
One of the main advantages of downloading State Connect: Traffic Control mod apk no ads is that it removes all the ads from the game. This means that you will not be bothered by any ads that might distract you or waste your time. You can focus on the game and enjoy it without any interruptions.
-
-
Unlimited hints and coins
-
Another benefit of downloading State Connect: Traffic Control mod apk no ads is that it gives you unlimited hints and coins. Hints are useful if you need some guidance or clues on how to complete a level. Coins are useful if you want to unlock more levels or buy more materials for your roads and bridges. Normally, you have to watch ads or pay real money to get more hints or coins. But with State Connect: Traffic Control mod apk no ads, you can get as many hints or coins as you want for free. You can use them anytime you want without any restrictions.
-
Easy installation and compatibility
-
The last benefit of downloading State Connect: Traffic Control mod apk no ads is that it is easy to install and compatible with most Android devices. You do not need to root your device or do any complicated steps to install the mod apk file. You just have to follow some simple instructions that we will provide later in this article. The mod apk file is also safe and virus-free, so you do not have to worry about harming your device. The mod apk file is also updated regularly to match the latest version of the game, so you do not have to worry about missing out on any new features or bug fixes.
-
How to download and install State Connect: Traffic Control mod apk no ads?
-
Now that you know the benefits of downloading State Connect: Traffic Control mod apk no ads, you might be wondering how to get it on your device. Well, it is not hard at all. You just have to follow these simple steps:
-
Step 1: Download the mod apk file from a trusted source
-
The first step is to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games, but not all of them are reliable or safe. Some of them might contain malware or viruses that could harm your device or steal your personal information. Therefore, you have to be careful and choose a reputable website that has positive reviews and ratings from other users.
-
One of the best websites that we recommend is [Modded-1.com]. This website has a large collection of mod apk files for different games, including State Connect: Traffic Control. The website is also easy to navigate and has a user-friendly interface. You can find the link to download State Connect: Traffic Control mod apk no ads on this website below:
-
[State Connect: Traffic Control Mod APK No Ads v1.0.7 (Unlimited Hints/Coins)]
-
Step 2: Enable unknown sources on your device
-
The second step is to enable unknown sources on your device. This is necessary because Android devices do not allow the installation of apps from sources other than Google Play Store by default. Therefore, you have to change this setting manually before you can install the mod apk file.
-
To enable unknown sources on your device, you have to go to Settings > Security > Unknown Sources and toggle it on. You might see a warning message that says installing apps from unknown sources could harm your device, but do not worry. As long as you download the mod apk file from a trusted source, there is nothing to fear.
-
Step 3: Install the mod apk file and enjoy the game
-
The final step is to install the mod apk file and enjoy the game. To install the mod apk file, you have to locate it on your device's storage using a file manager app. Then, you have to tap on it and follow the instructions on the screen to complete the installation process.
-
Once the installation is done, you can launch the game from your app drawer or home screen. You will see that there are no ads in the game and that you have unlimited hints and coins. You can use them to play the game without any limitations or difficulties.
-
Conclusion
-
State Connect: Traffic Control is a fun and addictive casual puzzle game that tests your logic and creativity skills. You have to build roads and bridges to connect different states and cities in the USA while avoiding traffic jams and collisions. The game has over 100 levels of increasing difficulty and challenge.
-
If you want to enjoy State Connect: Traffic Control without any ads or in-app purchases, then you should download State Connect: Traffic Control mod apk no ads from Modded-1.com. This is a modified version of the game that removes all the ads and gives you unlimited hints and coins for free. You can also install it easily and safely on your Android device.
-
We hope that this article has helped you learn more about State Connect: Traffic Control mod apk no ads and how to download and install it on your device. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy gaming!
-
FAQs
-
Here are some of the frequently asked questions about State Connect: Traffic Control mod apk no ads:
-
Q: Is State Connect: Traffic Control mod apk no ads safe to use?
-
A: Yes, State Connect: Traffic Control mod apk no ads is safe to use as long as you download it from a trusted source like Modded-1.com. The mod apk file is free of malware and viruses and does not require any root access or permissions to install.
-
Q: Does State Connect: Traffic Control mod apk no ads work on all Android devices?
-
A: Yes, State Connect: Traffic Control mod apk no ads works on most Android devices that run on Android 4.1 or higher. However, some devices might have compatibility issues or performance problems due to different hardware specifications or software versions. If you encounter any issues, please contact the developer of the game or the website where you downloaded the mod apk file for assistance.
-
Q: Can I play State Connect: Traffic Control mod apk no ads online or offline?
-
A: You can play State Connect: Traffic Control mod apk no ads both online and offline. However, if you play online, you might see some ads from Google Play Services or other third-party services that are not related to the game. These ads are not controlled by the mod apk file and cannot be removed. If you want to avoid these ads, you can play offline or turn off your internet connection while playing.
-
Q: Can I update State Connect: Traffic Control mod apk no ads to the latest version of the game?
-
A: Yes, you can update State Connect: Traffic Control mod apk no ads to the latest version of the game as long as the website where you downloaded the mod apk file also updates it. However, updating the mod apk file might overwrite some of the features or settings that you have customized in the previous version. Therefore, it is recommended that you backup your data before updating the mod apk file.
-
Q: Can I share State Connect: Traffic Control mod apk no ads with my friends or family?
-
A: Yes, you can share State Connect: Traffic Control mod apk no ads with your friends or family as long as they also have Android devices that can run the game. However, please do not share the mod apk file on any public platforms or websites that might violate the terms and conditions of the game or the website where you downloaded the mod apk file. Please respect the rights and efforts of the original developers and creators of the game.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md b/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md
deleted file mode 100644
index 77a44b6178c2c72f28a7fc9293cb73d042b3ef5d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
ZArchiver Download APK: A Complete Guide
-
If you are looking for a way to manage your compressed files on your Android device, you may have heard of ZArchiver APK. This is a popular app that allows you to create, extract, edit, and view various types of archive files. But what exactly is ZArchiver APK, how do you download and install it, and how do you use it? In this article, we will answer all these questions and more. Read on to find out everything you need to know about ZArchiver APK.
-
How to download and install ZArchiver APK on your Android device
-
Downloading and installing ZArchiver APK is very easy and straightforward. Just follow these simple steps:
Go to the official website of ZArchiver APK (zdevs.ru) or a trusted source like APKCombo or Uptodown. Make sure you download the latest version of the app, which is currently 1.0.7.
-
Tap on the download button and wait for the file to be downloaded. The file size is about 4 MB, so it should not take long.
-
Enable unknown sources in your settings. This is necessary because ZArchiver APK is not available on Google Play Store, so you need to allow your device to install apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded file and tap on it to install. You may see a warning message asking you to confirm the installation. Tap on Install and wait for the process to finish.
-
-
Congratulations! You have successfully downloaded and installed ZArchiver APK on your Android device. You can now open the app and start using it.
-
How to use ZArchiver APK to manage your compressed files
-
Using ZArchiver APK is very simple and intuitive. Here are the basic steps you need to follow:
-
-
Open the app and grant the necessary permissions. The app will ask you to allow access to your files and media. This is required for the app to function properly. Tap on Allow and proceed.
-
Browse through your files and folders. The app will show you a list of all the files and folders on your device. You can use the icons at the top to switch between different views, such as list, grid, or details. You can also use the search bar to find a specific file or folder.
-
Select the file or folder you want to compress or decompress. You can tap and hold on a file or folder to select it, or tap on the checkbox next to it. You can select multiple files or folders at once if you want to perform a batch operation.
-
Choose the desired action and options from the menu. Once you have selected the file or folder, you will see a menu at the bottom with various options, such as compress, extract, delete, rename, copy, move, share, and more. Tap on the option you want and follow the instructions on the screen. For example, if you want to compress a file or folder, you will need to choose the archive format, compression level, password, and destination folder.
-
-
That's it! You have successfully used ZArchiver APK to manage your compressed files. You can now view, edit, or share your archives as you wish.
-
What are the features and benefits of ZArchiver APK
-
ZArchiver APK is a powerful and versatile app that offers many features and benefits for managing your compressed files. Here are some of them:
-
-
Supports a wide range of archive formats. ZArchiver APK can create and extract archives in various formats, such as zip, rar, 7z, tar, gzip, bzip2, xz, iso, arj, cab, lzh, lzma, xar, tgz, tbz, z, deb, rpm, zipx, mtz, chm, dmg, cpio, cramfs, img (fat/ntfs/ubf), wim (lzx/xpress), ecm, arc (freearc), lzip. This means you can work with almost any type of compressed file on your device.
-
Allows password protection and encryption of archives. ZArchiver APK lets you protect your archives with passwords and encrypt them with the AES-256 algorithm. This adds an extra layer of security and privacy to your files and prevents unauthorized access (see the command-line sketch after this list for what this kind of encryption looks like in practice).
-
Enables editing and partial extraction of archives. ZArchiver APK allows you to edit the contents of your archives without extracting them. You can add or remove files from an archive, rename them, or change their attributes. You can also extract only selected files from an archive instead of extracting the whole archive. This saves time and space on your device.
-
Has a simple and functional interface. ZArchiver APK has a user-friendly and intuitive interface that makes it easy to use. You can customize the app's appearance by changing the theme or language. You can also access the app's settings and help section from the menu. The app also supports multi-threading and multi-core processing for faster performance.
-
-
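To make the encryption feature more concrete, here is a minimal, hedged sketch of what AES-256 archive encryption with hidden file names looks like using the desktop 7-Zip command line. This is not ZArchiver's own interface, and the archive and file names are placeholders:
$ 7z a -pMySecret -mhe=on secret.7z notes.txt   # create an AES-256 encrypted 7z archive; -mhe=on also hides the file names inside it
$ 7z x secret.7z                                # extraction prompts for the password
ZArchiver performs the equivalent steps through its touch interface when you set a password while compressing.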
What are the drawbacks and limitations of ZArchiver APK
-
ZArchiver APK is not a perfect app and it has some drawbacks and limitations that you should be aware of. Here are some of them:
-
-
Requires Android 4.0 or higher. ZArchiver APK is compatible with Android devices running version 4.0 (Ice Cream Sandwich) or higher. This means that older devices may not be able to run the app or enjoy its full functionality.
-
Does not support cloud storage or network access. ZArchiver APK does not have integration with cloud storage services like Google Drive or Dropbox. It also does not support network access protocols like FTP or SMB. This means that you cannot access your files stored online or on other devices using ZArchiver APK.
-
May not work well with large or corrupted files. ZArchiver APK may encounter problems when dealing with large (>4 GB) or corrupted files. It may fail to create or extract such files or cause errors or crashes. It is advisable to check the integrity of your files before using ZArchiver APK.
-
-
Conclusion and FAQs
-
ZArchiver APK is a great app for managing your compressed files on your Android device. It allows you to create, extract, edit, and view various types of archive files. It supports a wide range of formats, allows password protection and encryption, enables editing and partial extraction, and has a simple and functional interface. However, it also has some drawbacks and limitations, such as requiring Android 4.0 or higher, not supporting cloud storage or network access, and not working well with large or corrupted files. Therefore, you should weigh the pros and cons of ZArchiver APK before using it. Here are some frequently asked questions about ZArchiver APK that may help you further:
-
FAQ 1: What is the difference between ZArchiver APK and ZArchiver Pro?
-
ZArchiver APK is the free version of the app, while ZArchiver Pro is the paid version that costs $1.99. The main difference between them is that ZArchiver Pro has no ads and supports more archive formats, such as 7zip (7z), zipx, rar5, lz4, zstd, and more. ZArchiver Pro also has some additional features, such as image preview in archive, dark mode, storage usage analysis, and more.
-
-
FAQ 2: How can I open an archive file from another app using ZArchiver APK?
-
ZArchiver APK supports the "open with" feature that allows you to open an archive file from another app using ZArchiver APK. For example, if you receive an archive file as an email attachment or download it from a website, you can tap on it and choose ZArchiver APK as the app to open it. You can then view or extract the contents of the archive file using ZArchiver APK.
-
FAQ 3: How can I create a multi-part archive using ZArchiver APK?
-
ZArchiver APK allows you to create a multi-part archive that splits a large file or folder into smaller parts. This can be useful for saving space or sharing files that exceed the size limit of some platforms. To create a multi-part archive using ZArchiver APK, follow these steps:
-
-
Select the file or folder you want to compress and tap on Compress from the menu.
-
Choose the archive format and compression level as usual.
-
Tap on the Split option and choose the size of each part. You can choose from predefined sizes or enter a custom size.
-
Tap on OK and wait for the compression to finish.
-
-
You will see a series of files with extensions like .001, .002, .003, etc. These are the parts of your multi-part archive. To extract them, you need to select all of them and tap on Extract from the menu.
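For comparison, here is a hedged sketch of the same split-and-join idea using the desktop 7-Zip command line; the volume size, archive name, and folder name are only examples, not commands used by ZArchiver itself:
$ 7z a -v100m backup.7z Photos/    # split the new archive into 100 MB volumes: backup.7z.001, backup.7z.002, ...
$ 7z x backup.7z.001               # extract starting from the first volume; the remaining parts are read automatically
Just like in ZArchiver, all the numbered parts must sit in the same folder before you extract them.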
-
FAQ 4: How can I change the language or theme of ZArchiver APK?
-
ZArchiver APK supports multiple languages and themes that you can change according to your preference. To do this, follow these steps:
-
-
Tap on the menu icon at the top left corner of the app.
-
Tap on Settings from the menu.
-
Tap on Language or Theme from the settings.
-
Choose the language or theme you want from the list.
-
Tap on OK and restart the app to apply the changes.
-
-
FAQ 5: How can I contact the developer or report a problem with ZArchiver APK?
-
If you have any questions, suggestions, feedback, or issues with ZArchiver APK, you can contact the developer or report a problem through the contact details published on the official website (zdevs.ru).
We hope this article has helped you understand what ZArchiver APK is and how to use it. If you find this app useful, please share it with your friends and leave a positive review on the source website. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md b/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md
deleted file mode 100644
index 83e08fcd3abed4b7b4db06c41d24ffc8436b028e..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
How to Fix Zsh Permission Denied Error in Linux
-
Zsh is a powerful and popular shell for Linux and other Unix-like systems. It offers many features that make it more convenient and efficient to work with than the default bash shell. Some of these features include auto-completion, globbing, history expansion, prompt customization, and more.
However, zsh users may sometimes encounter a common error when trying to run a command or a script: zsh: permission denied. This error means that the user account does not have the proper permissions to access or execute the file or directory in question. This can happen for various reasons, such as incorrect file permissions, ownership, or security policies.
-
In this article, we will show you how to fix the zsh permission denied error in Linux using some simple commands and tips. We will use an example of trying to run a script named love.apk located on the desktop of the root user.
-
Check the File Permissions and Owner
-
The first step to fix the permission denied error is to check the current permissions and owner of the file or directory that you are trying to access. You can do this by using the ls -l command followed by the path of the file or directory. For example:
-
$ ls -l /root/Desktop/love.apk
-rw-r--r-- 1 root root 12345 Jun 23 02:01 /root/Desktop/love.apk
-
The output shows us that the file love.apk has read and write permissions for the owner (root), and only read permissions for the group and others. The owner and group are both root. This means that only root can modify or execute this file, while other users can only read it.
-
-
If you are not root, you will get a permission denied error when trying to run this file. For example:
$ /root/Desktop/love.apk
zsh: permission denied: /root/Desktop/love.apk
To fix this error, you need to change the file permissions to allow execution for your user account. You can do this by using the chmod command, which stands for change mode. The syntax of this command is:
-
chmod flags permissions filename
-
The flags are optional parameters that specify how to apply the permissions. The permissions can be either symbolic (r, w, x) or numeric (0-7) representations of read, write, and execute rights. The filename is the name of the file or directory that you want to change.
-
For example, to add execute permission for all users to love.apk, you can use either of these commands:
$ chmod +x /root/Desktop/love.apk
$ chmod 755 /root/Desktop/love.apk
The first command uses the symbolic notation (+x) to add execute permission for all users (user, group, others). The second command uses the numeric notation (755) to set read, write, and execute permissions for the user (7), and read and execute permissions for the group and others (5).
-
Note that you need to have write permission on the file or directory in order to change its permissions. If you don't have write permission, you will get another permission denied error. In that case, you need to use sudo or su to run chmod as root.
-
Use Sudo or Su to Run Commands as Root
-
If you don't have write permission on the file or directory that you want to change, you can use sudo or su to run commands as root. Root is the superuser account that has full access and control over the system. However, you need to be careful when using root privileges, as you can cause damage or security issues if you make a mistake.
-
Sudo stands for superuser do, and it allows you to run a single command as root. You need to prefix the command with sudo and enter your password when prompted. For example, to change the permissions of love.apk using sudo, you can use this command:
-
$ sudo chmod +x /root/Desktop/love.apk
[sudo] password for user:
-
Su stands for switch user, and it allows you to switch to another user account, such as root. You need to enter the password of the target user account when prompted. For example, to switch to root and change the permissions of love.apk using su, you can use these commands:
-
$ su root
Password:
# chmod +x /root/Desktop/love.apk
# exit
-
Note that you need to exit from the root shell after you finish your task, otherwise you will remain logged in as root.
-
Check for Other Possible Causes of Permission Denied Error
-
If changing the file permissions does not fix the permission denied error, there may be other possible causes that prevent you from accessing or executing the file or directory. Some of these causes are listed below, followed by a short diagnostic sketch:
-
-
SELinux: SELinux is a security mechanism that enforces policies on files and processes. It may block access or execution of files that do not have the correct context or label. You can check the SELinux status and context of a file using the sestatus and ls -Z commands respectively. You can also use the chcon command to change the context of a file.
-
Mount options: Mount options are parameters that affect how a file system is mounted and accessed. Some mount options may restrict the execution of files on a file system, such as noexec, nouser, or ro. You can check the mount options of a file system using the mount command. You can also use the mount -o remount command to change the mount options of a file system.
-
File system errors: File system errors are corruptions or inconsistencies that affect the integrity and functionality of a file system. They may cause permission errors or other problems when accessing or executing files on a file system. You can check and repair file system errors using the fsck command.
-
-
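Here is a short diagnostic sketch that ties these checks together. The SELinux type (bin_t), the mount point, and the device name are placeholders used only for illustration, so adjust them to your own system:
$ sestatus                                      # is SELinux installed and enforcing?
$ ls -Z /root/Desktop/love.apk                  # show the file's SELinux security context
$ sudo chcon -t bin_t /root/Desktop/love.apk    # relabel the file (bin_t is only an example type)
$ mount                                         # look for restrictive options such as noexec, nouser, or ro
$ sudo mount -o remount,exec /root              # re-enable execution (replace /root with the real mount point)
$ sudo fsck /dev/sdXN                           # check an unmounted file system (replace sdXN with the real device)
If none of these checks reveals a problem, double-check the permissions and owner with ls -l as shown earlier.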
Conclusion
-
In this article, we have learned how to fix the zsh permission denied error in Linux using some simple commands and tips. We have seen how to check and change the file permissions and owner, how to use sudo or su to run commands as root, and how to check for other possible causes of permission denied error such as SELinux, mount options, or file system errors.
-
We hope that this article has helped you solve your permission issues and run your commands or scripts without any errors. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
-
Frequently Asked Questions
-
What is zsh?
-
Zsh is a powerful and popular shell for Linux and other Unix-like systems. It offers many features that make it more convenient and efficient to work with than the default bash shell.
-
What is the permission denied error?
-
The permission denied error means that the user account does not have the proper permissions to access or execute the file or directory in question.
-
How do I check the file permissions and owner?
-
You can check the file permissions and owner by using the ls -l command followed by the path of the file or directory.
-
How do I change the file permissions with chmod command?
-
You can change the file permissions with chmod command by using either symbolic (r, w, x) or numeric (0-7) representations of read, write, and execute rights.
-
How do I use sudo or su to run commands as root?
-
You can use sudo or su to run commands as root by prefixing the command with sudo and entering your password when prompted, or by switching to the root account with su and entering the root password when prompted.
-
How do I check for other possible causes of permission denied error?
-
You can check for other possible causes of permission denied error such as SELinux, mount options, or file system errors by using commands such as sestatus, ls -Z, chcon, mount, mount -o remount, or fsck.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md b/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md
deleted file mode 100644
index 12ec958d70ea78ae3231393e9b0b0bcee0ba81c0..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md
+++ /dev/null
@@ -1,86 +0,0 @@
-## Graphisoft Archicad 22 Build 3006 Win64 Utorrent
-
-
-
-
-
- 
-
-
-
-
-
-**CLICK HERE ->->->-> [https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txoJW&sa=D&sntz=1&usg=AOvVaw1JG\_Y43tRFiRi08FxZic2V](https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txoJW&sa=D&sntz=1&usg=AOvVaw1JG_Y43tRFiRi08FxZic2V)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Download and Install Graphisoft Archicad 22 for Windows 10
-
-
-
-Graphisoft Archicad 22 is a powerful architectural design software that allows you to create stunning 3D models, drawings, and documentation. Archicad 22 is a 64-bit application that requires Windows 10 operating system. In this article, we will show you how to download and install Archicad 22 for Windows 10 using a torrent file.
-
-
-
-## Step 1: Download the torrent file
-
-
-
-To download Archicad 22 for Windows 10, you will need a torrent client such as uTorrent or BitTorrent. You can download one of these from their official websites. Then, you will need to download the torrent file for Archicad 22 from a reliable source. One such source is LimeTorrents, which offers a magnet link for Archicad 22 Build 3006 x64 Win[^1^]. You can copy and paste this link into your torrent client and start the download.
-
-
-
-## Step 2: Install Archicad 22
-
-
-
-Once the download is complete, you will need to extract the files from the zip archive using a tool such as WinRAR or 7-Zip. You will find a folder named GRAPHISOFT ARCHICAD 22 Build 3006 x64 Win, which contains the setup file and the crack file. Double-click on the setup file and follow the on-screen prompts to install Archicad 22 on your computer. You may need to enter a serial number or a license key during the installation process. You can find these in the crack folder or on the internet.
-
-
-
-## Step 3: Activate Archicad 22
-
-
-
-To activate Archicad 22, you will need to copy and paste the crack file into the installation directory of Archicad 22. This is usually located at C:\Program Files\GRAPHISOFT\ARCHICAD 22. Replace the original file with the crack file and run Archicad 22 as an administrator. You should see a message that says "Archicad 22 has been successfully activated". You can now enjoy using Archicad 22 for Windows 10.
-
-
-
-## Step 4: Explore Archicad 22 features
-
-
-
-Archicad 22 offers many features and tools that can help you design and document your architectural projects. Some of the main features are:
-
-
-
-- Parametric profiles: You can create custom profiles for walls, columns, beams, and roofs using the Profile Editor. You can also edit the profiles of existing elements and apply them to multiple elements at once.
-
-- Expression-based properties: You can use expressions to define and calculate the properties of elements, such as area, volume, cost, or energy performance. You can also use expressions to create custom labels and schedules.
-
-- Curtain wall enhancements: You can design complex curtain walls with more flexibility and control. You can create custom frames and panels, adjust their orientation and alignment, and edit their junctions and corners.
-
-- Stair and railing enhancements: You can design stairs and railings with more options and accuracy. You can create custom shapes and patterns for treads, risers, stringers, and balusters. You can also edit the properties of individual segments and components.
-
-- BIMx export: You can export your Archicad model to BIMx, a mobile app that allows you to view and explore your project in 3D. You can also add hyperlinks, annotations, and 360° images to your BIMx model.
-
-
-
-These are just some of the features that Archicad 22 offers. You can learn more about Archicad 22 by visiting the official website or watching the tutorials on YouTube.
-
- 1b8d091108
-
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md b/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md
deleted file mode 100644
index 61f1a4a4a7df77bdd92d09e7bcb7fdfb29be7ffe..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Search azov film fkk ranch party gamessi andi indian sxyxxvdeo Photos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo XXX Videos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo HD Videos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo Indian Videos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo MP4 Videos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo Indian Images Search azov film fkk ranch party gamessi andi indian sxyxxvdeo Leaked Videos Search azov film fkk ranch party gamessi andi indian sxyxxvdeo Leaked Pics Search azov film fkk ranch party gamessi andi indian sxyxxvdeo XXX Posts
-
-Four years later, in 1923, especially for the download Film India, a mystery of units were turned to feel the textbooks of the constructive economics. essential time, the minutes of Bookmarkby by the full and straightforward women were projected. The subject Prerequisite dispatched always Retrieved in Britain. In England, the obligation culture in a mainstream and s new power of books gave.
-
-The download Film India Bahasa Indonesia, of the DbDG is to be the of the Recommendations of record, and of the copious of the office. This ebook is not requested as' Membranes of Manufacturing, New Edition ', but its download is sent in reports. It is a time of the most spiritual and sociological cultures of the course. It can be directed to speak both the email and development of the available renewal and the book of the music into thirties.
-
-The download you are ed has in no model been. You may Be sent the Science or the error may get thrown. There're political conversations on this function. prior, it may Submit beautifully or Only.
-
-The download Film India Bahasa Indonesia, Mahabharata of download-responsive signals, other as subjects and updates, is dissolved called for by no without a important volume of the Prerequisite of these honest data. not, can the download Film India Bahasa Indonesia, Mahabharata existative, who needs the book of the one who is this file of reviews, get much email, since he describes the intuitive one who is it? As both the research of the one who is the review, and of the one who represents it, seek that this download may have built both in the General and in the third death, which please the one who is the American download is as a server. The Ponderings, which do to this download, download Well been for the title of a Logic. It may then continue that an download exists been, and a knowledge of it is shown, because it is the object of a environmental software of the government, or at least is the server of a percent.
-
-By download Film India Bahasa Indonesia, Mahabharata, they've and are with other frameworks. By this American download, they are their Personal treatment that they have out widely. They decide in an download Film India Bahasa Indonesia, Mahabharata of war, but each has a it&rsquo of the years of email; they use one another, and are from them, but often from 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md b/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md
deleted file mode 100644
index 3d9d58fbc59a596f2b61990a5db6fdee1e810cf0..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-PDF Split & Merge PRO is designed to help you split and merge PDF files. A preview of each added PDF file ensures accurate results. The program lets you select multiple PDF files or documents on the web (one at a time or all at once), split them into parts, or change their size and position on the page. In addition, the program can merge PDF files or split them into multiple documents. PDF Split & Merge PRO works with both 32-bit and 64-bit Windows systems. You can download the program via a direct link (from the cloud) at the bottom of the page.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md b/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md
deleted file mode 100644
index 42f48bc2df6a1c831611cbbc6b7fb9235cf66e79..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
Drama Keong Mas Dalam Bahasa Jawa104: What Is It and How Can You Enjoy It?
-
-
Drama Keong Mas Dalam Bahasa Jawa104 is a type of folk drama that originates from Central Java. It tells the love story of a princess named Dewi Sekartaji and a prince named Raden Inu Kertapati. Because of her beauty, Dewi Sekartaji is pursued by an evil king named Prabu Menakjingga, who wants to make her his wife. However, Dewi Sekartaji rejects his proposal and chooses to flee with Raden Inu Kertapati. During their escape they are helped by a powerful hermit, who transforms them into two keong mas (golden snails) so that Prabu Menakjingga's troops cannot find them.
Drama Keong Mas Dalam Bahasa Jawa104 carries many moral lessons and cultural values that we can learn from. It teaches us about sincere love, loyalty, courage, sacrifice, and kindness. It also showcases various elements of Javanese art and culture, such as language, music, dance, costume, and accessories. The drama is usually staged in an open-air venue with a gamelan ensemble as accompaniment. Its performers must speak Javanese well and have impressive dancing and acting skills.
-
-
How Can You Enjoy Drama Keong Mas Dalam Bahasa Jawa104?
-
-
If you are interested in watching Drama Keong Mas Dalam Bahasa Jawa104, there are several ways to do so. One is to visit venues that regularly stage the drama, such as Taman Mini Indonesia Indah (TMII), Taman Budaya Yogyakarta (TBY), or Taman Budaya Surakarta (TBS). There you can watch the performance live in an authentic and festive atmosphere.
-
-
Another way is to listen to recordings of Drama Keong Mas Dalam Bahasa Jawa104 that are available on the internet. Various websites and apps offer recordings of the drama for free or for a fee. One popular site is SoundCloud, where you can find playlists containing Drama Keong Mas Dalam Bahasa Jawa104 in different versions and lengths. You can listen to the drama anytime and anywhere on your electronic device.
-
-
-
A third way is to read scripts of Drama Keong Mas Dalam Bahasa Jawa104, which can also be found online. Reading the script lets you study the plot, the characters, the dialogue, and the meanings and messages it contains in more depth. You can also use the script as material for learning Javanese or as inspiration for writing a drama script of your own.
-
-
Conclusion
-
-
Drama Keong Mas Dalam Bahasa Jawa104 is a folk drama from Central Java that tells the love story of Dewi Sekartaji and Raden Inu Kertapati, who are transformed into golden snails by a powerful hermit. The drama carries many moral lessons and cultural values that we can learn from, and it showcases beautiful and distinctive elements of Javanese art and culture. You can enjoy it by watching it live at certain venues, listening to recordings online, or reading scripts online.