diff --git a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/aicolors/__init__.py b/spaces/101-5/gpt4free/g4f/.v1/gpt4free/aicolors/__init__.py
deleted file mode 100644
index a69276b81076c8a25c30ed9c8ab45e09fb20aabf..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/aicolors/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import fake_useragent
-import requests
-import json
-from .typings import AiColorsResponse
-
-
-class Completion:
- @staticmethod
- def create(
- query: str = "",
- ) -> AiColorsResponse:
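-        # Browser-like request headers with a randomized user agent.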
- headers = {
- "authority": "jsuifmbqefnxytqwmaoy.functions.supabase.co",
- "accept": "*/*",
- "accept-language": "en-US,en;q=0.5",
- "cache-control": "no-cache",
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "same-origin",
- "user-agent": fake_useragent.UserAgent().random,
- }
-
- json_data = {"query": query}
-
- url = "https://jsuifmbqefnxytqwmaoy.functions.supabase.co/chatgpt"
- request = requests.post(url, headers=headers, json=json_data, timeout=30)
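-        # The API nests a JSON string under text.content; unwrap and parse it.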
- data = request.json().get("text").get("content")
- json_data = json.loads(data.replace("\n ", ""))
-
- return AiColorsResponse(**json_data)
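For context, a minimal sketch of how this removed client was called. The fields carried by `AiColorsResponse` are an assumption here, since `typings.py` is outside this diff:

```python
# Hypothetical usage of the removed aicolors client.
# AiColorsResponse field names are assumed, not confirmed by this diff.
from gpt4free.aicolors import Completion

res = Completion.create(query="warm autumn palette for a blog header")
print(res)  # an AiColorsResponse built from the API's parsed JSON
```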
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fl Depth Of Field Plugin For After Effects Free A Must-Have for Any Motion Designer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fl Depth Of Field Plugin For After Effects Free A Must-Have for Any Motion Designer.md
deleted file mode 100644
index 29cade5c0262ed757a5493478cb4d5c9eda31e4f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fl Depth Of Field Plugin For After Effects Free A Must-Have for Any Motion Designer.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Fl Depth Of Field Plugin For After Effects: A Review
-
If you are looking for a way to add realistic and cinematic depth of field effects to your 3D footage in After Effects, you may want to check out Fl Depth Of Field Plugin. This plugin is designed to move depth of field and out-of-focus generation to post-production, saving you the time and resources of rendering them in your 3D app. In this article, we will review the features, pros, and cons of Fl Depth Of Field Plugin and see how it can help you create stunning visuals.
-
Introduction
-
Fl Depth Of Field Plugin is a plugin for Adobe After Effects that allows you to create high-quality camera blurs with the flexibility of 2D post-processing. It is developed by Frischluft, a company that specializes in lens effects for computer graphics. According to their website, "The key aspect during the development of these filters was to match the real thing as good as possible."
Depth of field is a phenomenon that occurs in real optical devices, such as cameras, where objects that are closer or farther away from the focal point appear blurred, while objects at the focal point appear sharp. This effect is used in photography and film as a style element, to draw attention to certain subjects, create a sense of depth and realism, or evoke a mood or atmosphere.
-
However, generating depth of field effects in computer graphics can be challenging and time-consuming, as it usually requires ray tracing techniques that increase rendering times considerably. Fl Depth Of Field Plugin solves this problem by generating depth of field effects fast as a post-process, using a depth buffer for its calculations. It can also create out of focus effects without depth information, using a constant blur radius over the entire image.
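To make the idea concrete, here is a toy sketch (not Frischluft's actual algorithm) of a post-process blur whose per-pixel radius is driven by a depth buffer; it assumes an H x W x 3 image and a matching H x W depth array:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_depth_of_field(image, depth, focal_point, strength=4.0, layers=8):
    """Naive post-process DOF: pick a blur level per pixel from how far
    its depth sits from the focal point. Illustrative only."""
    blur = strength * np.abs(depth - focal_point)   # desired radius per pixel
    levels = np.linspace(0.0, blur.max(), layers)
    out = np.zeros_like(image, dtype=float)
    for lo, hi in zip(levels[:-1], levels[1:]):
        # Blur the whole frame at this level, then keep only the pixels
        # whose desired radius falls in this band.
        layer = gaussian_filter(image.astype(float), sigma=(hi, hi, 0))
        mask = (blur >= lo) & (blur <= hi)
        out[mask] = layer[mask]
    return out.astype(image.dtype)
```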
-
Fl Depth Of Field Plugin is not the only plugin that offers depth of field effects for After Effects. There are other plugins, such as DOF PRO, which also claim to provide photorealistic depth of field effects. However, Fl Depth Of Field Plugin has some advantages over other plugins, such as its ability to simulate different lens apertures, its highlights and brightness boost features, and its background distortion option.
-
Features of Fl Depth Of Field Plugin
-
Depth of Field
-
The main feature of Fl Depth Of Field Plugin is its depth of field effect, which blurs pixels based on their depth value. To use this effect, you need a depth buffer for your 3D footage, which is an image that stores the distance information for each pixel. You can either render a depth buffer in your 3D app or use a plugin like ZbornToy to generate one in After Effects.
-
Once you have a depth buffer, you can apply Fl Depth Of Field Plugin to your footage layer and adjust the parameters according to your needs. You can control the focal point, the focal range, the blur amount, the blur quality, and more. You can also adjust the lens aperture shape and size, which greatly defines the look of the blur.
-
The lens aperture is the opening in the camera lens that controls how much light enters the camera. The shape and size of the aperture affect how the out-of-focus areas look in an image. For example, a circular aperture produces circular bokeh (the aesthetic quality of the blur), while a hexagonal aperture produces hexagonal bokeh.
-
Fl Depth Of Field Plugin allows you to simulate different kinds of real cameras by altering the lens aperture shape and size. You can choose from several presets or create your own custom shape using bezier curves. You can also animate the aperture shape and size over time for dynamic effects.
-
Out of Focus
-
The other feature of Fl Depth Of Field Plugin is its out of focus effect, which creates a blur with a constant radius over the entire image. This effect does not require a depth buffer and can be used as a complement or an alternative to the depth of field effect.
-
The out of focus effect can be useful when you want to create a shallow depth of field look without having accurate depth information or when you want to add some extra blur to your footage for artistic reasons. You can control the blur amount, quality, threshold, gamma correction, and more.
-
One unique feature of the out of focus effect is that it allows you to use a custom image as a lens aperture instead of generating one. This means that you can use any image layer in your composition as an aperture texture and create interesting shapes and patterns in your blur. For example, you can use an image of a star or a heart as an aperture texture and create star-shaped or heart-shaped bokeh.
-
Another unique feature of the out of focus effect is that it offers background distortion for semi-transparent areas. This means that when you look through a blurred object in your footage, such as glass or smoke, the background behind it will be distorted due to refraction. This effect is subtle but adds realism and believability to your comp.
-
Highlights and Brightness Boost
-
A common characteristic of real camera blurs is that very bright image parts become predominant when out of focus. This is especially noticeable in highlights or light sources that appear as bright spots or discs in blurred areas. However, most graphic formats cut off bright parts above a certain threshold, resulting in dull or flat-looking blurs.
-
To solve this problem, Fl Depth Of Field Plugin offers two features: highlights and brightness boost. The highlights feature allows you to simulate realistic highlights in out-of-focus areas by selecting parts that are supposed to be brighter than normal and giving them an extra boost. You can control the threshold, amount, saturation, tint color, blend mode, and more.
-
The brightness boost feature allows you to select parts that are supposed to be brighter than normal but are not necessarily highlights (such as reflections or glows) and give them an extra boost as well. You can control the threshold, amount, gamma correction, saturation limit, blend mode, and more.
-
Pros and Cons of Fl Depth Of Field Plugin
-
Pros
-
-
It is fast and easy to use compared to rendering depth-of-field effects in 3D apps.
-
It produces high-quality and realistic results that match real cameras.
-
It offers flexible and customizable options for different styles and scenarios.
-
-
Cons
-
-
It requires a depth buffer for depth-of-field effect which may not be available or accurate for some footage.
-
It may not work well with motion blur or complex scenes with overlapping objects or transparency.
-
-
Conclusion
-
In conclusion, Fl Depth Of Field Plugin For After Effects is a powerful plugin that allows you to create realistic and cinematic depth-of-field effects fast as a post-process. It has many features that make it stand out from other plugins, such as its ability to simulate different lens apertures, its highlights and brightness boost features, and its background distortion option. It is suitable for anyone who wants to add some extra polish and realism to their 3D footage without spending too much time or resources on rendering. If you are interested in trying out Fl Depth Of Field Plugin, you can download it from their official website and see for yourself how it can improve your visuals.
-
FAQs
-
What is the difference between depth of field and out of focus effects?
-
Depth of field effects blur pixels based on their distance from the focal point, creating a realistic and cinematic look. Out of focus effects blur pixels with a constant radius over the entire image, creating a simple and artistic look.
-
How can I get a depth buffer for my 3D footage?
-
You can either render a depth buffer in your 3D app or use a plugin like ZbornToy to generate one in After Effects.
-
How can I create custom lens apertures for my blur effects?
-
You can either use the built-in presets or create your own custom shape using bezier curves in the depth of field effect. You can also use any image layer in your composition as an aperture texture in the out of focus effect.
-
How can I simulate realistic highlights in out of focus areas?
-
You can use the highlights and brightness boost features to select and enhance bright parts of the image that are supposed to be brighter than normal when being out of focus.
-
How can I add background distortion for semi-transparent areas?
-
You can use the background distortion option in the out of focus effect to distort the background behind blurred objects such as glass or smoke.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chemdraw 12 Crack ((NEW)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Chemdraw 12 Crack ((NEW)).md
deleted file mode 100644
index b965131b720c942f6bfe50dc522c3a7596bb6c27..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Chemdraw 12 Crack ((NEW)).md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-MDL MOLD CHEMICS
-
----------------
-
-The MDL MOLD CHEMICS software offers a one-step or two-step approach to molecular modeling. There are two main classes of software; one is CASMI, Carbon Multi-purpose Individualized MOLD CHEMICS (CASMI/MOLD CHEMICS). CASMI is one-step software that models molecules with MOLD CHEMICS and generates them. This one-step approach is very time-efficient, as there is no need for separate computational chemistry and a user enters the chemical name and atom types of the target molecule only once. This time saving is the most important feature of CASMI/MOLD CHEMICS.
-
-The other class is called CASMI/MOLD CHEMICS integration. CASMI/MOLD CHEMICS integration is a two-step process where CASMI or CASMI/MOLD CHEMICS performs a partial quantum mechanical calculation and the output can be fed to MOLD CHEMICS for the rest of the calculation. CASMI/MOLD CHEMICS integration is more time-consuming than CASMI.
-
-For CASMI, this software is available in three different flavors: Basic, MOLPRO, and GASP. GASP is now only available as CASMI, whereas MOLPRO and Basic are both available in CASMI/MOLD CHEMICS integration as well as CASMI.
-
-In order to simplify the process of quantum mechanical calculations with CASMI/MOLD CHEMICS, the software provides a user interface and the option of automatic input/output to a common directory or a database (PostgreSQL \[PC\]/SQLite3 \[iPhone/iPad\]). All calculations are performed using the MOLCAS 6.2 \[PC\]/MOLCAS 8.0 \[iPhone/iPad\] program. It employs the density functional theory (DFT) method to calculate both the geometry and the harmonic vibrational frequencies. DFT is computationally demanding, but in practice it is still the method of choice for predicting the structures and properties of organic and inorganic molecules. The software uses the Gaussian 09 package \[PC\]/Gaussian 16 package \[iPhone/iPad\] and is under the GNU General Public License.
-
-[Figure: the user interface of CASMI (Carbon Multi-purpose Individualized MOLD CHEMICS)]
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bhop Go The Ultimate Guide to Jumping and Surfing Online.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bhop Go The Ultimate Guide to Jumping and Surfing Online.md
deleted file mode 100644
index b2d97d7dc68a40d6becdbbc0620435cbdcde65fa..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bhop Go The Ultimate Guide to Jumping and Surfing Online.md
+++ /dev/null
@@ -1,183 +0,0 @@
-
-
Bhop Go: A Fun and Challenging Parkour Game for Mobile Devices
-
If you are looking for a game that can test your skills, speed, and reflexes, you might want to try Bhop Go. Bhop Go is a mobile game that lets you experience the thrill and challenge of bhop, a skill that involves jumping faster in first-person shooter and simulation games. In this article, we will tell you everything you need to know about Bhop Go, including what it is, how to play it, and how to download it.
Bhop Go is a game developed by Shockapp, a studio that specializes in creating casual and action games for mobile devices. Bhop Go is one of their most popular games, with over 5 million downloads on the Google Play Store and a 4.8-star rating on the App Store. But what exactly is bhop, and why is it so fun and addictive?
-
The history and origin of bhop
-
Bhop stands for bunny hop, a term that refers to a technique that allows players to move faster in first-person shooter and simulation games. Bhop was first discovered in the late 1990s in Quake, a game that used the same engine as Half-Life and Counter-Strike. By turning left and right (strafing) while jumping, players could gain more speed and momentum than running normally. This gave them an advantage in combat, movement, and exploration.
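Mechanically, the speed gain falls out of how Quake-lineage engines accelerate airborne players. Here is a simplified sketch of that air-acceleration rule (the constants are illustrative, not any engine's real values):

```python
import numpy as np

def air_accelerate(velocity, wish_dir, wish_speed=0.85, accel=10.0, dt=1 / 60):
    """Quake-style air acceleration, simplified. The speed cap applies only
    along wish_dir, so turning (strafing) mid-air keeps adding speed."""
    current = float(np.dot(velocity, wish_dir))  # speed already along wish_dir
    add = wish_speed - current                   # headroom left under the cap
    if add <= 0:
        return velocity
    return velocity + min(accel * wish_speed * dt, add) * np.asarray(wish_dir)
```

Because the player keeps rotating `wish_dir` away from the current velocity while jumping, the projection stays small, `add` stays positive, and total speed climbs past `wish_speed`.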
-
Bhop soon became a popular skill among gamers, especially in the Counter-Strike series. Many players practiced bhop to improve their movement skills and compete with other players. However, bhop was also considered a form of cheating by some developers, who tried to patch it or limit it in their games. For example, Valve introduced a stamina system in Counter-Strike: Source that prevented players from bhopping continuously.
-
Despite these attempts, bhop remained a beloved skill among many gamers, who continued to find ways to perform it or create mods that enabled it. Bhop also spawned a subculture of speedrunners, map makers, and community servers that focused on bhop as a form of art and entertainment. Some examples of these are KZ (climbing), surf (sliding), and bhop (jumping) servers.
-
The features and gameplay of Bhop Go
-
Bhop Go is a game that brings the essence of bhop to mobile devices. It is a parkour game that challenges players to jump on blocks and go as far as possible. It is not a realistic simulation of bhop, but rather a simplified and stylized version that is easy to play but hard to master.
-
Bhop Go has many features that make it fun and engaging, such as:
-
-
Multiplayer mode: You can play online with friends or strangers in different maps and modes.
-
Single player mode: You can play offline without internet connection in various maps.
-
Collecting loot: You can find trampolines, bounce pads, knives, weapons, skins, gloves, and other items on maps.
-
Jumping bounce pads: You can use these pads to boost your speed and height.
-
Moving 3D obstacles: You can avoid or interact with these obstacles that can slow you down or help you.
-
Racing for world records: You can compete with other players for the best time and distance on each map.
-
Customizing your character: You can change your appearance, outfit, and accessories.
-
Creating your own maps: You can design and share your own maps with other players.
-
-
The gameplay of Bhop Go is simple but challenging. You have to tap the screen to jump and tilt your device to strafe. You have to time your jumps and strafes correctly to maintain your speed and direction. You also have to avoid falling off the blocks or hitting the obstacles. The game requires skill, concentration, and practice to master.
-
The benefits and challenges of bhop
-
Bhop is not only a fun and exciting game, but also a skill that can benefit you in many ways. Some of the benefits of bhop are:
-
-
It improves your hand-eye coordination and reaction time.
-
It enhances your spatial awareness and navigation skills.
-
It stimulates your brain and creativity.
-
It boosts your confidence and self-esteem.
-
It relieves your stress and boredom.
-
-
However, bhop also has some challenges that you need to overcome. Some of the challenges of bhop are:
-
-
It can be frustrating and discouraging at first.
-
It can be addictive and time-consuming.
-
It can cause motion sickness or eye strain.
-
It can be hard to find suitable games or servers that support bhop.
-
It can be seen as cheating or unfair by some players or developers.
-
-
Therefore, you need to balance your bhop experience with moderation, patience, and respect. You also need to find the right game or platform that suits your preferences and goals.
-
How to Play Bhop Go?
-
Now that you know what Bhop Go is and why it is fun and beneficial, you might want to try it yourself. But how do you play Bhop Go? Here are some tips and instructions that can help you get started.
-
The controls and interface of Bhop Go
-
The controls and interface of Bhop Go are simple and intuitive.
-
-
The controls are as follows:
-
-
To jump, tap the screen with your right thumb.
-
To strafe left or right, tilt your device left or right with your left hand.
-
To look around, swipe the screen with your left thumb.
-
To use items, tap the icons on the bottom left corner of the screen.
-
-
The interface shows the following information:
-
-
Your speed in units per second (UPS).
-
Your distance in meters (M).
-
Your time in seconds (S).
-
Your rank among other players (R).
-
Your health points (HP).
-
-
The modes and maps of Bhop Go
-
Bhop Go has two modes: multiplayer and single player. In multiplayer mode, you can play online with other players in different maps and modes. You can choose from casual, competitive, deathmatch, race, or custom modes. You can also chat with other players, join clans, or create private rooms.
-
In single player mode, you can play offline without internet connection in various maps. You can choose from easy, medium, hard, or extreme maps. You can also create your own maps using the map editor.
-
Bhop Go has over 100 maps that you can play on. Each map has a different theme, layout, difficulty, and length. Some maps are based on real locations, such as Paris, Tokyo, New York, or Dubai. Some maps are inspired by other games, such as Minecraft, Portal, or Half-Life. Some maps are original creations by the developers or the community.
-
The tips and tricks for bhop
-
Bhop is a skill that requires practice and patience to master. However, there are some tips and tricks that can help you improve your bhop performance. Here are some of them:
-
-
Practice on easy maps first before moving on to harder ones.
-
Adjust your sensitivity and tilt settings to suit your preference.
-
Use headphones or earphones to hear the sound cues for jumping and landing.
-
Aim for smooth and consistent jumps rather than fast and erratic ones.
-
Use the bounce pads and trampolines to gain more speed and height.
-
Use the knives and weapons to slash or shoot the blocks or obstacles.
-
Collect the loot and skins to customize your character and items.
-
Watch videos or streams of other players to learn from their techniques and strategies.
-
Have fun and enjoy the game!
-
-
How to Download Bhop Go?
-
Bhop Go is a free game that you can download and play on your mobile device. However, you need to make sure that your device meets the requirements and compatibility of the game. You also need to follow the steps and sources for downloading the game. Here are some details that can help you with that.
-
The requirements and compatibility of Bhop Go
-
Bhop Go is a game that requires a decent device to run smoothly and properly. The minimum requirements for Bhop Go are:
-
-
Android 4.4 or higher, or iOS 10.0 or higher.
-
At least 1 GB of RAM.
-
At least 100 MB of free storage space.
-
A stable internet connection (for multiplayer mode).
-
-
Bhop Go is compatible with most mobile devices, such as smartphones, tablets, or iPads. However, some devices may experience lag, glitches, or crashes due to hardware or software issues. If you encounter any problems with Bhop Go, you can try the following solutions:
-
-
Update your device's operating system and apps.
-
Clear your device's cache and memory.
-
Restart your device or the game.
-
Contact the developer's support team or report a bug.
-
-
The steps and sources for downloading Bhop Go
-
Bhop Go is a game that you can download from official and trusted sources, such as Google Play Store or App Store. You can also download it from third-party websites or platforms, but you need to be careful of malware or viruses. Here are the steps and sources for downloading Bhop Go:
-
-
Source
Steps
-
Google Play Store
Open Google Play Store on your Android device.
Search for "Bhop Go" in the search bar.
Select the game from the results and tap "Install".
Wait for the game to download and install on your device.
Tap "Open" to launch the game and enjoy!
-
App Store
Open App Store on your iOS device.
Search for "Bhop Go" in the search bar.
Select the game from the results and tap "Get".
Enter your Apple ID password or use Touch ID or Face ID to confirm.
Wait for the game to download and install on your device.
Tap "Open" to launch the game and enjoy!
-
Aptoide
Open Aptoide on your Android device. If you don't have it, you can download it from .
Search for "Bhop Go" in the search bar.
Select the game from the results and tap "Install".
Wait for the game to download and install on your device.
Tap "Open" to launch the game and enjoy!
-
TweakBox
Open TweakBox on your iOS device. If you don't have it, you can download it from .
Search for "Bhop Go" in the search bar.
Select the game from the results and tap "Install".
Wait for the game to download and install on your device.
Tap "Open" to launch the game and enjoy!
-
-
The alternatives and similar games to Bhop Go
-
Bhop Go is a great game that can provide you with hours of fun and challenge. However, if you want to try something different or explore other options, there are some alternatives and similar games to Bhop Go that you can check out. Here are some of them:
-
-
Name
Description
-
Bunny Hop League
A game that combines bhop with soccer. You can play online with other players in different stadiums and score goals by jumping and kicking the ball.
-
Bhop Jump
A game that simulates bhop in a realistic way. You can play on different maps and modes, such as speedrun, freestyle, or multiplayer. You can also customize your character and settings.
-
Bhop Pro
A game that teaches you how to bhop in a step-by-step way. You can learn the basics and advanced techniques of bhop, such as strafing, air strafing, or sync. You can also practice on various maps and modes.
-
Surf VPN
A game that lets you surf on ramps and slides in a 3D environment. You can play online with other players or offline on different maps. You can also collect coins and skins to upgrade your character.
-
Flip Runner
A game that lets you perform parkour stunts and flips in a cityscape. You can run, jump, flip, and slide on buildings, cars, or objects. You can also unlock new characters and locations.
-
-
Conclusion
-
Bhop Go is a game that can provide you with a fun and challenging experience of bhop, a skill that involves jumping faster in first-person shooter and simulation games. Bhop Go has many features and gameplay options that make it engaging and enjoyable, such as multiplayer mode, single player mode, collecting loot, jumping bounce pads, moving 3D obstacles, racing for world records, customizing your character, and creating your own maps.
-
Bhop Go is also a game that can benefit you in many ways, such as improving your hand-eye coordination, spatial awareness, creativity, confidence, and stress relief. However, Bhop Go also has some challenges that you need to overcome, such as frustration, addiction, motion sickness, compatibility issues, and cheating accusations.
-
Bhop Go is a game that you can download and play on your mobile device for free. However, you need to make sure that your device meets the requirements and compatibility of the game. You also need to follow the steps and sources for downloading the game from official and trusted platforms.
-
Bhop Go is a great game that can satisfy your bhop cravings and curiosity. However, if you want to try something different or explore other options, there are some alternatives and similar games to Bhop Go that you can check out.
-
If you are looking for a game that can test your skills, speed, and reflexes, you might want to try Bhop Go. Bhop Go is a game that lets you experience the thrill and challenge of bhop on your mobile device. Download Bhop Go today and see how far you can go!
-
FAQs
-
Here are some frequently asked questions about Bhop Go:
-
-
What is the difference between bhop and surf?
-
How do I get more skins and items in Bhop Go?
-
How do I share my maps with other players in Bhop Go?
-
How do I report a bug or a cheater in Bhop Go?
-
How do I join a clan or a private room in Bhop Go?
-
-
The answers are:
-
-
Bhop and surf are both skills that involve moving faster in first-person shooter and simulation games. However, bhop is about jumping on blocks while surf is about sliding on ramps.
-
You can get more skins and items in Bhop Go by finding them on maps, buying them with coins or real money, or watching ads.
-
You can share your maps with other players in Bhop Go by uploading them to the cloud server or sending them via email or social media.
-
You can report a bug or a cheater in Bhop Go by contacting the developer's support team via email or social media.
-
You can join a clan or a private room in Bhop Go by tapping the clan or room icon on the main menu or the multiplayer mode.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale for Android The Most Fun and Addictive Strategy Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale for Android The Most Fun and Addictive Strategy Game Ever.md
deleted file mode 100644
index 047733a881ed0184fe34ad08ddb88b4b843b1358..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale for Android The Most Fun and Addictive Strategy Game Ever.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Clash Royale Unblocked Download: How to Play the Epic Real-Time Card Battle Game for Free
-
Clash Royale is one of the most successful free-to-play mobile games available in the market. It's more than just a card game; it's also a multiplayer tower-defense game that is super fun to play either solo or with a friend. If you are a fan of Clash of Clans, you will love Clash Royale, as it features the same characters and spells, as well as new ones. But what if you want to play Clash Royale without any restrictions or limitations? What if you want to enjoy the game without spending any money or waiting for chests to open? Well, there is a way to do that. It's called Clash Royale unblocked download.
Clash Royale unblocked download is a way to play the game without having to go through the Google Play Store or any other official app store. It involves downloading an APK file from a third-party website and installing it on your Android device. This way, you can bypass any regional or device restrictions, as well as get access to the latest updates and features before anyone else. You can also play the game without any ads or in-app purchases, making it completely free and fair.
-
In this article, we will show you how to play Clash Royale unblocked download, what you need to do it, what you can expect from it, and some tips and tricks to improve your gameplay. Let's get started!
-
What You Need to Play Clash Royale Unblocked
-
To play Clash Royale unblocked download, you will need the following things:
-
-
An Android device that can run the game. The minimum requirements are Android 5.0 or higher, 150 MB of free storage space, and an internet connection.
-
An APK file of Clash Royale. This is a file that contains the game's data and can be installed on your device. You can get it from various websites that offer APK downloads, but one of the most reliable and safe ones is Uptodown. Uptodown is a website that offers APK downloads for thousands of Android games and apps, including Clash Royale. It also has a user-friendly interface, a rating system, and a blog that covers the latest news and updates about the game.
-
-
How to Download and Install Clash Royale APK from Uptodown
-
Here are the steps you need to follow to download and install Clash Royale APK from Uptodown:
-
-
Go to the Uptodown website and search for Clash Royale in the search bar. You can also use this link to go directly to the game's page.
-
On the game's page, you will see a green button that says "Download". Click on it and wait for the download to start. You may need to allow your browser to download files from unknown sources.
-
Once the download is complete, locate the APK file on your device's file manager and tap on it. You may need to enable the installation of apps from unknown sources on your device's settings.
-
Follow the instructions on the screen and wait for the installation to finish. You may see a warning message that says "This app was built for an older version of Android and may not work properly". Ignore it and tap on "Install anyway".
-
After the installation is done, you can launch the game from your app drawer or home screen. You may need to grant some permissions to the game, such as access to your storage, contacts, and location.
-
-
How to Update Clash Royale APK
-
One of the advantages of using Uptodown to download Clash Royale APK is that you can get the latest updates as soon as they are released by the developers. Here are some tips on how to update Clash Royale APK:
-
Check for updates regularly on the Uptodown website or app. You can also enable notifications to get alerted when a new version is available.
-
To update Clash Royale APK, you just need to repeat the same steps as downloading and installing it. You don't need to uninstall the previous version or lose your progress.
-
If you encounter any problems or errors while updating, you can try clearing the cache and data of the game or reinstalling it from scratch.
-
-
What You Can Expect from Clash Royale Unblocked
-
Clash Royale unblocked download is not much different from the official version of the game, except that it has no ads or in-app purchases. You can still enjoy all the features and content that make Clash Royale one of the best mobile games ever. Here are some of them:
These range from collecting and upgrading cards and battling in real-time duels to joining clans for clan wars and earning seasonal rewards such as skins, emotes, and magic items. You can also take part in fun and challenging events that test your skills and creativity. For example, you can play with a random deck, a special card, or a different set of rules. These events are a great way to earn more rewards and have fun.
-
Tips and Tricks to Improve Your Gameplay in Clash Royale Unblocked
-
Clash Royale unblocked download is not an easy game to master. It requires a lot of practice, patience, and learning. Here are some tips and tricks that can help you improve your gameplay and win more matches:
-
Don't Waste Gold or Gems
-
Gold and gems are the two main currencies in Clash Royale unblocked download. You can use them to buy cards, chests, upgrades, and more. However, they are not easy to come by, so you should use them wisely and avoid unnecessary purchases. For example, you should not buy cards from the shop unless you really need them or they are on sale. You should also not spend gems on speeding up chests or buying low-quality chests. Instead, you should save them for special offers or high-value chests.
-
Create a Versatile and Powerful Deck
-
Your deck is the key to your success in Clash Royale unblocked download. You should create a deck that suits your playstyle and strategy, as well as the current meta and trends. You should also make sure that your deck is versatile and powerful enough to deal with different situations and opponents. A good deck should have the following characteristics:
-
-
A balance of elixir cost. Your average elixir cost should be between 3.0 and 4.0, depending on your deck type. You don't want to have a deck that is too expensive or too cheap, as it will affect your elixir management and tempo.
-
A balance of card types. Your deck should have a mix of different card types, such as troops, spells, buildings, and win conditions. You don't want to have a deck that is too weak or too strong against certain cards or strategies.
-
A balance of roles. Your deck should have cards that can perform different roles, such as offense, defense, support, control, and cycle. You don't want to have a deck that is too one-dimensional or too dependent on certain cards.
-
A synergy of cards. Your deck should have cards that work well together and complement each other's strengths and weaknesses. You don't want to have a deck that is too random or too predictable.
-
-
Don't Waste Elixir
-
Elixir is the resource that you use to play cards in Clash Royale unblocked download. It regenerates at a constant rate of 1 elixir per 2.8 seconds (or 1 elixir per 1.4 seconds in double elixir time). Elixir management is one of the most important skills in the game, as it determines how much you can do in each match. Here are some tips on how to manage your elixir efficiently:
-
-
Don't overcommit or underdefend. You should always try to spend less elixir than your opponent while defending or attacking, unless you have a clear advantage or opportunity. You should also avoid playing unnecessary cards or wasting elixir on low-value targets.
-
Don't leak elixir. You should always try to keep your elixir bar full or near full, unless you are waiting for a specific card or situation. You should also avoid playing cards too early or too late, as it will affect your elixir flow and timing.
-
Don't ignore elixir trades. You should always pay attention to how much elixir you and your opponent spend on each interaction and try to gain an elixir advantage whenever possible. You should also use spells wisely and only when they can give you a positive or equal elixir trade.
-
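As a quick check of the regeneration rate quoted above, the refill times work out as follows:

```python
# Worked arithmetic from the stated rates; 10 elixir is the game's full bar.
normal_rate = 1 / 2.8          # elixir per second
double_rate = 1 / 1.4          # elixir per second in double-elixir time
full_bar = 10
print(round(full_bar / normal_rate))  # 28 seconds to refill from empty
print(round(full_bar / double_rate))  # 14 seconds in double-elixir time
```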
-
Aim for Princess Towers First
-
The main objective of Clash Royale unblocked download is to destroy your opponent's towers while protecting your own. Each side has three towers: the king tower at the center and two princess towers at the corners. The princess towers have less health and damage than the king tower, but they also shoot faster and farther. Here are some tips on how to target the enemy's towers strategically:
-
-
Aim for the princess towers first. You should always try to destroy at least one princess tower before going for the king tower, as it will give you more space, options, and pressure on the enemy's side. You should also avoid activating the king tower prematurely by hitting it with spells or troops that have splash damage or area damage, such as fireball, rocket, or balloon. Activating the king tower will make it join the defense and make it harder for you to win.
-
Aim for the weaker princess tower. You should always try to focus your attacks on the princess tower that has less health or is more vulnerable to your deck, as it will make it easier for you to destroy it and gain an advantage. You should also avoid splitting your attacks or switching targets too often, as it will make it harder for you to finish off a tower and waste your elixir.
-
Aim for the opposite princess tower. You should always try to attack the princess tower that is opposite to the one that your opponent is attacking, as it will create a counter-push and force your opponent to defend both sides. You should also avoid attacking the same princess tower as your opponent, as it will create a stalemate and give your opponent more time to recover and counterattack.
-
-
Conclusion
-
Clash Royale unblocked download is a great way to play the epic real-time card battle game for free and without any restrictions or limitations. You can download and install the APK file from Uptodown, a reliable and safe website that offers APK downloads for thousands of Android games and apps. You can enjoy all the features and content that make Clash Royale one of the best mobile games ever, such as collecting and upgrading cards, battling in real-time duels, joining clans and participating in clan wars, and enjoying seasonal events and challenges. You can also improve your gameplay by following some tips and tricks, such as not wasting gold or gems, creating a versatile and powerful deck, not wasting elixir, and aiming for princess towers first. If you are looking for a fun and addictive game that will keep you entertained for hours, you should definitely try Clash Royale unblocked download. You won't regret it!
-
FAQs
-
Here are some common questions and answers about Clash Royale unblocked download:
-
-
Q: Is Clash Royale unblocked download safe?
-
A: Yes, Clash Royale unblocked download is safe as long as you download the APK file from a trusted website like Uptodown. Uptodown scans all the APK files for viruses and malware before uploading them to their website. However, you should always be careful when downloading files from unknown sources and check the permissions and reviews before installing them.
-
Q: Is Clash Royale unblocked download legal?
-
A: Yes, Clash Royale unblocked download is legal as long as you don't use it for any illegal or unethical purposes. Clash Royale is a free-to-play game that does not require any license or registration to play. However, you should respect the intellectual property rights of the developers and not use any hacks or cheats that may harm the game or other players.
-
Q: Is Clash Royale unblocked download compatible with my device?
-
A: Clash Royale unblocked download is compatible with most Android devices that can run the game. The minimum requirements are Android 5.0 or higher, 150 MB of free storage space, and an internet connection. However, some devices may have issues with performance or compatibility due to different hardware or software specifications. If you encounter any problems or errors while playing Clash Royale unblocked download, you can try clearing the cache and data of the game or reinstalling it from scratch.
-
Q: Can I play Clash Royale unblocked download with my friends?
-
A: Yes, you can play Clash Royale unblocked download with your friends either online or offline. You can invite your friends to join your clan or challenge them to friendly battles. You can also play with random players from around the world in 1v1 or 2v2 matches. However, you may not be able to play with players who are using the official version of the game or a different version of the APK file.
-
Q: Can I transfer my progress from Clash Royale unblocked download to the official version of the game?
-
A: Yes, you can transfer your progress from Clash Royale unblocked download to the official version of the game by using your Google Play Games account or your Supercell ID. You can link your account to either of these services in the game's settings menu. However, you may lose some of your progress or rewards if you switch between different versions of the game frequently.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/APK 5play Download How to Access the Latest and Greatest Apps and Games for Free.md b/spaces/1phancelerku/anime-remove-background/APK 5play Download How to Access the Latest and Greatest Apps and Games for Free.md
deleted file mode 100644
index 5f2fb48dc246a90b1eaceab02ae20f534cf2d89b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/APK 5play Download How to Access the Latest and Greatest Apps and Games for Free.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Download APK 5play: A Guide to Free Apps and Games
-
Do you love playing mobile games and using mobile apps, but hate paying for them? If yes, then you should try 5play, a platform where you can find thousands of free APK and Mod APK games and apps for Android devices. In this article, we will show you how to download and install 5play on your device, how to use it to find and download free apps and games, and what are the benefits and drawbacks of using it.
-
How to download and install 5play on your device
-
Downloading and installing 5play on your device is very easy. Just follow these simple steps:
Visit the official website of 5play, where you can find all the latest and popular apps and games for Android.
-
Choose the app or game you want to download from the categories or use the search bar to find what you are looking for.
-
Click on the download button and wait for the APK file to be downloaded on your device.
-
Enable unknown sources in your device settings. This will allow you to install apps and games from sources other than the Google Play Store or App Store.
-
Locate the APK file on your device storage and tap on it to install it.
-
-
Congratulations! You have successfully downloaded and installed 5play on your device. There are many benefits of using 5play for free apps and games. Here are some of them:
-
-
Access to thousands of apps and games that are not available on Google Play Store or App Store. You can find apps and games that are banned, removed, or restricted in your region or country.
-
Access to modded versions of apps and games that have premium features unlocked or unlimited resources. You can enjoy the full potential of your favorite apps and games without spending any money.
-
Access to updated versions of apps and games that have bug fixes and new features. You can always get the latest and best version of your apps and games from 5play.
-
Access to safe and reliable downloads that are tested and verified by the 5play team. You can download apps and games without worrying about malware or viruses that can harm your device or data.
-
Access to a user-friendly interface that is easy to navigate and use. You can find what you are looking for in a matter of seconds and download it with a single tap.
-
-
These are some of the benefits of using 5play for free apps and games. However, there are also some drawbacks that you should be aware of.
-
The drawbacks of using 5play for free apps and games
-
Using 5play for free apps and games is not without risks. Here are some of the drawbacks that you should be aware of:
-
-
The risk of downloading malware or viruses that can harm your device or data. Although the 5play team tries to ensure the safety and reliability of the downloads, there is no guarantee that they are 100% secure. You should always scan the files before installing them and use a reputable antivirus software on your device.
-
The risk of violating the terms and conditions of the original developers or publishers of the apps and games. By downloading and using apps and games from 5play, you may be infringing on their intellectual property rights or breaking their rules. This may result in legal issues or penalties if you are caught.
-
The risk of losing your progress or data if you uninstall or update the app or game from another source. If you download an app or game from 5play, you may not be able to sync your progress or data with the original version from Google Play Store or App Store. This means that if you uninstall or update the app or game from another source, you may lose your progress or data.
-
The risk of facing legal issues or penalties if you use pirated or cracked apps and games. Some of the apps and games on 5play may be pirated or cracked, which means that they are illegally obtained or modified. This may violate the laws of your country or region, and you may face legal issues or penalties if you are caught.
-
-
These are some of the drawbacks of using 5play for free apps and games. You should weigh the pros and cons before deciding to use it.
-
Conclusion and FAQs
-
In conclusion, 5play is a platform where you can find thousands of free APK and Mod APK games and apps for Android devices. It has many benefits, such as access to apps and games that are not available on Google Play Store or App Store, access to modded versions of apps and games that have premium features unlocked or unlimited resources, access to updated versions of apps and games that have bug fixes and new features, access to safe and reliable downloads that are tested and verified by the 5play team, and access to a user-friendly interface that is easy to navigate and use. However, it also has some drawbacks, such as the risk of downloading malware or viruses that can harm your device or data, the risk of violating the terms and conditions of the original developers or publishers of the apps and games, the risk of losing your progress or data if you uninstall or update the app or game from another source, and the risk of facing legal issues or penalties if you use pirated or cracked apps and games.
-
If you want to download free APK and Mod APK games and apps for Android devices, you can try 5play at your own risk. However, you should always be careful about what you download and install on your device, and respect the rights of the original developers or publishers of the apps and games.
-
Here are some FAQs that may help you understand more about 5play:
-
What is an APK file?
-
An APK file is an Android Package file that contains all the files needed to install an app or game on an Android device. It is similar to an EXE file on Windows computers.
-
What is a Mod APK file?
-
A Mod APK file is a modified version of an original APK file that has premium features unlocked or unlimited resources. It is created by third-party developers or hackers who modify the original code of the app or game.
-
How can I update the apps and games downloaded from 5play?
-
You can update the apps and games downloaded from 5play by using the 5play app itself. You can check for updates from the downloads, updates, or favorites section of the app. You can also enable the auto-update feature to get the latest versions of your apps and games automatically.
-
How can I contact the support team of 5play?
-
You can contact the support team of 5play by using the feedback or contact us option in the 5play app. You can also visit their Facebook page or Twitter account to get in touch with them. They are always ready to help you with any issues or queries you may have.
-
Is it legal to use 5play?
-
The legality of using 5play depends on your country or region's laws and regulations regarding downloading and using apps and games from unofficial sources. Some countries or regions may allow it, while others may prohibit it. You should always check your local laws and regulations before using 5play, and use it at your own risk.
-
I hope you enjoyed reading this article and learned something new. If you have any questions or comments, please feel free to leave them below. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/batchnorm.py b/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
deleted file mode 100644
index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-
-from .comm import SyncMaster
-
-__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
-
-
-def _sum_ft(tensor):
- """sum over the first and last dimention"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
- """add new dementions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
-
- # Always using same "device order" makes the ReduceAdd operation faster.
- # Thanks to:: Tete Xiao (http://tetexiao.com/)
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
- return mean, bias_var.clamp(self.eps) ** -0.5
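Only three per-channel quantities ever cross devices here (sum, square-sum, element count); mean and variance are reconstructed from them on the master. A minimal standalone check of that identity (illustrative, not part of this module):

```python
import torch

x = torch.randn(8, 3, 16)                 # (N, C, L), as after the view in forward()
n = x.size(0) * x.size(2)                 # elements per channel
s = x.sum(dim=0).sum(dim=-1)              # per-channel sum, shape (C,)
ss = (x ** 2).sum(dim=0).sum(dim=-1)      # per-channel square-sum, shape (C,)

mean = s / n
unbias_var = (ss - s * mean) / (n - 1)    # same algebra as _compute_mean_std

flat = x.transpose(0, 1).reshape(3, -1)   # (C, N*L), for a direct comparison
assert torch.allclose(mean, flat.mean(dim=1), atol=1e-5)
assert torch.allclose(unbias_var, flat.var(dim=1, unbiased=True), atol=1e-4)
```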
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
diff --git a/spaces/4com/4com-license/app.py b/spaces/4com/4com-license/app.py
deleted file mode 100644
index c5f5ffb786eb52fb67108be3937dd26b944f4f0f..0000000000000000000000000000000000000000
--- a/spaces/4com/4com-license/app.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import gradio as gr
-
-with gr.Blocks() as demo:
-
- gr.Markdown("""
-
-
- EasyOCR demo version supports 80+ languages. To use it, simply upload
- your image and select a language from the drop-down menu, or click on
- one of the examples to load it. Most of the properties provided by the
- library are available in the advanced settings. Read more
-
-
-
diff --git a/spaces/Ayakasuki/anime-ai-detect/README.md b/spaces/Ayakasuki/anime-ai-detect/README.md
deleted file mode 100644
index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000
--- a/spaces/Ayakasuki/anime-ai-detect/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anime Ai Detect
-emoji: 🤖
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: saltacc/anime-ai-detect
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Banbri/zcvzcv/src/app/interface/panel/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/panel/index.tsx
deleted file mode 100644
index f4e67946773eb8fddc73c2dd7d696f932eaf3040..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/app/interface/panel/index.tsx
+++ /dev/null
@@ -1,347 +0,0 @@
-"use client"
-
-import { useEffect, useRef, useState, useTransition } from "react"
-import { RxReload } from "react-icons/rx"
-
-import { RenderedScene } from "@/types"
-
-import { getRender, newRender } from "@/app/engine/render"
-import { useStore } from "@/app/store"
-
-import { cn } from "@/lib/utils"
-import { getInitialRenderedScene } from "@/lib/getInitialRenderedScene"
-import { Progress } from "@/app/interface/progress"
-
-export function Panel({
- panel,
- className = "",
- width = 1,
- height = 1,
-}: {
- panel: number
- className?: string
- width?: number
- height?: number
- }) {
- const panelId = `${panel}`
-
- const [mouseOver, setMouseOver] = useState(false)
- const ref = useRef(null)
- const font = useStore(state => state.font)
- const preset = useStore(state => state.preset)
-
- const setGeneratingImages = useStore(state => state.setGeneratingImages)
-
- const panels = useStore(state => state.panels)
- const prompt = panels[panel] || ""
-
- const captions = useStore(state => state.captions)
- const caption = captions[panel] || ""
-
- const zoomLevel = useStore(state => state.zoomLevel)
- const showCaptions = useStore(state => state.showCaptions)
-
- const addToUpscaleQueue = useStore(state => state.addToUpscaleQueue)
-
- const [_isPending, startTransition] = useTransition()
- const renderedScenes = useStore(state => state.renderedScenes)
- const setRendered = useStore(state => state.setRendered)
-
- const rendered = renderedScenes[panel] || getInitialRenderedScene()
-
- const [revision, setRevision] = useState(0)
-
- // keep a ref in sync
- const renderedRef = useRef<RenderedScene>()
- const renderedKey = JSON.stringify(rendered)
- useEffect(() => { renderedRef.current = rendered }, [renderedKey])
-
- const timeoutRef = useRef(null)
-
- const enableRateLimiter = `${process.env.NEXT_PUBLIC_ENABLE_RATE_LIMITER}` === "true"
-
- const delay = enableRateLimiter ? (1000 + (500 * panel)) : 1000
-
-
- const startImageGeneration = ({ prompt, width, height, revision }: {
- prompt: string
- width: number
- height: number
- revision: number
- }) => {
- if (!prompt?.length) { return }
-
- // important: update the status, and clear the scene
- setGeneratingImages(panelId, true)
-
- // just to empty it
- setRendered(panelId, getInitialRenderedScene())
-
- setTimeout(() => {
- startTransition(async () => {
-
- const withCache = revision === 0
-
- // atrocious and very, very, very, very, very, very, very ugly hack for the Inference API
- // as apparently "use_cache: false" doesn't work, or doesn't do what we want it to do
- let cacheInvalidationHack = ""
- const nbMaxRevisions = 6
- for (let i = 0; i < revision && revision < nbMaxRevisions; i++) {
- const j = Math.random()
- cacheInvalidationHack += j < 0.3 ? "_" : j < 0.6 ? "," : "-"
- }
-
- let newRendered: RenderedScene
- try {
-
- newRendered = await newRender({
- prompt: cacheInvalidationHack + " " + prompt,
- width,
- height,
-
- // TODO: here we never reset the revision, so only the first user
- // comic will be cached (we should fix that later)
- withCache: revision === 0
- })
- } catch (err) {
- // "Failed to load the panel! Don't worry, we are retrying..")
- newRendered = await newRender({
- prompt: cacheInvalidationHack + " " + prompt,
- width,
- height,
- withCache,
- })
- }
-
- if (newRendered) {
- setRendered(panelId, newRendered)
-
- if (newRendered.status === "completed") {
- setGeneratingImages(panelId, false)
- addToUpscaleQueue(panelId, newRendered)
- }
-
- // but we are still loading!
- } else {
- setRendered(panelId, {
- renderId: "",
- status: "pending",
- assetUrl: "",
- alt: "",
- maskUrl: "",
- error: "",
- segments: []
- })
- setGeneratingImages(panelId, false)
- return
- }
- })
- }, enableRateLimiter ? 1000 * panel : 0)
- }
-
-
- const checkStatus = () => {
- startTransition(async () => {
- clearTimeout(timeoutRef.current)
-
- if (!renderedRef.current?.renderId || renderedRef.current?.status !== "pending") {
- timeoutRef.current = setTimeout(checkStatus, delay)
- return
- }
-
- try {
- setGeneratingImages(panelId, true)
- const newRendered = await getRender(renderedRef.current.renderId)
-
- if (JSON.stringify(renderedRef.current) !== JSON.stringify(newRendered)) {
- setRendered(panelId, renderedRef.current = newRendered)
- setGeneratingImages(panelId, true)
- }
-
- if (newRendered.status === "pending") {
- timeoutRef.current = setTimeout(checkStatus, delay)
- } else if (newRendered.status === "error" ||
- (newRendered.status === "completed" && !newRendered.assetUrl?.length)) {
- try {
- const newAttempt = await newRender({
- prompt,
- width,
- height,
- withCache: false,
- })
- setRendered(panelId, newAttempt)
- } catch (err) {
- console.error("yeah sorry, something is wrong.. aborting", err)
- setGeneratingImages(panelId, false)
- }
- } else {
- console.log("panel finished!")
- setGeneratingImages(panelId, false)
- addToUpscaleQueue(panelId, newRendered)
- }
- } catch (err) {
- console.error(err)
- timeoutRef.current = setTimeout(checkStatus, delay)
- }
- })
- }
-
- useEffect(() => {
- if (!prompt.length) { return }
-
- startImageGeneration({ prompt, width, height, revision })
-
- clearTimeout(timeoutRef.current)
-
- // normally it should reply in < 1sec, but we could also use an interval
- timeoutRef.current = setTimeout(checkStatus, delay)
-
- return () => {
- clearTimeout(timeoutRef.current)
- }
- }, [prompt, width, height, revision])
-
- /*
- doing the captionning from the browser is expensive
- a simpler solution is to caption directly during SDXL generation
-
- useEffect(() => {
- if (!rendered.assetUrl) { return }
- // the asset url can evolve with time (link to a better resolution image)
- // however it would be costly to ask for the caption, the low resolution is enough for the semantic resolution
- // so we just do nothing if we already have the caption
- if (caption) { return }
- startTransition(async () => {
- try {
- const newCaption = await see({
- prompt: "please caption the following image",
- imageBase64: rendered.assetUrl
- })
- if (newCaption) {
- setCaption(newCaption)
- }
- } catch (err) {
- console.error(`failed to generate the caption:`, err)
- }
- })
- }, [rendered.assetUrl, caption])
- */
-
- const frameClassName = cn(
- //`flex`,
- `w-full h-full`,
- `border-stone-800`,
- `transition-all duration-200 ease-in-out`,
- zoomLevel > 140 ? `border-[2px] md:border-[4px] rounded-sm md:rounded-md` :
- zoomLevel > 120 ? `border-[1.5px] md:border-[3px] rounded-xs md:rounded-sm` :
- zoomLevel > 90 ? `border-[1px] md:border-[2px] rounded-xs md:rounded-sm` :
- zoomLevel > 40 ? `border-[0.5px] md:border-[1px] rounded-none md:rounded-xs` :
- `border-transparent md:border-[0.5px] rounded-none md:rounded-none`,
- `shadow-sm`,
- `overflow-hidden`,
- `print:border-[1.5px] print:shadow-none`,
- )
-
- const handleReload = () => {
- console.log(`Asked to reload panel ${panelId}`)
- setRevision(revision + 1)
- }
-
- if (prompt && !rendered.assetUrl) {
- return (
- <div className={cn(frameClassName, className)}>
- <Progress isLoading />
- </div>
- )
- }
-
- return (
- <div
- ref={ref}
- className={cn(frameClassName, className)}
- onMouseEnter={() => setMouseOver(true)}
- onMouseLeave={() => setMouseOver(false)}
- >
- {rendered.assetUrl &&
- <img src={rendered.assetUrl} alt={rendered.alt} className="w-full h-full object-cover" />}
- {showCaptions && caption &&
- <div className="absolute bottom-0 left-0 w-full p-1 text-xs bg-black/60 text-white">{caption}</div>}
- {
- // there is an issue, this env check doesn't work..
- // process.env.NEXT_PUBLIC_CAN_REDRAW === "true" ?
- mouseOver &&
- <button
- onClick={handleReload}
- className="absolute top-2 right-2 flex flex-row items-center space-x-1"
- >
- <RxReload />
- <span>Redraw</span>
- </button>
- //: null
- }
- </div>
- )
-}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apkpro.me Carx Calle.md b/spaces/Benson/text-generation/Examples/Descargar Apkpro.me Carx Calle.md
deleted file mode 100644
index db6c7c94187665b64ac78f545ac20f1d95782729..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apkpro.me Carx Calle.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
CarX Street: a free open-world racing game for mobile devices
-
If you are a fan of racing games, you may have heard of CarX Street, a new open-world racing game for mobile devices. CarX Street is developed by CarX Technologies, the same company behind the popular CarX Drift Racing series. In this article, we will tell you everything you need to know about CarX Street, including what it is, how to download it from apkpro.me, how to play it, and how it compares with other racing games.
CarX Street is a simulation racing video game that offers realistic car physics and high-speed drifting. The game also features different types of maps from around the world, and players can choose between several different game modes. Players can race against other players, or take part in races and events.
-
A realistic and dynamic racing game with car customization and online multiplayer
-
One of the standout features of CarX Street is its realistic physics engine. This engine simulates the behavior of cars on the road, giving players a true racing experience. Players can feel the thrill of high-speed racing as they maneuver their cars through tight corners and weave in and out of traffic. The game also lets players customize their cars with various parts and tuning options, unlocking the full potential of their vehicles. Players can also challenge other players in real network races, or join clubs and compete with other racers.
-
A large and diverse open-world map with a day-and-night cycle and different environments
-
-
A career mode with clubs, events, and bosses to challenge
-
For those who want a more structured gaming experience, CarX Street also offers a career mode. In this mode, players can join different clubs, each with its own style, theme, and objectives. Players can also take part in various events, such as sprints, drifts, time trials, and more, to earn money and reputation. Players can also challenge the bosses of each club, who are the best racers in their respective areas. By defeating the bosses, players can unlock new cars, parts, locations, and more.
-
-
How to download CarX Street from apkpro.me?
-
If you are interested in trying CarX Street, you may be wondering how to download it from apkpro.me. Apkpro.me is a website that provides free and safe downloads of various Android apps and games, including CarX Street. Here are some of the benefits, steps, and precautions of downloading CarX Street from apkpro.me.
-
The benefits of downloading from apkpro.me
-
There are several reasons why you might want to download CarX Street from apkpro.me instead of the official Google Play store. Some of the benefits are:
-
-
You can download the latest version of CarX Street without waiting for the official update.
-
You can access the modded version of CarX Street, which gives you unlimited money, coins, and gems to buy and upgrade your cars.
-
You can bypass regional restrictions and play CarX Street in any country.
-
You can enjoy the game without ads or in-app purchases.
-
-
The steps to download and install CarX Street from apkpro.me
-
The process of downloading and installing CarX Street from apkpro.me is simple and straightforward. Here are the steps you need to follow:
-
-
Go to apkpro.me in your browser and search for CarX Street.
-
-
Click the download button and wait for the file to download to your device.
-
Once the file has downloaded, go to your file manager and locate the file. Tap on it to start the installation process.
-
If you see a warning message that says "Install blocked", go to your settings and enable the option to install apps from unknown sources.
-
Follow the on-screen instructions and complete the installation process.
-
Launch the game and enjoy!
-
-
The precautions to take before downloading from apkpro.me
-
Although apkpro.me is a reliable and safe website, there are some precautions you should take before downloading any app or game from it. Some of the precautions are:
-
-
Make sure your device has enough storage space and battery life to download and install the game.
-
Make sure your device meets the minimum requirements to run the game smoothly.
-
Make sure you have a stable internet connection to avoid interruptions or errors during the download or installation process.
-
Make sure you have a backup of your data and files in case something goes wrong or you want to uninstall the game later.
-
Make sure you scan the file with an antivirus or malware scanner before opening it to make sure it is free of viruses or malicious code.
-
-
How do you play CarX Street?
-
Now that you have downloaded and installed CarX Street from apkpro.me, you may be wondering how to play it. CarX Street is a fun and addictive racing game that will keep you hooked for hours. Here are some of the basic controls and gameplay mechanics of CarX Street, as well as some tips and tricks to improve your racing skills and performance.
-
The basic controls and gameplay mechanics of CarX Street
-
-
The gameplay mechanics of CarX Street are based on realistic physics and car behavior. The game simulates the effects of speed, gravity, traction, inertia, and friction on your car. You have to take these factors into account when driving, especially when drifting. Drifting is a key feature of CarX Street, as it lets you perform spectacular maneuvers and earn extra points. You can drift by pressing the brake button while turning, or by using the handbrake button. You can also adjust the angle and intensity of your drift by steering and accelerating accordingly.
-
Tips and tricks to improve your racing skills and performance in CarX Street
-
If you want to become a better racer in CarX Street, you need to practice and master the art of drifting. Drifting is not only fun and cool, but also useful and strategic. Here are some tips and tricks to help you improve your skills and performance in CarX Street:
-
-
Choose the right car for your style and preference. Different cars have different characteristics, such as speed, acceleration, handling, weight, and so on. Some cars are better suited to drifting than others, so experiment with different cars and find the one that suits you best.
-
Upgrade and tune your car regularly. You can improve your car's performance by buying and installing new parts, such as the engine, turbo, suspension, tires, and so on. You can also adjust your car's settings, such as camber, toe, differential, etc., to optimize its behavior on the road.
-
Learn the layout and features of each map. Each map has its own challenges and opportunities for drifting. You need to familiarize yourself with the layout and features of each map, such as curves, corners, ramps, obstacles, shortcuts, etc. You also need to adapt your driving style and strategy to the map's conditions, such as the weather, traffic, time of day, etc.
-
-
Watch replays and learn from other players. You can watch replays of your own races or other players' races by opening the replay mode in the game. Watching replays can help you analyze your mistakes and improve your skills. You can also learn from other players' techniques and strategies by watching their replays.
-
-
The features and modes to explore in CarX Street
-
CarX Street is not just a racing game; it is also a social game that lets you interact with other players and join a community of racers. Here are some of the features and modes you can explore in CarX Street:
-
-
Online multiplayer mode: You can race against players from all over the world in real network races. You can choose between different modes, such as sprint races, drift races, time attack races, etc. You can also chat with other players in the lobby or during the race.
-
Club mode: You can join or create a club with other players who share your interests and goals. You can cooperate with your club members to take part in club events, such as club wars or club tournaments. You can also compete with other clubs for fame and glory.
-
Career mode: You can progress through a story involving different clubs, events, and bosses. You can earn money and reputation by completing missions and challenges. You can also unlock new cars, parts, locations, and more by defeating the bosses.
-
Garage mode: You can customize your cars with various parts and tuning options. You can change the look of your cars by applying different paints, decals, wheels, spoilers, etc. You can also improve your cars' performance by upgrading and tuning the engine, turbo, suspension, tires, etc.
-
-
How does CarX Street compare with other racing games?
-
CarX Street is not the only racing game available for mobile devices. There are many other racing games you may have played or heard of, such as Asphalt 9, Need for Speed, Real Racing 3, etc. How does CarX Street compare with these games? Here are some of the similarities and differences between CarX Street and other popular racing games, as well as the pros and cons of CarX Street as a racing game.
-
The similarities and differences between CarX Street and other popular racing games
-
CarX Street shares some common features with other racing games, such as:
-
-
It has high-quality graphics and sound effects that create an immersive racing experience.
-
It has a variety of cars and locations to choose from, each with its own characteristics and challenges.
-
It has a multiplayer mode that lets you compete against other players online.
-
It has a career mode that follows a story and offers different missions and rewards.
-
-
However, CarX Street also has some unique features that set it apart from other racing games, such as:
-
-
It has a realistic physics engine that simulates the behavior of cars on the road.
-
It focuses on drifting as a core gameplay mechanic and a source of fun and excitement.
-
It has a large and diverse open-world map that you can explore freely.
-
It has a dynamic day-and-night cycle and different weather conditions that affect the game environment.
-
-
The pros and cons of CarX Street as a racing game
-
Like any game, CarX Street has its pros and cons as a racing game. Here are some of the pros and cons you should weigh before playing CarX Street:
-
| Pros | Cons |
| --- | --- |
| It offers a large and diverse open-world map with different environments and features. | The map can be overwhelming and confusing to navigate and find your way around. |
| It offers a variety of game modes and features to suit different preferences and tastes. | Playing the same modes and events over and over can get repetitive and boring. |
| It offers a free-to-play model that lets you download and play the game without spending money. | It has ads and in-app purchases that can be annoying and tempting to spend money on. |
-
User reviews and ratings of CarX Street
-
If you want to know what other players think of CarX Street, you can check the game's user reviews and ratings on various platforms. Here are a few examples of user reviews and ratings of CarX Street:
-
"This game is amazing! The graphics are stunning, the physics are realistic, the cars are customizable, the map is huge, the gameplay is addictive. I love drifting in this game, it feels so satisfying. The multiplayer mode is also fun, I like racing with other players online. This is one of the best racing games I have played on my phone."
-
"This game is good, but it has some flaws. The controls are hard to get used to, especially drifting. The map is too big and confusing, I often get lost or stuck. The game also sometimes crashes or lags when there are too many players or cars on screen. The game also has too many ads and in-app purchases that ruin the experience."
-
-
Conclusion
-
In conclusion, CarX Street is an open-world racing game for mobile devices that offers a realistic and dynamic racing experience with car physics and drifting. The game also features a large and diverse open-world map with different environments and features, a variety of game modes and features to suit different preferences and tastes, and a multiplayer mode that lets you compete against other players online. The game can be downloaded from apkpro.me, a website that provides free and safe downloads of various Android apps and games, including CarX Street. However, the game also has some flaws, such as hard controls, a confusing map, repetitive gameplay, ads and in-app purchases, etc. So the game is not perfect, but it is still worth trying if you are a fan of racing games.
-
If you are interested in playing CarX Street, you can download it from apkpro.me by following the steps and precautions mentioned in this article. You can also improve your racing skills and performance by following the tips and tricks mentioned in this article. You can also compare CarX Street with other popular racing games by reading the user reviews and ratings mentioned in this article. We hope this article has helped you learn more about CarX Street and how to download it from apkpro.me. We also hope you enjoy playing CarX Street and share your feedback and opinions about it.
-
Thank you for reading this article. Have a nice day!
-
Frequently asked questions
-
Here are some of the most frequently asked questions about CarX Street and apkpro.me:
-
-
What are the minimum requirements to play CarX Street on my device?
-
The minimum requirements to play CarX Street on your device are:
-
-
Android 5.0 or higher
-
2 GB of RAM or more
-
1 GB of free storage space or more
-
A stable internet connection
-
-
Is CarX Street safe to download from apkpro.me?
-
Yes, CarX Street is safe to download from apkpro.me, as long as you follow the precautions mentioned in this article. Apkpro.me is a reliable and safe website that provides free and safe downloads of various Android apps and games, including CarX Street. However, you should always scan the file with an antivirus or malware scanner before opening it to ensure it is free of viruses or malicious code.
-
How can I contact the developers of CarX Street or apkpro.me?
-
You can contact the developers of CarX Street or apkpro.me using the following methods:
-
- """
-
- state = gr.State()
- notice = gr.Markdown(notice_markdown, elem_id="notice_markdown")
-
- with gr.Row(elem_id="model_selector_row", visible=False):
- model_selector = gr.Dropdown(
- choices=models,
- value=models[0] if len(models) > 0 else "",
- interactive=True,
- show_label=False,
- ).style(container=False)
-
- chatbot = gr.Chatbot(elem_id="chatbot", visible=False).style(height=550)
- with gr.Row(elem_id="text-box-style"):
- with gr.Column(scale=20):
- textbox = gr.Textbox(
- show_label=False,
- placeholder="Enter text and press ENTER",
- visible=False,
- ).style(container=False)
- with gr.Column(scale=1, min_width=50):
- send_btn = gr.Button(value="Send", visible=False, elem_id="btn-send-style")
-
- with gr.Accordion("Parameters", open=False, visible=False, elem_id="btn-style") as parameter_row:
- temperature = gr.Slider(
- minimum=0.0,
- maximum=1.0,
- value=0.001,
- step=0.1,
- interactive=True,
- label="Temperature",
- visible=False,
- )
- max_output_tokens = gr.Slider(
- minimum=0,
- maximum=1024,
- value=1024,
- step=1,
- interactive=True,
- label="Max output tokens",
- )
- topk = gr.Slider(
- minimum=1,
- maximum=10,
- value=1,
- step=1,
- interactive=True,
- label="TOP K",
- )
-
-
- with gr.Row(visible=False, elem_id="btn-style") as button_row:
- upvote_btn = gr.Button(value="👍 Upvote", interactive=False, visible=False, elem_id="btn-list-style")
- downvote_btn = gr.Button(value="👎 Downvote", interactive=False, visible=False, elem_id="btn-list-style")
- flag_btn = gr.Button(value="⚠️ Flag", interactive=False, visible=False, elem_id="btn-list-style")
- # stop_btn = gr.Button(value="⏹️ Stop Generation", interactive=False)
- regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False, elem_id="btn-list-style")
- clear_btn = gr.Button(value="🗑️ Clear history", interactive=False, elem_id="btn-list-style")
-
-
- gr.Markdown(learn_more_markdown)
-
- # Register listeners
- btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, clear_btn]
- upvote_btn.click(
- upvote_last_response,
- [state, model_selector],
- [textbox, upvote_btn, downvote_btn, flag_btn],
- )
- downvote_btn.click(
- downvote_last_response,
- [state, model_selector],
- [textbox, upvote_btn, downvote_btn, flag_btn],
- )
- flag_btn.click(
- flag_last_response,
- [state, model_selector],
- [textbox, upvote_btn, downvote_btn, flag_btn],
- )
- regenerate_btn.click(regenerate, state, [state, chatbot, textbox] + btn_list).then(
- http_bot,
- [state, model_selector, temperature, max_output_tokens, topk],
- [state, chatbot] + btn_list,
- )
- clear_btn.click(clear_history, None, [state, chatbot, textbox] + btn_list)
-
- model_selector.change(clear_history, None, [state, chatbot, textbox] + btn_list)
-
- textbox.submit(
- add_text, [state, textbox], [state, chatbot, textbox] + btn_list
- ).then(
- http_bot,
- [state, model_selector, temperature, max_output_tokens, topk],
- [state, chatbot] + btn_list,
- )
- send_btn.click(
- add_text, [state, textbox], [state, chatbot, textbox] + btn_list
- ).then(
- http_bot,
- [state, model_selector, temperature, max_output_tokens, topk],
- [state, chatbot] + btn_list,
- )
-
- return state, model_selector, chatbot, textbox, send_btn, button_row, parameter_row
-
-
-def build_demo(models):
- with gr.Blocks(
- title="NeuralChat · Intel",
- theme=gr.themes.Base(),
- css=block_css,
- ) as demo:
- url_params = gr.JSON(visible=False)
-
- (
- state,
- model_selector,
- chatbot,
- textbox,
- send_btn,
- button_row,
- parameter_row,
- ) = build_single_model_ui(models)
-
- if model_list_mode == "once":
- demo.load(
- load_demo,
- [url_params],
- [
- state,
- model_selector,
- chatbot,
- textbox,
- send_btn,
- button_row,
- parameter_row,
- ],
- _js=get_window_url_params,
- )
- else:
- raise ValueError(f"Unknown model list mode: {model_list_mode}")
-
- return demo
-
-
-if __name__ == "__main__":
-
- controller_url = "http://3.94.111.246:80"
- host = "0.0.0.0"
-
- concurrency_count = 10
- model_list_mode = "once"
- share = False
- moderate = False
-
- set_global_vars(controller_url, moderate)
- models = get_model_list(controller_url)
-
- demo = build_demo(models)
- demo.queue(
- concurrency_count=concurrency_count, status_update_rate=10, api_open=False
- ).launch(
- server_name=host, share=share, max_threads=200
- )
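The listener wiring above follows Gradio's chained-event pattern: the first handler updates state synchronously, then `.then()` streams the bot reply. Reduced to its skeleton, with illustrative handler names standing in for the app's own functions:

```python
import gradio as gr

def add_text(history, text):
    # Append the user turn and clear the textbox.
    return history + [(text, None)], ""

def bot(history):
    # Fill in the assistant turn (a real app would call a model here).
    history[-1] = (history[-1][0], "echo: " + history[-1][0])
    return history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox(placeholder="Enter text and press ENTER")
    # .submit(...).then(...) runs the second handler after the first finishes,
    # mirroring the add_text -> http_bot chain in the app above.
    textbox.submit(add_text, [chatbot, textbox], [chatbot, textbox]).then(
        bot, chatbot, chatbot
    )

if __name__ == "__main__":
    demo.queue().launch()
```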
diff --git a/spaces/JLD/docker-hello-world/README.md b/spaces/JLD/docker-hello-world/README.md
deleted file mode 100644
index ea98f525d68b16dce3f76e005b3f33f9161ec8b6..0000000000000000000000000000000000000000
--- a/spaces/JLD/docker-hello-world/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Docker Hello World
-emoji: 😻
-colorFrom: indigo
-colorTo: yellow
-sdk: docker
-app_port: 7860
-pinned: false
-license: unlicense
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/flow_viz.py b/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/flow_viz.py
deleted file mode 100644
index dcee65e89b91b07ee0496aeb4c7e7436abf99641..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/flow_viz.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# Flow visualization code used from https://github.com/tomrunia/OpticalFlow_Visualization
-
-
-# MIT License
-#
-# Copyright (c) 2018 Tom Runia
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to conditions.
-#
-# Author: Tom Runia
-# Date Created: 2018-08-03
-
-import numpy as np
-
-def make_colorwheel():
- """
- Generates a color wheel for optical flow visualization as presented in:
- Baker et al. "A Database and Evaluation Methodology for Optical Flow" (ICCV, 2007)
- URL: http://vision.middlebury.edu/flow/flowEval-iccv07.pdf
-
- Code follows the original C++ source code of Daniel Scharstein.
- Code follows the Matlab source code of Deqing Sun.
-
- Returns:
- np.ndarray: Color wheel
- """
-
- RY = 15
- YG = 6
- GC = 4
- CB = 11
- BM = 13
- MR = 6
-
- ncols = RY + YG + GC + CB + BM + MR
- colorwheel = np.zeros((ncols, 3))
- col = 0
-
- # RY
- colorwheel[0:RY, 0] = 255
- colorwheel[0:RY, 1] = np.floor(255*np.arange(0,RY)/RY)
- col = col+RY
- # YG
- colorwheel[col:col+YG, 0] = 255 - np.floor(255*np.arange(0,YG)/YG)
- colorwheel[col:col+YG, 1] = 255
- col = col+YG
- # GC
- colorwheel[col:col+GC, 1] = 255
- colorwheel[col:col+GC, 2] = np.floor(255*np.arange(0,GC)/GC)
- col = col+GC
- # CB
- colorwheel[col:col+CB, 1] = 255 - np.floor(255*np.arange(CB)/CB)
- colorwheel[col:col+CB, 2] = 255
- col = col+CB
- # BM
- colorwheel[col:col+BM, 2] = 255
- colorwheel[col:col+BM, 0] = np.floor(255*np.arange(0,BM)/BM)
- col = col+BM
- # MR
- colorwheel[col:col+MR, 2] = 255 - np.floor(255*np.arange(MR)/MR)
- colorwheel[col:col+MR, 0] = 255
- return colorwheel
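The segment counts above sum to 55 wheel entries; a one-line sanity check (illustrative only):

```python
import numpy as np

cw = make_colorwheel()
assert cw.shape == (55, 3)   # RY+YG+GC+CB+BM+MR = 15+6+4+11+13+6
```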
-
-
-def flow_uv_to_colors(u, v, convert_to_bgr=False):
- """
- Applies the flow color wheel to (possibly clipped) flow components u and v.
-
- According to the C++ source code of Daniel Scharstein
- According to the Matlab source code of Deqing Sun
-
- Args:
- u (np.ndarray): Input horizontal flow of shape [H,W]
- v (np.ndarray): Input vertical flow of shape [H,W]
- convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False.
-
- Returns:
- np.ndarray: Flow visualization image of shape [H,W,3]
- """
- flow_image = np.zeros((u.shape[0], u.shape[1], 3), np.uint8)
- colorwheel = make_colorwheel() # shape [55x3]
- ncols = colorwheel.shape[0]
- rad = np.sqrt(np.square(u) + np.square(v))
- a = np.arctan2(-v, -u)/np.pi
- fk = (a+1) / 2*(ncols-1)
- k0 = np.floor(fk).astype(np.int32)
- k1 = k0 + 1
- k1[k1 == ncols] = 0
- f = fk - k0
- for i in range(colorwheel.shape[1]):
- tmp = colorwheel[:,i]
- col0 = tmp[k0] / 255.0
- col1 = tmp[k1] / 255.0
- col = (1-f)*col0 + f*col1
- idx = (rad <= 1)
- col[idx] = 1 - rad[idx] * (1-col[idx])
- col[~idx] = col[~idx] * 0.75 # out of range
- # Note the 2-i => BGR instead of RGB
- ch_idx = 2-i if convert_to_bgr else i
- flow_image[:,:,ch_idx] = np.floor(255 * col)
- return flow_image
-
-
-def flow_to_image(flow_uv, clip_flow=None, convert_to_bgr=False):
- """
- Expects a two dimensional flow image of shape [H,W,2].
-
- Args:
- flow_uv (np.ndarray): Flow UV image of shape [H,W,2]
- clip_flow (float, optional): Clip maximum of flow values. Defaults to None.
- convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False.
-
- Returns:
- np.ndarray: Flow visualization image of shape [H,W,3]
- """
- assert flow_uv.ndim == 3, 'input flow must have three dimensions'
- assert flow_uv.shape[2] == 2, 'input flow must have shape [H,W,2]'
- if clip_flow is not None:
- flow_uv = np.clip(flow_uv, 0, clip_flow)
- u = flow_uv[:,:,0]
- v = flow_uv[:,:,1]
- rad = np.sqrt(np.square(u) + np.square(v))
- rad_max = np.max(rad)
- epsilon = 1e-5
- u = u / (rad_max + epsilon)
- v = v / (rad_max + epsilon)
- return flow_uv_to_colors(u, v, convert_to_bgr)
\ No newline at end of file
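A quick sanity check of the encoding above: a synthetic radial flow field should produce a full sweep of the color wheel. Illustrative usage only:

```python
import numpy as np

# Build a synthetic flow field: (u, v) pointing outward from the center.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
u = (xs - w / 2).astype(np.float32)
v = (ys - h / 2).astype(np.float32)
flow = np.stack([u, v], axis=-1)     # shape [H, W, 2]

img = flow_to_image(flow)            # uint8 visualization, shape [H, W, 3]
assert img.dtype == np.uint8 and img.shape == (h, w, 3)
```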
diff --git a/spaces/KPCGD/bingo/src/components/ui/textarea.tsx b/spaces/KPCGD/bingo/src/components/ui/textarea.tsx
deleted file mode 100644
index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/components/ui/textarea.tsx
+++ /dev/null
@@ -1,24 +0,0 @@
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-export interface TextareaProps
- extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}
-
-const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(
- ({ className, ...props }, ref) => {
- return (
- <textarea
- className={cn(className)}
- ref={ref}
- {...props}
- />
- )
- }
-)
-Textarea.displayName = 'Textarea'
-
-export { Textarea }
diff --git a/spaces/Kabriske/Multilingual_Video_Subtitler/audio_to_transcript.py b/spaces/Kabriske/Multilingual_Video_Subtitler/audio_to_transcript.py
deleted file mode 100644
index a978fcb49bc79af6eec9130820b361e46641ee80..0000000000000000000000000000000000000000
--- a/spaces/Kabriske/Multilingual_Video_Subtitler/audio_to_transcript.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import os
-from typing import Dict
-
-import torch
-import whisper
-
-import numpy as np # for counting parameters
-
-from utils import log
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-
-class TranscribeAudio:
- def __init__(self):
- self.model = whisper.load_model("base", device=device)
- log(
- f"Model is {'multilingual' if self.model.is_multilingual else 'English-only'} "
- f"and has {sum(np.prod(p.shape) for p in self.model.parameters()):,} parameters."
- )
-
- def transcribe(self, audio_file_path: str, language: str = "en") -> Dict:
- log(f"Transcribing {audio_file_path} in {language}")
- options = dict(language=language, beam_size=5, best_of=5)
- transcribe_options = dict(task="transcribe", **options)
- result = self.model.transcribe(audio_file_path, **transcribe_options)
- return result
-
- def save_output(self, transcript_output: Dict, audio_file_path: str) -> str:
- filename, ext = os.path.splitext(audio_file_path)
- directory = os.path.dirname(filename)
- log(f"Saving output to {directory} directory as {filename}.vtt")
- srt_writer = whisper.utils.get_writer("srt", directory)
- vtt_writer = whisper.utils.get_writer("vtt", directory)
-
- # Save as an SRT file
- srt_writer(result=transcript_output, audio_path=audio_file_path)
-
- # Save as a VTT file
- vtt_writer(result=transcript_output, audio_path=audio_file_path)
-
- return f"{filename}.vtt"
-
- def __call__(self, audio_file_path: str, output_dir: str, input_language: str = "en") -> str:
- transcript = self.transcribe(audio_file_path, input_language)
- transcript_path = self.save_output(transcript, audio_file_path)
- return transcript_path
-
-
-if __name__ == '__main__':
- transcribe_audio = TranscribeAudio()
- transcribe_audio('iPhone_14_Pro.mp3', 'sample')
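Upstream of this class, the subtitler needs an audio track to feed Whisper; a sketch of extracting one from a video with ffmpeg (assumed to be installed; the helper name and paths are illustrative):

```python
import subprocess

def extract_audio(video_path: str, audio_path: str = "audio.wav") -> str:
    # 16 kHz mono WAV is what Whisper resamples to internally anyway.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ac", "1", "-ar", "16000", audio_path],
        check=True,
    )
    return audio_path
```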
diff --git a/spaces/Kajise/Demucs_v4-FT_4s/app.py b/spaces/Kajise/Demucs_v4-FT_4s/app.py
deleted file mode 100644
index 152fda06a141c375b8c64955cd4e60b0d69f48da..0000000000000000000000000000000000000000
--- a/spaces/Kajise/Demucs_v4-FT_4s/app.py
+++ /dev/null
@@ -1,104 +0,0 @@
-from __future__ import annotations
-from typing import Iterable
-
-import os
-from scipy.io.wavfile import write
-
-import gradio as Gradio
-from gradio.themes.base import Base
-from gradio.themes.utils import colors, fonts, sizes
-
-theme = Gradio.themes.Monochrome(
- primary_hue="purple",
- secondary_hue="purple",
- neutral_hue="neutral",
- radius_size=Gradio.themes.sizes.radius_sm,
- font=[Gradio.themes.GoogleFont("Inter"), "ui-sans-serif", "system-ui", "sans-serif"],
-)
-
-class PurpleTheme(Base):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.purple,
- secondary_hue: colors.Color | str = colors.purple,
- neutral_hue: colors.Color | str = colors.neutral,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Inter"),
- "ui-sans-serif",
- "sans-serif",
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Space Grotesk"),
- "ui-monospace",
- "monospace",
- ),
- ):
- super().__init__(
- primary_hue=primary_hue,
- secondary_hue=secondary_hue,
- neutral_hue=neutral_hue,
- spacing_size=spacing_size,
- radius_size=radius_size,
- font=font,
- font_mono=font_mono,
- )
- super().set(
- button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)",
- button_primary_background_fill_hover="linear-gradient(90deg, *primary_200, *secondary_300)",
- button_primary_text_color="white",
- button_primary_background_fill_dark="linear-gradient(90deg, *primary_600, *secondary_800)",
- block_shadow="*shadow_drop_lg",
- button_shadow="*shadow_drop_lg",
- input_background_fill="zinc",
- input_border_color="*secondary_300",
- input_shadow="*shadow_drop",
- input_shadow_focus="*shadow_drop_lg",
- )
-
-custom_theme = PurpleTheme()
-
-def run_demucs(audio):
- os.makedirs("out", exist_ok=True)
- write('test.wav', audio[0], audio[1])
- result = os.system("python3 -m demucs.separate -n htdemucs_ft -d cpu test.wav -o out")
- print(f"Demucs result: {result}")
-
- # Check if files exist before returning
- files = ["./out/htdemucs_ft/test/vocals.wav",
- "./out/htdemucs_ft/test/bass.wav",
- "./out/htdemucs_ft/test/drums.wav",
- "./out/htdemucs_ft/test/other.wav"]
-
- for file in files:
- if not os.path.isfile(file):
- print(f"File not found: {file}")
- else:
- print(f"File exists: {file}")
-
- return files
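`os.system` only hands back an exit code; the same separation call via `subprocess.run` raises on failure instead. A sketch with the same flags as above (the helper name is illustrative):

```python
import subprocess

def separate(wav_path: str, out_dir: str = "out", model: str = "htdemucs_ft") -> None:
    # Raises CalledProcessError on a non-zero exit instead of failing silently.
    subprocess.run(
        ["python3", "-m", "demucs.separate", "-n", model, "-d", "cpu", wav_path, "-o", out_dir],
        check=True,
    )
```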
-
-title = "Demucs (finetuned_4s)"
-description = "
Uses the 'canary bleeding-edge' version of Demucs (v4) that introduces the latest Hybrid Transformer model Heavily inspired from Thafx's Demucs v4 Space, which is based on akhaliq's PIP Demucs Space
"
-
-Gradio.Interface(
- run_demucs,
- Gradio.Audio(type="numpy", label="Input"),
- [Gradio.Audio(type="filepath", label="Vocals", interactive=False),
- Gradio.Audio(type="filepath", label="Bass", interactive=False),
- Gradio.Audio(type="filepath", label="Drums", interactive=False),
- Gradio.Audio(type="filepath", label="Other", interactive=False)],
- title=title,
- description=description,
- article=article,
- theme=custom_theme,
- analytics_enabled=False,
- css=".generating {visibility: hidden}"
-).launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/Kamtera/Persian-tts-CoquiTTS/README.md b/spaces/Kamtera/Persian-tts-CoquiTTS/README.md
deleted file mode 100644
index 3ddf10b5bbb3506431508468babe65904d260aa0..0000000000000000000000000000000000000000
--- a/spaces/Kamtera/Persian-tts-CoquiTTS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Persian Tts CoquiTTS
-emoji: 🚀
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/linear_assignment.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/linear_assignment.py
deleted file mode 100644
index 931a9685a594eab4b553e497bbe1eaca090861e1..0000000000000000000000000000000000000000
--- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/linear_assignment.py
+++ /dev/null
@@ -1,240 +0,0 @@
-# vim: expandtab:ts=4:sw=4
-from __future__ import absolute_import
-import numpy as np
-# The linear sum assignment problem is also known as minimum weight matching in bipartite graphs.
-from scipy.optimize import linear_sum_assignment as linear_assignment
-from . import kalman_filter
-
-
-INFTY_COST = 1e+5
-
-# min_cost_matching solves the linear assignment problem with the Hungarian algorithm.
-# It is given either the gated cosine-distance cost or the IoU cost.
-def min_cost_matching(
- distance_metric, max_distance, tracks, detections, track_indices=None,
- detection_indices=None):
- """Solve linear assignment problem.
-
- Parameters
- ----------
- distance_metric : Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
- The distance metric is given a list of tracks and detections as well as
- a list of N track indices and M detection indices. The metric should
- return the NxM dimensional cost matrix, where element (i, j) is the
- association cost between the i-th track in the given track indices and
- the j-th detection in the given detection_indices.
- max_distance : float
- Gating threshold. Associations with cost larger than this value are
- disregarded.
- tracks : List[track.Track]
- A list of predicted tracks at the current time step.
- detections : List[detection.Detection]
- A list of detections at the current time step.
- track_indices : List[int]
- List of track indices that maps rows in `cost_matrix` to tracks in
- `tracks` (see description above).
- detection_indices : List[int]
- List of detection indices that maps columns in `cost_matrix` to
- detections in `detections` (see description above).
-
- Returns
- -------
- (List[(int, int)], List[int], List[int])
- Returns a tuple with the following three entries:
- * A list of matched track and detection indices.
- * A list of unmatched track indices.
- * A list of unmatched detection indices.
-
- """
- if track_indices is None:
- track_indices = np.arange(len(tracks))
- if detection_indices is None:
- detection_indices = np.arange(len(detections))
-
- if len(detection_indices) == 0 or len(track_indices) == 0:
- return [], track_indices, detection_indices # Nothing to match.
-
- # Compute the cost matrix
- cost_matrix = distance_metric(
- tracks, detections, track_indices, detection_indices)
- cost_matrix[cost_matrix > max_distance] = max_distance + 1e-5
-
- # Run the Hungarian algorithm; the resulting row indices index into tracks and the column indices into detections
- row_indices, col_indices = linear_assignment(cost_matrix)
-
- matches, unmatched_tracks, unmatched_detections = [], [], []
- # Find unmatched detections
- for col, detection_idx in enumerate(detection_indices):
- if col not in col_indices:
- unmatched_detections.append(detection_idx)
- # Find unmatched tracks
- for row, track_idx in enumerate(track_indices):
- if row not in row_indices:
- unmatched_tracks.append(track_idx)
- # Iterate over the matched (track, detection) index pairs
- for row, col in zip(row_indices, col_indices):
- track_idx = track_indices[row]
- detection_idx = detection_indices[col]
- # If the corresponding cost is greater than the threshold max_distance, also treat the pair as unmatched
- if cost_matrix[row, col] > max_distance:
- unmatched_tracks.append(track_idx)
- unmatched_detections.append(detection_idx)
- else:
- matches.append((track_idx, detection_idx))
- return matches, unmatched_tracks, unmatched_detections
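To make the row/column bookkeeping above concrete, a toy run of the same SciPy solver (values are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.2, 0.9],
                 [0.8, 0.1]])
rows, cols = linear_sum_assignment(cost)
# Rows index tracks, columns index detections: the minimum-cost assignment
# pairs track 0 with detection 0 and track 1 with detection 1 (total cost 0.3).
print(list(zip(rows, cols)))   # [(0, 0), (1, 1)]
```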
-
-
-def matching_cascade(
- distance_metric, max_distance, cascade_depth, tracks, detections,
- track_indices=None, detection_indices=None):
- """Run matching cascade.
-
- Parameters
- ----------
- distance_metric : Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
- The distance metric is given a list of tracks and detections as well as
- a list of N track indices and M detection indices. The metric should
- return the NxM dimensional cost matrix, where element (i, j) is the
- association cost between the i-th track in the given track indices and
- the j-th detection in the given detection indices.
- max_distance : float
- Gating threshold. Associations with cost larger than this value are
- disregarded.
- cascade_depth: int
- The cascade depth, should be set to the maximum track age.
- tracks : List[track.Track]
- A list of predicted tracks at the current time step.
- detections : List[detection.Detection]
- A list of detections at the current time step.
- track_indices : Optional[List[int]]
- List of track indices that maps rows in `cost_matrix` to tracks in
- `tracks` (see description above). Defaults to all tracks.
- detection_indices : Optional[List[int]]
- List of detection indices that maps columns in `cost_matrix` to
- detections in `detections` (see description above). Defaults to all
- detections.
-
- Returns
- -------
- (List[(int, int)], List[int], List[int])
- Returns a tuple with the following three entries:
- * A list of matched track and detection indices.
- * A list of unmatched track indices.
- * A list of unmatched detection indices.
-
- """
-
- # Default track_indices and detection_indices to all tracks / detections
- if track_indices is None:
- track_indices = list(range(len(tracks)))
- if detection_indices is None:
- detection_indices = list(range(len(detections)))
-
- # Initialize the match set M ← ∅
- # and the unmatched detection set U ← D (all detections)
- unmatched_detections = detection_indices
- matches = []
- # Match each level of tracks in turn, from the smallest time-since-update to the largest
- for level in range(cascade_depth):
- # Stop once no detections are left
- if len(unmatched_detections) == 0: # No detections left
- break
-
- # All track indices at the current level
- # Step 6: select tracks by age
- track_indices_l = [
- k for k in track_indices
- if tracks[k].time_since_update == 1 + level
- ]
- # Skip this level if it has no tracks
- if len(track_indices_l) == 0: # Nothing to match at this level
- continue
-
- # Step 7: call min_cost_matching to match this level
- matches_l, _, unmatched_detections = \
- min_cost_matching(
- distance_metric, max_distance, tracks, detections,
- track_indices_l, unmatched_detections)
- matches += matches_l # Step 8
- unmatched_tracks = list(set(track_indices) - set(k for k, _ in matches)) # Step 9
- return matches, unmatched_tracks, unmatched_detections
-
-'''
-Gated cost matrix: constrain the cost matrix using the distance between the
-Kalman-filtered state distribution and the measurements. The distances in the
-cost matrix are appearance similarities between tracks and detections.
-If one track has to be matched against two detections with very similar
-appearance features, mistakes are easy to make. Computing each detection's
-Mahalanobis distance to the track and applying a gating_threshold separates
-out the detection whose Mahalanobis distance is too large, reducing wrong matches.
-'''
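-# Concrete effect (values from kalman_filter.chi2inv95, quoted below): with
-# only_position=False the measurement space is 4-D and the threshold is 9.4877,
-# so any pair whose squared Mahalanobis distance exceeds 9.4877 has its cost
-# overwritten with gated_cost (INFTY_COST = 1e+5 by default) and is never selected.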
-def gate_cost_matrix(
- kf, cost_matrix, tracks, detections, track_indices, detection_indices,
- gated_cost=INFTY_COST, only_position=False):
- """Invalidate infeasible entries in cost matrix based on the state
- distributions obtained by Kalman filtering.
-
- Parameters
- ----------
- kf : The Kalman filter.
- cost_matrix : ndarray
- The NxM dimensional cost matrix, where N is the number of track indices
- and M is the number of detection indices, such that entry (i, j) is the
- association cost between `tracks[track_indices[i]]` and
- `detections[detection_indices[j]]`.
- tracks : List[track.Track]
- A list of predicted tracks at the current time step.
- detections : List[detection.Detection]
- A list of detections at the current time step.
- track_indices : List[int]
- List of track indices that maps rows in `cost_matrix` to tracks in
- `tracks` (see description above).
- detection_indices : List[int]
- List of detection indices that maps columns in `cost_matrix` to
- detections in `detections` (see description above).
- gated_cost : Optional[float]
- Entries in the cost matrix corresponding to infeasible associations are
- set to this value. Defaults to a very large value.
- only_position : Optional[bool]
- If True, only the x, y position of the state distribution is considered
- during gating. Defaults to False.
-
- Returns
- -------
- ndarray
- Returns the modified cost matrix.
-
- """
- # Invalidate infeasible entries in the cost matrix based on the Kalman-filtered state distribution.
- gating_dim = 2 if only_position else 4 # dimensionality of the measurement space
- # The Mahalanobis distance accounts for state-estimation uncertainty by measuring,
- # in standard deviations, how far a detection lies from the mean track position.
- # Unlikely associations are excluded using the 95% confidence threshold computed
- # from the inverse chi^2 distribution; for a 4-D measurement space it is 9.4877.
- gating_threshold = kalman_filter.chi2inv95[gating_dim]
- measurements = np.asarray(
- [detections[i].to_xyah() for i in detection_indices])
- for row, track_idx in enumerate(track_indices):
- track = tracks[track_idx]
- # KalmanFilter.gating_distance computes the gating distance between the state distribution and the measurements
- gating_distance = kf.gating_distance(
- track.mean, track.covariance, measurements, only_position)
- cost_matrix[row, gating_distance > gating_threshold] = gated_cost
- return cost_matrix
diff --git a/spaces/Keenlol/Wood_Classification/app.py b/spaces/Keenlol/Wood_Classification/app.py
deleted file mode 100644
index 8d895231e2adf59803f4ada70733d5349a97c0e6..0000000000000000000000000000000000000000
--- a/spaces/Keenlol/Wood_Classification/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-import skimage
-import pathlib
-import platform
-import os
-
-
-# The model was exported on Windows; remap WindowsPath so it unpickles on Linux.
-if platform.system() == 'Linux': pathlib.WindowsPath = pathlib.PosixPath
-
-
-learn = load_learner('model.pkl')
-examples = [str(x) for x in get_image_files('images')]
-
-
-labels = learn.dls.vocab
-def predict(img):
- img = PILImage.create(img)
- pred, pred_idx, probs = learn.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-
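-# predict returns a {label: probability} mapping over the full vocab, e.g.
-# {'oak': 0.91, 'pine': 0.06, ...} (the label names here are hypothetical).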
-examples_list = [['examples/' + image] for image in os.listdir('examples/')]
-
-
-title = "Wood Type Classifier"
-description = "Image classification model trained with a dataset from Zenodo using fastai"
-article = "Dataset"
-interpretation='default'
-enable_queue=True
-inputs = gr.Image(shape=(224, 224))
-
-gr.Interface(fn=predict,
- inputs=inputs,
- outputs=gr.Label(num_top_classes=3),
- title=title,
- description=description,
- article=article,
- interpretation=interpretation,
- examples=examples_list).launch(inline=False, enable_queue=enable_queue)
\ No newline at end of file
diff --git a/spaces/KonradSzafer/HF-QA-Demo/discord_bot/__main__.py b/spaces/KonradSzafer/HF-QA-Demo/discord_bot/__main__.py
deleted file mode 100644
index c1a831c4e01f43013080ec34df20321204952ef8..0000000000000000000000000000000000000000
--- a/spaces/KonradSzafer/HF-QA-Demo/discord_bot/__main__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from qa_engine import logger, Config, QAEngine
-from discord_bot.client import DiscordClient
-
-
-config = Config()
-qa_engine = QAEngine(
- llm_model_id=config.question_answering_model_id,
- embedding_model_id=config.embedding_model_id,
- index_repo_id=config.index_repo_id,
- prompt_template=config.prompt_template,
- use_docs_for_context=config.use_docs_for_context,
- num_relevant_docs=config.num_relevant_docs,
- add_sources_to_response=config.add_sources_to_response,
- use_messages_for_context=config.use_messages_in_context,
- debug=config.debug
-)
-client = DiscordClient(
- qa_engine=qa_engine,
- num_last_messages=config.num_last_messages,
- use_names_in_context=config.use_names_in_context,
- enable_commands=config.enable_commands,
- debug=config.debug
-)
-
-
-if __name__ == '__main__':
- logger.info('Starting Application...')
- client.run(config.discord_token)
diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/yolox_mode_switch_hook.py b/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/yolox_mode_switch_hook.py
deleted file mode 100644
index 39aadd94bd05dee6383b2d1365726b2a2df11245..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/engine/hooks/yolox_mode_switch_hook.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Sequence
-
-from mmengine.hooks import Hook
-from mmengine.model import is_model_wrapper
-
-from mmdet.registry import HOOKS
-
-
-@HOOKS.register_module()
-class YOLOXModeSwitchHook(Hook):
- """Switch the mode of YOLOX during training.
-
- This hook turns off the mosaic and mixup data augmentation and switches
- to use L1 loss in bbox_head.
-
- Args:
- num_last_epochs (int): The number of latter epochs in the end of the
- training to close the data augmentation and switch to L1 loss.
- Defaults to 15.
- skip_type_keys (Sequence[str], optional): Sequence of pipeline type
- names to skip. Defaults to ('Mosaic', 'RandomAffine', 'MixUp').
- """
-
- def __init__(
- self,
- num_last_epochs: int = 15,
- skip_type_keys: Sequence[str] = ('Mosaic', 'RandomAffine', 'MixUp')
- ) -> None:
- self.num_last_epochs = num_last_epochs
- self.skip_type_keys = skip_type_keys
- self._restart_dataloader = False
-
- def before_train_epoch(self, runner) -> None:
- """Close mosaic and mixup augmentation and switches to use L1 loss."""
- epoch = runner.epoch
- train_loader = runner.train_dataloader
- model = runner.model
- # TODO: refactor after mmengine using model wrapper
- if is_model_wrapper(model):
- model = model.module
- if (epoch + 1) == runner.max_epochs - self.num_last_epochs:
- runner.logger.info('No mosaic and mixup aug now!')
- # The dataset pipeline cannot be updated when persistent_workers
- # is True, so we need to force the dataloader's multi-process
- # restart. This is a very hacky approach.
- train_loader.dataset.update_skip_type_keys(self.skip_type_keys)
- if hasattr(train_loader, 'persistent_workers'
- ) and train_loader.persistent_workers is True:
- train_loader._DataLoader__initialized = False
- train_loader._iterator = None
- self._restart_dataloader = True
- runner.logger.info('Add additional L1 loss now!')
- model.bbox_head.use_l1 = True
- else:
- # Once the restart is complete, we need to restore
- # the initialization flag.
- if self._restart_dataloader:
- train_loader._DataLoader__initialized = True
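-# Hedged config sketch: in an mmdet config this hook is registered through
-# `custom_hooks`; the values below are illustrative, not mandated defaults.
-# custom_hooks = [dict(type='YOLOXModeSwitchHook', num_last_epochs=15, priority=48)]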
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/losses/utils.py b/spaces/KyanChen/RSPrompter/mmdet/models/losses/utils.py
deleted file mode 100644
index 5e6e7859f353f3e5456f0cfc1f66b4b0ad535427..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/losses/utils.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import functools
-from typing import Callable, Optional
-
-import torch
-import torch.nn.functional as F
-from torch import Tensor
-
-
-def reduce_loss(loss: Tensor, reduction: str) -> Tensor:
- """Reduce loss as specified.
-
- Args:
- loss (Tensor): Elementwise loss tensor.
- reduction (str): Options are "none", "mean" and "sum".
-
- Returns:
- Tensor: Reduced loss tensor.
- """
- reduction_enum = F._Reduction.get_enum(reduction)
- # none: 0, elementwise_mean:1, sum: 2
- if reduction_enum == 0:
- return loss
- elif reduction_enum == 1:
- return loss.mean()
- elif reduction_enum == 2:
- return loss.sum()
-
-
-def weight_reduce_loss(loss: Tensor,
- weight: Optional[Tensor] = None,
- reduction: str = 'mean',
- avg_factor: Optional[float] = None) -> Tensor:
- """Apply element-wise weight and reduce loss.
-
- Args:
- loss (Tensor): Element-wise loss.
- weight (Optional[Tensor], optional): Element-wise weights.
- Defaults to None.
- reduction (str, optional): Same as built-in losses of PyTorch.
- Defaults to 'mean'.
- avg_factor (Optional[float], optional): Average factor when
- computing the mean of losses. Defaults to None.
-
- Returns:
- Tensor: Processed loss values.
- """
- # if weight is specified, apply element-wise weight
- if weight is not None:
- loss = loss * weight
-
- # if avg_factor is not specified, just reduce the loss
- if avg_factor is None:
- loss = reduce_loss(loss, reduction)
- else:
- # if reduction is mean, then average the loss by avg_factor
- if reduction == 'mean':
- # Avoid causing ZeroDivisionError when avg_factor is 0.0,
- # i.e., all labels of an image belong to ignore index.
- eps = torch.finfo(torch.float32).eps
- loss = loss.sum() / (avg_factor + eps)
- # if reduction is 'none', then do nothing, otherwise raise an error
- elif reduction != 'none':
- raise ValueError('avg_factor can not be used with reduction="sum"')
- return loss
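-# Worked example: with loss=[1, 2, 3], weight=[1, 0, 1], reduction='mean' and
-# avg_factor=2, the weighted loss is [1, 0, 3] and the result is (1+0+3)/2 = 2.0.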
-
-
-def weighted_loss(loss_func: Callable) -> Callable:
- """Create a weighted version of a given loss function.
-
- To use this decorator, the loss function must have the signature like
- `loss_func(pred, target, **kwargs)`. The function only needs to compute
- element-wise loss without any reduction. This decorator will add weight
- and reduction arguments to the function. The decorated function will have
- the signature like `loss_func(pred, target, weight=None, reduction='mean',
- avg_factor=None, **kwargs)`.
-
- :Example:
-
- >>> import torch
- >>> @weighted_loss
- >>> def l1_loss(pred, target):
- >>> return (pred - target).abs()
-
- >>> pred = torch.Tensor([0, 2, 3])
- >>> target = torch.Tensor([1, 1, 1])
- >>> weight = torch.Tensor([1, 0, 1])
-
- >>> l1_loss(pred, target)
- tensor(1.3333)
- >>> l1_loss(pred, target, weight)
- tensor(1.)
- >>> l1_loss(pred, target, reduction='none')
- tensor([1., 1., 2.])
- >>> l1_loss(pred, target, weight, avg_factor=2)
- tensor(1.5000)
- """
-
- @functools.wraps(loss_func)
- def wrapper(pred: Tensor,
- target: Tensor,
- weight: Optional[Tensor] = None,
- reduction: str = 'mean',
- avg_factor: Optional[int] = None,
- **kwargs) -> Tensor:
- """
- Args:
- pred (Tensor): The prediction.
- target (Tensor): Target bboxes.
- weight (Optional[Tensor], optional): The weight of loss for each
- prediction. Defaults to None.
- reduction (str, optional): Options are "none", "mean" and "sum".
- Defaults to 'mean'.
- avg_factor (Optional[int], optional): Average factor that is used
- to average the loss. Defaults to None.
-
- Returns:
- Tensor: Loss tensor.
- """
- # get element-wise loss
- loss = loss_func(pred, target, **kwargs)
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
- return wrapper
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/chat.py b/spaces/Lamai/LAMAIGPT/autogpt/chat.py
deleted file mode 100644
index 1f6bca96eb216c667656b50f131006b83c681065..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/chat.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import time
-
-from openai.error import RateLimitError
-
-from autogpt import token_counter
-from autogpt.config import Config
-from autogpt.llm_utils import create_chat_completion
-from autogpt.logs import logger
-
-cfg = Config()
-
-
-def create_chat_message(role, content):
- """
- Create a chat message with the given role and content.
-
- Args:
- role (str): The role of the message sender, e.g., "system", "user", or "assistant".
- content (str): The content of the message.
-
- Returns:
- dict: A dictionary containing the role and content of the message.
- """
- return {"role": role, "content": content}
-
-
-def generate_context(prompt, relevant_memory, full_message_history, model):
- current_context = [
- create_chat_message("system", prompt),
- create_chat_message(
- "system", f"The current time and date is {time.strftime('%c')}"
- ),
- create_chat_message(
- "system",
- f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
- ),
- ]
-
- # Set up the system context and the running token count; the history back-fill happens in chat_with_ai
- next_message_to_add_index = len(full_message_history) - 1
- insertion_index = len(current_context)
- # Count the currently used tokens
- current_tokens_used = token_counter.count_message_tokens(current_context, model)
- return (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- )
-
-
-# TODO: Change debug from hardcode to argument
-def chat_with_ai(
- prompt, user_input, full_message_history, permanent_memory, token_limit
-):
- """Interact with the OpenAI API, sending the prompt, user input, message history,
- and permanent memory."""
- while True:
- try:
- """
- Interact with the OpenAI API, sending the prompt, user input,
- message history, and permanent memory.
-
- Args:
- prompt (str): The prompt explaining the rules to the AI.
- user_input (str): The input from the user.
- full_message_history (list): The list of all messages sent between the
- user and the AI.
- permanent_memory (Obj): The memory object containing the permanent
- memory.
- token_limit (int): The maximum number of tokens allowed in the API call.
-
- Returns:
- str: The AI's response.
- """
- model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
- # Reserve 1000 tokens for the response
-
- logger.debug(f"Token limit: {token_limit}")
- send_token_limit = token_limit - 1000
-
- relevant_memory = (
- ""
- if len(full_message_history) == 0
- else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
- )
-
- logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
-
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(prompt, relevant_memory, full_message_history, model)
-
- while current_tokens_used > 2500:
- # remove memories until we are under 2500 tokens
- relevant_memory = relevant_memory[:-1]
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(
- prompt, relevant_memory, full_message_history, model
- )
-
- current_tokens_used += token_counter.count_message_tokens(
- [create_chat_message("user", user_input)], model
- ) # Account for user input (appended later)
-
- while next_message_to_add_index >= 0:
- # print (f"CURRENT TOKENS USED: {current_tokens_used}")
- message_to_add = full_message_history[next_message_to_add_index]
-
- tokens_to_add = token_counter.count_message_tokens(
- [message_to_add], model
- )
- if current_tokens_used + tokens_to_add > send_token_limit:
- break
-
- # Add the most recent message to the start of the current context,
- # after the two system prompts.
- current_context.insert(
- insertion_index, full_message_history[next_message_to_add_index]
- )
-
- # Count the currently used tokens
- current_tokens_used += tokens_to_add
-
- # Move to the next most recent message in the full message history
- next_message_to_add_index -= 1
-
- # Append user input, the length of this is accounted for above
- current_context.extend([create_chat_message("user", user_input)])
-
- # Calculate remaining tokens
- tokens_remaining = token_limit - current_tokens_used
- # assert tokens_remaining >= 0, "Tokens remaining is negative.
- # This should never happen, please submit a bug report at
- # https://www.github.com/Torantulino/Auto-GPT"
-
- # Debug print the current context
- logger.debug(f"Token limit: {token_limit}")
- logger.debug(f"Send Token Count: {current_tokens_used}")
- logger.debug(f"Tokens remaining for response: {tokens_remaining}")
- logger.debug("------------ CONTEXT SENT TO AI ---------------")
- for message in current_context:
- # Skip printing the prompt
- if message["role"] == "system" and message["content"] == prompt:
- continue
- logger.debug(f"{message['role'].capitalize()}: {message['content']}")
- logger.debug("")
- logger.debug("----------- END OF CONTEXT ----------------")
-
- # TODO: use a model defined elsewhere, so that model can contain
- # temperature and other settings we care about
- assistant_reply = create_chat_completion(
- model=model,
- messages=current_context,
- max_tokens=tokens_remaining,
- )
-
- # Update full message history
- full_message_history.append(create_chat_message("user", user_input))
- full_message_history.append(
- create_chat_message("assistant", assistant_reply)
- )
-
- return assistant_reply
- except RateLimitError:
- # TODO: When we switch to langchain, this is built in
- print("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
- time.sleep(10)
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/onnx_inference.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/onnx_inference.py
deleted file mode 100644
index 2726cdb9c8fb19414bd901aa4eb87fc1e4ab807a..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import librosa
-import numpy as np
-import onnxruntime
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- logger.info("Load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # double channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
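-# Shape note (a sketch; the 768 below is inferred from the default model name
-# vec-768-layer-12): a mono wav of shape (T,) is fed as (1, 1, T) and comes
-# back as features of shape (1, 768, n_frames) after the final transpose.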
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kwargs):
- if f0_predictor == "pm":
- from lib.infer.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from lib.infer.infer_pack.modules.F0Predictor.HarvestF0Predictor import (
- HarvestF0Predictor,
- )
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from lib.infer.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
- wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
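-# Minimal usage sketch (paths and speaker id illustrative):
-# rvc = OnnxRVC("weights/model.onnx", sr=40000, hop_size=512, device="cpu")
-# audio = rvc.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)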
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/__init__.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/__init__.py
deleted file mode 100644
index b4d96cc38ae6c0288a6bfa93f91e6f438af15e52..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/__init__.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-
-# flake8: noqa
-"""
-.. image:: ../logo.png
-
-Julius contains different Digital Signal Processing algorithms implemented
-with PyTorch, so that they are differentiable and available on CUDA.
-Note that all the modules implemented here can be used with TorchScript.
-
-For now, I have implemented:
-
-- `julius.resample`: fast sinc resampling.
-- `julius.fftconv`: FFT based convolutions.
-- `julius.lowpass`: FIR low pass filter banks.
-- `julius.filters`: FIR high pass and band pass filters.
-- `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands.
-
-Along with that, you might find useful utilities in:
-
-- `julius.core`: DSP related functions.
-- `julius.utils`: Generic utilities.
-
-
-Please check out [the Github repository](https://github.com/adefossez/julius) for more information.
-For a verification of the speed and correctness of Julius, check the benchmark module `bench`.
-
-
-This package is named in honor of
-[Julius O. Smith](https://ccrma.stanford.edu/~jos/),
-whose books and website were a gold mine of information for me when learning about DSP. Go check out his
-website if you want to learn more about DSP.
-"""
diff --git a/spaces/Lbin123/Lbingo/src/components/turn-counter.tsx b/spaces/Lbin123/Lbingo/src/components/turn-counter.tsx
deleted file mode 100644
index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000
--- a/spaces/Lbin123/Lbingo/src/components/turn-counter.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import React from 'react'
-import { Throttling } from '@/lib/bots/bing/types'
-
-export interface TurnCounterProps {
- throttling?: Throttling
-}
-
-export function TurnCounter({ throttling }: TurnCounterProps) {
- if (!throttling) {
- return null
- }
-
- return (
-
-The model is licensed under the CreativeML Open RAIL-M license. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for uses that go against the provisions set in the license. The license forbids sharing content that violates any law, produces harm to a person, disseminates personal information with intent to harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please read the license
-
-Biases and content acknowledgment
-As impressive as turning text into images is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card
-
- """
- )
-
-block.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p6_all.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p6_all.sh
deleted file mode 100644
index dfe8d9014e46cf8f7df244095d0115df55e0a209..0000000000000000000000000000000000000000
--- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/model_download/yolov5_model_p6_all.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-cd ./yolov5
-
-# Download the YOLOv5 P6 models
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n6.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s6.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m6.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l6.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x6.pt
\ No newline at end of file
diff --git a/spaces/abdellatif/pokemon-detector/README.md b/spaces/abdellatif/pokemon-detector/README.md
deleted file mode 100644
index 5fc8085e60b16a50e26d09c4b27138bd50f5575e..0000000000000000000000000000000000000000
--- a/spaces/abdellatif/pokemon-detector/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pokemon Detector
-emoji: 🐢
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abhishek/first-order-motion-model/train.py b/spaces/abhishek/first-order-motion-model/train.py
deleted file mode 100644
index a987e6cfd45b42a11887439d983ecf88ac957c6d..0000000000000000000000000000000000000000
--- a/spaces/abhishek/first-order-motion-model/train.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from tqdm import trange
-import torch
-
-from torch.utils.data import DataLoader
-
-from logger import Logger
-from modules.model import GeneratorFullModel, DiscriminatorFullModel
-
-from torch.optim.lr_scheduler import MultiStepLR
-
-from sync_batchnorm import DataParallelWithCallback
-
-from frames_dataset import DatasetRepeater
-
-
-def train(config, generator, discriminator, kp_detector, checkpoint, log_dir, dataset, device_ids):
- train_params = config['train_params']
-
- optimizer_generator = torch.optim.Adam(generator.parameters(), lr=train_params['lr_generator'], betas=(0.5, 0.999))
- optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=train_params['lr_discriminator'], betas=(0.5, 0.999))
- optimizer_kp_detector = torch.optim.Adam(kp_detector.parameters(), lr=train_params['lr_kp_detector'], betas=(0.5, 0.999))
-
- if checkpoint is not None:
- start_epoch = Logger.load_cpk(checkpoint, generator, discriminator, kp_detector,
- optimizer_generator, optimizer_discriminator,
- None if train_params['lr_kp_detector'] == 0 else optimizer_kp_detector)
- else:
- start_epoch = 0
-
- scheduler_generator = MultiStepLR(optimizer_generator, train_params['epoch_milestones'], gamma=0.1,
- last_epoch=start_epoch - 1)
- scheduler_discriminator = MultiStepLR(optimizer_discriminator, train_params['epoch_milestones'], gamma=0.1,
- last_epoch=start_epoch - 1)
- scheduler_kp_detector = MultiStepLR(optimizer_kp_detector, train_params['epoch_milestones'], gamma=0.1,
- last_epoch=-1 + start_epoch * (train_params['lr_kp_detector'] != 0))
-
- # Guard with `and` so a missing 'num_repeats' key cannot raise a KeyError
- if 'num_repeats' in train_params and train_params['num_repeats'] != 1:
- dataset = DatasetRepeater(dataset, train_params['num_repeats'])
- dataloader = DataLoader(dataset, batch_size=train_params['batch_size'], shuffle=True, num_workers=6, drop_last=True)
-
- generator_full = GeneratorFullModel(kp_detector, generator, discriminator, train_params)
- discriminator_full = DiscriminatorFullModel(kp_detector, generator, discriminator, train_params)
-
- if torch.cuda.is_available():
- generator_full = DataParallelWithCallback(generator_full, device_ids=device_ids)
- discriminator_full = DataParallelWithCallback(discriminator_full, device_ids=device_ids)
-
- with Logger(log_dir=log_dir, visualizer_params=config['visualizer_params'], checkpoint_freq=train_params['checkpoint_freq']) as logger:
- for epoch in trange(start_epoch, train_params['num_epochs']):
- for x in dataloader:
- losses_generator, generated = generator_full(x)
-
- loss_values = [val.mean() for val in losses_generator.values()]
- loss = sum(loss_values)
-
- loss.backward()
- optimizer_generator.step()
- optimizer_generator.zero_grad()
- optimizer_kp_detector.step()
- optimizer_kp_detector.zero_grad()
-
- if train_params['loss_weights']['generator_gan'] != 0:
- optimizer_discriminator.zero_grad()
- losses_discriminator = discriminator_full(x, generated)
- loss_values = [val.mean() for val in losses_discriminator.values()]
- loss = sum(loss_values)
-
- loss.backward()
- optimizer_discriminator.step()
- optimizer_discriminator.zero_grad()
- else:
- losses_discriminator = {}
-
- losses_generator.update(losses_discriminator)
- losses = {key: value.mean().detach().data.cpu().numpy() for key, value in losses_generator.items()}
- logger.log_iter(losses=losses)
-
- scheduler_generator.step()
- scheduler_discriminator.step()
- scheduler_kp_detector.step()
-
- logger.log_epoch(epoch, {'generator': generator,
- 'discriminator': discriminator,
- 'kp_detector': kp_detector,
- 'optimizer_generator': optimizer_generator,
- 'optimizer_discriminator': optimizer_discriminator,
- 'optimizer_kp_detector': optimizer_kp_detector}, inp=x, out=generated)
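-# Sketch of the train_params keys this function reads (values illustrative,
-# not the authors' defaults):
-# train_params = dict(lr_generator=2e-4, lr_discriminator=2e-4, lr_kp_detector=2e-4,
-# epoch_milestones=[60, 90], num_repeats=1, batch_size=4, num_epochs=100,
-# checkpoint_freq=50, loss_weights=dict(generator_gan=1))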
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py
deleted file mode 100644
index c8f5316cbcf3896ba9de7ca2c801eba512f01d5e..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/apcnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='APCHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/io.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/io.py
deleted file mode 100644
index aaefde58aa3ea5b58f86249ce7e1c40c186eb8dd..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/io.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from io import BytesIO, StringIO
-from pathlib import Path
-
-from ..utils import is_list_of, is_str
-from .file_client import FileClient
-from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler
-
-file_handlers = {
- 'json': JsonHandler(),
- 'yaml': YamlHandler(),
- 'yml': YamlHandler(),
- 'pickle': PickleHandler(),
- 'pkl': PickleHandler()
-}
-
-
-def load(file, file_format=None, file_client_args=None, **kwargs):
- """Load data from json/yaml/pickle files.
-
- This method provides a unified api for loading data from serialized files.
-
- Note:
- In v1.3.16 and later, ``load`` supports loading data from serialized
- files that can be stored in different backends.
-
- Args:
- file (str or :obj:`Path` or file-like object): Filename or a file-like
- object.
- file_format (str, optional): If not specified, the file format will be
- inferred from the file extension, otherwise use the specified one.
- Currently supported formats include "json", "yaml/yml" and
- "pickle/pkl".
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> load('/path/of/your/file') # file is stored on disk
- >>> load('https://path/of/your/file') # file is stored on the Internet
- >>> load('s3://path/of/your/file') # file is stored in petrel
-
- Returns:
- The content from the file.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None and is_str(file):
- file_format = file.split('.')[-1]
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO(file_client.get_text(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- else:
- with BytesIO(file_client.get(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- elif hasattr(file, 'read'):
- obj = handler.load_from_fileobj(file, **kwargs)
- else:
- raise TypeError('"file" must be a filepath str or a file-object')
- return obj
-
-
-def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs):
- """Dump data to json/yaml/pickle strings or files.
-
- This method provides a unified api for dumping data as strings or to files,
- and also supports custom arguments for each file format.
-
- Note:
- In v1.3.16 and later, ``dump`` supports dumping data as strings or to
- files which are saved to different backends.
-
- Args:
- obj (any): The python object to be dumped.
- file (str or :obj:`Path` or file-like object, optional): If not
- specified, then the object is dumped to a str, otherwise to a file
- specified by the filename or file-like object.
- file_format (str, optional): Same as :func:`load`.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> dump('hello world', '/path/of/your/file') # disk
- >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel
-
- Returns:
- str or None: The dumped string when ``file`` is None, otherwise ``None``
- once the data has been written.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None:
- if is_str(file):
- file_format = file.split('.')[-1]
- elif file is None:
- raise ValueError(
- 'file_format must be specified since file is None')
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if file is None:
- return handler.dump_to_str(obj, **kwargs)
- elif is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put_text(f.getvalue(), file)
- else:
- with BytesIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put(f.getvalue(), file)
- elif hasattr(file, 'write'):
- handler.dump_to_fileobj(obj, file, **kwargs)
- else:
- raise TypeError('"file" must be a filename str or a file-object')
-
-
-def _register_handler(handler, file_formats):
- """Register a handler for some file extensions.
-
- Args:
- handler (:obj:`BaseFileHandler`): Handler to be registered.
- file_formats (str or list[str]): File formats to be handled by this
- handler.
- """
- if not isinstance(handler, BaseFileHandler):
- raise TypeError(
- f'handler must be a child of BaseFileHandler, not {type(handler)}')
- if isinstance(file_formats, str):
- file_formats = [file_formats]
- if not is_list_of(file_formats, str):
- raise TypeError('file_formats must be a str or a list of str')
- for ext in file_formats:
- file_handlers[ext] = handler
-
-
-def register_handler(file_formats, **kwargs):
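- """Register a handler via decorator. Usage sketch (the handler class
- name is illustrative):
-
- @register_handler('txt')
- class TxtHandler(BaseFileHandler): ...
- """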
-
- def wrap(cls):
- _register_handler(cls(**kwargs), file_formats)
- return cls
-
- return wrap
diff --git a/spaces/adba/Real-CUGAN/upcunet_v3.py b/spaces/adba/Real-CUGAN/upcunet_v3.py
deleted file mode 100644
index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000
--- a/spaces/adba/Real-CUGAN/upcunet_v3.py
+++ /dev/null
@@ -1,714 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-import os, sys
-import numpy as np
-
-root_path = os.path.abspath('.')
-sys.path.append(root_path)
-
-
-class SEBlock(nn.Module):
- def __init__(self, in_channels, reduction=8, bias=False):
- super(SEBlock, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
- self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
-
- def forward(self, x):
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
- else:
- x0 = torch.mean(x, dim=(2, 3), keepdim=True)
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
- def forward_mean(self, x, x0):
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
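- # Note on the design: SEBlock is squeeze-and-excitation -- a global average
- # pool feeds a two-conv bottleneck whose sigmoid output gates the channels.
- # forward_mean takes a precomputed mean so that, under tiling, every patch
- # can be gated with one shared global statistic instead of per-patch means.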
-
-
-class UNetConv(nn.Module):
- def __init__(self, in_channels, mid_channels, out_channels, se):
- super(UNetConv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- )
- if se:
- self.seblock = SEBlock(out_channels, reduction=8, bias=True)
- else:
- self.seblock = None
-
- def forward(self, x):
- z = self.conv(x)
- if self.seblock is not None:
- z = self.seblock(z)
- return z
-
-
-class UNet1(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet1x3(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1x3, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet2(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet2, self).__init__()
-
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 64, 128, se=True)
- self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
- self.conv3 = UNetConv(128, 256, 128, se=True)
- self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
- self.conv4 = UNetConv(128, 64, 64, se=True)
- self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
-
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3(x3)
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4(x2 + x3)
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
- def forward_a(self, x): # conv2/3/4 each end with an SE block
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x2): # conv2/3/4 each end with an SE block
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3.conv(x3)
- return x3
-
- def forward_c(self, x2, x3): # conv2/3/4 each end with an SE block
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4.conv(x2 + x3)
- return x4
-
- def forward_d(self, x1, x4): # conv2/3/4 each end with an SE block
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-
-class UpCunet2x(nn.Module): # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet2x, self).__init__()
- self.unet1 = UNet1(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
- if (tile_mode == 0): # no tiling
- ph = ((h0 - 1) // 2 + 1) * 2
- pw = ((w0 - 1) // 2 + 1) * 2
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # pad so h and w are divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
- return x
- elif (tile_mode == 1): # halve the longer side
- if (w0 >= h0):
- crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # divisible by 4 first, so the half is divisible by 2
- crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2
- else:
- crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # divisible by 4 first, so the half is divisible by 2
- crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2
- crop_size = (crop_size_h, crop_size_w) # 6.6G
- elif (tile_mode == 2): # halve both h and w
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
- elif (tile_mode == 3): # one third in both h and w
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G
- elif (tile_mode == 4): # one quarter in both h and w
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 36, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 36, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
- x_crop = torch.add(x0, x1) # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2]
- return res #
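-# Usage sketch (illustrative): 2x upscale of an RGB tensor without tiling.
-# model = UpCunet2x().eval()
-# with torch.no_grad():
-# out = model(torch.rand(1, 3, 256, 256), tile_mode=0) # -> (1, 3, 512, 512)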
-
-
-class UpCunet3x(nn.Module): # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet3x, self).__init__()
- self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
- if (tile_mode == 0): # no tiling
- ph = ((h0 - 1) // 4 + 1) * 4
- pw = ((w0 - 1) // 4 + 1) * 4
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # pad so h and w are divisible by 4
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3]
- return x
- elif (tile_mode == 1): # halve the longer side
- if (w0 >= h0):
- crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # divisible by 8 first, so the half is divisible by 4
- crop_size_h = (h0 - 1) // 4 * 4 + 4 # divisible by 4
- else:
- crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # divisible by 8 first, so the half is divisible by 4
- crop_size_w = (w0 - 1) // 4 * 4 + 4 # divisible by 4
- crop_size = (crop_size_h, crop_size_w) # 6.6G
- elif (tile_mode == 2): # halve both h and w
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G
- elif (tile_mode == 3): # one third in both h and w
- crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G
- elif (tile_mode == 4): # one quarter in both h and w
- crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 28, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # SE channel widths per stage: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 28, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
- x_crop = torch.add(x0, x1) # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3]
- return res
-
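-# Editor's note: a minimal, hypothetical sketch (not part of the original
-# file) of the crop-size arithmetic shared by the tile modes above: a side is
-# rounded up to a multiple of parts * stride before splitting, so every tile
-# stays divisible by the network stride after the split.
-def _tile_crop_size(side, parts, stride=4):
-    rounded = ((side - 1) // (parts * stride) + 1) * (parts * stride)
-    return rounded // parts
-
-# e.g. tile_mode == 2 above (halve h and w, stride 4):
-# _tile_crop_size(1080, parts=2) == 540 == ((1080 - 1) // 8 * 8 + 8) // 2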
-
-class UpCunet4x(nn.Module): # perfect tiling: tiled output matches the untiled result exactly
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet4x, self).__init__()
- self.unet1 = UNet1(in_channels, 64, deconv=True)
- self.unet2 = UNet2(64, 64, deconv=False)
- self.ps = nn.PixelShuffle(2)
- self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)
-
- def forward(self, x, tile_mode):
- n, c, h0, w0 = x.shape
- x00 = x
- if (tile_mode == 0): # no tiling
- ph = ((h0 - 1) // 2 + 1) * 2
- pw = ((w0 - 1) // 2 + 1) * 2
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # pad to a multiple of 2, plus 19 px of reflect context per side
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- x = self.conv_final(x)
- x = F.pad(x, (-1, -1, -1, -1))
- x = self.ps(x)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4]
- x += F.interpolate(x00, scale_factor=4, mode='nearest')
- return x
- elif (tile_mode == 1): # halve the longer side
- if (w0 >= h0):
- crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # round up to a multiple of 4 so each half stays divisible by 2
- crop_size_h = (h0 - 1) // 2 * 2 + 2 # round up to a multiple of 2
- else:
- crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # round up to a multiple of 4 so each half stays divisible by 2
- crop_size_w = (w0 - 1) // 2 * 2 + 2 # round up to a multiple of 2
- crop_size = (crop_size_h, crop_size_w) # ~6.6 GB VRAM
- elif (tile_mode == 2): # halve both h and w
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # ~5.6 GB VRAM
- elif (tile_mode == 3): # split h and w into thirds
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # ~4.1 GB VRAM
- elif (tile_mode == 4): # split h and w into quarters
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # ~3.7 GB VRAM
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 38, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # SE channel widths per stage: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # SE channel widths per stage: 64/128/128/64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # SE channel widths per stage: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 38, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
- x_crop = torch.add(x0, x1) # x0 is the final output of unet2
- x_crop = self.conv_final(x_crop)
- x_crop = F.pad(x_crop, (-1, -1, -1, -1))
- x_crop = self.ps(x_crop)
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
- res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4]
- res += F.interpolate(x00, scale_factor=4, mode='nearest')
- return res
-
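-# Editor's note: a hypothetical toy helper (not from the original file)
-# illustrating the two-pass trick used by both classes above. The SE blocks
-# pool channel means over the whole image, so pooling per tile would change
-# the result; pass 1 therefore accumulates the channel mean over every tile,
-# and pass 2 feeds that shared mean back into each tile through
-# seblock.forward_mean, which is what makes the tiling lossless here.
-def shared_se_mean(tiles):
-    # tiles: list of (n, c, h, w) tensors; valid because all tiles share the
-    # same spatial size, so the mean of per-tile means equals the global mean
-    return sum(t.mean(dim=(2, 3), keepdim=True) for t in tiles) / len(tiles)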
-
-class RealWaifuUpScaler(object):
- def __init__(self, scale, weight_path, half, device):
- weight = torch.load(weight_path, map_location="cpu")
- self.model = eval("UpCunet%sx" % scale)() # picks UpCunet2x/3x/4x by scale
- if half:
- self.model = self.model.half().to(device)
- else:
- self.model = self.model.to(device)
- self.model.load_state_dict(weight, strict=True)
- self.model.eval()
- self.half = half
- self.device = device
-
- def np2tensor(self, np_frame):
- if not self.half:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
- else:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
-
- def tensor2np(self, tensor):
- if not self.half:
- return (
- np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
- else:
- return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
- (1, 2, 0)))
-
- def __call__(self, frame, tile_mode):
- with torch.no_grad():
- tensor = self.np2tensor(frame)
- result = self.tensor2np(self.model(tensor, tile_mode))
- return result
-
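-# Editor's note: a minimal usage sketch (file names are hypothetical):
-#   up = RealWaifuUpScaler(2, "weights_v3/up2x-latest-denoise3x.pth", half=True, device="cuda:0")
-#   rgb = cv2.imread("in.png")[:, :, ::-1]  # BGR -> RGB
-#   cv2.imwrite("out.png", up(rgb, tile_mode=2)[:, :, ::-1])  # back to BGR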
-
-if __name__ == "__main__":
- ########### image inference demo ###########
- import time, cv2, sys
- from time import time as ttime
-
- for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
- ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
- for tile_mode in [0, 1, 2, 3, 4]:
- upscaler = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") # handles 2x, 3x and 4x models
- input_dir = "%s/input_dir1" % root_path # root_path is assumed to be defined earlier in the file
- output_dir = "%s/opt-dir-all-test" % root_path
- os.makedirs(output_dir, exist_ok=True)
- for name in os.listdir(input_dir):
- print(name)
- tmp = name.split(".")
- inp_path = os.path.join(input_dir, name)
- suffix = tmp[-1]
- prefix = ".".join(tmp[:-1])
- tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- print(inp_path, tmp_path)
- # work around non-ASCII (e.g. Chinese) input paths
- # os.link(inp_path, tmp_path) # use a hard link on Windows
- os.symlink(inp_path, tmp_path) # use a symlink on Linux
- frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
- t0 = ttime()
- result = upscaler(frame, tile_mode=tile_mode)[:, :, ::-1]
- t1 = ttime()
- print(prefix, "done", t1 - t0)
- tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- cv2.imwrite(tmp_opt_path, result)
- n = 0
- while True:
- if (n == 0):
- suffix = "_%sx_tile%s.png" % (scale, tile_mode)
- else:
- suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n)
- if not os.path.exists(os.path.join(output_dir, prefix + suffix)): # pick the first free output name
- break
- else:
- n += 1
- final_opt_path = os.path.join(output_dir, prefix + suffix)
- os.rename(tmp_opt_path, final_opt_path)
- os.remove(tmp_path)
diff --git a/spaces/akhaliq/GPEN/face_model/op/upfirdn2d.py b/spaces/akhaliq/GPEN/face_model/op/upfirdn2d.py
deleted file mode 100644
index 2e3844749dea0a79fed49f161d9760ee6b4c07fd..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/GPEN/face_model/op/upfirdn2d.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import os
-import platform
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load, _import_module_from_library
-
-# if running GPEN without CUDA, comment out lines 10-18 (the extension build below)
-if platform.system() == 'Linux' and torch.cuda.is_available():
- module_path = os.path.dirname(__file__)
- upfirdn2d_op = load(
- 'upfirdn2d',
- sources=[
- os.path.join(module_path, 'upfirdn2d.cpp'),
- os.path.join(module_path, 'upfirdn2d_kernel.cu'),
- ],
- )
-
-
-#upfirdn2d_op = _import_module_from_library('upfirdn2d', '/tmp/torch_extensions/upfirdn2d', True)
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
-
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
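-# Editor's note: the two Function classes above can reuse the same primitive
-# for gradients because the adjoint of upfirdn(up, down, pad) is upfirdn with
-# up and down swapped and the kernel spatially flipped; g_pad is derived so
-# that the gradient regains exactly the input's spatial size.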
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0), device='cpu'):
- if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu':
- out = UpFirDn2d.apply(
- input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
- )
- else:
- out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1])
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- input = input.permute(0, 2, 3, 1)
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- # out = out.permute(0, 2, 3, 1)
- return out[:, :, ::down_y, ::down_x]
-
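-# Editor's note: a hypothetical smoke test (not part of the original file) for
-# the CPU fallback above -- zero-insertion upsampling, padding, convolution
-# with the flipped kernel, then strided downsampling:
-#   x = torch.randn(1, 3, 8, 8)
-#   k = torch.ones(3, 3) / 9.0  # simple box filter
-#   y = upfirdn2d(x, k, up=2, down=1, pad=(1, 1), device='cpu')
-#   assert y.shape == (1, 3, 16, 16)  # (8*2 + 1 + 1 - 3) // 1 + 1 == 16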
diff --git a/spaces/akhaliq/PaintTransformer/train/models/painter_model.py b/spaces/akhaliq/PaintTransformer/train/models/painter_model.py
deleted file mode 100644
index 4ce7ed9ddc7dc6d6ae7ead397d362a717e2edf8d..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/PaintTransformer/train/models/painter_model.py
+++ /dev/null
@@ -1,247 +0,0 @@
-import torch
-import numpy as np
-from .base_model import BaseModel
-from . import networks
-from util import morphology
-from scipy.optimize import linear_sum_assignment
-from PIL import Image
-
-
-class PainterModel(BaseModel):
-
- @staticmethod
- def modify_commandline_options(parser, is_train=True):
- parser.set_defaults(dataset_mode='null')
- parser.add_argument('--used_strokes', type=int, default=8,
- help='actually generated strokes number')
- parser.add_argument('--num_blocks', type=int, default=3,
- help='number of transformer blocks for stroke generator')
- parser.add_argument('--lambda_w', type=float, default=10.0, help='weight for w loss of stroke shape')
- parser.add_argument('--lambda_pixel', type=float, default=10.0, help='weight for pixel-level L1 loss')
- parser.add_argument('--lambda_gt', type=float, default=1.0, help='weight for ground-truth loss')
- parser.add_argument('--lambda_decision', type=float, default=10.0, help='weight for stroke decision loss')
- parser.add_argument('--lambda_recall', type=float, default=10.0, help='weight of recall for stroke decision loss')
- return parser
-
- def __init__(self, opt):
- BaseModel.__init__(self, opt)
- self.loss_names = ['pixel', 'gt', 'w', 'decision']
- self.visual_names = ['old', 'render', 'rec']
- self.model_names = ['g']
- self.d = 12 # xc, yc, w, h, theta, R0, G0, B0, R2, G2, B2, A
- self.d_shape = 5
-
- def read_img(img_path, img_type='RGB'):
- img = Image.open(img_path).convert(img_type)
- img = np.array(img)
- if img.ndim == 2:
- img = np.expand_dims(img, axis=-1)
- img = img.transpose((2, 0, 1))
- img = torch.from_numpy(img).unsqueeze(0).float() / 255.
- return img
-
- brush_large_vertical = read_img('brush/brush_large_vertical.png', 'L').to(self.device)
- brush_large_horizontal = read_img('brush/brush_large_horizontal.png', 'L').to(self.device)
- self.meta_brushes = torch.cat(
- [brush_large_vertical, brush_large_horizontal], dim=0)
- net_g = networks.Painter(self.d_shape, opt.used_strokes, opt.ngf,
- n_enc_layers=opt.num_blocks, n_dec_layers=opt.num_blocks)
- self.net_g = networks.init_net(net_g, opt.init_type, opt.init_gain, self.gpu_ids)
- self.old = None
- self.render = None
- self.rec = None
- self.gt_param = None
- self.pred_param = None
- self.gt_decision = None
- self.pred_decision = None
- self.patch_size = 32
- self.loss_pixel = torch.tensor(0., device=self.device)
- self.loss_gt = torch.tensor(0., device=self.device)
- self.loss_w = torch.tensor(0., device=self.device)
- self.loss_decision = torch.tensor(0., device=self.device)
- self.criterion_pixel = torch.nn.L1Loss().to(self.device)
- self.criterion_decision = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor(opt.lambda_recall)).to(self.device)
- if self.isTrain:
- self.optimizer = torch.optim.Adam(self.net_g.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
- self.optimizers.append(self.optimizer)
-
- def param2stroke(self, param, H, W):
- # param: b, 12
- b = param.shape[0]
- param_list = torch.split(param, 1, dim=1)
- x0, y0, w, h, theta = [item.squeeze(-1) for item in param_list[:5]]
- R0, G0, B0, R2, G2, B2, _ = param_list[5:]
- sin_theta = torch.sin(torch.acos(torch.tensor(-1., device=param.device)) * theta) # acos(-1) == pi; theta is a fraction of pi
- cos_theta = torch.cos(torch.acos(torch.tensor(-1., device=param.device)) * theta)
- index = torch.full((b,), -1, device=param.device)
- index[h > w] = 0
- index[h <= w] = 1
- brush = self.meta_brushes[index.long()]
- alphas = torch.cat([brush, brush, brush], dim=1)
- alphas = (alphas > 0).float()
- t = torch.arange(0, brush.shape[2], device=param.device).unsqueeze(0) / brush.shape[2]
- color_map = torch.stack([R0 * (1 - t) + R2 * t, G0 * (1 - t) + G2 * t, B0 * (1 - t) + B2 * t], dim=1)
- color_map = color_map.unsqueeze(-1).repeat(1, 1, 1, brush.shape[3])
- brush = brush * color_map
-
- warp_00 = cos_theta / w
- warp_01 = sin_theta * H / (W * w)
- warp_02 = (1 - 2 * x0) * cos_theta / w + (1 - 2 * y0) * sin_theta * H / (W * w)
- warp_10 = -sin_theta * W / (H * h)
- warp_11 = cos_theta / h
- warp_12 = (1 - 2 * y0) * cos_theta / h - (1 - 2 * x0) * sin_theta * W / (H * h)
- warp_0 = torch.stack([warp_00, warp_01, warp_02], dim=1)
- warp_1 = torch.stack([warp_10, warp_11, warp_12], dim=1)
- warp = torch.stack([warp_0, warp_1], dim=1)
- grid = torch.nn.functional.affine_grid(warp, torch.Size((b, 3, H, W)), align_corners=False)
- brush = torch.nn.functional.grid_sample(brush, grid, align_corners=False)
- alphas = torch.nn.functional.grid_sample(alphas, grid, align_corners=False)
-
- return brush, alphas
-
- def set_input(self, input_dict):
- self.image_paths = input_dict['A_paths']
- with torch.no_grad():
- old_param = torch.rand(self.opt.batch_size // 4, self.opt.used_strokes, self.d, device=self.device)
- old_param[:, :, :4] = old_param[:, :, :4] * 0.5 + 0.2
- old_param[:, :, -4:-1] = old_param[:, :, -7:-4]
- old_param = old_param.view(-1, self.d).contiguous()
- foregrounds, alphas = self.param2stroke(old_param, self.patch_size * 2, self.patch_size * 2)
- foregrounds = morphology.Dilation2d(m=1)(foregrounds)
- alphas = morphology.Erosion2d(m=1)(alphas)
- foregrounds = foregrounds.view(self.opt.batch_size // 4, self.opt.used_strokes, 3, self.patch_size * 2,
- self.patch_size * 2).contiguous()
- alphas = alphas.view(self.opt.batch_size // 4, self.opt.used_strokes, 3, self.patch_size * 2,
- self.patch_size * 2).contiguous()
- old = torch.zeros(self.opt.batch_size // 4, 3, self.patch_size * 2, self.patch_size * 2, device=self.device)
- for i in range(self.opt.used_strokes):
- foreground = foregrounds[:, i, :, :, :]
- alpha = alphas[:, i, :, :, :]
- old = foreground * alpha + old * (1 - alpha)
- old = old.view(self.opt.batch_size // 4, 3, 2, self.patch_size, 2, self.patch_size).contiguous()
- old = old.permute(0, 2, 4, 1, 3, 5).contiguous()
- self.old = old.view(self.opt.batch_size, 3, self.patch_size, self.patch_size).contiguous()
-
- gt_param = torch.rand(self.opt.batch_size, self.opt.used_strokes, self.d, device=self.device)
- gt_param[:, :, :4] = gt_param[:, :, :4] * 0.5 + 0.2
- gt_param[:, :, -4:-1] = gt_param[:, :, -7:-4]
- self.gt_param = gt_param[:, :, :self.d_shape]
- gt_param = gt_param.view(-1, self.d).contiguous()
- foregrounds, alphas = self.param2stroke(gt_param, self.patch_size, self.patch_size)
- foregrounds = morphology.Dilation2d(m=1)(foregrounds)
- alphas = morphology.Erosion2d(m=1)(alphas)
- foregrounds = foregrounds.view(self.opt.batch_size, self.opt.used_strokes, 3, self.patch_size,
- self.patch_size).contiguous()
- alphas = alphas.view(self.opt.batch_size, self.opt.used_strokes, 3, self.patch_size,
- self.patch_size).contiguous()
- self.render = self.old.clone()
- gt_decision = torch.ones(self.opt.batch_size, self.opt.used_strokes, device=self.device)
- for i in range(self.opt.used_strokes):
- foreground = foregrounds[:, i, :, :, :]
- alpha = alphas[:, i, :, :, :]
- for j in range(i):
- iou = (torch.sum(alpha * alphas[:, j, :, :, :], dim=(-3, -2, -1)) + 1e-5) / (
- torch.sum(alphas[:, j, :, :, :], dim=(-3, -2, -1)) + 1e-5)
- gt_decision[:, i] = ((iou < 0.75) | (~gt_decision[:, j].bool())).float() * gt_decision[:, i]
- decision = gt_decision[:, i].view(self.opt.batch_size, 1, 1, 1).contiguous()
- self.render = foreground * alpha * decision + self.render * (1 - alpha * decision)
- self.gt_decision = gt_decision
-
- def forward(self):
- param, decisions = self.net_g(self.render, self.old)
- # stroke_param: b, stroke_per_patch, param_per_stroke
- # decision: b, stroke_per_patch, 1
- self.pred_decision = decisions.view(-1, self.opt.used_strokes).contiguous()
- self.pred_param = param[:, :, :self.d_shape]
- param = param.view(-1, self.d).contiguous()
- foregrounds, alphas = self.param2stroke(param, self.patch_size, self.patch_size)
- foregrounds = morphology.Dilation2d(m=1)(foregrounds)
- alphas = morphology.Erosion2d(m=1)(alphas)
- # foreground, alpha: b * stroke_per_patch, 3, output_size, output_size
- foregrounds = foregrounds.view(-1, self.opt.used_strokes, 3, self.patch_size, self.patch_size)
- alphas = alphas.view(-1, self.opt.used_strokes, 3, self.patch_size, self.patch_size)
- # foreground, alpha: b, stroke_per_patch, 3, output_size, output_size
- decisions = networks.SignWithSigmoidGrad.apply(decisions.view(-1, self.opt.used_strokes, 1, 1, 1).contiguous())
- self.rec = self.old.clone()
- for j in range(foregrounds.shape[1]):
- foreground = foregrounds[:, j, :, :, :]
- alpha = alphas[:, j, :, :, :]
- decision = decisions[:, j, :, :, :]
- self.rec = foreground * alpha * decision + self.rec * (1 - alpha * decision)
-
- @staticmethod
- def get_sigma_sqrt(w, h, theta):
- sigma_00 = w * (torch.cos(theta) ** 2) / 2 + h * (torch.sin(theta) ** 2) / 2
- sigma_01 = (w - h) * torch.cos(theta) * torch.sin(theta) / 2
- sigma_11 = h * (torch.cos(theta) ** 2) / 2 + w * (torch.sin(theta) ** 2) / 2
- sigma_0 = torch.stack([sigma_00, sigma_01], dim=-1)
- sigma_1 = torch.stack([sigma_01, sigma_11], dim=-1)
- sigma = torch.stack([sigma_0, sigma_1], dim=-2)
- return sigma
-
- @staticmethod
- def get_sigma(w, h, theta):
- sigma_00 = w * w * (torch.cos(theta) ** 2) / 4 + h * h * (torch.sin(theta) ** 2) / 4
- sigma_01 = (w * w - h * h) * torch.cos(theta) * torch.sin(theta) / 4
- sigma_11 = h * h * (torch.cos(theta) ** 2) / 4 + w * w * (torch.sin(theta) ** 2) / 4
- sigma_0 = torch.stack([sigma_00, sigma_01], dim=-1)
- sigma_1 = torch.stack([sigma_01, sigma_11], dim=-1)
- sigma = torch.stack([sigma_0, sigma_1], dim=-2)
- return sigma
-
- def gaussian_w_distance(self, param_1, param_2):
- mu_1, w_1, h_1, theta_1 = torch.split(param_1, (2, 1, 1, 1), dim=-1)
- w_1 = w_1.squeeze(-1)
- h_1 = h_1.squeeze(-1)
- theta_1 = torch.acos(torch.tensor(-1., device=param_1.device)) * theta_1.squeeze(-1)
- trace_1 = (w_1 ** 2 + h_1 ** 2) / 4
- mu_2, w_2, h_2, theta_2 = torch.split(param_2, (2, 1, 1, 1), dim=-1)
- w_2 = w_2.squeeze(-1)
- h_2 = h_2.squeeze(-1)
- theta_2 = torch.acos(torch.tensor(-1., device=param_2.device)) * theta_2.squeeze(-1)
- trace_2 = (w_2 ** 2 + h_2 ** 2) / 4
- sigma_1_sqrt = self.get_sigma_sqrt(w_1, h_1, theta_1)
- sigma_2 = self.get_sigma(w_2, h_2, theta_2)
- trace_12 = torch.matmul(torch.matmul(sigma_1_sqrt, sigma_2), sigma_1_sqrt)
- trace_12 = torch.sqrt(trace_12[..., 0, 0] + trace_12[..., 1, 1] + 2 * torch.sqrt(
- trace_12[..., 0, 0] * trace_12[..., 1, 1] - trace_12[..., 0, 1] * trace_12[..., 1, 0]))
- return torch.sum((mu_1 - mu_2) ** 2, dim=-1) + trace_1 + trace_2 - 2 * trace_12
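- # Editor's note: the closed form above is the squared 2-Wasserstein distance
- # between Gaussians N(mu_1, S_1) and N(mu_2, S_2):
- #   W_2^2 = |mu_1 - mu_2|^2 + tr(S_1) + tr(S_2) - 2 * tr((S_1^0.5 S_2 S_1^0.5)^0.5)
- # where, for a 2x2 PSD matrix M, tr(M^0.5) = sqrt(tr(M) + 2 * sqrt(det(M))) --
- # exactly what the trace_12 lines compute.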
-
- def optimize_parameters(self):
- self.forward()
- self.loss_pixel = self.criterion_pixel(self.rec, self.render) * self.opt.lambda_pixel
- cur_valid_gt_size = 0
- with torch.no_grad():
- r_idx = []
- c_idx = []
- for i in range(self.gt_param.shape[0]):
- is_valid_gt = self.gt_decision[i].bool()
- valid_gt_param = self.gt_param[i, is_valid_gt]
- cost_matrix_l1 = torch.cdist(self.pred_param[i], valid_gt_param, p=1)
- pred_param_broad = self.pred_param[i].unsqueeze(1).contiguous().repeat(
- 1, valid_gt_param.shape[0], 1)
- valid_gt_param_broad = valid_gt_param.unsqueeze(0).contiguous().repeat(
- self.pred_param.shape[1], 1, 1)
- cost_matrix_w = self.gaussian_w_distance(pred_param_broad, valid_gt_param_broad)
- decision = self.pred_decision[i]
- cost_matrix_decision = (1 - decision).unsqueeze(-1).repeat(1, valid_gt_param.shape[0])
- r, c = linear_sum_assignment((cost_matrix_l1 + cost_matrix_w + cost_matrix_decision).cpu())
- r_idx.append(torch.tensor(r + self.pred_param.shape[1] * i, device=self.device))
- c_idx.append(torch.tensor(c + cur_valid_gt_size, device=self.device))
- cur_valid_gt_size += valid_gt_param.shape[0]
- r_idx = torch.cat(r_idx, dim=0)
- c_idx = torch.cat(c_idx, dim=0)
- paired_gt_decision = torch.zeros(self.gt_decision.shape[0] * self.gt_decision.shape[1], device=self.device)
- paired_gt_decision[r_idx] = 1.
- all_valid_gt_param = self.gt_param[self.gt_decision.bool(), :]
- all_pred_param = self.pred_param.view(-1, self.pred_param.shape[2]).contiguous()
- all_pred_decision = self.pred_decision.view(-1).contiguous()
- paired_gt_param = all_valid_gt_param[c_idx, :]
- paired_pred_param = all_pred_param[r_idx, :]
- self.loss_gt = self.criterion_pixel(paired_pred_param, paired_gt_param) * self.opt.lambda_gt
- self.loss_w = self.gaussian_w_distance(paired_pred_param, paired_gt_param).mean() * self.opt.lambda_w
- self.loss_decision = self.criterion_decision(all_pred_decision, paired_gt_decision) * self.opt.lambda_decision
- loss = self.loss_pixel + self.loss_gt + self.loss_w + self.loss_decision
- loss.backward()
- self.optimizer.step()
- self.optimizer.zero_grad()
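-
-
-# Editor's note: a hypothetical, self-contained sketch (not in the original
-# file) of the inverse affine warp built in param2stroke above. grid_sample
-# expects the transform that maps output pixels back onto the canonical brush,
-# so the matrix combines translation to the stroke centre, rotation by -theta
-# and scaling by 1/w, 1/h with H/W aspect correction (theta is a fraction of pi).
-def stroke_affine(x0, y0, w, h, theta, H, W):
-    pi = torch.acos(torch.tensor(-1.))
-    s, c = torch.sin(pi * theta), torch.cos(pi * theta)
-    row0 = torch.stack([c / w, s * H / (W * w),
-                        (1 - 2 * x0) * c / w + (1 - 2 * y0) * s * H / (W * w)])
-    row1 = torch.stack([-s * W / (H * h), c / h,
-                        (1 - 2 * y0) * c / h - (1 - 2 * x0) * s * W / (H * h)])
-    return torch.stack([row0, row1])  # 2x3 theta matrix for F.affine_grid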
diff --git a/spaces/akhaliq/Real-ESRGAN/realesrgan/models/__init__.py b/spaces/akhaliq/Real-ESRGAN/realesrgan/models/__init__.py
deleted file mode 100644
index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-ESRGAN/realesrgan/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import model modules for registry
-# scan all the files that end with '_model.py' under the model folder
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
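-
-# Editor's note (an assumption about how the basicsr registry is wired, not
-# stated in this file): e.g. realesrgan/models/realesrgan_model.py is picked
-# up here and imported as realesrgan.models.realesrgan_model, which runs its
-# @MODEL_REGISTRY.register() decorator so the model can later be built by
-# name from a config.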
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py
deleted file mode 100644
index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/target_python.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import sys
-from typing import List, Optional, Tuple
-
-from pip._vendor.packaging.tags import Tag
-
-from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot
-from pip._internal.utils.misc import normalize_version_info
-
-
-class TargetPython:
-
- """
- Encapsulates the properties of a Python interpreter one is targeting
- for a package install, download, etc.
- """
-
- __slots__ = [
- "_given_py_version_info",
- "abis",
- "implementation",
- "platforms",
- "py_version",
- "py_version_info",
- "_valid_tags",
- ]
-
- def __init__(
- self,
- platforms: Optional[List[str]] = None,
- py_version_info: Optional[Tuple[int, ...]] = None,
- abis: Optional[List[str]] = None,
- implementation: Optional[str] = None,
- ) -> None:
- """
- :param platforms: A list of strings or None. If None, searches for
- packages that are supported by the current system. Otherwise, will
- find packages that can be built on the platforms passed in. These
- packages will only be downloaded for distribution: they will
- not be built locally.
- :param py_version_info: An optional tuple of ints representing the
- Python version information to use (e.g. `sys.version_info[:3]`).
- This can have length 1, 2, or 3 when provided.
- :param abis: A list of strings or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- :param implementation: A string or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- """
- # Store the given py_version_info for when we call get_supported().
- self._given_py_version_info = py_version_info
-
- if py_version_info is None:
- py_version_info = sys.version_info[:3]
- else:
- py_version_info = normalize_version_info(py_version_info)
-
- py_version = ".".join(map(str, py_version_info[:2]))
-
- self.abis = abis
- self.implementation = implementation
- self.platforms = platforms
- self.py_version = py_version
- self.py_version_info = py_version_info
-
- # This is used to cache the return value of get_tags().
- self._valid_tags: Optional[List[Tag]] = None
-
- def format_given(self) -> str:
- """
- Format the given, non-None attributes for display.
- """
- display_version = None
- if self._given_py_version_info is not None:
- display_version = ".".join(
- str(part) for part in self._given_py_version_info
- )
-
- key_values = [
- ("platforms", self.platforms),
- ("version_info", display_version),
- ("abis", self.abis),
- ("implementation", self.implementation),
- ]
- return " ".join(
- f"{key}={value!r}" for key, value in key_values if value is not None
- )
-
- def get_tags(self) -> List[Tag]:
- """
- Return the supported PEP 425 tags to check wheel candidates against.
-
- The tags are returned in order of preference (most preferred first).
- """
- if self._valid_tags is None:
- # Pass versions=None if no py_version_info was given since
- # versions=None uses special default logic.
- py_version_info = self._given_py_version_info
- if py_version_info is None:
- version = None
- else:
- version = version_info_to_nodot(py_version_info)
-
- tags = get_supported(
- version=version,
- platforms=self.platforms,
- abis=self.abis,
- impl=self.implementation,
- )
- self._valid_tags = tags
-
- return self._valid_tags
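-
-# Editor's note: a minimal, hypothetical usage sketch.
-# tp = TargetPython(platforms=["manylinux2014_x86_64"], py_version_info=(3, 9))
-# tp.format_given()  # -> "platforms=['manylinux2014_x86_64'] version_info='3.9'"
-# tp.get_tags()[0]   # the most-preferred Tag for that target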
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_install.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_install.py
deleted file mode 100644
index 02dbda1941f845a8087ea4544271fa94b69a8bda..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_install.py
+++ /dev/null
@@ -1,858 +0,0 @@
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import functools
-import logging
-import os
-import shutil
-import sys
-import uuid
-import zipfile
-from typing import Any, Collection, Dict, Iterable, List, Optional, Sequence, Union
-
-from pip._vendor.packaging.markers import Marker
-from pip._vendor.packaging.requirements import Requirement
-from pip._vendor.packaging.specifiers import SpecifierSet
-from pip._vendor.packaging.utils import canonicalize_name
-from pip._vendor.packaging.version import Version
-from pip._vendor.packaging.version import parse as parse_version
-from pip._vendor.pep517.wrappers import Pep517HookCaller
-
-from pip._internal.build_env import BuildEnvironment, NoOpBuildEnvironment
-from pip._internal.exceptions import InstallationError, LegacyInstallFailure
-from pip._internal.locations import get_scheme
-from pip._internal.metadata import (
- BaseDistribution,
- get_default_environment,
- get_directory_distribution,
-)
-from pip._internal.models.link import Link
-from pip._internal.operations.build.metadata import generate_metadata
-from pip._internal.operations.build.metadata_editable import generate_editable_metadata
-from pip._internal.operations.build.metadata_legacy import (
- generate_metadata as generate_metadata_legacy,
-)
-from pip._internal.operations.install.editable_legacy import (
- install_editable as install_editable_legacy,
-)
-from pip._internal.operations.install.legacy import install as install_legacy
-from pip._internal.operations.install.wheel import install_wheel
-from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path
-from pip._internal.req.req_uninstall import UninstallPathSet
-from pip._internal.utils.deprecation import deprecated
-from pip._internal.utils.direct_url_helpers import (
- direct_url_for_editable,
- direct_url_from_link,
-)
-from pip._internal.utils.hashes import Hashes
-from pip._internal.utils.misc import (
- ask_path_exists,
- backup_dir,
- display_path,
- hide_url,
- redact_auth_from_url,
-)
-from pip._internal.utils.packaging import safe_extra
-from pip._internal.utils.subprocess import runner_with_spinner_message
-from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds
-from pip._internal.utils.virtualenv import running_under_virtualenv
-from pip._internal.vcs import vcs
-
-logger = logging.getLogger(__name__)
-
-
-class InstallRequirement:
- """
- Represents something that may be installed later on, may have information
- about where to fetch the relevant requirement and also contains logic for
- installing the said requirement.
- """
-
- def __init__(
- self,
- req: Optional[Requirement],
- comes_from: Optional[Union[str, "InstallRequirement"]],
- editable: bool = False,
- link: Optional[Link] = None,
- markers: Optional[Marker] = None,
- use_pep517: Optional[bool] = None,
- isolated: bool = False,
- install_options: Optional[List[str]] = None,
- global_options: Optional[List[str]] = None,
- hash_options: Optional[Dict[str, List[str]]] = None,
- constraint: bool = False,
- extras: Collection[str] = (),
- user_supplied: bool = False,
- permit_editable_wheels: bool = False,
- ) -> None:
- assert req is None or isinstance(req, Requirement), req
- self.req = req
- self.comes_from = comes_from
- self.constraint = constraint
- self.editable = editable
- self.permit_editable_wheels = permit_editable_wheels
- self.legacy_install_reason: Optional[int] = None
-
- # source_dir is the local directory where the linked requirement is
- # located, or unpacked. In case unpacking is needed, creating and
- # populating source_dir is done by the RequirementPreparer. Note this
- # is not necessarily the directory where pyproject.toml or setup.py is
- # located - that one is obtained via unpacked_source_directory.
- self.source_dir: Optional[str] = None
- if self.editable:
- assert link
- if link.is_file:
- self.source_dir = os.path.normpath(os.path.abspath(link.file_path))
-
- if link is None and req and req.url:
- # PEP 508 URL requirement
- link = Link(req.url)
- self.link = self.original_link = link
- self.original_link_is_in_wheel_cache = False
-
- # Path to any downloaded or already-existing package.
- self.local_file_path: Optional[str] = None
- if self.link and self.link.is_file:
- self.local_file_path = self.link.file_path
-
- if extras:
- self.extras = extras
- elif req:
- self.extras = {safe_extra(extra) for extra in req.extras}
- else:
- self.extras = set()
- if markers is None and req:
- markers = req.marker
- self.markers = markers
-
- # This holds the Distribution object if this requirement is already installed.
- self.satisfied_by: Optional[BaseDistribution] = None
- # Whether the installation process should try to uninstall an existing
- # distribution before installing this requirement.
- self.should_reinstall = False
- # Temporary build location
- self._temp_build_dir: Optional[TempDirectory] = None
- # Set to True after successful installation
- self.install_succeeded: Optional[bool] = None
- # Supplied options
- self.install_options = install_options if install_options else []
- self.global_options = global_options if global_options else []
- self.hash_options = hash_options if hash_options else {}
- # Set to True after successful preparation of this requirement
- self.prepared = False
- # User supplied requirement are explicitly requested for installation
- # by the user via CLI arguments or requirements files, as opposed to,
- # e.g. dependencies, extras or constraints.
- self.user_supplied = user_supplied
-
- self.isolated = isolated
- self.build_env: BuildEnvironment = NoOpBuildEnvironment()
-
- # For PEP 517, the directory where we request the project metadata
- # gets stored. We need this to pass to build_wheel, so the backend
- # can ensure that the wheel matches the metadata (see the PEP for
- # details).
- self.metadata_directory: Optional[str] = None
-
- # The static build requirements (from pyproject.toml)
- self.pyproject_requires: Optional[List[str]] = None
-
- # Build requirements that we will check are available
- self.requirements_to_check: List[str] = []
-
- # The PEP 517 backend we should use to build the project
- self.pep517_backend: Optional[Pep517HookCaller] = None
-
- # Are we using PEP 517 for this requirement?
- # After pyproject.toml has been loaded, the only valid values are True
- # and False. Before loading, None is valid (meaning "use the default").
- # Setting an explicit value before loading pyproject.toml is supported,
- # but after loading this flag should be treated as read only.
- self.use_pep517 = use_pep517
-
- # This requirement needs more preparation before it can be built
- self.needs_more_preparation = False
-
- def __str__(self) -> str:
- if self.req:
- s = str(self.req)
- if self.link:
- s += " from {}".format(redact_auth_from_url(self.link.url))
- elif self.link:
- s = redact_auth_from_url(self.link.url)
- else:
- s = ""
- if self.satisfied_by is not None:
- s += " in {}".format(display_path(self.satisfied_by.location))
- if self.comes_from:
- if isinstance(self.comes_from, str):
- comes_from: Optional[str] = self.comes_from
- else:
- comes_from = self.comes_from.from_path()
- if comes_from:
- s += f" (from {comes_from})"
- return s
-
- def __repr__(self) -> str:
- return "<{} object: {} editable={!r}>".format(
- self.__class__.__name__, str(self), self.editable
- )
-
- def format_debug(self) -> str:
- """An un-tested helper for getting state, for debugging."""
- attributes = vars(self)
- names = sorted(attributes)
-
- state = ("{}={!r}".format(attr, attributes[attr]) for attr in sorted(names))
- return "<{name} object: {{{state}}}>".format(
- name=self.__class__.__name__,
- state=", ".join(state),
- )
-
- # Things that are valid for all kinds of requirements?
- @property
- def name(self) -> Optional[str]:
- if self.req is None:
- return None
- return self.req.name
-
- @functools.lru_cache() # use cached_property in python 3.8+
- def supports_pyproject_editable(self) -> bool:
- if not self.use_pep517:
- return False
- assert self.pep517_backend
- with self.build_env:
- runner = runner_with_spinner_message(
- "Checking if build backend supports build_editable"
- )
- with self.pep517_backend.subprocess_runner(runner):
- return "build_editable" in self.pep517_backend._supported_features()
-
- @property
- def specifier(self) -> SpecifierSet:
- return self.req.specifier
-
- @property
- def is_pinned(self) -> bool:
- """Return whether I am pinned to an exact version.
-
- For example, some-package==1.2 is pinned; some-package>1.2 is not.
- """
- specifiers = self.specifier
- return len(specifiers) == 1 and next(iter(specifiers)).operator in {"==", "==="}
-
- def match_markers(self, extras_requested: Optional[Iterable[str]] = None) -> bool:
- if not extras_requested:
- # Provide an extra to safely evaluate the markers
- # without matching any extra
- extras_requested = ("",)
- if self.markers is not None:
- return any(
- self.markers.evaluate({"extra": extra}) for extra in extras_requested
- )
- else:
- return True
-
- @property
- def has_hash_options(self) -> bool:
- """Return whether any known-good hashes are specified as options.
-
- These activate --require-hashes mode; hashes specified as part of a
- URL do not.
-
- """
- return bool(self.hash_options)
-
- def hashes(self, trust_internet: bool = True) -> Hashes:
- """Return a hash-comparer that considers my option- and URL-based
- hashes to be known-good.
-
- Hashes in URLs--ones embedded in the requirements file, not ones
- downloaded from an index server--are almost peers with ones from
- flags. They satisfy --require-hashes (whether it was implicitly or
- explicitly activated) but do not activate it. md5 and sha224 are not
- allowed in flags, which should nudge people toward good algos. We
- always OR all hashes together, even ones from URLs.
-
- :param trust_internet: Whether to trust URL-based (#md5=...) hashes
- downloaded from the internet, as by populate_link()
-
- """
- good_hashes = self.hash_options.copy()
- link = self.link if trust_internet else self.original_link
- if link and link.hash:
- good_hashes.setdefault(link.hash_name, []).append(link.hash)
- return Hashes(good_hashes)
-
- def from_path(self) -> Optional[str]:
- """Format a nice indicator to show where this "comes from" """
- if self.req is None:
- return None
- s = str(self.req)
- if self.comes_from:
- if isinstance(self.comes_from, str):
- comes_from = self.comes_from
- else:
- comes_from = self.comes_from.from_path()
- if comes_from:
- s += "->" + comes_from
- return s
-
- def ensure_build_location(
- self, build_dir: str, autodelete: bool, parallel_builds: bool
- ) -> str:
- assert build_dir is not None
- if self._temp_build_dir is not None:
- assert self._temp_build_dir.path
- return self._temp_build_dir.path
- if self.req is None:
- # Some systems have /tmp as a symlink which confuses custom
- # builds (such as numpy). Thus, we ensure that the real path
- # is returned.
- self._temp_build_dir = TempDirectory(
- kind=tempdir_kinds.REQ_BUILD, globally_managed=True
- )
-
- return self._temp_build_dir.path
-
- # This is the only remaining place where we manually determine the path
- # for the temporary directory. It is only needed for editables where
- # it is the value of the --src option.
-
- # When parallel builds are enabled, add a UUID to the build directory
- # name so multiple builds do not interfere with each other.
- dir_name: str = canonicalize_name(self.name)
- if parallel_builds:
- dir_name = f"{dir_name}_{uuid.uuid4().hex}"
-
- # FIXME: Is there a better place to create the build_dir? (hg and bzr
- # need this)
- if not os.path.exists(build_dir):
- logger.debug("Creating directory %s", build_dir)
- os.makedirs(build_dir)
- actual_build_dir = os.path.join(build_dir, dir_name)
- # `None` indicates that we respect the globally-configured deletion
- # settings, which is what we actually want when auto-deleting.
- delete_arg = None if autodelete else False
- return TempDirectory(
- path=actual_build_dir,
- delete=delete_arg,
- kind=tempdir_kinds.REQ_BUILD,
- globally_managed=True,
- ).path
-
- def _set_requirement(self) -> None:
- """Set requirement after generating metadata."""
- assert self.req is None
- assert self.metadata is not None
- assert self.source_dir is not None
-
- # Construct a Requirement object from the generated metadata
- if isinstance(parse_version(self.metadata["Version"]), Version):
- op = "=="
- else:
- op = "==="
-
- self.req = Requirement(
- "".join(
- [
- self.metadata["Name"],
- op,
- self.metadata["Version"],
- ]
- )
- )
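- # Editor's note: e.g. a non-PEP 440 version such as "1.0-custom" does not
- # parse as a packaging Version, so the arbitrary-equality pin "===" is used
- # instead of "==".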
-
- def warn_on_mismatching_name(self) -> None:
- metadata_name = canonicalize_name(self.metadata["Name"])
- if canonicalize_name(self.req.name) == metadata_name:
- # Everything is fine.
- return
-
- # If we're here, there's a mismatch. Log a warning about it.
- logger.warning(
- "Generating metadata for package %s "
- "produced metadata for project name %s. Fix your "
- "#egg=%s fragments.",
- self.name,
- metadata_name,
- self.name,
- )
- self.req = Requirement(metadata_name)
-
- def check_if_exists(self, use_user_site: bool) -> None:
- """Find an installed distribution that satisfies or conflicts
- with this requirement, and set self.satisfied_by or
- self.should_reinstall appropriately.
- """
- if self.req is None:
- return
- existing_dist = get_default_environment().get_distribution(self.req.name)
- if not existing_dist:
- return
-
- version_compatible = self.req.specifier.contains(
- existing_dist.version,
- prereleases=True,
- )
- if not version_compatible:
- self.satisfied_by = None
- if use_user_site:
- if existing_dist.in_usersite:
- self.should_reinstall = True
- elif running_under_virtualenv() and existing_dist.in_site_packages:
- raise InstallationError(
- f"Will not install to the user site because it will "
- f"lack sys.path precedence to {existing_dist.raw_name} "
- f"in {existing_dist.location}"
- )
- else:
- self.should_reinstall = True
- else:
- if self.editable:
- self.should_reinstall = True
- # when installing editables, nothing pre-existing should ever
- # satisfy
- self.satisfied_by = None
- else:
- self.satisfied_by = existing_dist
-
- # Things valid for wheels
- @property
- def is_wheel(self) -> bool:
- if not self.link:
- return False
- return self.link.is_wheel
-
- # Things valid for sdists
- @property
- def unpacked_source_directory(self) -> str:
- return os.path.join(
- self.source_dir, self.link and self.link.subdirectory_fragment or ""
- )
-
- @property
- def setup_py_path(self) -> str:
- assert self.source_dir, f"No source dir for {self}"
- setup_py = os.path.join(self.unpacked_source_directory, "setup.py")
-
- return setup_py
-
- @property
- def setup_cfg_path(self) -> str:
- assert self.source_dir, f"No source dir for {self}"
- setup_cfg = os.path.join(self.unpacked_source_directory, "setup.cfg")
-
- return setup_cfg
-
- @property
- def pyproject_toml_path(self) -> str:
- assert self.source_dir, f"No source dir for {self}"
- return make_pyproject_path(self.unpacked_source_directory)
-
- def load_pyproject_toml(self) -> None:
- """Load the pyproject.toml file.
-
- After calling this routine, all of the attributes related to PEP 517
- processing for this requirement have been set. In particular, the
- use_pep517 attribute can be used to determine whether we should
- follow the PEP 517 or legacy (setup.py) code path.
- """
- pyproject_toml_data = load_pyproject_toml(
- self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self)
- )
-
- if pyproject_toml_data is None:
- self.use_pep517 = False
- return
-
- self.use_pep517 = True
- requires, backend, check, backend_path = pyproject_toml_data
- self.requirements_to_check = check
- self.pyproject_requires = requires
- self.pep517_backend = Pep517HookCaller(
- self.unpacked_source_directory,
- backend,
- backend_path=backend_path,
- )
-
- def isolated_editable_sanity_check(self) -> None:
- """Check that an editable requirement if valid for use with PEP 517/518.
-
- This verifies that an editable that has a pyproject.toml either supports PEP 660
- or as a setup.py or a setup.cfg
- """
- if (
- self.editable
- and self.use_pep517
- and not self.supports_pyproject_editable()
- and not os.path.isfile(self.setup_py_path)
- and not os.path.isfile(self.setup_cfg_path)
- ):
- raise InstallationError(
- f"Project {self} has a 'pyproject.toml' and its build "
- f"backend is missing the 'build_editable' hook. Since it does not "
- f"have a 'setup.py' nor a 'setup.cfg', "
- f"it cannot be installed in editable mode. "
- f"Consider using a build backend that supports PEP 660."
- )
-
- def prepare_metadata(self) -> None:
- """Ensure that project metadata is available.
-
- Under PEP 517 and PEP 660, call the backend hook to prepare the metadata.
- Under legacy processing, call setup.py egg-info.
- """
- assert self.source_dir
- details = self.name or f"from {self.link}"
-
- if self.use_pep517:
- assert self.pep517_backend is not None
- if (
- self.editable
- and self.permit_editable_wheels
- and self.supports_pyproject_editable()
- ):
- self.metadata_directory = generate_editable_metadata(
- build_env=self.build_env,
- backend=self.pep517_backend,
- details=details,
- )
- else:
- self.metadata_directory = generate_metadata(
- build_env=self.build_env,
- backend=self.pep517_backend,
- details=details,
- )
- else:
- self.metadata_directory = generate_metadata_legacy(
- build_env=self.build_env,
- setup_py_path=self.setup_py_path,
- source_dir=self.unpacked_source_directory,
- isolated=self.isolated,
- details=details,
- )
-
- # Act on the newly generated metadata, based on the name and version.
- if not self.name:
- self._set_requirement()
- else:
- self.warn_on_mismatching_name()
-
- self.assert_source_matches_version()
-
- @property
- def metadata(self) -> Any:
- if not hasattr(self, "_metadata"):
- self._metadata = self.get_dist().metadata
-
- return self._metadata
-
- def get_dist(self) -> BaseDistribution:
- return get_directory_distribution(self.metadata_directory)
-
- def assert_source_matches_version(self) -> None:
- assert self.source_dir
- version = self.metadata["version"]
- if self.req.specifier and version not in self.req.specifier:
- logger.warning(
- "Requested %s, but installing version %s",
- self,
- version,
- )
- else:
- logger.debug(
- "Source in %s has version %s, which satisfies requirement %s",
- display_path(self.source_dir),
- version,
- self,
- )
-
- # For both source distributions and editables
- def ensure_has_source_dir(
- self,
- parent_dir: str,
- autodelete: bool = False,
- parallel_builds: bool = False,
- ) -> None:
- """Ensure that a source_dir is set.
-
- This will create a temporary build dir if the name of the requirement
- isn't known yet.
-
- :param parent_dir: The ideal pip parent_dir for the source_dir.
- Generally src_dir for editables and build_dir for sdists.
- :return: self.source_dir
- """
- if self.source_dir is None:
- self.source_dir = self.ensure_build_location(
- parent_dir,
- autodelete=autodelete,
- parallel_builds=parallel_builds,
- )
-
- # For editable installations
- def update_editable(self) -> None:
- if not self.link:
- logger.debug(
- "Cannot update repository at %s; repository location is unknown",
- self.source_dir,
- )
- return
- assert self.editable
- assert self.source_dir
- if self.link.scheme == "file":
- # Static paths don't get updated
- return
- vcs_backend = vcs.get_backend_for_scheme(self.link.scheme)
- # Editable requirements are validated in Requirement constructors.
- # So here, if it's neither a path nor a valid VCS URL, it's a bug.
- assert vcs_backend, f"Unsupported VCS URL {self.link.url}"
- hidden_url = hide_url(self.link.url)
- vcs_backend.obtain(self.source_dir, url=hidden_url, verbosity=0)
-
- # Top-level Actions
- def uninstall(
- self, auto_confirm: bool = False, verbose: bool = False
- ) -> Optional[UninstallPathSet]:
- """
- Uninstall the distribution currently satisfying this requirement.
-
- Prompts before removing or modifying files unless
- ``auto_confirm`` is True.
-
- Refuses to delete or modify files outside of ``sys.prefix`` -
- thus uninstallation within a virtual environment can only
- modify that virtual environment, even if the virtualenv is
- linked to global site-packages.
-
- """
- assert self.req
- dist = get_default_environment().get_distribution(self.req.name)
- if not dist:
- logger.warning("Skipping %s as it is not installed.", self.name)
- return None
- logger.info("Found existing installation: %s", dist)
-
- uninstalled_pathset = UninstallPathSet.from_dist(dist)
- uninstalled_pathset.remove(auto_confirm, verbose)
- return uninstalled_pathset
-
- def _get_archive_name(self, path: str, parentdir: str, rootdir: str) -> str:
- def _clean_zip_name(name: str, prefix: str) -> str:
- assert name.startswith(
- prefix + os.path.sep
- ), f"name {name!r} doesn't start with prefix {prefix!r}"
- name = name[len(prefix) + 1 :]
- name = name.replace(os.path.sep, "/")
- return name
-
- path = os.path.join(parentdir, path)
- name = _clean_zip_name(path, rootdir)
- return self.name + "/" + name
-
- def archive(self, build_dir: Optional[str]) -> None:
- """Saves archive to provided build_dir.
-
- Used for saving downloaded VCS requirements as part of `pip download`.
- """
- assert self.source_dir
- if build_dir is None:
- return
-
- create_archive = True
- archive_name = "{}-{}.zip".format(self.name, self.metadata["version"])
- archive_path = os.path.join(build_dir, archive_name)
-
- if os.path.exists(archive_path):
- response = ask_path_exists(
- "The file {} exists. (i)gnore, (w)ipe, "
- "(b)ackup, (a)bort ".format(display_path(archive_path)),
- ("i", "w", "b", "a"),
- )
- if response == "i":
- create_archive = False
- elif response == "w":
- logger.warning("Deleting %s", display_path(archive_path))
- os.remove(archive_path)
- elif response == "b":
- dest_file = backup_dir(archive_path)
- logger.warning(
- "Backing up %s to %s",
- display_path(archive_path),
- display_path(dest_file),
- )
- shutil.move(archive_path, dest_file)
- elif response == "a":
- sys.exit(-1)
-
- if not create_archive:
- return
-
- zip_output = zipfile.ZipFile(
- archive_path,
- "w",
- zipfile.ZIP_DEFLATED,
- allowZip64=True,
- )
- with zip_output:
- dir = os.path.normcase(os.path.abspath(self.unpacked_source_directory))
- for dirpath, dirnames, filenames in os.walk(dir):
- for dirname in dirnames:
- dir_arcname = self._get_archive_name(
- dirname,
- parentdir=dirpath,
- rootdir=dir,
- )
- zipdir = zipfile.ZipInfo(dir_arcname + "/")
- zipdir.external_attr = 0x1ED << 16 # 0o755
- zip_output.writestr(zipdir, "")
- for filename in filenames:
- file_arcname = self._get_archive_name(
- filename,
- parentdir=dirpath,
- rootdir=dir,
- )
- filename = os.path.join(dirpath, filename)
- zip_output.write(filename, file_arcname)
-
- logger.info("Saved %s", display_path(archive_path))
-
- def install(
- self,
- install_options: List[str],
- global_options: Optional[Sequence[str]] = None,
- root: Optional[str] = None,
- home: Optional[str] = None,
- prefix: Optional[str] = None,
- warn_script_location: bool = True,
- use_user_site: bool = False,
- pycompile: bool = True,
- ) -> None:
- scheme = get_scheme(
- self.name,
- user=use_user_site,
- home=home,
- root=root,
- isolated=self.isolated,
- prefix=prefix,
- )
-
- global_options = global_options if global_options is not None else []
- if self.editable and not self.is_wheel:
- install_editable_legacy(
- install_options,
- global_options,
- prefix=prefix,
- home=home,
- use_user_site=use_user_site,
- name=self.name,
- setup_py_path=self.setup_py_path,
- isolated=self.isolated,
- build_env=self.build_env,
- unpacked_source_directory=self.unpacked_source_directory,
- )
- self.install_succeeded = True
- return
-
- if self.is_wheel:
- assert self.local_file_path
- direct_url = None
- if self.editable:
- direct_url = direct_url_for_editable(self.unpacked_source_directory)
- elif self.original_link:
- direct_url = direct_url_from_link(
- self.original_link,
- self.source_dir,
- self.original_link_is_in_wheel_cache,
- )
- install_wheel(
- self.name,
- self.local_file_path,
- scheme=scheme,
- req_description=str(self.req),
- pycompile=pycompile,
- warn_script_location=warn_script_location,
- direct_url=direct_url,
- requested=self.user_supplied,
- )
- self.install_succeeded = True
- return
-
- # TODO: Why don't we do this for editable installs?
-
- # Extend the list of global and install options passed on to
- # the setup.py call with the ones from the requirements file.
- # Options specified in requirements file override those
- # specified on the command line, since the last option given
- # to setup.py is the one that is used.
- global_options = list(global_options) + self.global_options
- install_options = list(install_options) + self.install_options
-
- try:
- success = install_legacy(
- install_options=install_options,
- global_options=global_options,
- root=root,
- home=home,
- prefix=prefix,
- use_user_site=use_user_site,
- pycompile=pycompile,
- scheme=scheme,
- setup_py_path=self.setup_py_path,
- isolated=self.isolated,
- req_name=self.name,
- build_env=self.build_env,
- unpacked_source_directory=self.unpacked_source_directory,
- req_description=str(self.req),
- )
- except LegacyInstallFailure as exc:
- self.install_succeeded = False
- raise exc
- except Exception:
- self.install_succeeded = True
- raise
-
- self.install_succeeded = success
-
- if success and self.legacy_install_reason == 8368:
- deprecated(
- reason=(
- "{} was installed using the legacy 'setup.py install' "
- "method, because a wheel could not be built for it.".format(
- self.name
- )
- ),
- replacement="to fix the wheel build issue reported above",
- gone_in=None,
- issue=8368,
- )
-
-
-def check_invalid_constraint_type(req: InstallRequirement) -> str:
-
- # Check for unsupported forms
- problem = ""
- if not req.name:
- problem = "Unnamed requirements are not allowed as constraints"
- elif req.editable:
- problem = "Editable requirements are not allowed as constraints"
- elif req.extras:
- problem = "Constraints cannot have extras"
-
- if problem:
- deprecated(
- reason=(
- "Constraints are only allowed to take the form of a package "
- "name and a version specifier. Other forms were originally "
- "permitted as an accident of the implementation, but were "
- "undocumented. The new implementation of the resolver no "
- "longer supports these forms."
- ),
- replacement="replacing the constraint with a requirement",
- # No plan yet for when the new resolver becomes default
- gone_in=None,
- issue=8210,
- )
-
- return problem
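A minimal sketch of the three rejected constraint forms above, using a stand-in object instead of a real InstallRequirement (the describe_constraint helper is hypothetical, written only to mirror the checks in check_invalid_constraint_type):

from types import SimpleNamespace

def describe_constraint(name, editable=False, extras=()):
    # Mirror the three rejected forms from check_invalid_constraint_type
    req = SimpleNamespace(name=name, editable=editable, extras=tuple(extras))
    if not req.name:
        return "Unnamed requirements are not allowed as constraints"
    if req.editable:
        return "Editable requirements are not allowed as constraints"
    if req.extras:
        return "Constraints cannot have extras"
    return ""  # an empty string means the constraint form is acceptable

assert describe_constraint("requests") == ""                           # plain name: allowed
assert "Unnamed" in describe_constraint("")                            # nameless: rejected
assert "extras" in describe_constraint("requests", extras=("socks",))  # extras: rejected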
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py
deleted file mode 100644
index e0bb37602c8e2f1f808ba8fdcb1b7f63451fa4f5..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py
+++ /dev/null
@@ -1,918 +0,0 @@
-from __future__ import absolute_import, division, unicode_literals
-
-from pip._vendor.six import text_type
-from pip._vendor.six.moves import http_client, urllib
-
-import codecs
-import re
-from io import BytesIO, StringIO
-
-from pip._vendor import webencodings
-
-from .constants import EOF, spaceCharacters, asciiLetters, asciiUppercase
-from .constants import _ReparseException
-from . import _utils
-
-# Non-unicode versions of constants for use in the pre-parser
-spaceCharactersBytes = frozenset([item.encode("ascii") for item in spaceCharacters])
-asciiLettersBytes = frozenset([item.encode("ascii") for item in asciiLetters])
-asciiUppercaseBytes = frozenset([item.encode("ascii") for item in asciiUppercase])
-spacesAngleBrackets = spaceCharactersBytes | frozenset([b">", b"<"])
-
-
-invalid_unicode_no_surrogate = "[\u0001-\u0008\u000B\u000E-\u001F\u007F-\u009F\uFDD0-\uFDEF\uFFFE\uFFFF\U0001FFFE\U0001FFFF\U0002FFFE\U0002FFFF\U0003FFFE\U0003FFFF\U0004FFFE\U0004FFFF\U0005FFFE\U0005FFFF\U0006FFFE\U0006FFFF\U0007FFFE\U0007FFFF\U0008FFFE\U0008FFFF\U0009FFFE\U0009FFFF\U000AFFFE\U000AFFFF\U000BFFFE\U000BFFFF\U000CFFFE\U000CFFFF\U000DFFFE\U000DFFFF\U000EFFFE\U000EFFFF\U000FFFFE\U000FFFFF\U0010FFFE\U0010FFFF]" # noqa
-
-if _utils.supports_lone_surrogates:
- # Use one extra step of indirection and create surrogates with
- # eval. Not using this indirection would introduce an illegal
- # unicode literal on platforms not supporting such lone
- # surrogates.
- assert invalid_unicode_no_surrogate[-1] == "]" and invalid_unicode_no_surrogate.count("]") == 1
- invalid_unicode_re = re.compile(invalid_unicode_no_surrogate[:-1] +
- eval('"\\uD800-\\uDFFF"') + # pylint:disable=eval-used
- "]")
-else:
- invalid_unicode_re = re.compile(invalid_unicode_no_surrogate)
-
-non_bmp_invalid_codepoints = {0x1FFFE, 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE,
- 0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE, 0x5FFFF,
- 0x6FFFE, 0x6FFFF, 0x7FFFE, 0x7FFFF, 0x8FFFE,
- 0x8FFFF, 0x9FFFE, 0x9FFFF, 0xAFFFE, 0xAFFFF,
- 0xBFFFE, 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE,
- 0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE, 0xFFFFF,
- 0x10FFFE, 0x10FFFF}
-
-ascii_punctuation_re = re.compile("[\u0009-\u000D\u0020-\u002F\u003A-\u0040\u005C\u005B-\u0060\u007B-\u007E]")
-
-# Cache for charsUntil()
-charsUntilRegEx = {}
-
-
-class BufferedStream(object):
- """Buffering for streams that do not have buffering of their own
-
- The buffer is implemented as a list of chunks on the assumption that
- joining many strings will be slow since it is O(n**2)
- """
-
- def __init__(self, stream):
- self.stream = stream
- self.buffer = []
- self.position = [-1, 0] # chunk number, offset
-
- def tell(self):
- pos = 0
- for chunk in self.buffer[:self.position[0]]:
- pos += len(chunk)
- pos += self.position[1]
- return pos
-
- def seek(self, pos):
- assert pos <= self._bufferedBytes()
- offset = pos
- i = 0
- while len(self.buffer[i]) < offset:
- offset -= len(self.buffer[i])
- i += 1
- self.position = [i, offset]
-
- def read(self, bytes):
- if not self.buffer:
- return self._readStream(bytes)
- elif (self.position[0] == len(self.buffer) and
- self.position[1] == len(self.buffer[-1])):
- return self._readStream(bytes)
- else:
- return self._readFromBuffer(bytes)
-
- def _bufferedBytes(self):
- return sum([len(item) for item in self.buffer])
-
- def _readStream(self, bytes):
- data = self.stream.read(bytes)
- self.buffer.append(data)
- self.position[0] += 1
- self.position[1] = len(data)
- return data
-
- def _readFromBuffer(self, bytes):
- remainingBytes = bytes
- rv = []
- bufferIndex = self.position[0]
- bufferOffset = self.position[1]
- while bufferIndex < len(self.buffer) and remainingBytes != 0:
- assert remainingBytes > 0
- bufferedData = self.buffer[bufferIndex]
-
- if remainingBytes <= len(bufferedData) - bufferOffset:
- bytesToRead = remainingBytes
- self.position = [bufferIndex, bufferOffset + bytesToRead]
- else:
- bytesToRead = len(bufferedData) - bufferOffset
- self.position = [bufferIndex, len(bufferedData)]
- bufferIndex += 1
- rv.append(bufferedData[bufferOffset:bufferOffset + bytesToRead])
- remainingBytes -= bytesToRead
-
- bufferOffset = 0
-
- if remainingBytes:
- rv.append(self._readStream(remainingBytes))
-
- return b"".join(rv)
-
-
-def HTMLInputStream(source, **kwargs):
- # Work around Python bug #20007: read(0) closes the connection.
- # http://bugs.python.org/issue20007
- if (isinstance(source, http_client.HTTPResponse) or
- # Also check for addinfourl wrapping HTTPResponse
- (isinstance(source, urllib.response.addbase) and
- isinstance(source.fp, http_client.HTTPResponse))):
- isUnicode = False
- elif hasattr(source, "read"):
- isUnicode = isinstance(source.read(0), text_type)
- else:
- isUnicode = isinstance(source, text_type)
-
- if isUnicode:
- encodings = [x for x in kwargs if x.endswith("_encoding")]
- if encodings:
- raise TypeError("Cannot set an encoding with a unicode input, set %r" % encodings)
-
- return HTMLUnicodeInputStream(source, **kwargs)
- else:
- return HTMLBinaryInputStream(source, **kwargs)
-
-
-class HTMLUnicodeInputStream(object):
- """Provides a unicode stream of characters to the HTMLTokenizer.
-
- This class takes care of character encoding and removing or replacing
- incorrect byte-sequences and also provides column and line tracking.
-
- """
-
- _defaultChunkSize = 10240
-
- def __init__(self, source):
- """Initialises the HTMLInputStream.
-
- HTMLInputStream(source, [encoding]) -> Normalized stream from source
- for use by html5lib.
-
- source can be either a file-object, local filename or a string.
-
- The optional encoding parameter must be a string that indicates
- the encoding. If specified, that encoding will be used,
- regardless of any BOM or later declaration (such as in a meta
- element)
-
- """
-
- if not _utils.supports_lone_surrogates:
- # Such platforms will have already checked for such
- # surrogate errors, so no need to do this checking.
- self.reportCharacterErrors = None
- elif len("\U0010FFFF") == 1:
- self.reportCharacterErrors = self.characterErrorsUCS4
- else:
- self.reportCharacterErrors = self.characterErrorsUCS2
-
- # List of where new lines occur
- self.newLines = [0]
-
- self.charEncoding = (lookupEncoding("utf-8"), "certain")
- self.dataStream = self.openStream(source)
-
- self.reset()
-
- def reset(self):
- self.chunk = ""
- self.chunkSize = 0
- self.chunkOffset = 0
- self.errors = []
-
- # number of (complete) lines in previous chunks
- self.prevNumLines = 0
- # number of columns in the last line of the previous chunk
- self.prevNumCols = 0
-
- # Deal with CR LF and surrogates split over chunk boundaries
- self._bufferedCharacter = None
-
- def openStream(self, source):
- """Produces a file object from source.
-
- source can be either a file object, local filename or a string.
-
- """
- # Already a file object
- if hasattr(source, 'read'):
- stream = source
- else:
- stream = StringIO(source)
-
- return stream
-
- def _position(self, offset):
- chunk = self.chunk
- nLines = chunk.count('\n', 0, offset)
- positionLine = self.prevNumLines + nLines
- lastLinePos = chunk.rfind('\n', 0, offset)
- if lastLinePos == -1:
- positionColumn = self.prevNumCols + offset
- else:
- positionColumn = offset - (lastLinePos + 1)
- return (positionLine, positionColumn)
-
- def position(self):
- """Returns (line, col) of the current position in the stream."""
- line, col = self._position(self.chunkOffset)
- return (line + 1, col)
-
- def char(self):
- """ Read one character from the stream or queue if available. Return
- EOF when EOF is reached.
- """
- # Read a new chunk from the input stream if necessary
- if self.chunkOffset >= self.chunkSize:
- if not self.readChunk():
- return EOF
-
- chunkOffset = self.chunkOffset
- char = self.chunk[chunkOffset]
- self.chunkOffset = chunkOffset + 1
-
- return char
-
- def readChunk(self, chunkSize=None):
- if chunkSize is None:
- chunkSize = self._defaultChunkSize
-
- self.prevNumLines, self.prevNumCols = self._position(self.chunkSize)
-
- self.chunk = ""
- self.chunkSize = 0
- self.chunkOffset = 0
-
- data = self.dataStream.read(chunkSize)
-
- # Deal with CR LF and surrogates broken across chunks
- if self._bufferedCharacter:
- data = self._bufferedCharacter + data
- self._bufferedCharacter = None
- elif not data:
- # We have no more data, bye-bye stream
- return False
-
- if len(data) > 1:
- lastv = ord(data[-1])
- if lastv == 0x0D or 0xD800 <= lastv <= 0xDBFF:
- self._bufferedCharacter = data[-1]
- data = data[:-1]
-
- if self.reportCharacterErrors:
- self.reportCharacterErrors(data)
-
- # Replace invalid characters
- data = data.replace("\r\n", "\n")
- data = data.replace("\r", "\n")
-
- self.chunk = data
- self.chunkSize = len(data)
-
- return True
-
- def characterErrorsUCS4(self, data):
- for _ in range(len(invalid_unicode_re.findall(data))):
- self.errors.append("invalid-codepoint")
-
- def characterErrorsUCS2(self, data):
- # Someone picked the wrong compile option
- # You lose
- skip = False
- for match in invalid_unicode_re.finditer(data):
- if skip:
- continue
- codepoint = ord(match.group())
- pos = match.start()
- # Pretty sure there should be endianness issues here
- if _utils.isSurrogatePair(data[pos:pos + 2]):
- # We have a surrogate pair!
- char_val = _utils.surrogatePairToCodepoint(data[pos:pos + 2])
- if char_val in non_bmp_invalid_codepoints:
- self.errors.append("invalid-codepoint")
- skip = True
- elif (codepoint >= 0xD800 and codepoint <= 0xDFFF and
- pos == len(data) - 1):
- self.errors.append("invalid-codepoint")
- else:
- skip = False
- self.errors.append("invalid-codepoint")
-
- def charsUntil(self, characters, opposite=False):
- """ Returns a string of characters from the stream up to but not
- including any character in 'characters' or EOF. 'characters' must be
- a container that supports the 'in' method and iteration over its
- characters.
- """
-
- # Use a cache of regexps to find the required characters
- try:
- chars = charsUntilRegEx[(characters, opposite)]
- except KeyError:
- if __debug__:
- for c in characters:
- assert(ord(c) < 128)
- regex = "".join(["\\x%02x" % ord(c) for c in characters])
- if not opposite:
- regex = "^%s" % regex
- chars = charsUntilRegEx[(characters, opposite)] = re.compile("[%s]+" % regex)
-
- rv = []
-
- while True:
- # Find the longest matching prefix
- m = chars.match(self.chunk, self.chunkOffset)
- if m is None:
- # If nothing matched, and it wasn't because we ran out of chunk,
- # then stop
- if self.chunkOffset != self.chunkSize:
- break
- else:
- end = m.end()
- # If not the whole chunk matched, return everything
- # up to the part that didn't match
- if end != self.chunkSize:
- rv.append(self.chunk[self.chunkOffset:end])
- self.chunkOffset = end
- break
- # If the whole remainder of the chunk matched,
- # use it all and read the next chunk
- rv.append(self.chunk[self.chunkOffset:])
- if not self.readChunk():
- # Reached EOF
- break
-
- r = "".join(rv)
- return r
-
- def unget(self, char):
- # Only one character is allowed to be ungotten at once - it must
- # be consumed again before any further call to unget
- if char is not EOF:
- if self.chunkOffset == 0:
- # unget is called quite rarely, so it's a good idea to do
- # more work here if it saves a bit of work in the frequently
- # called char and charsUntil.
- # So, just prepend the ungotten character onto the current
- # chunk:
- self.chunk = char + self.chunk
- self.chunkSize += 1
- else:
- self.chunkOffset -= 1
- assert self.chunk[self.chunkOffset] == char
-
-
-class HTMLBinaryInputStream(HTMLUnicodeInputStream):
- """Provides a unicode stream of characters to the HTMLTokenizer.
-
- This class takes care of character encoding and removing or replacing
- incorrect byte-sequences and also provides column and line tracking.
-
- """
-
- def __init__(self, source, override_encoding=None, transport_encoding=None,
- same_origin_parent_encoding=None, likely_encoding=None,
- default_encoding="windows-1252", useChardet=True):
- """Initialises the HTMLInputStream.
-
- HTMLInputStream(source, [encoding]) -> Normalized stream from source
- for use by html5lib.
-
- source can be either a file-object, local filename or a string.
-
- The optional encoding parameter must be a string that indicates
- the encoding. If specified, that encoding will be used,
- regardless of any BOM or later declaration (such as in a meta
- element)
-
- """
- # Raw Stream - for unicode objects this will encode to utf-8 and set
- # self.charEncoding as appropriate
- self.rawStream = self.openStream(source)
-
- HTMLUnicodeInputStream.__init__(self, self.rawStream)
-
- # Encoding Information
- # Number of bytes to use when looking for a meta element with
- # encoding information
- self.numBytesMeta = 1024
- # Number of bytes to use when using detecting encoding using chardet
- self.numBytesChardet = 100
- # Things from args
- self.override_encoding = override_encoding
- self.transport_encoding = transport_encoding
- self.same_origin_parent_encoding = same_origin_parent_encoding
- self.likely_encoding = likely_encoding
- self.default_encoding = default_encoding
-
- # Determine encoding
- self.charEncoding = self.determineEncoding(useChardet)
- assert self.charEncoding[0] is not None
-
- # Call superclass
- self.reset()
-
- def reset(self):
- self.dataStream = self.charEncoding[0].codec_info.streamreader(self.rawStream, 'replace')
- HTMLUnicodeInputStream.reset(self)
-
- def openStream(self, source):
- """Produces a file object from source.
-
- source can be either a file object, local filename or a string.
-
- """
- # Already a file object
- if hasattr(source, 'read'):
- stream = source
- else:
- stream = BytesIO(source)
-
- try:
- stream.seek(stream.tell())
- except Exception:
- stream = BufferedStream(stream)
-
- return stream
-
- def determineEncoding(self, chardet=True):
- # BOMs take precedence over everything
- # This will also read past the BOM if present
- charEncoding = self.detectBOM(), "certain"
- if charEncoding[0] is not None:
- return charEncoding
-
- # If we've been overridden, we've been overridden
- charEncoding = lookupEncoding(self.override_encoding), "certain"
- if charEncoding[0] is not None:
- return charEncoding
-
- # Now check the transport layer
- charEncoding = lookupEncoding(self.transport_encoding), "certain"
- if charEncoding[0] is not None:
- return charEncoding
-
- # Look for meta elements with encoding information
- charEncoding = self.detectEncodingMeta(), "tentative"
- if charEncoding[0] is not None:
- return charEncoding
-
- # Parent document encoding
- charEncoding = lookupEncoding(self.same_origin_parent_encoding), "tentative"
- if charEncoding[0] is not None and not charEncoding[0].name.startswith("utf-16"):
- return charEncoding
-
- # "likely" encoding
- charEncoding = lookupEncoding(self.likely_encoding), "tentative"
- if charEncoding[0] is not None:
- return charEncoding
-
- # Guess with chardet, if available
- if chardet:
- try:
- from pip._vendor.chardet.universaldetector import UniversalDetector
- except ImportError:
- pass
- else:
- buffers = []
- detector = UniversalDetector()
- while not detector.done:
- buffer = self.rawStream.read(self.numBytesChardet)
- assert isinstance(buffer, bytes)
- if not buffer:
- break
- buffers.append(buffer)
- detector.feed(buffer)
- detector.close()
- encoding = lookupEncoding(detector.result['encoding'])
- self.rawStream.seek(0)
- if encoding is not None:
- return encoding, "tentative"
-
- # Try the default encoding
- charEncoding = lookupEncoding(self.default_encoding), "tentative"
- if charEncoding[0] is not None:
- return charEncoding
-
- # Fallback to html5lib's default if even that hasn't worked
- return lookupEncoding("windows-1252"), "tentative"
-
- def changeEncoding(self, newEncoding):
- assert self.charEncoding[1] != "certain"
- newEncoding = lookupEncoding(newEncoding)
- if newEncoding is None:
- return
- if newEncoding.name in ("utf-16be", "utf-16le"):
- newEncoding = lookupEncoding("utf-8")
- assert newEncoding is not None
- elif newEncoding == self.charEncoding[0]:
- self.charEncoding = (self.charEncoding[0], "certain")
- else:
- self.rawStream.seek(0)
- self.charEncoding = (newEncoding, "certain")
- self.reset()
- raise _ReparseException("Encoding changed from %s to %s" % (self.charEncoding[0], newEncoding))
-
- def detectBOM(self):
- """Attempts to detect at BOM at the start of the stream. If
- an encoding can be determined from the BOM return the name of the
- encoding otherwise return None"""
- bomDict = {
- codecs.BOM_UTF8: 'utf-8',
- codecs.BOM_UTF16_LE: 'utf-16le', codecs.BOM_UTF16_BE: 'utf-16be',
- codecs.BOM_UTF32_LE: 'utf-32le', codecs.BOM_UTF32_BE: 'utf-32be'
- }
-
- # Go to beginning of file and read in 4 bytes
- string = self.rawStream.read(4)
- assert isinstance(string, bytes)
-
- # Try detecting the BOM using bytes from the string
- encoding = bomDict.get(string[:3]) # UTF-8
- seek = 3
- if not encoding:
- # Need to detect UTF-32 before UTF-16
- encoding = bomDict.get(string) # UTF-32
- seek = 4
- if not encoding:
- encoding = bomDict.get(string[:2]) # UTF-16
- seek = 2
-
- # Set the read position past the BOM if one was found, otherwise
- # set it to the start of the stream
- if encoding:
- self.rawStream.seek(seek)
- return lookupEncoding(encoding)
- else:
- self.rawStream.seek(0)
- return None
-
- def detectEncodingMeta(self):
- """Report the encoding declared by the meta element
- """
- buffer = self.rawStream.read(self.numBytesMeta)
- assert isinstance(buffer, bytes)
- parser = EncodingParser(buffer)
- self.rawStream.seek(0)
- encoding = parser.getEncoding()
-
- if encoding is not None and encoding.name in ("utf-16be", "utf-16le"):
- encoding = lookupEncoding("utf-8")
-
- return encoding
-
-
-class EncodingBytes(bytes):
- """String-like object with an associated position and various extra methods
- If the position is ever greater than the string length then an exception is
- raised"""
- def __new__(self, value):
- assert isinstance(value, bytes)
- return bytes.__new__(self, value.lower())
-
- def __init__(self, value):
- # pylint:disable=unused-argument
- self._position = -1
-
- def __iter__(self):
- return self
-
- def __next__(self):
- p = self._position = self._position + 1
- if p >= len(self):
- raise StopIteration
- elif p < 0:
- raise TypeError
- return self[p:p + 1]
-
- def next(self):
- # Py2 compat
- return self.__next__()
-
- def previous(self):
- p = self._position
- if p >= len(self):
- raise StopIteration
- elif p < 0:
- raise TypeError
- self._position = p = p - 1
- return self[p:p + 1]
-
- def setPosition(self, position):
- if self._position >= len(self):
- raise StopIteration
- self._position = position
-
- def getPosition(self):
- if self._position >= len(self):
- raise StopIteration
- if self._position >= 0:
- return self._position
- else:
- return None
-
- position = property(getPosition, setPosition)
-
- def getCurrentByte(self):
- return self[self.position:self.position + 1]
-
- currentByte = property(getCurrentByte)
-
- def skip(self, chars=spaceCharactersBytes):
- """Skip past a list of characters"""
- p = self.position # use property for the error-checking
- while p < len(self):
- c = self[p:p + 1]
- if c not in chars:
- self._position = p
- return c
- p += 1
- self._position = p
- return None
-
- def skipUntil(self, chars):
- p = self.position
- while p < len(self):
- c = self[p:p + 1]
- if c in chars:
- self._position = p
- return c
- p += 1
- self._position = p
- return None
-
- def matchBytes(self, bytes):
- """Look for a sequence of bytes at the start of a string. If the bytes
- are found return True and advance the position to the byte after the
- match. Otherwise return False and leave the position alone"""
- rv = self.startswith(bytes, self.position)
- if rv:
- self.position += len(bytes)
- return rv
-
- def jumpTo(self, bytes):
- """Look for the next sequence of bytes matching a given sequence. If
- a match is found advance the position to the last byte of the match"""
- try:
- self._position = self.index(bytes, self.position) + len(bytes) - 1
- except ValueError:
- raise StopIteration
- return True
-
-
-class EncodingParser(object):
- """Mini parser for detecting character encoding from meta elements"""
-
- def __init__(self, data):
- """string - the data to work on for encoding detection"""
- self.data = EncodingBytes(data)
- self.encoding = None
-
-    def getEncoding(self):
-        if b"<meta" not in self.data:
-            return None
-
-        methodDispatch = (
-            (b"<!--", self.handleComment),
-            (b"<meta", self.handleMeta),
-            (b"</", self.handlePossibleEndTag),
-            (b"<!", self.handleOther),
-            (b"<?", self.handleOther),
-            (b"<", self.handlePossibleStartTag))
-        for _ in self.data:
-            keepParsing = True
-            try:
-                self.data.jumpTo(b"<")
-            except StopIteration:
-                break
-            for key, method in methodDispatch:
-                if self.data.matchBytes(key):
-                    try:
-                        keepParsing = method()
-                        break
-                    except StopIteration:
-                        keepParsing = False
-                        break
-            if not keepParsing:
-                break
-
-        return self.encoding
-
-    def handleComment(self):
-        """Skip over comments"""
-        return self.data.jumpTo(b"-->")
-
-    def handleMeta(self):
-        if self.data.currentByte not in spaceCharactersBytes:
-            # if we have <meta not followed by a space so just keep going
-            return True
-        # We have a valid meta element we want to search for attributes
-        hasPragma = False
-        pendingEncoding = None
-        while True:
-            # Try to find the next attribute after the current position
-            attr = self.getAttribute()
-            if attr is None:
-                return True
-            else:
-                if attr[0] == b"http-equiv":
-                    hasPragma = attr[1] == b"content-type"
-                    if hasPragma and pendingEncoding is not None:
-                        self.encoding = pendingEncoding
-                        return False
-                elif attr[0] == b"charset":
-                    tentativeEncoding = attr[1]
-                    codec = lookupEncoding(tentativeEncoding)
-                    if codec is not None:
-                        self.encoding = codec
-                        return False
-                elif attr[0] == b"content":
-                    contentParser = ContentAttrParser(EncodingBytes(attr[1]))
-                    tentativeEncoding = contentParser.parse()
-                    if tentativeEncoding is not None:
-                        codec = lookupEncoding(tentativeEncoding)
-                        if codec is not None:
-                            if hasPragma:
-                                self.encoding = codec
-                                return False
-                            else:
-                                pendingEncoding = codec
-
-    def handlePossibleStartTag(self):
-        return self.handlePossibleTag(False)
-
-    def handlePossibleEndTag(self):
-        next(self.data)
-        return self.handlePossibleTag(True)
-
-    def handlePossibleTag(self, endTag):
-        data = self.data
-        if data.currentByte not in asciiLettersBytes:
-            # If the next byte is not an ascii letter either ignore this
-            # fragment (possible start tag case) or treat it according to
-            # handleOther
-            if endTag:
-                data.previous()
-                self.handleOther()
-            return True
-
-        c = data.skipUntil(spacesAngleBrackets)
-        if c == b"<":
-            # return to the first step in the overall "two step" algorithm
-            # reprocessing the < byte
-            data.previous()
-        else:
-            # Move to the first attribute and start with step 10
-            attr = self.getAttribute()
-            while attr is not None:
-                attr = self.getAttribute()
-        return True
-
-    def handleOther(self):
-        return self.data.jumpTo(b">")
-
- def getAttribute(self):
- """Return a name,value pair for the next attribute in the stream,
- if one is found, or None"""
- data = self.data
- # Step 1 (skip chars)
- c = data.skip(spaceCharactersBytes | frozenset([b"/"]))
- assert c is None or len(c) == 1
- # Step 2
- if c in (b">", None):
- return None
- # Step 3
- attrName = []
- attrValue = []
- # Step 4 attribute name
- while True:
- if c == b"=" and attrName:
- break
- elif c in spaceCharactersBytes:
- # Step 6!
- c = data.skip()
- break
- elif c in (b"/", b">"):
- return b"".join(attrName), b""
- elif c in asciiUppercaseBytes:
- attrName.append(c.lower())
- elif c is None:
- return None
- else:
- attrName.append(c)
- # Step 5
- c = next(data)
- # Step 7
- if c != b"=":
- data.previous()
- return b"".join(attrName), b""
- # Step 8
- next(data)
- # Step 9
- c = data.skip()
- # Step 10
- if c in (b"'", b'"'):
- # 10.1
- quoteChar = c
- while True:
- # 10.2
- c = next(data)
- # 10.3
- if c == quoteChar:
- next(data)
- return b"".join(attrName), b"".join(attrValue)
- # 10.4
- elif c in asciiUppercaseBytes:
- attrValue.append(c.lower())
- # 10.5
- else:
- attrValue.append(c)
- elif c == b">":
- return b"".join(attrName), b""
- elif c in asciiUppercaseBytes:
- attrValue.append(c.lower())
- elif c is None:
- return None
- else:
- attrValue.append(c)
- # Step 11
- while True:
- c = next(data)
- if c in spacesAngleBrackets:
- return b"".join(attrName), b"".join(attrValue)
- elif c in asciiUppercaseBytes:
- attrValue.append(c.lower())
- elif c is None:
- return None
- else:
- attrValue.append(c)
-
-
-class ContentAttrParser(object):
- def __init__(self, data):
- assert isinstance(data, bytes)
- self.data = data
-
- def parse(self):
- try:
- # Check if the attr name is charset
- # otherwise return
- self.data.jumpTo(b"charset")
- self.data.position += 1
- self.data.skip()
- if not self.data.currentByte == b"=":
- # If there is no = sign keep looking for attrs
- return None
- self.data.position += 1
- self.data.skip()
- # Look for an encoding between matching quote marks
- if self.data.currentByte in (b'"', b"'"):
- quoteMark = self.data.currentByte
- self.data.position += 1
- oldPosition = self.data.position
- if self.data.jumpTo(quoteMark):
- return self.data[oldPosition:self.data.position]
- else:
- return None
- else:
- # Unquoted value
- oldPosition = self.data.position
- try:
- self.data.skipUntil(spaceCharactersBytes)
- return self.data[oldPosition:self.data.position]
- except StopIteration:
- # Return the whole remaining value
- return self.data[oldPosition:]
- except StopIteration:
- return None
-
-
-def lookupEncoding(encoding):
- """Return the python codec name corresponding to an encoding or None if the
- string doesn't correspond to a valid encoding."""
- if isinstance(encoding, bytes):
- try:
- encoding = encoding.decode("ascii")
- except UnicodeDecodeError:
- return None
-
- if encoding is not None:
- try:
- return webencodings.lookup(encoding)
- except AttributeError:
- return None
- else:
- return None
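As a rough illustration of the precedence implemented in determineEncoding above (a BOM is checked first and is "certain"; a meta declaration is only "tentative"), assuming the vendored module path below is importable; the exact repr of the returned Encoding objects may differ:

import codecs
from io import BytesIO

from pip._vendor.html5lib._inputstream import HTMLBinaryInputStream

# A UTF-8 BOM is checked before every other source and reported as "certain".
with_bom = HTMLBinaryInputStream(BytesIO(codecs.BOM_UTF8 + b"<p>hi</p>"))
print(with_bom.charEncoding)   # (<Encoding utf-8>, 'certain')

# Without a BOM, a meta charset declaration is only "tentative".
with_meta = HTMLBinaryInputStream(BytesIO(b'<meta charset="windows-1252">'))
print(with_meta.charEncoding)  # (<Encoding windows-1252>, 'tentative')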
diff --git a/spaces/alkzar90/rock-glacier-segmentation/app.py b/spaces/alkzar90/rock-glacier-segmentation/app.py
deleted file mode 100644
index 81e1e30d66b28b52a82ed52a6d0b235a8424a46d..0000000000000000000000000000000000000000
--- a/spaces/alkzar90/rock-glacier-segmentation/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import gradio as gr
-import os
-import random
-import numpy as np
-import torch
-from torch import nn
-from torchvision import transforms
-from transformers import SegformerForSemanticSegmentation
-
-# examples
-os.system("wget -O 073.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/buenos_resultados/073.png")
-os.system("wget -O 356.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/buenos_resultados/356.png")
-os.system("wget -O 599.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/buenos_resultados/599.png")
-os.system("wget -O 630.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/buenos_resultados/630.png")
-os.system("wget -O 673.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/buenos_resultados/673.png")
-
-
-os.system("wget -O 019.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/malos_resultados/019.png")
-os.system("wget -O 261.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/malos_resultados/261.png")
-os.system("wget -O 524.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/malos_resultados/524.png")
-os.system("wget -O 716.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/malos_resultados/716.png")
-os.system("wget -O 898.png https://huggingface.co/spaces/alkzar90/rock-glacier-segmentation/resolve/main/example_images/malos_resultados/898.png")
-
-# model-setting
-MODEL_PATH="./best_model_mixto/"
-
-device = torch.device("cpu")
-
-preprocessor = transforms.Compose([
- transforms.Resize(128),
- transforms.ToTensor()
- ])
-model = SegformerForSemanticSegmentation.from_pretrained(MODEL_PATH)
-model.eval()
-
-# inference-functions
-def upscale_logits(logit_outputs, size):
- """Escala los logits a (4W)x(4H) para recobrar dimensiones originales del input"""
- return nn.functional.interpolate(
- logit_outputs,
- size=size,
- mode="bilinear",
- align_corners=False
- )
-
-
-def visualize_instance_seg_mask(mask):
- """Agrega colores RGB a cada una de las clases en la mask"""
- image = np.zeros((mask.shape[0], mask.shape[1], 3))
- labels = np.unique(mask)
-    label2color = {label: (random.randint(0, 255),
-                           random.randint(0, 255),
-                           random.randint(0, 255)) for label in labels}
- for i in range(image.shape[0]):
- for j in range(image.shape[1]):
- image[i, j, :] = label2color[mask[i, j]]
- image = image / 255
- return image
-
-
-def query_image(img):
- """Función para generar predicciones a la escala origina"""
- inputs = preprocessor(img).unsqueeze(0)
- with torch.no_grad():
- preds = model(inputs)["logits"]
-        preds_upscale = upscale_logits(preds, inputs.shape[-2:])  # target the input's spatial size, not the logits'
- predict_label = torch.argmax(preds_upscale, dim=1).to(device)
- result = predict_label[0,:,:].detach().cpu().numpy()
- return visualize_instance_seg_mask(result)
-
-# demo
-demo = gr.Interface(
- query_image,
- inputs=[gr.Image(type="pil").style(full_width=True, height=256)],
- outputs=[gr.Image().style(full_width=True, height=256)],
- title="Skyguard: segmentador de glaciares de roca 🛰️ +️ 🛡️ ️",
- description="Modelo de segmentación de imágenes para detectar glaciares de roca. Se entrenó un modelo [nvidia/SegFormer](https://huggingface.co/nvidia/mit-b0) con _fine-tuning_ en el [rock-glacier-dataset](https://huggingface.co/datasets/alkzar90/rock-glacier-dataset)",
- examples=[["073.png"], ["356.png"], ["599.png"], ["630.png"], ["673.png"],
- ["019.png"], ["261.png"], ["524.png"], ["716.png"], ["898.png"]],
- cache_examples=False
-)
-
-demo.launch()
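For reference, a standalone sketch of the upscaling step as the docstring intends it: SegFormer emits logits at one quarter of the input resolution, so the interpolation target should be the input's spatial size rather than the logits' own size (the shapes below are illustrative; 128 matches the Resize transform above):

import torch
from torch import nn

logits = torch.randn(1, 2, 32, 32)  # (batch, classes, H/4, W/4)
full_res = nn.functional.interpolate(
    logits,
    size=(128, 128),  # back to the preprocessed input size
    mode="bilinear",
    align_corners=False,
)
print(full_res.shape)  # torch.Size([1, 2, 128, 128])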
diff --git a/spaces/amankishore/sjc/guided_diffusion/README.md b/spaces/amankishore/sjc/guided_diffusion/README.md
deleted file mode 100644
index 4afc26c63af01a48a86f76a0b08f1c26161747c7..0000000000000000000000000000000000000000
--- a/spaces/amankishore/sjc/guided_diffusion/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Selected modules from OpenAI's [guided diffusion](https://github.com/openai/guided-diffusion), retrieved at commit `22e0df8183507e13a7813f8d38d51b072ca1e67c`
-
-It's the bare-minimum set of files needed to run their pretrained models. You can download the model checkpoints by following the instructions in their repository README.
-
-Some modifications are made to remove the distributed processing utilities in order to reduce code complexity.
diff --git a/spaces/americanboy/Prime_Numbers/README.md b/spaces/americanboy/Prime_Numbers/README.md
deleted file mode 100644
index 7955119402ca9e86f76418ced9ac921ac619963f..0000000000000000000000000000000000000000
--- a/spaces/americanboy/Prime_Numbers/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Space
-emoji: 📚
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/anaclaudia13ct/insect_detection/utils/loggers/wandb/wandb_utils.py b/spaces/anaclaudia13ct/insect_detection/utils/loggers/wandb/wandb_utils.py
deleted file mode 100644
index 238f4edbf2a0ddf34c024fbb6775c71dd19e18aa..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/utils/loggers/wandb/wandb_utils.py
+++ /dev/null
@@ -1,589 +0,0 @@
-"""Utilities and tools for tracking runs with Weights & Biases."""
-
-import logging
-import os
-import sys
-from contextlib import contextmanager
-from pathlib import Path
-from typing import Dict
-
-import yaml
-from tqdm import tqdm
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from utils.dataloaders import LoadImagesAndLabels, img2label_paths
-from utils.general import LOGGER, check_dataset, check_file
-
-try:
- import wandb
-
- assert hasattr(wandb, '__version__') # verify package import not local dir
-except (ImportError, AssertionError):
- wandb = None
-
-RANK = int(os.getenv('RANK', -1))
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
- return from_string[len(prefix):]
-
-
-def check_wandb_config_file(data_config_file):
- wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path
- if Path(wandb_config).is_file():
- return wandb_config
- return data_config_file
-
-
-def check_wandb_dataset(data_file):
- is_trainset_wandb_artifact = False
- is_valset_wandb_artifact = False
- if isinstance(data_file, dict):
- # In that case another dataset manager has already processed it and we don't have to
- return data_file
- if check_file(data_file) and data_file.endswith('.yaml'):
- with open(data_file, errors='ignore') as f:
- data_dict = yaml.safe_load(f)
- is_trainset_wandb_artifact = isinstance(data_dict['train'],
- str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX)
- is_valset_wandb_artifact = isinstance(data_dict['val'],
- str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX)
- if is_trainset_wandb_artifact or is_valset_wandb_artifact:
- return data_dict
- else:
- return check_dataset(data_file)
-
-
-def get_run_info(run_path):
- run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))
- run_id = run_path.stem
- project = run_path.parent.stem
- entity = run_path.parent.parent.stem
- model_artifact_name = 'run_' + run_id + '_model'
- return entity, project, run_id, model_artifact_name
-
-
-def check_wandb_resume(opt):
- process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None
- if isinstance(opt.resume, str):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- if RANK not in [-1, 0]: # For resuming DDP runs
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- api = wandb.Api()
- artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest')
- modeldir = artifact.download()
- opt.weights = str(Path(modeldir) / "last.pt")
- return True
- return None
-
-
-def process_wandb_config_ddp_mode(opt):
- with open(check_file(opt.data), errors='ignore') as f:
- data_dict = yaml.safe_load(f) # data dict
- train_dir, val_dir = None, None
- if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)
- train_dir = train_artifact.download()
- train_path = Path(train_dir) / 'data/images/'
- data_dict['train'] = str(train_path)
-
- if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)
- val_dir = val_artifact.download()
- val_path = Path(val_dir) / 'data/images/'
- data_dict['val'] = str(val_path)
- if train_dir or val_dir:
- ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml')
- with open(ddp_data_path, 'w') as f:
- yaml.safe_dump(data_dict, f)
- opt.data = ddp_data_path
-
-
-class WandbLogger():
- """Log training runs, datasets, models, and predictions to Weights & Biases.
-
- This logger sends information to W&B at wandb.ai. By default, this information
- includes hyperparameters, system configuration and metrics, model metrics,
- and basic data metrics and analyses.
-
- By providing additional command line arguments to train.py, datasets,
- models and predictions can also be logged.
-
- For more on how this logger is used, see the Weights & Biases documentation:
- https://docs.wandb.com/guides/integrations/yolov5
- """
-
- def __init__(self, opt, run_id=None, job_type='Training'):
- """
- - Initialize WandbLogger instance
- - Upload dataset if opt.upload_dataset is True
- - Setup training processes if job_type is 'Training'
-
- arguments:
- opt (namespace) -- Commandline arguments for this run
- run_id (str) -- Run ID of W&B run to be resumed
- job_type (str) -- To set the job_type for this run
-
- """
- # Temporary-fix
- if opt.upload_dataset:
- opt.upload_dataset = False
- # LOGGER.info("Uploading Dataset functionality is not being supported temporarily due to a bug.")
-
- # Pre-training routine --
- self.job_type = job_type
- self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run
- self.val_artifact, self.train_artifact = None, None
- self.train_artifact_path, self.val_artifact_path = None, None
- self.result_artifact = None
- self.val_table, self.result_table = None, None
- self.bbox_media_panel_images = []
- self.val_table_path_map = None
- self.max_imgs_to_log = 16
- self.wandb_artifact_data_dict = None
- self.data_dict = None
- # It's more elegant to stick to 1 wandb.init call,
- # but useful config data is overwritten in the WandbLogger's wandb.init call
- if isinstance(opt.resume, str): # checks resume from artifact
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name
- assert wandb, 'install wandb to resume wandb runs'
- # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config
- self.wandb_run = wandb.init(id=run_id,
- project=project,
- entity=entity,
- resume='allow',
- allow_val_change=True)
- opt.resume = model_artifact_name
- elif self.wandb:
- self.wandb_run = wandb.init(config=opt,
- resume="allow",
- project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem,
- entity=opt.entity,
- name=opt.name if opt.name != 'exp' else None,
- job_type=job_type,
- id=run_id,
- allow_val_change=True) if not wandb.run else wandb.run
- if self.wandb_run:
- if self.job_type == 'Training':
- if opt.upload_dataset:
- if not opt.resume:
- self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt)
-
- if isinstance(opt.data, dict):
- # This means another dataset manager has already processed the dataset info (e.g. ClearML)
- # and they will have stored the already processed dict in opt.data
- self.data_dict = opt.data
- elif opt.resume:
- # resume from artifact
- if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- self.data_dict = dict(self.wandb_run.config.data_dict)
- else: # local resume
- self.data_dict = check_wandb_dataset(opt.data)
- else:
- self.data_dict = check_wandb_dataset(opt.data)
- self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict
-
- # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming.
- self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, allow_val_change=True)
- self.setup_training(opt)
-
- if self.job_type == 'Dataset Creation':
- self.wandb_run.config.update({"upload_dataset": True})
- self.data_dict = self.check_and_upload_dataset(opt)
-
- def check_and_upload_dataset(self, opt):
- """
- Check if the dataset format is compatible and upload it as W&B artifact
-
- arguments:
- opt (namespace)-- Commandline arguments for current run
-
- returns:
-        Updated dataset info dictionary where local dataset paths are replaced by WANDB_ARTIFACT_PREFIX links.
- """
- assert wandb, 'Install wandb to upload dataset'
- config_path = self.log_dataset_artifact(opt.data, opt.single_cls,
- 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem)
- with open(config_path, errors='ignore') as f:
- wandb_data_dict = yaml.safe_load(f)
- return wandb_data_dict
-
- def setup_training(self, opt):
- """
- Setup the necessary processes for training YOLO models:
-        - Attempt to download model checkpoint and dataset artifacts if opt.resume starts with WANDB_ARTIFACT_PREFIX
- - Update data_dict, to contain info of previous run if resumed and the paths of dataset artifact if downloaded
- - Setup log_dict, initialize bbox_interval
-
- arguments:
- opt (namespace) -- commandline arguments for this run
-
- """
- self.log_dict, self.current_epoch = {}, 0
- self.bbox_interval = opt.bbox_interval
- if isinstance(opt.resume, str):
- modeldir, _ = self.download_model_artifact(opt)
- if modeldir:
- self.weights = Path(modeldir) / "last.pt"
- config = self.wandb_run.config
- opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str(
- self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs,\
- config.hyp, config.imgsz
- data_dict = self.data_dict
- if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download
- self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(
- data_dict.get('train'), opt.artifact_alias)
- self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(
- data_dict.get('val'), opt.artifact_alias)
-
- if self.train_artifact_path is not None:
- train_path = Path(self.train_artifact_path) / 'data/images/'
- data_dict['train'] = str(train_path)
- if self.val_artifact_path is not None:
- val_path = Path(self.val_artifact_path) / 'data/images/'
- data_dict['val'] = str(val_path)
-
- if self.val_artifact is not None:
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
- columns = ["epoch", "id", "ground truth", "prediction"]
- columns.extend(self.data_dict['names'])
- self.result_table = wandb.Table(columns)
- self.val_table = self.val_artifact.get("val")
- if self.val_table_path_map is None:
- self.map_val_table_path()
- if opt.bbox_interval == -1:
- self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1
- if opt.evolve or opt.noplots:
- self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval
- train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None
-        # Update the data_dict to point to the local artifacts dir
- if train_from_artifact:
- self.data_dict = data_dict
-
- def download_dataset_artifact(self, path, alias):
- """
-        download the dataset artifact if the path starts with WANDB_ARTIFACT_PREFIX
-
- arguments:
- path -- path of the dataset to be used for training
- alias (str)-- alias of the artifact to be download/used for training
-
- returns:
-        (str, wandb.Artifact) -- path of the downloaded dataset and its corresponding artifact object if the dataset
-        is found, otherwise returns (None, None)
- """
- if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
- artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
- dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/"))
- assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'"
- datadir = dataset_artifact.download()
- return datadir, dataset_artifact
- return None, None
-
- def download_model_artifact(self, opt):
- """
- download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX
-
- arguments:
- opt (namespace) -- Commandline arguments for this run
- """
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
- assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
- modeldir = model_artifact.download()
- # epochs_trained = model_artifact.metadata.get('epochs_trained')
- total_epochs = model_artifact.metadata.get('total_epochs')
- is_finished = total_epochs is None
- assert not is_finished, 'training is finished, can only resume incomplete runs.'
- return modeldir, model_artifact
- return None, None
-
- def log_model(self, path, opt, epoch, fitness_score, best_model=False):
- """
- Log the model checkpoint as W&B artifact
-
- arguments:
- path (Path) -- Path of directory containing the checkpoints
- opt (namespace) -- Command line arguments for this run
- epoch (int) -- Current epoch number
- fitness_score (float) -- fitness score for current epoch
- best_model (boolean) -- Boolean representing if the current checkpoint is the best yet.
- """
- model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model',
- type='model',
- metadata={
- 'original_url': str(path),
- 'epochs_trained': epoch + 1,
- 'save period': opt.save_period,
- 'project': opt.project,
- 'total_epochs': opt.epochs,
- 'fitness_score': fitness_score})
- model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
- wandb.log_artifact(model_artifact,
- aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
- LOGGER.info(f"Saving model artifact on epoch {epoch + 1}")
-
- def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
- """
- Log the dataset as W&B artifact and return the new data file with W&B links
-
- arguments:
- data_file (str) -- the .yaml file with information about the dataset like - path, classes etc.
-        single_cls (boolean) -- train multi-class data as single-class
- project (str) -- project name. Used to construct the artifact path
- overwrite_config (boolean) -- overwrites the data.yaml file if set to true otherwise creates a new
- file with _wandb postfix. Eg -> data_wandb.yaml
-
- returns:
- the new .yaml file with artifact links. it can be used to start training directly from artifacts
- """
- upload_dataset = self.wandb_run.config.upload_dataset
- log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val'
- self.data_dict = check_dataset(data_file) # parse and check
- data = dict(self.data_dict)
- nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
- names = {k: v for k, v in enumerate(names)} # to index dictionary
-
- # log train set
- if not log_val_only:
- self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(data['train'], rect=True, batch_size=1),
- names,
- name='train') if data.get('train') else None
- if data.get('train'):
- data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
-
- self.val_artifact = self.create_dataset_table(
- LoadImagesAndLabels(data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None
- if data.get('val'):
- data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
-
- path = Path(data_file)
- # create a _wandb.yaml file with artifacts links if both train and test set are logged
- if not log_val_only:
- path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml' # updated data.yaml path
- path = ROOT / 'data' / path
- data.pop('download', None)
- data.pop('path', None)
- with open(path, 'w') as f:
- yaml.safe_dump(data, f)
- LOGGER.info(f"Created dataset config file {path}")
-
- if self.job_type == 'Training': # builds correct artifact pipeline graph
- if not log_val_only:
- self.wandb_run.log_artifact(
- self.train_artifact) # calling use_artifact downloads the dataset. NOT NEEDED!
- self.wandb_run.use_artifact(self.val_artifact)
- self.val_artifact.wait()
- self.val_table = self.val_artifact.get('val')
- self.map_val_table_path()
- else:
- self.wandb_run.log_artifact(self.train_artifact)
- self.wandb_run.log_artifact(self.val_artifact)
- return path
-
- def map_val_table_path(self):
- """
-        Map the validation dataset Table: file name -> its id in the W&B Table.
-        Useful for referencing artifacts for evaluation.
- """
- self.val_table_path_map = {}
- LOGGER.info("Mapping dataset")
- for i, data in enumerate(tqdm(self.val_table.data)):
- self.val_table_path_map[data[3]] = data[0]
-
- def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'):
- """
- Create and return W&B artifact containing W&B Table of the dataset.
-
- arguments:
- dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table
- class_to_id -- hash map that maps class ids to labels
- name -- name of the artifact
-
- returns:
- dataset artifact to be logged or used
- """
-        # TODO: Explore multiprocessing to split this loop in parallel. This is essential for speeding up the logging
- artifact = wandb.Artifact(name=name, type="dataset")
- img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None
- img_files = tqdm(dataset.im_files) if not img_files else img_files
- for img_file in img_files:
- if Path(img_file).is_dir():
- artifact.add_dir(img_file, name='data/images')
- labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
- artifact.add_dir(labels_path, name='data/labels')
- else:
- artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
- label_file = Path(img2label_paths([img_file])[0])
- artifact.add_file(str(label_file), name='data/labels/' +
- label_file.name) if label_file.exists() else None
- table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
- for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
- box_data, img_classes = [], {}
- for cls, *xywh in labels[:, 1:].tolist():
- cls = int(cls)
- box_data.append({
- "position": {
- "middle": [xywh[0], xywh[1]],
- "width": xywh[2],
- "height": xywh[3]},
- "class_id": cls,
- "box_caption": "%s" % (class_to_id[cls])})
- img_classes[cls] = class_to_id[cls]
- boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space
- table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),
- Path(paths).name)
- artifact.add(table, name)
- return artifact
-
- def log_training_progress(self, predn, path, names):
- """
- Build evaluation Table. Uses reference from validation dataset table.
-
- arguments:
- predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class]
- path (str): local path of the current evaluation image
- names (dict(int, str)): hash map that maps class ids to labels
- """
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
- box_data = []
- avg_conf_per_class = [0] * len(self.data_dict['names'])
- pred_class_count = {}
- for *xyxy, conf, cls in predn.tolist():
- if conf >= 0.25:
- cls = int(cls)
- box_data.append({
- "position": {
- "minX": xyxy[0],
- "minY": xyxy[1],
- "maxX": xyxy[2],
- "maxY": xyxy[3]},
- "class_id": cls,
- "box_caption": f"{names[cls]} {conf:.3f}",
- "scores": {
- "class_score": conf},
- "domain": "pixel"})
- avg_conf_per_class[cls] += conf
-
- if cls in pred_class_count:
- pred_class_count[cls] += 1
- else:
- pred_class_count[cls] = 1
-
- for pred_class in pred_class_count.keys():
- avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class]
-
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- id = self.val_table_path_map[Path(path).name]
- self.result_table.add_data(self.current_epoch, id, self.val_table.data[id][1],
- wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
- *avg_conf_per_class)
-
- def val_one_image(self, pred, predn, path, names, im):
- """
-        Log validation data for one image. Updates the result Table if the validation dataset is uploaded, and logs the bbox media panel.
-
- arguments:
- pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class]
- predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class]
- path (str): local path of the current evaluation image
- """
- if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact
- self.log_training_progress(predn, path, names)
-
- if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0:
- if self.current_epoch % self.bbox_interval == 0:
- box_data = [{
- "position": {
- "minX": xyxy[0],
- "minY": xyxy[1],
- "maxX": xyxy[2],
- "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": f"{names[int(cls)]} {conf:.3f}",
- "scores": {
- "class_score": conf},
- "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name))
-
- def log(self, log_dict):
- """
- save the metrics to the logging dictionary
-
- arguments:
- log_dict (Dict) -- metrics/media to be logged in current step
- """
- if self.wandb_run:
- for key, value in log_dict.items():
- self.log_dict[key] = value
-
- def end_epoch(self, best_result=False):
- """
- commit the log_dict, model artifacts and Tables to W&B and flush the log_dict.
-
- arguments:
- best_result (boolean): Boolean representing if the result of this evaluation is best or not
- """
- if self.wandb_run:
- with all_logging_disabled():
- if self.bbox_media_panel_images:
- self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images
- try:
- wandb.log(self.log_dict)
- except BaseException as e:
- LOGGER.info(
- f"An error occurred in wandb logger. The training will proceed without interruption. More info\n{e}"
- )
- self.wandb_run.finish()
- self.wandb_run = None
-
- self.log_dict = {}
- self.bbox_media_panel_images = []
- if self.result_artifact:
- self.result_artifact.add(self.result_table, 'result')
- wandb.log_artifact(self.result_artifact,
- aliases=[
- 'latest', 'last', 'epoch ' + str(self.current_epoch),
- ('best' if best_result else '')])
-
- wandb.log({"evaluation": self.result_table})
- columns = ["epoch", "id", "ground truth", "prediction"]
- columns.extend(self.data_dict['names'])
- self.result_table = wandb.Table(columns)
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
-
- def finish_run(self):
- """
- Log metrics if any and finish the current W&B run
- """
- if self.wandb_run:
- if self.log_dict:
- with all_logging_disabled():
- wandb.log(self.log_dict)
- wandb.run.finish()
-
-
-@contextmanager
-def all_logging_disabled(highest_level=logging.CRITICAL):
- """ source - https://gist.github.com/simon-weber/7853144
- A context manager that will prevent any logging messages triggered during the body from being processed.
- :param highest_level: the maximum logging level in use.
- This would only need to be changed if a custom level greater than CRITICAL is defined.
- """
- previous_level = logging.root.manager.disable
- logging.disable(highest_level)
- try:
- yield
- finally:
- logging.disable(previous_level)
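A small usage sketch for the all_logging_disabled context manager above; nothing W&B-specific is involved, and the logger name is arbitrary:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

log.info("visible")            # emitted normally
with all_logging_disabled():
    log.error("suppressed")    # swallowed, even at ERROR level
log.info("visible again")      # the previous disable level is restored on exit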
diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/api-example-stream.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/api-example-stream.py
deleted file mode 100644
index 49058776927c7d85e49f5f717d8a77135fb2f8a1..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/api-example-stream.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import asyncio
-import json
-import sys
-
-try:
- import websockets
-except ImportError:
- print("Websockets package not found. Make sure it's installed.")
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream'
-
-async def run(context):
- # Note: the selected defaults change from time to time.
- request = {
- 'prompt': context,
- 'max_new_tokens': 250,
- 'do_sample': True,
- 'temperature': 1.3,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'repetition_penalty': 1.18,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- async with websockets.connect(URI, ping_interval=None) as websocket:
- await websocket.send(json.dumps(request))
-
- yield context # Remove this if you just want to see the reply
-
- while True:
- incoming_data = await websocket.recv()
- incoming_data = json.loads(incoming_data)
-
- match incoming_data['event']:
- case 'text_stream':
- yield incoming_data['text']
- case 'stream_end':
- return
-
-
-async def print_response_stream(prompt):
- async for response in run(prompt):
- print(response, end='')
- sys.stdout.flush() # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
- prompt = "In order to make homemade bread, follow these steps:\n1)"
- asyncio.run(print_response_stream(prompt))
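-
-# Usage sketch (illustrative): run against a local text-generation-webui
-# instance with the streaming API enabled on localhost:5005:
-#
-#     python api-example-stream.py
-#
-# Note: the match/case dispatch above requires Python 3.10 or newer.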
diff --git a/spaces/aritheanalyst/legalsummarizer/index.html b/spaces/aritheanalyst/legalsummarizer/index.html
deleted file mode 100644
index 6297e87faac86ebe520de6e6a2446dd934b57137..0000000000000000000000000000000000000000
--- a/spaces/aritheanalyst/legalsummarizer/index.html
+++ /dev/null
@@ -1,166 +0,0 @@
-<!-- The original index.html (166 lines per the hunk header) lost its markup
-     during extraction; only the page title survived: "No Show Prediction" -->
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/fast_speech_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/fast_speech_config.py
deleted file mode 100644
index af6c2db6faf55ee2b15047fff86281d42dab1b87..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/configs/fast_speech_config.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from dataclasses import dataclass, field
-from typing import List
-
-from TTS.tts.configs.shared_configs import BaseTTSConfig
-from TTS.tts.models.forward_tts import ForwardTTSArgs
-
-
-@dataclass
-class FastSpeechConfig(BaseTTSConfig):
- """Configure `ForwardTTS` as FastSpeech model.
-
- Example:
-
- >>> from TTS.tts.configs.fast_speech_config import FastSpeechConfig
- >>> config = FastSpeechConfig()
-
- Args:
- model (str):
-            Model name used for selecting the right model at initialization. Defaults to `fast_speech`.
-
- base_model (str):
-            Name of the base model being configured as this model so that 🐸 TTS knows it needs to instantiate
-            the base model rather than searching for the `model` implementation. Defaults to `forward_tts`.
-
- model_args (Coqpit):
-            Model class arguments. Check `ForwardTTSArgs` for more details. Defaults to `ForwardTTSArgs(use_pitch=False)`.
-
- data_dep_init_steps (int):
- Number of steps used for computing normalization parameters at the beginning of the training. GlowTTS uses
- Activation Normalization that pre-computes normalization stats at the beginning and use the same values
- for the rest. Defaults to 10.
-
- speakers_file (str):
- Path to the file containing the list of speakers. Needed at inference for loading matching speaker ids to
- speaker names. Defaults to `None`.
-
-
- use_speaker_embedding (bool):
-            Enable/disable using speaker embeddings for multi-speaker models. If set True, the model operates
-            in multi-speaker mode. Defaults to False.
-
- use_d_vector_file (bool):
-            Enable/disable using external speaker embeddings in place of the learned embeddings. Defaults to False.
-
- d_vector_file (str):
- Path to the file including pre-computed speaker embeddings. Defaults to None.
-
- d_vector_dim (int):
- Dimension of the external speaker embeddings. Defaults to 0.
-
- optimizer (str):
- Name of the model optimizer. Defaults to `Adam`.
-
- optimizer_params (dict):
- Arguments of the model optimizer. Defaults to `{"betas": [0.9, 0.998], "weight_decay": 1e-6}`.
-
- lr_scheduler (str):
-            Name of the learning rate scheduler. Defaults to `NoamLR`.
-
- lr_scheduler_params (dict):
- Arguments of the learning rate scheduler. Defaults to `{"warmup_steps": 4000}`.
-
- lr (float):
-            Initial learning rate. Defaults to `1e-4`.
-
- grad_clip (float):
- Gradient norm clipping value. Defaults to `5.0`.
-
- spec_loss_type (str):
- Type of the spectrogram loss. Check `ForwardTTSLoss` for possible values. Defaults to `mse`.
-
- duration_loss_type (str):
- Type of the duration loss. Check `ForwardTTSLoss` for possible values. Defaults to `mse`.
-
- use_ssim_loss (bool):
- Enable/disable the use of SSIM (Structural Similarity) loss. Defaults to True.
-
- wd (float):
-            Weight decay coefficient, passed to the optimizer via `optimizer_params`. Defaults to `1e-6`.
-
- ssim_loss_alpha (float):
- Weight for the SSIM loss. If set 0, disables the SSIM loss. Defaults to 1.0.
-
- dur_loss_alpha (float):
-            Weight for the duration predictor's loss. If set 0, disables the duration loss. Defaults to 1.0.
-
- spec_loss_alpha (float):
- Weight for the L1 spectrogram loss. If set 0, disables the L1 loss. Defaults to 1.0.
-
- pitch_loss_alpha (float):
-            Weight for the pitch predictor's loss. If set 0, disables the pitch predictor. Defaults to 0.0.
-
- binary_loss_alpha (float):
- Weight for the binary loss. If set 0, disables the binary loss. Defaults to 1.0.
-
-        binary_loss_warmup_epochs (int):
- Number of epochs to gradually increase the binary loss impact. Defaults to 150.
-
- min_seq_len (int):
- Minimum input sequence length to be used at training.
-
- max_seq_len (int):
- Maximum input sequence length to be used at training. Larger values result in more VRAM usage.
- """
-
- model: str = "fast_speech"
- base_model: str = "forward_tts"
-
- # model specific params
- model_args: ForwardTTSArgs = field(default_factory=lambda: ForwardTTSArgs(use_pitch=False))
-
- # multi-speaker settings
- num_speakers: int = 0
- speakers_file: str = None
- use_speaker_embedding: bool = False
- use_d_vector_file: bool = False
-    d_vector_file: str = None
- d_vector_dim: int = 0
-
- # optimizer parameters
- optimizer: str = "Adam"
- optimizer_params: dict = field(default_factory=lambda: {"betas": [0.9, 0.998], "weight_decay": 1e-6})
- lr_scheduler: str = "NoamLR"
- lr_scheduler_params: dict = field(default_factory=lambda: {"warmup_steps": 4000})
- lr: float = 1e-4
- grad_clip: float = 5.0
-
- # loss params
- spec_loss_type: str = "mse"
- duration_loss_type: str = "mse"
- use_ssim_loss: bool = True
- ssim_loss_alpha: float = 1.0
- dur_loss_alpha: float = 1.0
- spec_loss_alpha: float = 1.0
- pitch_loss_alpha: float = 0.0
- aligner_loss_alpha: float = 1.0
- binary_align_loss_alpha: float = 1.0
- binary_loss_warmup_epochs: int = 150
-
- # overrides
- min_seq_len: int = 13
- max_seq_len: int = 200
- r: int = 1 # DO NOT CHANGE
-
- # dataset configs
- compute_f0: bool = False
- f0_cache_path: str = None
-
- # testing
- test_sentences: List[str] = field(
- default_factory=lambda: [
- "It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
- "Be a voice, not an echo.",
- "I'm sorry Dave. I'm afraid I can't do that.",
- "This cake is great. It's so delicious and moist.",
- "Prior to November 22, 1963.",
- ]
- )
-
- def __post_init__(self):
- # Pass multi-speaker parameters to the model args as `model.init_multispeaker()` looks for it there.
- if self.num_speakers > 0:
- self.model_args.num_speakers = self.num_speakers
-
- # speaker embedding settings
- if self.use_speaker_embedding:
- self.model_args.use_speaker_embedding = True
- if self.speakers_file:
- self.model_args.speakers_file = self.speakers_file
-
- # d-vector settings
- if self.use_d_vector_file:
- self.model_args.use_d_vector_file = True
- if self.d_vector_dim is not None and self.d_vector_dim > 0:
- self.model_args.d_vector_dim = self.d_vector_dim
- if self.d_vector_file:
- self.model_args.d_vector_file = self.d_vector_file
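-
-# Usage sketch (illustrative): FastSpeech is configured as a ForwardTTS model
-# without a pitch predictor, so `use_pitch=False` is wired into the default
-# model args above.
-#
-#     config = FastSpeechConfig(num_speakers=4, use_speaker_embedding=True)
-#     assert config.model_args.use_pitch is False
-#     assert config.model_args.num_speakers == 4  # copied in __post_init__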
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/belarusian/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/belarusian/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/kokoro/tacotron2-DDC/run.sh b/spaces/artificialguybr/video-dubbing/TTS/recipes/kokoro/tacotron2-DDC/run.sh
deleted file mode 100644
index 69800cf7b4e9b518a352191498ec50e44af86f90..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/recipes/kokoro/tacotron2-DDC/run.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# take the script's parent directory to prefix all the output paths.
-RUN_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
-CORPUS=kokoro-speech-v1_1-small
-echo $RUN_DIR
-if [ ! -d "$RUN_DIR/$CORPUS" ]; then
- echo "$RUN_DIR/$CORPUS doesn't exist."
-    echo "Follow the instructions at https://github.com/kaiidams/Kokoro-Speech-Dataset to make the corpus."
- exit 1
-fi
-# create train-val splits
-shuf $RUN_DIR/$CORPUS/metadata.csv > $RUN_DIR/$CORPUS/metadata_shuf.csv
-head -n 8000 $RUN_DIR/$CORPUS/metadata_shuf.csv > $RUN_DIR/$CORPUS/metadata_train.csv
-tail -n 812 $RUN_DIR/$CORPUS/metadata_shuf.csv > $RUN_DIR/$CORPUS/metadata_val.csv
-# compute dataset mean and variance for normalization
-python TTS/bin/compute_statistics.py $RUN_DIR/tacotron2-DDC.json $RUN_DIR/scale_stats.npy --data_path $RUN_DIR/$CORPUS/wavs/
-# training ....
-# change the GPU id if needed
-CUDA_VISIBLE_DEVICES="0" python TTS/bin/train_tts.py --config_path $RUN_DIR/tacotron2-DDC.json \
- --coqpit.output_path $RUN_DIR \
- --coqpit.datasets.0.path $RUN_DIR/$CORPUS \
- --coqpit.audio.stats_path $RUN_DIR/scale_stats.npy \
-    --coqpit.phoneme_cache_path $RUN_DIR/phoneme_cache
\ No newline at end of file
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ModuleSetupCode.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ModuleSetupCode.c
deleted file mode 100644
index f7af78bfa74d85ee0757c59ca41ab27f9ab62dc1..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ModuleSetupCode.c
+++ /dev/null
@@ -1,1640 +0,0 @@
-/////////////// CModulePreamble ///////////////
-
-#include <stddef.h> /* For offsetof */
-#ifndef offsetof
- #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
-#endif
-
-#if !defined(WIN32) && !defined(MS_WINDOWS)
- #ifndef __stdcall
- #define __stdcall
- #endif
- #ifndef __cdecl
- #define __cdecl
- #endif
- #ifndef __fastcall
- #define __fastcall
- #endif
-#endif
-
-#ifndef DL_IMPORT
- #define DL_IMPORT(t) t
-#endif
-#ifndef DL_EXPORT
- #define DL_EXPORT(t) t
-#endif
-
-// For use in DL_IMPORT/DL_EXPORT macros.
-#define __PYX_COMMA ,
-
-#ifndef HAVE_LONG_LONG
- // CPython has required PY_LONG_LONG support for years, even if HAVE_LONG_LONG is not defined for us
- #if PY_VERSION_HEX >= 0x02070000
- #define HAVE_LONG_LONG
- #endif
-#endif
-
-#ifndef PY_LONG_LONG
- #define PY_LONG_LONG LONG_LONG
-#endif
-
-#ifndef Py_HUGE_VAL
- #define Py_HUGE_VAL HUGE_VAL
-#endif
-
-#ifdef PYPY_VERSION
- #define CYTHON_COMPILING_IN_PYPY 1
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
-
- #undef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 0
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #if PY_VERSION_HEX < 0x03050000
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #undef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #undef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 1
- #undef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 0
- #undef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 0
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC (PYPY_VERSION_HEX >= 0x07030900)
- #endif
-
-#elif defined(PYSTON_VERSION)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 1
- #define CYTHON_COMPILING_IN_CPYTHON 0
-
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 0
- #endif
-
-#else
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 1
-
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- // looks like calling _PyType_Lookup() isn't safe in Py<=2.6/3.1
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
- #define CYTHON_USE_PYTYPE_LOOKUP 1
- #endif
- #if PY_MAJOR_VERSION < 3
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
- #define CYTHON_USE_PYLONG_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2
- // Python 3.11a2 hid _PyLong_FormatAdvancedWriter and _PyFloat_FormatAdvancedWriter
- // therefore disable unicode writer until a better alternative appears
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #elif !defined(CYTHON_USE_UNICODE_WRITER)
- #define CYTHON_USE_UNICODE_WRITER 1
- #endif
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #elif !defined(CYTHON_FAST_THREAD_STATE)
- #define CYTHON_FAST_THREAD_STATE 1
- #endif
- #ifndef CYTHON_FAST_PYCALL
- // Python 3.11 deleted localplus argument from frame object, which is used in our
- // fast_pycall code
- // On Python 3.10 it causes issues when used while profiling/debugging
- #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000)
- #endif
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
- #endif
- #ifndef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #elif !defined(CYTHON_USE_EXC_INFO_STACK)
- #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
- #endif
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 1
- #endif
-#endif
-
-#if !defined(CYTHON_FAST_PYCCALL)
-#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
-#endif
-
-#if CYTHON_USE_PYLONG_INTERNALS
- #if PY_MAJOR_VERSION < 3
- #include "longintrepr.h"
- #endif
- /* These short defines can easily conflict with other code */
- #undef SHIFT
- #undef BASE
- #undef MASK
- /* Compile-time sanity check that these are indeed equal. Github issue #2670. */
- #ifdef SIZEOF_VOID_P
- enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
- #endif
-#endif
-
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-
-// restrict
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-
-// unused attribute
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
-
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
-     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
-#endif
-
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
-#endif
-
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
-#else
-    #include <stdint.h>
-#endif
-
-
-#ifndef CYTHON_FALLTHROUGH
- #if defined(__cplusplus) && __cplusplus >= 201103L
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #elif __has_cpp_attribute(gnu::fallthrough)
- #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
- #endif
- #endif
-
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-
- #if defined(__clang__ ) && defined(__apple_build_version__)
- #if __apple_build_version__ < 7000000 /* Xcode < 7.0 */
- #undef CYTHON_FALLTHROUGH
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-#endif
-
-/////////////// CInitCode ///////////////
-
-// inline attribute
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
-#endif
-
-
-/////////////// CppInitCode ///////////////
-
-#ifndef __cplusplus
- #error "Cython files generated with the C++ option must be compiled with a C++ compiler."
-#endif
-
-// inline attribute
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #else
- #define CYTHON_INLINE inline
- #endif
-#endif
-
-// Work around clang bug http://stackoverflow.com/questions/21847816/c-invoke-nested-template-class-destructor
-template<typename T>
-void __Pyx_call_destructor(T& x) {
- x.~T();
-}
-
-// Used for temporary variables of "reference" type.
-template<typename T>
-class __Pyx_FakeReference {
- public:
- __Pyx_FakeReference() : ptr(NULL) { }
- // __Pyx_FakeReference(T& ref) : ptr(&ref) { }
- // Const version needed as Cython doesn't know about const overloads (e.g. for stl containers).
-    __Pyx_FakeReference(const T& ref) : ptr(const_cast<T*>(&ref)) { }
- T *operator->() { return ptr; }
- T *operator&() { return ptr; }
- operator T&() { return *ptr; }
- // TODO(robertwb): Delegate all operators (or auto-generate unwrapping code where needed).
-    template<typename U> bool operator ==(U other) { return *ptr == other; }
-    template<typename U> bool operator !=(U other) { return *ptr != other; }
- private:
- T *ptr;
-};
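-
-// Illustrative sketch (not part of the original file): __Pyx_FakeReference
-// stores a pointer but behaves like a reference in generated code, e.g.:
-//
-//     int v = 5;
-//     __Pyx_FakeReference<int> r(v);
-//     int w = r;           // operator T&() yields the referenced value
-//     bool eq = (r == 5);  // templated operator==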
-
-
-/////////////// PythonCompatibility ///////////////
-
-#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
- #define Py_OptimizeFlag 0
-#endif
-
-#define __PYX_BUILD_PY_SSIZE_T "n"
-#define CYTHON_FORMAT_SSIZE_T "z"
-
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) \
- PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
- #define __Pyx_DefaultClassType PyClass_Type
-#else
- #define __Pyx_BUILTIN_MODULE_NAME "builtins"
- #define __Pyx_DefaultClassType PyType_Type
-#if PY_VERSION_HEX >= 0x030B00A1
- static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f,
- PyObject *code, PyObject *c, PyObject* n, PyObject *v,
- PyObject *fv, PyObject *cell, PyObject* fn,
- PyObject *name, int fline, PyObject *lnos) {
- // TODO - currently written to be simple and work in limited API etc.
- // A more optimized version would be good
- PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL;
- PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL;
- const char *fn_cstr=NULL;
- const char *name_cstr=NULL;
- PyCodeObject* co=NULL;
- PyObject *type, *value, *traceback;
-
- // we must be able to call this while an exception is happening - thus clear then restore the state
- PyErr_Fetch(&type, &value, &traceback);
-
- if (!(kwds=PyDict_New())) goto end;
- if (!(argcount=PyLong_FromLong(a))) goto end;
- if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end;
- if (!(posonlyargcount=PyLong_FromLong(0))) goto end;
- if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end;
- if (!(kwonlyargcount=PyLong_FromLong(k))) goto end;
- if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end;
- if (!(nlocals=PyLong_FromLong(l))) goto end;
- if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end;
- if (!(stacksize=PyLong_FromLong(s))) goto end;
- if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end;
- if (!(flags=PyLong_FromLong(f))) goto end;
- if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end;
-
- if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end;
- if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end;
- if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end;
-
- if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too;
- if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here
- if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too;
-
- Py_XDECREF((PyObject*)co);
- co = (PyCodeObject*)call_result;
- call_result = NULL;
-
- if (0) {
- cleanup_code_too:
- Py_XDECREF((PyObject*)co);
- co = NULL;
- }
- end:
- Py_XDECREF(kwds);
- Py_XDECREF(argcount);
- Py_XDECREF(posonlyargcount);
- Py_XDECREF(kwonlyargcount);
- Py_XDECREF(nlocals);
- Py_XDECREF(stacksize);
- Py_XDECREF(replace);
- Py_XDECREF(call_result);
- Py_XDECREF(empty);
- if (type) {
- PyErr_Restore(type, value, traceback);
- }
- return co;
- }
-#else
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) \
- PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#endif
- #define __Pyx_DefaultClassType PyType_Type
-#endif
-
-#ifndef Py_TPFLAGS_CHECKTYPES
- #define Py_TPFLAGS_CHECKTYPES 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_INDEX
- #define Py_TPFLAGS_HAVE_INDEX 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
- #define Py_TPFLAGS_HAVE_NEWBUFFER 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_FINALIZE
- #define Py_TPFLAGS_HAVE_FINALIZE 0
-#endif
-
-#ifndef METH_STACKLESS
- // already defined for Stackless Python (all versions) and C-Python >= 3.7
- // value if defined: Stackless Python < 3.6: 0x80 else 0x100
- #define METH_STACKLESS 0
-#endif
-#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
- // new in CPython 3.6, but changed in 3.7 - see
- // positional-only parameters:
- // https://bugs.python.org/issue29464
- // const args:
- // https://bugs.python.org/issue32240
- #ifndef METH_FASTCALL
- #define METH_FASTCALL 0x80
- #endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
- // new in CPython 3.7, used to be old signature of _PyCFunctionFast() in 3.6
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
- Py_ssize_t nargs, PyObject *kwnames);
-#else
- #define __Pyx_PyCFunctionFast _PyCFunctionFast
- #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
-#endif
-#if CYTHON_FAST_PYCCALL
-#define __Pyx_PyFastCFunction_Check(func) \
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
-#else
-#define __Pyx_PyFastCFunction_Check(func) 0
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
- #define PyMem_RawMalloc(n) PyMem_Malloc(n)
- #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
- #define PyMem_RawFree(p) PyMem_Free(p)
-#endif
-
-#if CYTHON_COMPILING_IN_PYSTON
- // special C-API functions only in Pyston
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-
-#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#elif PY_VERSION_HEX >= 0x03060000
- //#elif PY_VERSION_HEX >= 0x03050200
- // Actually added in 3.5.2, but compiling against that does not guarantee that we get imported there.
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
-#endif
-
-// TSS (Thread Specific Storage) API
-#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
-#include "pythread.h"
-#define Py_tss_NEEDS_INIT 0
-typedef int Py_tss_t;
-static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
- *key = PyThread_create_key();
- return 0; /* PyThread_create_key reports success always */
-}
-static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
- Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
- *key = Py_tss_NEEDS_INIT;
- return key;
-}
-static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
- PyObject_Free(key);
-}
-static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
- return *key != Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
- PyThread_delete_key(*key);
- *key = Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
- return PyThread_set_key_value(*key, value);
-}
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
- return PyThread_get_key_value(*key);
-}
-// PyThread_delete_key_value(key) is equivalent to PyThread_set_key_value(key, NULL)
-// PyThread_ReInitTLS() is a no-op
-#endif /* TSS (Thread Specific Storage) API */
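-
-/* Usage sketch (illustrative): with the shim above, code written against the
- * Py3.7+ TSS API runs unchanged on older CPythons:
- *
- *     static Py_tss_t key = Py_tss_NEEDS_INIT;
- *     if (PyThread_tss_create(&key) == 0) {
- *         PyThread_tss_set(&key, some_pointer);
- *         void *p = PyThread_tss_get(&key);
- *     }
- */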
-
-#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
-#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
-#else
-#define __Pyx_PyDict_NewPresized(n) PyDict_New()
-#endif
-
-#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
-#endif
-
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
-#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
-#else
-#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
-#endif
-
-/* new Py3.3 unicode type (PEP 393) */
-#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
- #define CYTHON_PEP393_ENABLED 1
-
- #if defined(PyUnicode_IS_READY)
- #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ? \
- 0 : _PyUnicode_Ready((PyObject *)(op)))
- #else
- // Py3.12 / PEP-623 will remove wstr type unicode strings and all of the PyUnicode_READY() machinery.
- #define __Pyx_PyUnicode_READY(op) (0)
- #endif
-
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
- #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
- #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
- #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
- #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE)
- #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000
- // Avoid calling deprecated C-API functions in Py3.9+ that PEP-623 schedules for removal in Py3.12.
- // https://www.python.org/dev/peps/pep-0623/
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length))
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
- #endif
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u))
- #endif
-#else
- #define CYTHON_PEP393_ENABLED 0
- #define PyUnicode_1BYTE_KIND 1
- #define PyUnicode_2BYTE_KIND 2
- #define PyUnicode_4BYTE_KIND 4
- #define __Pyx_PyUnicode_READY(op) (0)
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
- #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
- #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
- /* (void)(k) => avoid unused variable warning due to macro: */
- #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
-#else
- #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ? \
- PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
- #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
- #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
-#endif
-
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
- #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
-#endif
-
-// ("..." % x) must call PyNumber_Remainder() if x is a string subclass that implements "__rmod__()".
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
-
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
-#else
- #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
-#endif
-
-#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
- #define PyObject_ASCII(o) PyObject_Repr(o)
-#endif
-
-#if PY_MAJOR_VERSION >= 3
- #define PyBaseString_Type PyUnicode_Type
- #define PyStringObject PyUnicodeObject
- #define PyString_Type PyUnicode_Type
- #define PyString_Check PyUnicode_Check
- #define PyString_CheckExact PyUnicode_CheckExact
- // PyPy3 used to define "PyObject_Unicode"
-#ifndef PyObject_Unicode
- #define PyObject_Unicode PyObject_Str
-#endif
-#endif
-
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
- #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
-#else
- #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
- #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
-#endif
-
-#ifndef PySet_CheckExact
- #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
-#endif
-
-
-#if PY_VERSION_HEX >= 0x030900A4
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size)
-#else
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size)
-#endif
-
-#if CYTHON_ASSUME_SAFE_MACROS
- #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
-#else
- // NOTE: might fail with exception => check for -1
- #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
-#endif
-
-#if PY_MAJOR_VERSION >= 3
- #define PyIntObject PyLongObject
- #define PyInt_Type PyLong_Type
- #define PyInt_Check(op) PyLong_Check(op)
- #define PyInt_CheckExact(op) PyLong_CheckExact(op)
- #define PyInt_FromString PyLong_FromString
- #define PyInt_FromUnicode PyLong_FromUnicode
- #define PyInt_FromLong PyLong_FromLong
- #define PyInt_FromSize_t PyLong_FromSize_t
- #define PyInt_FromSsize_t PyLong_FromSsize_t
- #define PyInt_AsLong PyLong_AsLong
- #define PyInt_AS_LONG PyLong_AS_LONG
- #define PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t
-#endif
-
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-
-// backport of PyAsyncMethods from Py3.5 to older Py3.x versions
-// (mis-)using the "tp_reserved" type slot which is re-activated as "tp_as_async" in Py3.5
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-
-/////////////// SmallCodeConfig.proto ///////////////
-
-#ifndef CYTHON_SMALL_CODE
-#if defined(__clang__)
- #define CYTHON_SMALL_CODE
-#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
- #define CYTHON_SMALL_CODE __attribute__((cold))
-#else
- #define CYTHON_SMALL_CODE
-#endif
-#endif
-
-
-/////////////// PyModInitFuncType.proto ///////////////
-
-#ifndef CYTHON_NO_PYINIT_EXPORT
-#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC
-
-#elif PY_MAJOR_VERSION < 3
-// Py2: define this to void manually because PyMODINIT_FUNC adds __declspec(dllexport) to its definition.
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" void
-#else
-#define __Pyx_PyMODINIT_FUNC void
-#endif
-
-#else
-// Py3+: define this to PyObject * manually because PyMODINIT_FUNC adds __declspec(dllexport) to its definition.
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" PyObject *
-#else
-#define __Pyx_PyMODINIT_FUNC PyObject *
-#endif
-#endif
-
-
-/////////////// FastTypeChecks.proto ///////////////
-
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);/*proto*/
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);/*proto*/
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);/*proto*/
-#else
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
-#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
-#endif
-
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-
-/////////////// FastTypeChecks ///////////////
-//@requires: Exceptions.c::PyThreadStateGet
-//@requires: Exceptions.c::PyErrFetchRestore
-
-#if CYTHON_COMPILING_IN_CPYTHON
-static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
- while (a) {
- a = a->tp_base;
- if (a == b)
- return 1;
- }
- return b == &PyBaseObject_Type;
-}
-
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {
- PyObject *mro;
- if (a == b) return 1;
- mro = a->tp_mro;
- if (likely(mro)) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(mro);
- for (i = 0; i < n; i++) {
- if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)
- return 1;
- }
- return 0;
- }
- // should only get here for incompletely initialised types, i.e. never under normal usage patterns
- return __Pyx_InBases(a, b);
-}
-
-
-#if PY_MAJOR_VERSION == 2
-static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {
- // PyObject_IsSubclass() can recurse and therefore is not safe
- PyObject *exception, *value, *tb;
- int res;
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ErrFetch(&exception, &value, &tb);
-
- res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;
- // This function must not fail, so print the error here (which also clears it)
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- if (!res) {
- res = PyObject_IsSubclass(err, exc_type2);
- // This function must not fail, so print the error here (which also clears it)
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- }
-
- __Pyx_ErrRestore(exception, value, tb);
- return res;
-}
-#else
-static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {
- int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;
- if (!res) {
- res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);
- }
- return res;
-}
-#endif
-
-// so far, we only call PyErr_GivenExceptionMatches() with an exception type (not instance) as first argument
-// => optimise for that case
-
-static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- assert(PyExceptionClass_Check(exc_type));
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- // the tighter subtype checking in Py3 allows faster out-of-order comparison
-    for (i=0; i<n; i++) {
-        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
-    }
-#endif
-    for (i=0; i<n; i++) {
-        PyObject *t = PyTuple_GET_ITEM(tuple, i);
-        #if PY_MAJOR_VERSION < 3
-        if (likely(exc_type == t)) return 1;
-        #endif
-        if (likely(PyExceptionClass_Check(t))) {
-            if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
-        }
-    }
-    return 0;
-}
-
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
-    if (likely(err == exc_type)) return 1;
-    if (likely(PyExceptionClass_Check(err))) {
-        if (likely(PyExceptionClass_Check(exc_type))) {
-            return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
-        } else if (likely(PyTuple_Check(exc_type))) {
-            return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
-        }
-    }
-    return PyErr_GivenExceptionMatches(err, exc_type);
-}
-
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
-    // Only used internally with known exception types => pure safety check assertions.
- assert(PyExceptionClass_Check(exc_type1));
- assert(PyExceptionClass_Check(exc_type2));
- if (likely(err == exc_type1 || err == exc_type2)) return 1;
- if (likely(PyExceptionClass_Check(err))) {
- return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
- }
- return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));
-}
-
-#endif
-
-
-/////////////// MathInitCode ///////////////
-
-#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS)
- #if !defined(_USE_MATH_DEFINES)
- #define _USE_MATH_DEFINES
- #endif
-#endif
-#include <math.h>
-
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- // Initialize NaN. The sign is irrelevant, an exponent with all bits 1 and
- // a nonzero mantissa means NaN. If the first bit in the mantissa is 1, it is
- // a quiet NaN.
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-
-/////////////// UtilityFunctionPredeclarations.proto ///////////////
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/
-
-/////////////// ForceInitThreads.proto ///////////////
-//@proto_block: utility_code_proto_before_types
-
-#ifndef __PYX_FORCE_INIT_THREADS
- #define __PYX_FORCE_INIT_THREADS 0
-#endif
-
-/////////////// InitThreads.init ///////////////
-
-#if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0
-PyEval_InitThreads();
-#endif
-
-
-/////////////// ModuleCreationPEP489 ///////////////
-//@substitute: naming
-
-//#if CYTHON_PEP489_MULTI_PHASE_INIT
-static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
- #if PY_VERSION_HEX >= 0x030700A1
- static PY_INT64_T main_interpreter_id = -1;
- PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);
- if (main_interpreter_id == -1) {
- main_interpreter_id = current_id;
- return (unlikely(current_id == -1)) ? -1 : 0;
- } else if (unlikely(main_interpreter_id != current_id))
-
- #else
- static PyInterpreterState *main_interpreter = NULL;
- PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;
- if (!main_interpreter) {
- main_interpreter = current_interpreter;
- } else if (unlikely(main_interpreter != current_interpreter))
- #endif
-
- {
- PyErr_SetString(
- PyExc_ImportError,
- "Interpreter change detected - this module can only be loaded into one interpreter per process.");
- return -1;
- }
- return 0;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {
- PyObject *value = PyObject_GetAttrString(spec, from_name);
- int result = 0;
- if (likely(value)) {
- if (allow_none || value != Py_None) {
- result = PyDict_SetItemString(moddict, to_name, value);
- }
- Py_DECREF(value);
- } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Clear();
- } else {
- result = -1;
- }
- return result;
-}
-
-static CYTHON_SMALL_CODE PyObject* ${pymodule_create_func_cname}(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {
- PyObject *module = NULL, *moddict, *modname;
-
- // For now, we only have exactly one module instance.
- if (__Pyx_check_single_interpreter())
- return NULL;
- if (${module_cname})
- return __Pyx_NewRef(${module_cname});
-
- modname = PyObject_GetAttrString(spec, "name");
- if (unlikely(!modname)) goto bad;
-
- module = PyModule_NewObject(modname);
- Py_DECREF(modname);
- if (unlikely(!module)) goto bad;
-
- moddict = PyModule_GetDict(module);
- if (unlikely(!moddict)) goto bad;
- // moddict is a borrowed reference
-
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad;
-
- return module;
-bad:
- Py_XDECREF(module);
- return NULL;
-}
-//#endif
-
-
-/////////////// CodeObjectCache.proto ///////////////
-
-typedef struct {
- PyCodeObject* code_object;
- int code_line;
-} __Pyx_CodeObjectCacheEntry;
-
-struct __Pyx_CodeObjectCache {
- int count;
- int max_count;
- __Pyx_CodeObjectCacheEntry* entries;
-};
-
-static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
-
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
-static PyCodeObject *__pyx_find_code_object(int code_line);
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
-
-/////////////// CodeObjectCache ///////////////
-// Note that errors are simply ignored in the code below.
-// This is just a cache; if a lookup or insertion fails, so what?
-
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {
- int start = 0, mid = 0, end = count - 1;
- if (end >= 0 && code_line > entries[end].code_line) {
- return count;
- }
- while (start < end) {
- mid = start + (end - start) / 2;
- if (code_line < entries[mid].code_line) {
- end = mid;
- } else if (code_line > entries[mid].code_line) {
- start = mid + 1;
- } else {
- return mid;
- }
- }
- if (code_line <= entries[mid].code_line) {
- return mid;
- } else {
- return mid + 1;
- }
-}
-
-static PyCodeObject *__pyx_find_code_object(int code_line) {
- PyCodeObject* code_object;
- int pos;
- if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {
- return NULL;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {
- return NULL;
- }
- code_object = __pyx_code_cache.entries[pos].code_object;
- Py_INCREF(code_object);
- return code_object;
-}
-
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {
- int pos, i;
- __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;
- if (unlikely(!code_line)) {
- return;
- }
- if (unlikely(!entries)) {
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));
- if (likely(entries)) {
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = 64;
- __pyx_code_cache.count = 1;
- entries[0].code_line = code_line;
- entries[0].code_object = code_object;
- Py_INCREF(code_object);
- }
- return;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {
- PyCodeObject* tmp = entries[pos].code_object;
- entries[pos].code_object = code_object;
- Py_DECREF(tmp);
- return;
- }
- if (__pyx_code_cache.count == __pyx_code_cache.max_count) {
- int new_max = __pyx_code_cache.max_count + 64;
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(
- __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry));
- if (unlikely(!entries)) {
- return;
- }
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = new_max;
- }
- for (i=__pyx_code_cache.count; i>pos; i--) {
- entries[i] = entries[i-1];
- }
- entries[pos].code_line = code_line;
- entries[pos].code_object = code_object;
- __pyx_code_cache.count++;
- Py_INCREF(code_object);
-}
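-
-/* Note (illustrative): the cache stays sorted by code_line so that
- * __pyx_bisect_code_objects() can binary-search it; insertion shifts the
- * tail one slot right and grows the backing array in 64-entry steps. */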
-
-/////////////// CodeObjectCache.cleanup ///////////////
-
- if (__pyx_code_cache.entries) {
- __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;
- int i, count = __pyx_code_cache.count;
- __pyx_code_cache.count = 0;
- __pyx_code_cache.max_count = 0;
- __pyx_code_cache.entries = NULL;
-      for (i=0; i<count; i++) {
-          Py_DECREF(entries[i].code_object);
-      }
-      PyMem_Free(entries);
-  }
-
-/////////////// CheckBinaryVersion.proto ///////////////
-
-static int __Pyx_check_binary_version(void);
-
-/////////////// CheckBinaryVersion ///////////////
-
-static int __Pyx_check_binary_version(void) {
-    char ctversion[5];
-    int same=1, i, found_dot;
-    const char* rt_from_call = Py_GetVersion();
-    PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION);
-
-    found_dot = 0;
-    for (i = 0; i < 4; i++) {
-        if (!ctversion[i]) {
-            // if they are the same, the numbers should match
-            same = (rt_from_call[i] < '0' || rt_from_call[i] > '9');
- break;
- }
- if (rt_from_call[i] != ctversion[i]) {
- same = 0;
- break;
- }
- }
-
- if (!same) {
- char rtversion[5] = {'\0'};
- // copy the runtime-version for the error message
- char message[200];
- for (i=0; i<4; ++i) {
- if (rt_from_call[i] == '.') {
- if (found_dot) break;
- found_dot = 1;
- } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') {
- break;
- }
- rtversion[i] = rt_from_call[i];
- }
- PyOS_snprintf(message, sizeof(message),
- "compiletime version %s of module '%.100s' "
- "does not match runtime version %s",
- ctversion, __Pyx_MODULE_NAME, rtversion);
- return PyErr_WarnEx(NULL, message, 1);
- }
- return 0;
-}
-
-/////////////// IsLittleEndian.proto ///////////////
-
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
-
-/////////////// IsLittleEndian ///////////////
-
-static CYTHON_INLINE int __Pyx_Is_Little_Endian(void)
-{
- union {
- uint32_t u32;
- uint8_t u8[4];
- } S;
- S.u32 = 0x01020304;
- return S.u8[0] == 4;
-}
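-
-/* Illustrative note: the union aliases the four bytes of 0x01020304; on a
- * little-endian machine the lowest-addressed byte holds the least significant
- * byte 0x04, so S.u8[0] == 4 yields 1 there and 0 on big-endian machines. */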
-
-/////////////// Refnanny.proto ///////////////
-
-#ifndef CYTHON_REFNANNY
- #define CYTHON_REFNANNY 0
-#endif
-
-#if CYTHON_REFNANNY
- typedef struct {
- void (*INCREF)(void*, PyObject*, int);
- void (*DECREF)(void*, PyObject*, int);
- void (*GOTREF)(void*, PyObject*, int);
- void (*GIVEREF)(void*, PyObject*, int);
- void* (*SetupContext)(const char*, int, const char*);
- void (*FinishContext)(void**);
- } __Pyx_RefNannyAPIStruct;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); /*proto*/
- #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
-#ifdef WITH_THREAD
- #define __Pyx_RefNannySetupContext(name, acquire_gil) \
- if (acquire_gil) { \
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure(); \
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__); \
- PyGILState_Release(__pyx_gilstate_save); \
- } else { \
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__); \
- }
-#else
- #define __Pyx_RefNannySetupContext(name, acquire_gil) \
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
-#endif
- #define __Pyx_RefNannyFinishContext() \
- __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
- #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
- #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
- #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
- #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
-#else
- #define __Pyx_RefNannyDeclarations
- #define __Pyx_RefNannySetupContext(name, acquire_gil)
- #define __Pyx_RefNannyFinishContext()
- #define __Pyx_INCREF(r) Py_INCREF(r)
- #define __Pyx_DECREF(r) Py_DECREF(r)
- #define __Pyx_GOTREF(r)
- #define __Pyx_GIVEREF(r)
- #define __Pyx_XINCREF(r) Py_XINCREF(r)
- #define __Pyx_XDECREF(r) Py_XDECREF(r)
- #define __Pyx_XGOTREF(r)
- #define __Pyx_XGIVEREF(r)
-#endif /* CYTHON_REFNANNY */
-
-#define __Pyx_XDECREF_SET(r, v) do { \
- PyObject *tmp = (PyObject *) r; \
- r = v; __Pyx_XDECREF(tmp); \
- } while (0)
-#define __Pyx_DECREF_SET(r, v) do { \
- PyObject *tmp = (PyObject *) r; \
- r = v; __Pyx_DECREF(tmp); \
- } while (0)
-
-#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
-
-/////////////// Refnanny ///////////////
-
-#if CYTHON_REFNANNY
-static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {
- PyObject *m = NULL, *p = NULL;
- void *r = NULL;
- m = PyImport_ImportModule(modname);
- if (!m) goto end;
- p = PyObject_GetAttrString(m, "RefNannyAPI");
- if (!p) goto end;
- r = PyLong_AsVoidPtr(p);
-end:
- Py_XDECREF(p);
- Py_XDECREF(m);
- return (__Pyx_RefNannyAPIStruct *)r;
-}
-#endif /* CYTHON_REFNANNY */
-
-
-/////////////// ImportRefnannyAPI ///////////////
-
-#if CYTHON_REFNANNY
-__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
-if (!__Pyx_RefNanny) {
- PyErr_Clear();
- __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
- if (!__Pyx_RefNanny)
- Py_FatalError("failed to import 'refnanny' module");
-}
-#endif
-
-
-/////////////// RegisterModuleCleanup.proto ///////////////
-//@substitute: naming
-
-static void ${cleanup_cname}(PyObject *self); /*proto*/
-
-#if PY_MAJOR_VERSION < 3 || CYTHON_COMPILING_IN_PYPY
-static int __Pyx_RegisterCleanup(void); /*proto*/
-#else
-#define __Pyx_RegisterCleanup() (0)
-#endif
-
-/////////////// RegisterModuleCleanup ///////////////
-//@substitute: naming
-
-#if PY_MAJOR_VERSION < 3 || CYTHON_COMPILING_IN_PYPY
-static PyObject* ${cleanup_cname}_atexit(PyObject *module, CYTHON_UNUSED PyObject *unused) {
- ${cleanup_cname}(module);
- Py_INCREF(Py_None); return Py_None;
-}
-
-static int __Pyx_RegisterCleanup(void) {
- // Don't use Py_AtExit because that has a 32-call limit and is called
- // after python finalization.
- // Also, we try to prepend the cleanup function to "atexit._exithandlers"
- // in Py2 because CPython runs them last-to-first. Being run last allows
- // user exit code to run before us that may depend on the globals
- // and cached objects that we are about to clean up.
-
- static PyMethodDef cleanup_def = {
- "__cleanup", (PyCFunction)${cleanup_cname}_atexit, METH_NOARGS, 0};
-
- PyObject *cleanup_func = 0;
- PyObject *atexit = 0;
- PyObject *reg = 0;
- PyObject *args = 0;
- PyObject *res = 0;
- int ret = -1;
-
- cleanup_func = PyCFunction_New(&cleanup_def, 0);
- if (!cleanup_func)
- goto bad;
-
- atexit = PyImport_ImportModule("atexit");
- if (!atexit)
- goto bad;
- reg = PyObject_GetAttrString(atexit, "_exithandlers");
- if (reg && PyList_Check(reg)) {
- PyObject *a, *kw;
- a = PyTuple_New(0);
- kw = PyDict_New();
- if (!a || !kw) {
- Py_XDECREF(a);
- Py_XDECREF(kw);
- goto bad;
- }
- args = PyTuple_Pack(3, cleanup_func, a, kw);
- Py_DECREF(a);
- Py_DECREF(kw);
- if (!args)
- goto bad;
- ret = PyList_Insert(reg, 0, args);
- } else {
- if (!reg)
- PyErr_Clear();
- Py_XDECREF(reg);
- reg = PyObject_GetAttrString(atexit, "register");
- if (!reg)
- goto bad;
- args = PyTuple_Pack(1, cleanup_func);
- if (!args)
- goto bad;
- res = PyObject_CallObject(reg, args);
- if (!res)
- goto bad;
- ret = 0;
- }
-bad:
- Py_XDECREF(cleanup_func);
- Py_XDECREF(atexit);
- Py_XDECREF(reg);
- Py_XDECREF(args);
- Py_XDECREF(res);
- return ret;
-}
-#endif
-
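The Py2 branch of `__Pyx_RegisterCleanup` above prepends to `atexit._exithandlers` so the module's cleanup runs after user exit handlers. A minimal Python-level sketch of the same trick (the `cleanup` function stands in for `${cleanup_cname}`; `_exithandlers` is a private CPython 2 detail):

```python
import atexit

def cleanup():
    # Stand-in for the generated module cleanup: release globals and caches.
    print("module cleanup")

if hasattr(atexit, "_exithandlers"):
    # CPython 2 runs handlers last-to-first, so prepending makes us run last,
    # after any user exit code that may still need our globals.
    atexit._exithandlers.insert(0, (cleanup, (), {}))
else:
    atexit.register(cleanup)
```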
-/////////////// FastGil.init ///////////////
-#ifdef WITH_THREAD
-__Pyx_FastGilFuncInit();
-#endif
-
-/////////////// NoFastGil.proto ///////////////
-//@proto_block: utility_code_proto_before_types
-
-#define __Pyx_PyGILState_Ensure PyGILState_Ensure
-#define __Pyx_PyGILState_Release PyGILState_Release
-#define __Pyx_FastGIL_Remember()
-#define __Pyx_FastGIL_Forget()
-#define __Pyx_FastGilFuncInit()
-
-/////////////// FastGil.proto ///////////////
-//@proto_block: utility_code_proto_before_types
-
-struct __Pyx_FastGilVtab {
- PyGILState_STATE (*Fast_PyGILState_Ensure)(void);
- void (*Fast_PyGILState_Release)(PyGILState_STATE oldstate);
- void (*FastGIL_Remember)(void);
- void (*FastGIL_Forget)(void);
-};
-
-static void __Pyx_FastGIL_Noop(void) {}
-static struct __Pyx_FastGilVtab __Pyx_FastGilFuncs = {
- PyGILState_Ensure,
- PyGILState_Release,
- __Pyx_FastGIL_Noop,
- __Pyx_FastGIL_Noop
-};
-
-static void __Pyx_FastGilFuncInit(void);
-
-#define __Pyx_PyGILState_Ensure __Pyx_FastGilFuncs.Fast_PyGILState_Ensure
-#define __Pyx_PyGILState_Release __Pyx_FastGilFuncs.Fast_PyGILState_Release
-#define __Pyx_FastGIL_Remember __Pyx_FastGilFuncs.FastGIL_Remember
-#define __Pyx_FastGIL_Forget __Pyx_FastGilFuncs.FastGIL_Forget
-
-#ifdef WITH_THREAD
- #ifndef CYTHON_THREAD_LOCAL
- #if __STDC_VERSION__ >= 201112
- #define CYTHON_THREAD_LOCAL _Thread_local
- #elif defined(__GNUC__)
- #define CYTHON_THREAD_LOCAL __thread
- #elif defined(_MSC_VER)
- #define CYTHON_THREAD_LOCAL __declspec(thread)
- #endif
- #endif
-#endif
-
-/////////////// FastGil ///////////////
-//@requires: CommonStructures.c::FetchCommonPointer
-// The implementations of PyGILState_Ensure/Release call PyThread_get_key_value
-// several times, which turns out to be quite slow (slower in fact than
-// acquiring the GIL itself). Simply storing it in a thread local for the
-// common case is much faster.
-// To make optimal use of this thread local, we attempt to share it between
-// modules.
-
-#define __Pyx_FastGIL_ABI_module "_cython_" CYTHON_ABI
-#define __Pyx_FastGIL_PyCapsuleName "FastGilFuncs"
-#define __Pyx_FastGIL_PyCapsule \
- __Pyx_FastGIL_ABI_module "." __Pyx_FastGIL_PyCapsuleName
-
-#if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_THREAD_LOCAL
-#endif
-
-#ifdef CYTHON_THREAD_LOCAL
-
-#include "pythread.h"
-#include "pystate.h"
-
-static CYTHON_THREAD_LOCAL PyThreadState *__Pyx_FastGil_tcur = NULL;
-static CYTHON_THREAD_LOCAL int __Pyx_FastGil_tcur_depth = 0;
-static int __Pyx_FastGil_autoTLSkey = -1;
-
-static CYTHON_INLINE void __Pyx_FastGIL_Remember0(void) {
- ++__Pyx_FastGil_tcur_depth;
-}
-
-static CYTHON_INLINE void __Pyx_FastGIL_Forget0(void) {
- if (--__Pyx_FastGil_tcur_depth == 0) {
- __Pyx_FastGil_tcur = NULL;
- }
-}
-
-static CYTHON_INLINE PyThreadState *__Pyx_FastGil_get_tcur(void) {
- PyThreadState *tcur = __Pyx_FastGil_tcur;
- if (tcur == NULL) {
- tcur = __Pyx_FastGil_tcur = (PyThreadState*)PyThread_get_key_value(__Pyx_FastGil_autoTLSkey);
- }
- return tcur;
-}
-
-static PyGILState_STATE __Pyx_FastGil_PyGILState_Ensure(void) {
- int current;
- PyThreadState *tcur;
- __Pyx_FastGIL_Remember0();
- tcur = __Pyx_FastGil_get_tcur();
- if (tcur == NULL) {
- // Uninitialized, need to initialize now.
- return PyGILState_Ensure();
- }
- current = tcur == __Pyx_PyThreadState_Current;
- if (current == 0) {
- PyEval_RestoreThread(tcur);
- }
- ++tcur->gilstate_counter;
- return current ? PyGILState_LOCKED : PyGILState_UNLOCKED;
-}
-
-static void __Pyx_FastGil_PyGILState_Release(PyGILState_STATE oldstate) {
- PyThreadState *tcur = __Pyx_FastGil_get_tcur();
- __Pyx_FastGIL_Forget0();
- if (tcur->gilstate_counter == 1) {
- // This is the last lock, do all the cleanup as well.
- PyGILState_Release(oldstate);
- } else {
- --tcur->gilstate_counter;
- if (oldstate == PyGILState_UNLOCKED) {
- PyEval_SaveThread();
- }
- }
-}
-
-static void __Pyx_FastGilFuncInit0(void) {
- /* Try to detect autoTLSkey. */
- int key;
- void* this_thread_state = (void*) PyGILState_GetThisThreadState();
- for (key = 0; key < 100; key++) {
- if (PyThread_get_key_value(key) == this_thread_state) {
- __Pyx_FastGil_autoTLSkey = key;
- break;
- }
- }
- if (__Pyx_FastGil_autoTLSkey != -1) {
- PyObject* capsule = NULL;
- PyObject* abi_module = NULL;
- __Pyx_PyGILState_Ensure = __Pyx_FastGil_PyGILState_Ensure;
- __Pyx_PyGILState_Release = __Pyx_FastGil_PyGILState_Release;
- __Pyx_FastGIL_Remember = __Pyx_FastGIL_Remember0;
- __Pyx_FastGIL_Forget = __Pyx_FastGIL_Forget0;
- capsule = PyCapsule_New(&__Pyx_FastGilFuncs, __Pyx_FastGIL_PyCapsule, NULL);
- abi_module = PyImport_AddModule(__Pyx_FastGIL_ABI_module);
- if (capsule && abi_module) {
- PyObject_SetAttrString(abi_module, __Pyx_FastGIL_PyCapsuleName, capsule);
- }
- Py_XDECREF(capsule);
- }
-}
-
-#else
-
-static void __Pyx_FastGilFuncInit0(void) {
- CYTHON_UNUSED void* force_use = (void*)&__Pyx_FetchCommonPointer;
-}
-
-#endif
-
-static void __Pyx_FastGilFuncInit(void) {
-#if PY_VERSION_HEX >= 0x02070000
- struct __Pyx_FastGilVtab* shared = (struct __Pyx_FastGilVtab*)PyCapsule_Import(__Pyx_FastGIL_PyCapsule, 1);
-#else
- struct __Pyx_FastGilVtab* shared = NULL;
-#endif
- if (shared) {
- __Pyx_FastGilFuncs = *shared;
- } else {
- PyErr_Clear();
- __Pyx_FastGilFuncInit0();
- }
-}
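At the Python level, the FastGil optimization above amounts to caching an expensive per-thread lookup (`PyThread_get_key_value`) in a thread-local (`__Pyx_FastGil_tcur`). A rough analogue, illustrative only since the real savings happen in C:

```python
import threading

_tls = threading.local()

def expensive_lookup():
    # Stand-in for PyThread_get_key_value(autoTLSkey): pretend this is slow.
    return threading.current_thread().ident

def get_thread_state():
    state = getattr(_tls, "state", None)
    if state is None:
        # Slow path taken once per thread; later calls hit the cached value.
        state = _tls.state = expensive_lookup()
    return state
```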
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_compat.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_compat.py
deleted file mode 100644
index 7062be5daf03181b94a445d119ff66a1b1ddde2f..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_compat.py
+++ /dev/null
@@ -1,218 +0,0 @@
-from abc import ABCMeta, abstractmethod
-from contextlib import AbstractContextManager
-from types import TracebackType
-from typing import (
- TYPE_CHECKING,
- Any,
- AsyncContextManager,
- Callable,
- ContextManager,
- Generator,
- Generic,
- Iterable,
- List,
- Optional,
- Tuple,
- Type,
- TypeVar,
- Union,
- overload,
-)
-from warnings import warn
-
-if TYPE_CHECKING:
- from ._testing import TaskInfo
-else:
- TaskInfo = object
-
-T = TypeVar("T")
-AnyDeprecatedAwaitable = Union[
- "DeprecatedAwaitable",
- "DeprecatedAwaitableFloat",
- "DeprecatedAwaitableList[T]",
- TaskInfo,
-]
-
-
-@overload
-async def maybe_async(__obj: TaskInfo) -> TaskInfo:
- ...
-
-
-@overload
-async def maybe_async(__obj: "DeprecatedAwaitableFloat") -> float:
- ...
-
-
-@overload
-async def maybe_async(__obj: "DeprecatedAwaitableList[T]") -> List[T]:
- ...
-
-
-@overload
-async def maybe_async(__obj: "DeprecatedAwaitable") -> None:
- ...
-
-
-async def maybe_async(
- __obj: "AnyDeprecatedAwaitable[T]",
-) -> Union[TaskInfo, float, List[T], None]:
- """
- Await on the given object if necessary.
-
- This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
- methods were converted from coroutine functions into regular functions.
-
- Do **not** try to use this for any other purpose!
-
- :return: the result of awaiting on the object if coroutine, or the object itself otherwise
-
- .. versionadded:: 2.2
-
- """
- return __obj._unwrap()
-
-
-class _ContextManagerWrapper:
- def __init__(self, cm: ContextManager[T]):
- self._cm = cm
-
- async def __aenter__(self) -> T:
- return self._cm.__enter__()
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- return self._cm.__exit__(exc_type, exc_val, exc_tb)
-
-
-def maybe_async_cm(
- cm: Union[ContextManager[T], AsyncContextManager[T]]
-) -> AsyncContextManager[T]:
- """
- Wrap a regular context manager as an async one if necessary.
-
- This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
- methods were changed to return regular context managers instead of async ones.
-
- :param cm: a regular or async context manager
- :return: an async context manager
-
- .. versionadded:: 2.2
-
- """
- if not isinstance(cm, AbstractContextManager):
-        raise TypeError("Given object is not a context manager")
-
- return _ContextManagerWrapper(cm)
-
-
-def _warn_deprecation(
- awaitable: "AnyDeprecatedAwaitable[Any]", stacklevel: int = 1
-) -> None:
- warn(
- f'Awaiting on {awaitable._name}() is deprecated. Use "await '
-        f'anyio.maybe_async({awaitable._name}(...))" if you have to support both AnyIO 2.x '
- f'and 3.x, or just remove the "await" if you are completely migrating to AnyIO 3+.',
- DeprecationWarning,
- stacklevel=stacklevel + 1,
- )
-
-
-class DeprecatedAwaitable:
- def __init__(self, func: Callable[..., "DeprecatedAwaitable"]):
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, None]:
- _warn_deprecation(self)
- if False:
- yield
-
- def __reduce__(self) -> Tuple[Type[None], Tuple[()]]:
- return type(None), ()
-
- def _unwrap(self) -> None:
- return None
-
-
-class DeprecatedAwaitableFloat(float):
- def __new__(
- cls, x: float, func: Callable[..., "DeprecatedAwaitableFloat"]
- ) -> "DeprecatedAwaitableFloat":
- return super().__new__(cls, x)
-
- def __init__(self, x: float, func: Callable[..., "DeprecatedAwaitableFloat"]):
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, float]:
- _warn_deprecation(self)
- if False:
- yield
-
- return float(self)
-
- def __reduce__(self) -> Tuple[Type[float], Tuple[float]]:
- return float, (float(self),)
-
- def _unwrap(self) -> float:
- return float(self)
-
-
-class DeprecatedAwaitableList(List[T]):
- def __init__(
- self,
- iterable: Iterable[T] = (),
- *,
- func: Callable[..., "DeprecatedAwaitableList[T]"],
- ):
- super().__init__(iterable)
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, List[T]]:
- _warn_deprecation(self)
- if False:
- yield
-
- return list(self)
-
- def __reduce__(self) -> Tuple[Type[List[T]], Tuple[List[T]]]:
- return list, (list(self),)
-
- def _unwrap(self) -> List[T]:
- return list(self)
-
-
-class DeprecatedAsyncContextManager(Generic[T], metaclass=ABCMeta):
- @abstractmethod
- def __enter__(self) -> T:
- pass
-
- @abstractmethod
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- pass
-
- async def __aenter__(self) -> T:
- warn(
- f"Using {self.__class__.__name__} as an async context manager has been deprecated. "
- f'Use "async with anyio.maybe_async_cm(yourcontextmanager) as foo:" if you have to '
- f'support both AnyIO 2.x and 3.x, or just remove the "async" from "async with" if '
- f"you are completely migrating to AnyIO 3+.",
- DeprecationWarning,
- )
- return self.__enter__()
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- return self.__exit__(exc_type, exc_val, exc_tb)
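A sketch of how downstream code might use these shims to run unchanged on both AnyIO 2.x and 3.x (`anyio.current_time` and `anyio.fail_after` are the kind of calls the docstrings above have in mind; treat this as an illustration, not canonical anyio usage):

```python
import anyio
from anyio import maybe_async, maybe_async_cm

async def main() -> None:
    # current_time() was a coroutine function in 2.x, a plain function in 3.x.
    now = await maybe_async(anyio.current_time())
    print("time:", now)

    # fail_after() stopped returning an async context manager in 3.x.
    async with maybe_async_cm(anyio.fail_after(5)):
        await anyio.sleep(0.1)

anyio.run(main)
```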
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/pretraining.md b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/pretraining.md
deleted file mode 100644
index 8f8e6d0facaa47141342294d7eb26c4232a9677b..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/pretraining.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Pretraining
-
-(If you are new to the ideas of `mmpt.processors`, see [README](README.md) first.)
-We mostly use the [howto100M](https://github.com/antoine77340/howto100m) dataset for pretraining (other datasets are coming). So you are less likely to write a new `MetaProcessor`, `VideoProcessor` or `TextProcessor`, and more likely to work on a new `Aligner`, a new model, and a new loss.
-
-### Data Sharding
-Pretraining on Howto100M is heavy on IO since we have millions of videos and captions on disk that cannot fit into memory.
-It is desirable to have an optimized preprocessing step before the actual dataloading.
-
-We support data sharding to pack multiple videos into shards of training data, for both videos and captions (see [dataset](DATASET.md) for preprocessing).
-These shards are memory-mapped to reduce the frequency of IO access on millions of small files (see the processors starting with `Sharded*`; a rough sketch of the idea follows below).
-This is the default configuration for the how2 dataset, `projects/task/how2.yaml`.
-
-Great thanks to Dmytro Okhonko for sharing the code from MARGE project.
-
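As a rough sketch of the memory-mapping idea above (the shard layout here is hypothetical; the actual `Sharded*` processors live under `mmpt/processors`):

```python
import numpy as np

# Hypothetical shard: one flat array of caption token ids plus an offset index.
tokens = np.load("how2_captions_shard0.npy", mmap_mode="r")   # memory-mapped
offsets = np.load("how2_captions_offsets0.npy")               # shape [num_videos + 1]

def get_caption_tokens(video_idx: int) -> np.ndarray:
    # Slicing a memmap only touches the pages actually read, so millions of
    # captions can live in one file without millions of open() calls.
    return tokens[offsets[video_idx]:offsets[video_idx + 1]]
```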
-### Training
-Pretraining on Howto100M is expected to run on one or more nodes, where each node has 8 GPUs with 32 GB of memory.
-Launching a pretraining run on MFM+MLM can be done via:
-```bash
-python locallaunch.py projects/mfmmlm/how2.yaml
-```
-
-### Pre-training with a Retrieval Model (VideoCLIP)
-This project now supports alternately running a retrieval model and pre-training.
-We implement a basic retrieval model that is built on the hidden states of a video and faiss.
-
-You may need to install faiss via `conda install faiss-cpu -c pytorch`.
-
-Right now, the hidden states of a video are computed as the average of the pooled visual/text hidden states of its 8 clips.
-See `mmpt/tasks/retritask.py` for more details.
-The `.yaml` config for running pre-training with a retrieval model can be found at `projects/retri/videoretri.yaml`.
diff --git a/spaces/aryan29/movie-recommender-system/app.py b/spaces/aryan29/movie-recommender-system/app.py
deleted file mode 100644
index 74f48144b685a94ab38e0f6e41e9e53c405e5b88..0000000000000000000000000000000000000000
--- a/spaces/aryan29/movie-recommender-system/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-import pickle
-import pandas as pd
-
-df = pd.read_csv("model/movies.csv")
-
-with open("model/similarity.pkl", "rb") as f:
- similarity = pickle.load(f)
-
-def recommendation(movie_title):
- movie_title = movie_title.title()
- try:
- id_of_movie = df[df['title']==movie_title].index[0]
- except IndexError:
- return f"Error: Movie '{movie_title}' not found in database."
- distances = similarity[id_of_movie]
- movie_list = sorted(list(enumerate(distances)), reverse=True, key=lambda x:x[1])[1:10]
- recommendations = [df.iloc[i[0]].title for i in movie_list]
- return "\n".join(recommendations)
-
-
-movie_input = gr.Textbox(label="Enter a movie title")
-output_text = gr.Textbox(label="Recommendations")
-
-
-iface = gr.Interface( fn=recommendation, inputs=movie_input, outputs=output_text , allow_flagging=False )
-
-iface.launch()
\ No newline at end of file
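The app above assumes a precomputed `model/similarity.pkl`. One plausible way such a matrix gets built (a sketch with scikit-learn, assuming the CSV carries a combined `tags` text column, which is not shown in the app itself):

```python
import pickle
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_csv("model/movies.csv")

# Bag-of-words over the tags, then pairwise cosine similarity between movies.
vectors = CountVectorizer(max_features=5000, stop_words="english").fit_transform(df["tags"])
similarity = cosine_similarity(vectors)

with open("model/similarity.pkl", "wb") as f:
    pickle.dump(similarity, f)
```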
diff --git a/spaces/avivdm1/AutoGPT/autogpt/speech/macos_tts.py b/spaces/avivdm1/AutoGPT/autogpt/speech/macos_tts.py
deleted file mode 100644
index 4c072ce256782e83a578b5181abf1a7b524c621b..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/autogpt/speech/macos_tts.py
+++ /dev/null
@@ -1,21 +0,0 @@
-""" MacOS TTS Voice. """
-import os
-
-from autogpt.speech.base import VoiceBase
-
-
-class MacOSTTS(VoiceBase):
- """MacOS TTS Voice."""
-
- def _setup(self) -> None:
- pass
-
- def _speech(self, text: str, voice_index: int = 0) -> bool:
- """Play the given text."""
- if voice_index == 0:
- os.system(f'say "{text}"')
- elif voice_index == 1:
- os.system(f'say -v "Ava (Premium)" "{text}"')
- else:
- os.system(f'say -v Samantha "{text}"')
- return True
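One caveat in the `os.system` calls above: text containing double quotes breaks the command (and can inject arbitrary shell syntax). A safer sketch using an argument list instead of a shell string:

```python
import subprocess

def speak(text: str, voice: str = "") -> bool:
    # Passing arguments as a list bypasses the shell entirely, so quotes and
    # metacharacters in `text` are delivered to `say` verbatim.
    cmd = ["say"] + (["-v", voice] if voice else []) + [text]
    return subprocess.run(cmd).returncode == 0
```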
diff --git a/spaces/awacke1/AIZTH-03-09-2023/README.md b/spaces/awacke1/AIZTH-03-09-2023/README.md
deleted file mode 100644
index 7949c7c5ce22c94ce42ca91cd39150dbff9ac918..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AIZTH-03-09-2023/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🔥AI Zero to Hero 3-9-2023🔥
-emoji: 🔥AIZTH
-colorFrom: red
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awsaf49/gcvit-tf/app.py b/spaces/awsaf49/gcvit-tf/app.py
deleted file mode 100644
index ce5f7469dbe2a3b0ff1a7d709336b2af80993166..0000000000000000000000000000000000000000
--- a/spaces/awsaf49/gcvit-tf/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import tensorflow as tf
-import gradio as gr
-import gcvit
-from gcvit.utils import get_gradcam_model, get_gradcam_prediction
-
-def predict_fn(image, model_name):
- """A predict function that will be invoked by gradio."""
- model = getattr(gcvit, model_name)(pretrain=True)
- gradcam_model = get_gradcam_model(model)
- preds, overlay = get_gradcam_prediction(image, gradcam_model, cmap='jet', alpha=0.4, pred_index=None)
- preds = {x[1]:float(x[2]) for x in preds}
- return [preds, overlay]
-
-demo = gr.Interface(
- fn=predict_fn,
- inputs=[
- gr.inputs.Image(label="Input Image"),
- gr.Radio(['GCViTXXTiny', 'GCViTXTiny', 'GCViTTiny',
- 'GCViTSmall', 'GCViTBase','GCViTLarge'], value='GCViTXXTiny', label='Model Name')
- ],
- outputs=[
- gr.outputs.Label(label="Prediction"),
-        gr.outputs.Image(label="GradCAM"),
- ],
- title="Global Context Vision Transformer (GCViT) Demo",
-    description="Image classification with the GCViT model using ImageNet pretrained weights.",
- examples=[
- ["example/hot_air_ballon.jpg", 'GCViTXXTiny'],
- ["example/chelsea.png", 'GCViTXXTiny'],
- ["example/penguin.JPG", 'GCViTXXTiny'],
- ["example/bus.jpg", 'GCViTXXTiny'],
- ],
-)
-demo.launch()
\ No newline at end of file
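Note that `predict_fn` above rebuilds the model and its GradCAM wrapper on every request. A small cache avoids repeated weight loading (a sketch reusing the names above; `predict_fn` would then call `load_gradcam_model(model_name)`):

```python
from functools import lru_cache

import gcvit
from gcvit.utils import get_gradcam_model

@lru_cache(maxsize=2)
def load_gradcam_model(model_name: str):
    # Built once per model name; subsequent requests reuse the cached wrapper.
    model = getattr(gcvit, model_name)(pretrain=True)
    return get_gradcam_model(model)
```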
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.d.ts
deleted file mode 100644
index b6408a1cedea053689a9967ddbf6bb56a9fc8181..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.d.ts
+++ /dev/null
@@ -1,14 +0,0 @@
-import {
- CubeTexture,
- Renderer,
- ShaderMaterial,
- WebGLRenderTarget
-} from '../../../src/Three';
-
-export class PMREMCubeUVPacker {
- CubeUVRenderTarget:WebGLRenderTarget;
-
- constructor(cubeTextureLods: CubeTexture[]);
- update(renderer:Renderer): void;
- dispose(): void;
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/TextureLoader.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/loaders/TextureLoader.d.ts
deleted file mode 100644
index 96e5703420f77d2a15e761b80733ee675dd29def..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/TextureLoader.d.ts
+++ /dev/null
@@ -1,30 +0,0 @@
-import { LoadingManager } from './LoadingManager';
-import { Texture } from './../textures/Texture';
-
-/**
- * Class for loading a texture.
- * Loading completion, progress and errors are reported through the onLoad, onProgress and onError callbacks passed to load(), via the given LoadingManager.
- */
-export class TextureLoader {
- constructor(manager?: LoadingManager);
-
- manager: LoadingManager;
- crossOrigin: string;
- withCredentials: string;
- path: string;
-
- /**
- * Begin loading from url
- *
- * @param url
- */
- load(
- url: string,
- onLoad?: (texture: Texture) => void,
- onProgress?: (event: ProgressEvent) => void,
- onError?: (event: ErrorEvent) => void
- ): Texture;
- setCrossOrigin(crossOrigin: string): TextureLoader;
- setWithCredentials(value: string): TextureLoader;
- setPath(path: string): TextureLoader;
-}
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_vertex.glsl.js
deleted file mode 100644
index 9bb6d738ae64b466be2367ebe21f24bfda137169..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_vertex.glsl.js
+++ /dev/null
@@ -1,7 +0,0 @@
-export default /* glsl */`
-#ifdef USE_COLOR
-
- vColor.xyz = color.xyz;
-
-#endif
-`;
diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/nearest_neighbor_embedder_guessr.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/nearest_neighbor_embedder_guessr.py
deleted file mode 100644
index 4fa57706f17b23496dfea18b0d161d4eee217a98..0000000000000000000000000000000000000000
--- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/nearest_neighbor_embedder_guessr.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from dataclasses import dataclass
-
-from PIL import Image
-import pandas as pd
-
-from geoguessr_bot.guessr import AbstractGuessr
-from geoguessr_bot.interfaces import Coordinate
-from geoguessr_bot.retriever import AbstractImageEmbedder
-from geoguessr_bot.retriever import Retriever
-
-
-@dataclass
-class NearestNeighborEmbedderGuessr(AbstractGuessr):
- """Guesses a coordinate using an Embedder and a retriever followed by NN.
- """
- embedder: AbstractImageEmbedder
- retriever: Retriever
- metadata_path: str
-
- def __post_init__(self):
- """Load metadata
- """
- metadata = pd.read_csv(self.metadata_path)
- self.image_to_coordinate = {
- image.split("/")[-1]: Coordinate(latitude=latitude, longitude=longitude)
- for image, latitude, longitude in zip(metadata["path"], metadata["latitude"], metadata["longitude"])
- }
-
-
-    def guess(self, image) -> Coordinate:
-        """Guess a coordinate from an image given as a numpy array.
-        """
- # Embed image
- image = Image.fromarray(image)
- image_embedding = self.embedder.embed(image)[None, :]
-
- # Retrieve nearest neighbor
- nearest_neighbors = self.retriever.retrieve(image_embedding)
- nearest_neighbor = nearest_neighbors[0][0][0]
-
- # Guess coordinate
- guess_coordinate = self.image_to_coordinate[nearest_neighbor]
- return guess_coordinate
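The `Retriever.retrieve` call above is assumed to return nested lists of nearest-neighbor image names (hence the `[0][0][0]` indexing). A minimal sketch of such a retriever on top of faiss (hypothetical; the project's actual implementation lives in `geoguessr_bot.retriever`):

```python
import faiss
import numpy as np

class FaissRetriever:
    def __init__(self, embeddings: np.ndarray, names: list):
        # Inner product over L2-normalized vectors equals cosine similarity.
        embeddings = np.ascontiguousarray(embeddings, dtype=np.float32)
        faiss.normalize_L2(embeddings)
        self.index = faiss.IndexFlatIP(embeddings.shape[1])
        self.index.add(embeddings)
        self.names = names

    def retrieve(self, queries: np.ndarray, k: int = 1):
        queries = np.ascontiguousarray(queries, dtype=np.float32)
        faiss.normalize_L2(queries)
        _, idx = self.index.search(queries, k)
        # One nested [names] list per query, matching the indexing above.
        return [[[self.names[i] for i in row]] for row in idx]
```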
diff --git a/spaces/bguberfain/Detic/tools/get_lvis_cat_info.py b/spaces/bguberfain/Detic/tools/get_lvis_cat_info.py
deleted file mode 100644
index 83f286983ce811c4057ea8e8041e6a95dda78113..0000000000000000000000000000000000000000
--- a/spaces/bguberfain/Detic/tools/get_lvis_cat_info.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--ann", default='datasets/lvis/lvis_v1_train.json')
- parser.add_argument("--add_freq", action='store_true')
- parser.add_argument("--r_thresh", type=int, default=10)
- parser.add_argument("--c_thresh", type=int, default=100)
- args = parser.parse_args()
-
- print('Loading', args.ann)
- data = json.load(open(args.ann, 'r'))
- cats = data['categories']
- image_count = {x['id']: set() for x in cats}
- ann_count = {x['id']: 0 for x in cats}
- for x in data['annotations']:
- image_count[x['category_id']].add(x['image_id'])
- ann_count[x['category_id']] += 1
- num_freqs = {x: 0 for x in ['r', 'f', 'c']}
- for x in cats:
- x['image_count'] = len(image_count[x['id']])
- x['instance_count'] = ann_count[x['id']]
- if args.add_freq:
- freq = 'f'
- if x['image_count'] < args.c_thresh:
- freq = 'c'
- if x['image_count'] < args.r_thresh:
- freq = 'r'
- x['frequency'] = freq
- num_freqs[freq] += 1
- print(cats)
- image_counts = sorted([x['image_count'] for x in cats])
- # print('image count', image_counts)
- # import pdb; pdb.set_trace()
- if args.add_freq:
- for x in ['r', 'c', 'f']:
- print(x, num_freqs[x])
- out = cats # {'categories': cats}
- out_path = args.ann[:-5] + '_cat_info.json'
- print('Saving to', out_path)
- json.dump(out, open(out_path, 'w'))
-
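For reference, a typical invocation that writes `datasets/lvis/lvis_v1_train_cat_info.json` with r/c/f frequency labels at the default thresholds (10 and 100 images):

```
python tools/get_lvis_cat_info.py --ann datasets/lvis/lvis_v1_train.json --add_freq
```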
diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/multi_tracker_zoo.py b/spaces/bhasker412/IDD-YOLO-Tracking/trackers/multi_tracker_zoo.py
deleted file mode 100644
index 0a41973f77fb4e1dd1cf552f78f020e7f16c542c..0000000000000000000000000000000000000000
--- a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/multi_tracker_zoo.py
+++ /dev/null
@@ -1,52 +0,0 @@
-from trackers.strongsort.utils.parser import get_config
-
-
-def create_tracker(tracker_type, tracker_config, reid_weights, device, half):
-
- cfg = get_config()
- cfg.merge_from_file(tracker_config)
-
- if tracker_type == 'strongsort':
- from trackers.strongsort.strong_sort import StrongSORT
- strongsort = StrongSORT(
- reid_weights,
- device,
- half,
- max_dist=cfg.strongsort.max_dist,
- max_iou_dist=cfg.strongsort.max_iou_dist,
- max_age=cfg.strongsort.max_age,
- max_unmatched_preds=cfg.strongsort.max_unmatched_preds,
- n_init=cfg.strongsort.n_init,
- nn_budget=cfg.strongsort.nn_budget,
- mc_lambda=cfg.strongsort.mc_lambda,
- ema_alpha=cfg.strongsort.ema_alpha,
-
- )
- return strongsort
-
- elif tracker_type == 'ocsort':
- from trackers.ocsort.ocsort import OCSort
- ocsort = OCSort(
- det_thresh=cfg.ocsort.det_thresh,
- max_age=cfg.ocsort.max_age,
- min_hits=cfg.ocsort.min_hits,
- iou_threshold=cfg.ocsort.iou_thresh,
- delta_t=cfg.ocsort.delta_t,
- asso_func=cfg.ocsort.asso_func,
- inertia=cfg.ocsort.inertia,
- use_byte=cfg.ocsort.use_byte,
- )
- return ocsort
-
- elif tracker_type == 'bytetrack':
- from trackers.bytetrack.byte_tracker import BYTETracker
- bytetracker = BYTETracker(
- track_thresh=cfg.bytetrack.track_thresh,
- match_thresh=cfg.bytetrack.match_thresh,
- track_buffer=cfg.bytetrack.track_buffer,
- frame_rate=cfg.bytetrack.frame_rate
- )
- return bytetracker
-    else:
-        raise ValueError(f"No such tracker: '{tracker_type}'")
\ No newline at end of file
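A sketch of calling `create_tracker` (the config and weight paths here are illustrative assumptions; the signature is the one defined above):

```python
import torch
from trackers.multi_tracker_zoo import create_tracker

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tracker = create_tracker(
    tracker_type="bytetrack",
    tracker_config="trackers/bytetrack/configs/bytetrack.yaml",  # assumed path
    reid_weights="osnet_x0_25_msmt17.pt",  # only consulted by strongsort
    device=device,
    half=False,
)
```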
diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py b/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py
deleted file mode 100644
index e8783bca153954afd086536a6dee854ec5e17ba9..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import contextlib
-import os
-
-import numpy as np
-import torch
-from PIL import Image
-from basicsr.utils.download_util import load_file_from_url
-from tqdm import tqdm
-
-from modules import modelloader, devices, script_callbacks, shared
-from modules.shared import cmd_opts, opts, state
-from swinir_model_arch import SwinIR as net
-from swinir_model_arch_v2 import Swin2SR as net2
-from modules.upscaler import Upscaler, UpscalerData
-
-
-device_swinir = devices.get_device_for('swinir')
-
-
-class UpscalerSwinIR(Upscaler):
- def __init__(self, dirname):
- self.name = "SwinIR"
- self.model_url = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0" \
- "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \
- "-L_x4_GAN.pth "
- self.model_name = "SwinIR 4x"
- self.user_path = dirname
- super().__init__()
- scalers = []
- model_files = self.find_models(ext_filter=[".pt", ".pth"])
- for model in model_files:
- if "http" in model:
- name = self.model_name
- else:
- name = modelloader.friendly_name(model)
- model_data = UpscalerData(name, model, self)
- scalers.append(model_data)
- self.scalers = scalers
-
- def do_upscale(self, img, model_file):
- model = self.load_model(model_file)
- if model is None:
- return img
- model = model.to(device_swinir, dtype=devices.dtype)
- img = upscale(img, model)
- try:
- torch.cuda.empty_cache()
- except:
- pass
- return img
-
- def load_model(self, path, scale=4):
- if "http" in path:
- dl_name = "%s%s" % (self.model_name.replace(" ", "_"), ".pth")
- filename = load_file_from_url(url=path, model_dir=self.model_path, file_name=dl_name, progress=True)
- else:
- filename = path
- if filename is None or not os.path.exists(filename):
- return None
- if filename.endswith(".v2.pth"):
- model = net2(
- upscale=scale,
- in_chans=3,
- img_size=64,
- window_size=8,
- img_range=1.0,
- depths=[6, 6, 6, 6, 6, 6],
- embed_dim=180,
- num_heads=[6, 6, 6, 6, 6, 6],
- mlp_ratio=2,
- upsampler="nearest+conv",
- resi_connection="1conv",
- )
- params = None
- else:
- model = net(
- upscale=scale,
- in_chans=3,
- img_size=64,
- window_size=8,
- img_range=1.0,
- depths=[6, 6, 6, 6, 6, 6, 6, 6, 6],
- embed_dim=240,
- num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8],
- mlp_ratio=2,
- upsampler="nearest+conv",
- resi_connection="3conv",
- )
- params = "params_ema"
-
- pretrained_model = torch.load(filename)
- if params is not None:
- model.load_state_dict(pretrained_model[params], strict=True)
- else:
- model.load_state_dict(pretrained_model, strict=True)
- return model
-
-
-def upscale(
- img,
- model,
- tile=None,
- tile_overlap=None,
- window_size=8,
- scale=4,
-):
- tile = tile or opts.SWIN_tile
- tile_overlap = tile_overlap or opts.SWIN_tile_overlap
-
-
- img = np.array(img)
- img = img[:, :, ::-1]
- img = np.moveaxis(img, 2, 0) / 255
- img = torch.from_numpy(img).float()
- img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype)
- with torch.no_grad(), devices.autocast():
- _, _, h_old, w_old = img.size()
- h_pad = (h_old // window_size + 1) * window_size - h_old
- w_pad = (w_old // window_size + 1) * window_size - w_old
- img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, : h_old + h_pad, :]
- img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :, : w_old + w_pad]
- output = inference(img, model, tile, tile_overlap, window_size, scale)
- output = output[..., : h_old * scale, : w_old * scale]
- output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- if output.ndim == 3:
- output = np.transpose(
- output[[2, 1, 0], :, :], (1, 2, 0)
-        )  # CHW-BGR to HWC-RGB
- output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
- return Image.fromarray(output, "RGB")
-
-
-def inference(img, model, tile, tile_overlap, window_size, scale):
- # test the image tile by tile
- b, c, h, w = img.size()
- tile = min(tile, h, w)
- assert tile % window_size == 0, "tile size should be a multiple of window_size"
- sf = scale
-
- stride = tile - tile_overlap
- h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
- w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
- E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img)
- W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir)
-
- with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar:
- for h_idx in h_idx_list:
- if state.interrupted or state.skipped:
- break
-
- for w_idx in w_idx_list:
- if state.interrupted or state.skipped:
- break
-
- in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
- out_patch = model(in_patch)
- out_patch_mask = torch.ones_like(out_patch)
-
- E[
- ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
- ].add_(out_patch)
- W[
- ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
- ].add_(out_patch_mask)
- pbar.update(1)
- output = E.div_(W)
-
- return output
-
-
-def on_ui_settings():
- import gradio as gr
-
- shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")))
- shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling")))
-
-
-script_callbacks.on_ui_settings(on_ui_settings)
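The tiling scheme in `inference` above walks the image at a fixed stride and appends one final tile flush with each edge, so every pixel is covered and overlapping contributions are averaged out by `E.div_(W)`. A quick standalone check of the index computation (same formula as above):

```python
def tile_starts(size: int, tile: int, overlap: int) -> list:
    # Fixed stride, plus a last tile anchored to the far edge.
    stride = tile - overlap
    return list(range(0, size - tile, stride)) + [size - tile]

print(tile_starts(300, 192, 8))  # [0, 108]: tiles cover [0, 192) and [108, 300)
```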
diff --git a/spaces/bioriAsaeru/text-to-voice/Avantgarde Lt Book Font UPD Download.md b/spaces/bioriAsaeru/text-to-voice/Avantgarde Lt Book Font UPD Download.md
deleted file mode 100644
index 34b9da8d7e03792ab37023afba756e3d9e9d3100..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Avantgarde Lt Book Font UPD Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-The ITC Avant Garde LT family includes: ITC Avant Garde LT, Bold Italic, Extra Bold, Extra Bold Italic, Extra Light, Extra Light Italic and Italic.
-
-About the font: ITC Avant Garde LT Bold is free for personal use only; for commercial use you must purchase a license or contact the author for permission. The font suits interesting designs, covers, shop and store names and logos, branding projects, housewares designs, product packaging, or a stylish text overlay on any background image.
-
-Family: ITC Avant Garde LT. Sub-family: Bold. Version: 5.50. Author/Company: Linotype GmbH. ITC Avant Garde Gothic is a registered trademark of International Typeface Corporation. Licence: for personal use only.
-
-Typography preview: 31 special or accented characters, the 26 letters of the alphabet in upper and lower case, and the numbers 0 to 10; the letters render the same once installed in your operating system, whether for viewing or for printing.
-
-Installation: the download is a ZIP archive, and the fonts are in OTF (OpenType) or TTF (TrueType) format, installable on any operating system.
-Click here to install the font on Microsoft Windows (all versions).
-Click here to install the font on MAC OS.
-
-Related fonts from the same family: Avant Gard EF Bold, Avant Gard EF Bold Condensed, Avant Gard EF Book, Avant Gard EF Book Condensed and Avant Gard EF Book Oblique.
-
-An electronic publication license can be used for the embedding of fonts into electronic documents including e-books, e-magazines and e-newspapers. A license covers only a single title but is valid for the full operating life of that title. Every issue of an e-magazine, e-newspaper or other form of e-periodical is considered a separate, new publication. Format variations do not count as separate publications. If a publication is updated and distributed to existing users, a new license is not required. However, updated versions issued to new customers are defined as new publications and require a separate license. Learn more about licenses for eBooks.
\ No newline at end of file
diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/upfirdn2d.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/upfirdn2d.py
deleted file mode 100644
index ceeac2b9834e33b7c601c28bf27f32aa91c69256..0000000000000000000000000000000000000000
--- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/upfirdn2d.py
+++ /dev/null
@@ -1,384 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom PyTorch ops for efficient resampling of 2D images."""
-
-import os
-import warnings
-import numpy as np
-import torch
-import traceback
-
-from .. import custom_ops
-from .. import misc
-from . import conv2d_gradfix
-
-#----------------------------------------------------------------------------
-
-_inited = False
-_plugin = None
-
-def _init():
- global _inited, _plugin
- if not _inited:
- sources = ['upfirdn2d.cpp', 'upfirdn2d.cu']
- sources = [os.path.join(os.path.dirname(__file__), s) for s in sources]
- try:
- _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math'])
- except:
- warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc())
- return _plugin is not None
-
-def _parse_scaling(scaling):
- if isinstance(scaling, int):
- scaling = [scaling, scaling]
- assert isinstance(scaling, (list, tuple))
- assert all(isinstance(x, int) for x in scaling)
- sx, sy = scaling
- assert sx >= 1 and sy >= 1
- return sx, sy
-
-def _parse_padding(padding):
- if isinstance(padding, int):
- padding = [padding, padding]
- assert isinstance(padding, (list, tuple))
- assert all(isinstance(x, int) for x in padding)
- if len(padding) == 2:
- padx, pady = padding
- padding = [padx, padx, pady, pady]
- padx0, padx1, pady0, pady1 = padding
- return padx0, padx1, pady0, pady1
-
-def _get_filter_size(f):
- if f is None:
- return 1, 1
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- fw = f.shape[-1]
- fh = f.shape[0]
- with misc.suppress_tracer_warnings():
- fw = int(fw)
- fh = int(fh)
- misc.assert_shape(f, [fh, fw][:f.ndim])
- assert fw >= 1 and fh >= 1
- return fw, fh
-
-#----------------------------------------------------------------------------
-
-def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None):
- r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`.
-
- Args:
- f: Torch tensor, numpy array, or python list of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable),
- `[]` (impulse), or
- `None` (identity).
- device: Result device (default: cpu).
- normalize: Normalize the filter so that it retains the magnitude
- for constant input signal (DC)? (default: True).
- flip_filter: Flip the filter? (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- separable: Return a separable filter? (default: select automatically).
-
- Returns:
- Float32 tensor of the shape
- `[filter_height, filter_width]` (non-separable) or
- `[filter_taps]` (separable).
- """
- # Validate.
- if f is None:
- f = 1
- f = torch.as_tensor(f, dtype=torch.float32)
- assert f.ndim in [0, 1, 2]
- assert f.numel() > 0
- if f.ndim == 0:
- f = f[np.newaxis]
-
- # Separable?
- if separable is None:
- separable = (f.ndim == 1 and f.numel() >= 8)
- if f.ndim == 1 and not separable:
- f = f.ger(f)
- assert f.ndim == (1 if separable else 2)
-
- # Apply normalize, flip, gain, and device.
- if normalize:
- f /= f.sum()
- if flip_filter:
- f = f.flip(list(range(f.ndim)))
- f = f * (gain ** (f.ndim / 2))
- f = f.to(device=device)
- return f
-
-#----------------------------------------------------------------------------
-
-def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Pad, upsample, filter, and downsample a batch of 2D images.
-
- Performs the following sequence of operations for each channel:
-
- 1. Upsample the image by inserting N-1 zeros after each pixel (`up`).
-
- 2. Pad the image with the specified number of zeros on each side (`padding`).
- Negative padding corresponds to cropping the image.
-
- 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it
- so that the footprint of all output pixels lies within the input image.
-
- 4. Downsample the image by keeping every Nth pixel (`down`).
-
- This sequence of operations bears close resemblance to scipy.signal.upfirdn().
- The fused op is considerably more efficient than performing the same calculation
- using standard PyTorch ops. It supports gradients of arbitrary order.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- up: Integer upsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- down: Integer downsampling factor. Can be a single int or a list/tuple
- `[x, y]` (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- assert isinstance(x, torch.Tensor)
- assert impl in ['ref', 'cuda']
- if impl == 'cuda' and x.device.type == 'cuda' and _init():
- return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f)
- return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- assert f.dtype == torch.float32 and not f.requires_grad
- batch_size, num_channels, in_height, in_width = x.shape
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Upsample by inserting zeros.
- x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1])
- x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1])
- x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx])
-
- # Pad or crop.
- x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)])
- x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)]
-
- # Setup filter.
- f = f * (gain ** (f.ndim / 2))
- f = f.to(x.dtype)
- if not flip_filter:
- f = f.flip(list(range(f.ndim)))
-
- # Convolve with the filter.
- f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim)
- if f.ndim == 4:
- x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels)
- else:
- x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels)
- x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels)
-
- # Downsample by throwing away pixels.
- x = x[:, :, ::downy, ::downx]
- return x
-
-#----------------------------------------------------------------------------
-
-_upfirdn2d_cuda_cache = dict()
-
-def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1):
- """Fast CUDA implementation of `upfirdn2d()` using custom ops.
- """
- # Parse arguments.
- upx, upy = _parse_scaling(up)
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
-
- # Lookup from cache.
- key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
- if key in _upfirdn2d_cuda_cache:
- return _upfirdn2d_cuda_cache[key]
-
- # Forward op.
- class Upfirdn2dCuda(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, f): # pylint: disable=arguments-differ
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- if f is None:
- f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
- y = x
- if f.ndim == 2:
- y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
- else:
- y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain))
- y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain))
- ctx.save_for_backward(f)
- ctx.x_shape = x.shape
- return y
-
- @staticmethod
- def backward(ctx, dy): # pylint: disable=arguments-differ
- f, = ctx.saved_tensors
- _, _, ih, iw = ctx.x_shape
- _, _, oh, ow = dy.shape
- fw, fh = _get_filter_size(f)
- p = [
- fw - padx0 - 1,
- iw * upx - ow * downx + padx0 - upx + 1,
- fh - pady0 - 1,
- ih * upy - oh * downy + pady0 - upy + 1,
- ]
- dx = None
- df = None
-
- if ctx.needs_input_grad[0]:
- dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f)
-
- assert not ctx.needs_input_grad[1]
- return dx, df
-
- # Add to cache.
- _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda
- return Upfirdn2dCuda
-
-#----------------------------------------------------------------------------
-
-def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Filter a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape matches the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + fw // 2,
- padx1 + (fw - 1) // 2,
- pady0 + fh // 2,
- pady1 + (fh - 1) // 2,
- ]
- return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-#----------------------------------------------------------------------------
-
-def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Upsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a multiple of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- up: Integer upsampling factor. Can be a single int or a list/tuple
-            `[x, y]` (default: 2).
- padding: Padding with respect to the output. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- upx, upy = _parse_scaling(up)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw + upx - 1) // 2,
- padx1 + (fw - upx) // 2,
- pady0 + (fh + upy - 1) // 2,
- pady1 + (fh - upy) // 2,
- ]
- return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl)
-
-#----------------------------------------------------------------------------
-
-def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
- r"""Downsample a batch of 2D images using the given 2D FIR filter.
-
- By default, the result is padded so that its shape is a fraction of the input.
- User-specified padding is applied on top of that, with negative values
- indicating cropping. Pixels outside the image are assumed to be zero.
-
- Args:
- x: Float32/float64/float16 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- f: Float32 FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- down: Integer downsampling factor. Can be a single int or a list/tuple
-            `[x, y]` (default: 2).
- padding: Padding with respect to the input. Can be a single number or a
- list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- flip_filter: False = convolution, True = correlation (default: False).
- gain: Overall scaling factor for signal magnitude (default: 1).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- downx, downy = _parse_scaling(down)
- padx0, padx1, pady0, pady1 = _parse_padding(padding)
- fw, fh = _get_filter_size(f)
- p = [
- padx0 + (fw - downx + 1) // 2,
- padx1 + (fw - downx) // 2,
- pady0 + (fh - downy + 1) // 2,
- pady1 + (fh - downy) // 2,
- ]
- return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)
-
-#----------------------------------------------------------------------------
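A quick usage sketch of `setup_filter` and `upsample2d` (assuming the StyleGAN2-ADA layout where this file imports as `torch_utils.ops.upfirdn2d`; `impl='ref'` takes the slow reference path, so no CUDA extension is needed):

```python
import torch
from torch_utils.ops import upfirdn2d

x = torch.randn(1, 3, 16, 16)
f = upfirdn2d.setup_filter([1, 3, 3, 1])           # the usual StyleGAN2 FIR taps
y = upfirdn2d.upsample2d(x, f, up=2, impl='ref')   # reference path, CPU-friendly
print(y.shape)  # torch.Size([1, 3, 32, 32])
```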
diff --git a/spaces/brogelio/air_draw/README.md b/spaces/brogelio/air_draw/README.md
deleted file mode 100644
index 611a16fc6d936e5b7712ff00e7a69268ee018fea..0000000000000000000000000000000000000000
--- a/spaces/brogelio/air_draw/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: air_draw
-emoji: ✍
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 2.8.14
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/butterswords/nlc-explorer/Assets/Countries/.ipynb_checkpoints/Country-Data-Origin-checkpoint.md b/spaces/butterswords/nlc-explorer/Assets/Countries/.ipynb_checkpoints/Country-Data-Origin-checkpoint.md
deleted file mode 100644
index 37c51688b22365b186390d61df79051abeffea46..0000000000000000000000000000000000000000
--- a/spaces/butterswords/nlc-explorer/Assets/Countries/.ipynb_checkpoints/Country-Data-Origin-checkpoint.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Origin of the country data used in this project
-
-I started by getting a list of countries on GitHub, from [Daina Bouquin](https://github.com/dbouquin/IS_608/blob/master/NanosatDB_munging/Countries-Continents.csv), because it seemed relatively complete and contained continents. Then I started to think about secondary data that might be useful for exposing the bias in an algorithm and opted for the [World Happiness Report 2021](https://worldhappiness.report/ed/2021/#appendices-and-data). I added the continents to the countries in that file to ensure I could retain the initial categorization I used.
\ No newline at end of file
diff --git a/spaces/camenduru-com/one-shot-talking-face/app.py b/spaces/camenduru-com/one-shot-talking-face/app.py
deleted file mode 100644
index b30d511ade12aff3d2392b6e3160a642c9cf6823..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/one-shot-talking-face/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import gradio as gr
-import os, subprocess, torchaudio
-import torch
-from PIL import Image
-
-block = gr.Blocks()
-
-def pad_image(image):
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-def calculate(image_in, audio_in):
- waveform, sample_rate = torchaudio.load(audio_in)
- waveform = torch.mean(waveform, dim=0, keepdim=True)
- torchaudio.save("/content/audio.wav", waveform, sample_rate, encoding="PCM_S", bits_per_sample=16)
- image = Image.open(image_in)
- image = pad_image(image)
- image.save("image.png")
-
- pocketsphinx_run = subprocess.run(['pocketsphinx', '-phone_align', 'yes', 'single', '/content/audio.wav'], check=True, capture_output=True)
- jq_run = subprocess.run(['jq', '[.w[]|{word: (.t | ascii_upcase | sub(""; "sil") | sub(""; "sil") | sub("\\\(2\\\)"; "") | sub("\\\(3\\\)"; "") | sub("\\\(4\\\)"; "") | sub("\\\[SPEECH\\\]"; "SIL") | sub("\\\[NOISE\\\]"; "SIL")), phones: [.w[]|{ph: .t | sub("\\\+SPN\\\+"; "SIL") | sub("\\\+NSN\\\+"; "SIL"), bg: (.b*100)|floor, ed: (.b*100+.d*100)|floor}]}]'], input=pocketsphinx_run.stdout, capture_output=True)
- with open("test.json", "w") as f:
- f.write(jq_run.stdout.decode('utf-8').strip())
-
- os.system(f"cd /content/one-shot-talking-face && python3 -B test_script.py --img_path /content/image.png --audio_path /content/audio.wav --phoneme_path /content/test.json --save_dir /content/train")
- return "/content/train/image_audio.mp4"
-
-def run():
- with block:
- gr.Markdown(
- """
-
- map: 📄 [arxiv](https://arxiv.org/abs/2112.02749) ⇨ 👩💻 [github](https://github.com/FuxiVirtualHuman/AAAI22-one-shot-talking-face) ⇨ 🦒 [colab](https://github.com/camenduru/one-shot-talking-face-colab) ⇨ 🤗 [huggingface](https://huggingface.co/spaces/camenduru/one-shot-talking-face) | tools: 🌀 [duplicate this space](https://huggingface.co/spaces/camenduru/sandbox?duplicate=true) | 🐢 [tortoise tts](https://huggingface.co/spaces/mdnestor/tortoise) | 📺 [video upscaler](https://huggingface.co/spaces/kadirnar/Anime4k) | 🎨 [text-to-image](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) | 🐣 [twitter](https://twitter.com/camenduru) | ☕ [buy-a-coffee](https://patreon.com/camenduru)
- """)
- with gr.Group():
- with gr.Box():
- with gr.Row().style(equal_height=True):
- image_in = gr.Image(show_label=False, type="filepath")
- audio_in = gr.Audio(show_label=False, type='filepath')
- video_out = gr.Video(show_label=False)
- with gr.Row().style(equal_height=True):
- btn = gr.Button("Generate")
-
- examples = gr.Examples(examples=[
- ["./examples/monalisa.jpg", "./examples/obama2.wav"],
- ["./examples/monalisa.jpg", "./examples/trump.wav"],
- ["./examples/o2.jpg", "./examples/obama2.wav"],
- ["./examples/o2.jpg", "./examples/trump.wav" ],
- ["./examples/image.png", "./examples/audio.wav"],
- ], fn=calculate, inputs=[image_in, audio_in], outputs=[video_out], cache_examples=True)
-
- btn.click(calculate, inputs=[image_in, audio_in], outputs=[video_out])
- block.queue()
- block.launch(server_name="0.0.0.0", server_port=7860)
-
-if __name__ == "__main__":
- run()
\ No newline at end of file
diff --git a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/background.tex b/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/background.tex
deleted file mode 100644
index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000
--- a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/background.tex
+++ /dev/null
@@ -1,58 +0,0 @@
-The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
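-% Concretely: relating positions $i$ and $j$ takes $O(|i-j|)$ operations in ConvS2S
-% and $O(\log |i-j|)$ in ByteNet, while a single self-attention layer connects them
-% directly in $O(1)$ operations.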
-
-Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
-
-End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
-
-To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
-In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
-
-
-%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.
-
-%For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
-
-%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent encoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
-
-%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
-
-%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.
-
-%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-
-
-%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?
-
-%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
-
-%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
-
-%\begin{table}[h!]
-%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
-%\label{tab:op_complexities}
-%\begin{center}
-%\vspace{-5pt}
-%\scalebox{0.75}{
-
-%\begin{tabular}{l|c|c|c}
-%\hline \hline
-%Layer Type & Receptive & Complexity & Sequential \\
-% & Field & & Operations \\
-%\hline
-%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
-%\hline
-%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
-%\hline \hline
-%\end{tabular}
-%}
-%\end{center}
-%\end{table}
\ No newline at end of file
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/datasets/cityscapes.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/datasets/cityscapes.py
deleted file mode 100644
index 1e84a5bdb3d4e410d8eef4b80a5d4c099a180104..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/datasets/cityscapes.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import functools
-import json
-import logging
-import multiprocessing as mp
-import numpy as np
-import os
-from itertools import chain
-import pycocotools.mask as mask_util
-from PIL import Image
-
-from detectron2.structures import BoxMode
-from detectron2.utils.comm import get_world_size
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import setup_logger
-
-try:
- import cv2 # noqa
-except ImportError:
- # OpenCV is an optional dependency at the moment
- pass
-
-
-logger = logging.getLogger(__name__)
-
-
-def _get_cityscapes_files(image_dir, gt_dir):
- files = []
- # scan through the directory
- cities = PathManager.ls(image_dir)
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
- for city in cities:
- city_img_dir = os.path.join(image_dir, city)
- city_gt_dir = os.path.join(gt_dir, city)
- for basename in PathManager.ls(city_img_dir):
- image_file = os.path.join(city_img_dir, basename)
-
- suffix = "leftImg8bit.png"
- assert basename.endswith(suffix), basename
- basename = basename[: -len(suffix)]
-
- instance_file = os.path.join(city_gt_dir, basename + "gtFine_instanceIds.png")
- label_file = os.path.join(city_gt_dir, basename + "gtFine_labelIds.png")
- json_file = os.path.join(city_gt_dir, basename + "gtFine_polygons.json")
-
- files.append((image_file, instance_file, label_file, json_file))
- assert len(files), "No images found in {}".format(image_dir)
- for f in files[0]:
- assert PathManager.isfile(f), f
- return files
-
-
-def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
- from_json (bool): whether to read annotations from the raw json file or the png files.
- to_polygons (bool): whether to represent the segmentation as polygons
- (COCO's format) instead of masks (cityscapes's format).
-
- Returns:
-        list[dict]: a list of dicts in Detectron2 standard format. (See
-        `Using Custom Datasets <https://detectron2.readthedocs.io/tutorials/datasets.html>`_ )
- """
- if from_json:
- assert to_polygons, (
- "Cityscapes's json annotations are in polygon format. "
- "Converting to mask format is not supported now."
- )
- files = _get_cityscapes_files(image_dir, gt_dir)
-
- logger.info("Preprocessing cityscapes annotations ...")
-    # This is still not fast: all workers will execute duplicate work and it
-    # can take up to 10 minutes on an 8-GPU server.
- pool = mp.Pool(processes=max(mp.cpu_count() // get_world_size() // 2, 4))
-
- ret = pool.map(
- functools.partial(_cityscapes_files_to_dict, from_json=from_json, to_polygons=to_polygons),
- files,
- )
- logger.info("Loaded {} images from {}".format(len(ret), image_dir))
-
- # Map cityscape ids to contiguous ids
- from cityscapesscripts.helpers.labels import labels
-
- labels = [l for l in labels if l.hasInstances and not l.ignoreInEval]
- dataset_id_to_contiguous_id = {l.id: idx for idx, l in enumerate(labels)}
- for dict_per_image in ret:
- for anno in dict_per_image["annotations"]:
- anno["category_id"] = dataset_id_to_contiguous_id[anno["category_id"]]
- return ret
-
-
-def load_cityscapes_semantic(image_dir, gt_dir):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
-
- Returns:
- list[dict]: a list of dict, each has "file_name" and
- "sem_seg_file_name".
- """
- ret = []
-    # gt_dir is small and contains many small files; it makes sense to fetch it to local storage first
- gt_dir = PathManager.get_local_path(gt_dir)
- for image_file, _, label_file, json_file in _get_cityscapes_files(image_dir, gt_dir):
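-        # Semantic ground truth uses the trainId encoding rather than raw label ids.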
- label_file = label_file.replace("labelIds", "labelTrainIds")
-
- with PathManager.open(json_file, "r") as f:
- jsonobj = json.load(f)
- ret.append(
- {
- "file_name": image_file,
- "sem_seg_file_name": label_file,
- "height": jsonobj["imgHeight"],
- "width": jsonobj["imgWidth"],
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(
- ret[0]["sem_seg_file_name"]
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
- return ret
-
-
-def _cityscapes_files_to_dict(files, from_json, to_polygons):
- """
-    Parse cityscapes annotation files to an instance segmentation dataset dict.
-
- Args:
- files (tuple): consists of (image_file, instance_id_file, label_id_file, json_file)
- from_json (bool): whether to read annotations from the raw json file or the png files.
- to_polygons (bool): whether to represent the segmentation as polygons
- (COCO's format) instead of masks (cityscapes's format).
-
- Returns:
- A dict in Detectron2 Dataset format.
- """
- from cityscapesscripts.helpers.labels import id2label, name2label
-
- image_file, instance_id_file, _, json_file = files
-
- annos = []
-
- if from_json:
- from shapely.geometry import MultiPolygon, Polygon
-
- with PathManager.open(json_file, "r") as f:
- jsonobj = json.load(f)
- ret = {
- "file_name": image_file,
- "image_id": os.path.basename(image_file),
- "height": jsonobj["imgHeight"],
- "width": jsonobj["imgWidth"],
- }
-
- # `polygons_union` contains the union of all valid polygons.
- polygons_union = Polygon()
-
-        # CityscapesScripts draws the polygons in sequential order
-        # and each polygon *overwrites* existing ones. See
-        # (https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/json2instanceImg.py) # noqa
-        # We use reverse order, and each polygon *avoids* earlier ones.
-        # This resolves the polygon overlaps in the same way as CityscapesScripts.
- for obj in jsonobj["objects"][::-1]:
- if "deleted" in obj: # cityscapes data format specific
- continue
- label_name = obj["label"]
-
- try:
- label = name2label[label_name]
- except KeyError:
- if label_name.endswith("group"): # crowd area
- label = name2label[label_name[: -len("group")]]
- else:
- raise
- if label.id < 0: # cityscapes data format
- continue
-
-            # Cityscapes's raw annotations use integer coordinates,
-            # hence the +0.5 here
- poly_coord = np.asarray(obj["polygon"], dtype="f4") + 0.5
-            # CityscapesScripts uses PIL.ImageDraw.polygon to rasterize
- # polygons for evaluation. This function operates in integer space
- # and draws each pixel whose center falls into the polygon.
- # Therefore it draws a polygon which is 0.5 "fatter" in expectation.
- # We therefore dilate the input polygon by 0.5 as our input.
- poly = Polygon(poly_coord).buffer(0.5, resolution=4)
-
- if not label.hasInstances or label.ignoreInEval:
- # even if we won't store the polygon it still contributes to overlaps resolution
- polygons_union = polygons_union.union(poly)
- continue
-
- # Take non-overlapping part of the polygon
- poly_wo_overlaps = poly.difference(polygons_union)
- if poly_wo_overlaps.is_empty:
- continue
- polygons_union = polygons_union.union(poly)
-
- anno = {}
- anno["iscrowd"] = label_name.endswith("group")
- anno["category_id"] = label.id
-
- if isinstance(poly_wo_overlaps, Polygon):
- poly_list = [poly_wo_overlaps]
- elif isinstance(poly_wo_overlaps, MultiPolygon):
- poly_list = poly_wo_overlaps.geoms
- else:
- raise NotImplementedError("Unknown geometric structure {}".format(poly_wo_overlaps))
-
- poly_coord = []
- for poly_el in poly_list:
- # COCO API can work only with exterior boundaries now, hence we store only them.
- # TODO: store both exterior and interior boundaries once other parts of the
- # codebase support holes in polygons.
- poly_coord.append(list(chain(*poly_el.exterior.coords)))
- anno["segmentation"] = poly_coord
- (xmin, ymin, xmax, ymax) = poly_wo_overlaps.bounds
-
- anno["bbox"] = (xmin, ymin, xmax, ymax)
- anno["bbox_mode"] = BoxMode.XYXY_ABS
-
- annos.append(anno)
- else:
- # See also the official annotation parsing scripts at
- # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/instances2dict.py # noqa
- with PathManager.open(instance_id_file, "rb") as f:
- inst_image = np.asarray(Image.open(f), order="F")
- # ids < 24 are stuff labels (filtering them first is about 5% faster)
- flattened_ids = np.unique(inst_image[inst_image >= 24])
-
- ret = {
- "file_name": image_file,
- "image_id": os.path.basename(image_file),
- "height": inst_image.shape[0],
- "width": inst_image.shape[1],
- }
-
- for instance_id in flattened_ids:
- # For non-crowd annotations, instance_id // 1000 is the label_id
- # Crowd annotations have <1000 instance ids
- label_id = instance_id // 1000 if instance_id >= 1000 else instance_id
- label = id2label[label_id]
- if not label.hasInstances or label.ignoreInEval:
- continue
-
- anno = {}
- anno["iscrowd"] = instance_id < 1000
- anno["category_id"] = label.id
-
- mask = np.asarray(inst_image == instance_id, dtype=np.uint8, order="F")
-
- inds = np.nonzero(mask)
- ymin, ymax = inds[0].min(), inds[0].max()
- xmin, xmax = inds[1].min(), inds[1].max()
- anno["bbox"] = (xmin, ymin, xmax, ymax)
- if xmax <= xmin or ymax <= ymin:
- continue
- anno["bbox_mode"] = BoxMode.XYXY_ABS
- if to_polygons:
- # This conversion comes from D4809743 and D5171122,
- # when Mask-RCNN was first developed.
- contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[
- -2
- ]
- polygons = [c.reshape(-1).tolist() for c in contours if len(c) >= 3]
-                # opencv's findContours can produce invalid polygons
- if len(polygons) == 0:
- continue
- anno["segmentation"] = polygons
- else:
- anno["segmentation"] = mask_util.encode(mask[:, :, None])[0]
- annos.append(anno)
- ret["annotations"] = annos
- return ret
-
-
-if __name__ == "__main__":
- """
- Test the cityscapes dataset loader.
-
- Usage:
- python -m detectron2.data.datasets.cityscapes \
- cityscapes/leftImg8bit/train cityscapes/gtFine/train
- """
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("image_dir")
- parser.add_argument("gt_dir")
- parser.add_argument("--type", choices=["instance", "semantic"], default="instance")
- args = parser.parse_args()
- from detectron2.data.catalog import Metadata
- from detectron2.utils.visualizer import Visualizer
- from cityscapesscripts.helpers.labels import labels
-
- logger = setup_logger(name=__name__)
-
- dirname = "cityscapes-data-vis"
- os.makedirs(dirname, exist_ok=True)
-
- if args.type == "instance":
- dicts = load_cityscapes_instances(
- args.image_dir, args.gt_dir, from_json=True, to_polygons=True
- )
- logger.info("Done loading {} samples.".format(len(dicts)))
-
- thing_classes = [k.name for k in labels if k.hasInstances and not k.ignoreInEval]
- meta = Metadata().set(thing_classes=thing_classes)
-
- else:
- dicts = load_cityscapes_semantic(args.image_dir, args.gt_dir)
- logger.info("Done loading {} samples.".format(len(dicts)))
-
- stuff_classes = [k.name for k in labels if k.trainId != 255]
- stuff_colors = [k.color for k in labels if k.trainId != 255]
- meta = Metadata().set(stuff_classes=stuff_classes, stuff_colors=stuff_colors)
-
- for d in dicts:
- img = np.array(Image.open(PathManager.open(d["file_name"], "rb")))
- visualizer = Visualizer(img, metadata=meta)
- vis = visualizer.draw_dataset_dict(d)
- # cv2.imshow("a", vis.get_image()[:, :, ::-1])
- # cv2.waitKey()
- fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
- vis.save(fpath)
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/shape_spec.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/shape_spec.py
deleted file mode 100644
index 8dac3c59b96576710656abebe9b5eac25868abbb..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/shape_spec.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-from dataclasses import dataclass
-from typing import Optional
-
-
-@dataclass
-class ShapeSpec:
- """
- A simple structure that contains basic shape specification about a tensor.
- It is often used as the auxiliary inputs/outputs of models,
- to complement the lack of shape inference ability among pytorch modules.
- """
-
- channels: Optional[int] = None
- height: Optional[int] = None
- width: Optional[int] = None
- stride: Optional[int] = None
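-
-    # For example, ShapeSpec(channels=256, stride=4) might describe a backbone
-    # feature map whose spatial size is 1/4 of the input; unset fields stay None.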
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/train_net.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/train_net.py
deleted file mode 100644
index d3414ddf8e7af49640dd1372d75df7acb0b8bb49..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DeepLab/train_net.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-DeepLab Training Script.
-
-This script is a simplified version of the training script in detectron2/tools.
-"""
-
-import os
-
-import detectron2.data.transforms as T
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import DatasetMapper, MetadataCatalog, build_detection_train_loader
-from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch
-from detectron2.evaluation import CityscapesSemSegEvaluator, DatasetEvaluators, SemSegEvaluator
-from detectron2.projects.deeplab import add_deeplab_config, build_lr_scheduler
-
-
-def build_sem_seg_train_aug(cfg):
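-    # Resize the shortest edge, optionally apply a category-area-constrained
-    # random crop, then flip randomly: the augmentations used here for
-    # semantic-segmentation training.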
- augs = [
- T.ResizeShortestEdge(
- cfg.INPUT.MIN_SIZE_TRAIN, cfg.INPUT.MAX_SIZE_TRAIN, cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
- )
- ]
- if cfg.INPUT.CROP.ENABLED:
- augs.append(
- T.RandomCrop_CategoryAreaConstraint(
- cfg.INPUT.CROP.TYPE,
- cfg.INPUT.CROP.SIZE,
- cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA,
- cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- )
- )
- augs.append(T.RandomFlip())
- return augs
-
-
-class Trainer(DefaultTrainer):
- """
-    We use the "DefaultTrainer" which contains pre-defined logic for the
-    standard training workflow. It may not work for you, especially if you
- are working on a new research project. In that case you can use the cleaner
- "SimpleTrainer", or write your own training loop.
- """
-
- @classmethod
- def build_evaluator(cls, cfg, dataset_name, output_folder=None):
- """
- Create evaluator(s) for a given dataset.
- This uses the special metadata "evaluator_type" associated with each builtin dataset.
- For your own dataset, you can simply create an evaluator manually in your
- script and do not have to worry about the hacky if-else logic here.
- """
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- evaluator_list = []
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
- if evaluator_type == "sem_seg":
- return SemSegEvaluator(
- dataset_name,
- distributed=True,
- output_dir=output_folder,
- )
- if evaluator_type == "cityscapes_sem_seg":
- return CityscapesSemSegEvaluator(dataset_name)
- if len(evaluator_list) == 0:
- raise NotImplementedError(
- "no Evaluator for the dataset {} with the type {}".format(
- dataset_name, evaluator_type
- )
- )
- if len(evaluator_list) == 1:
- return evaluator_list[0]
- return DatasetEvaluators(evaluator_list)
-
- @classmethod
- def build_train_loader(cls, cfg):
- if "SemanticSegmentor" in cfg.MODEL.META_ARCHITECTURE:
- mapper = DatasetMapper(cfg, is_train=True, augmentations=build_sem_seg_train_aug(cfg))
- else:
- mapper = None
- return build_detection_train_loader(cfg, mapper=mapper)
-
- @classmethod
- def build_lr_scheduler(cls, cfg, optimizer):
- """
- It now calls :func:`detectron2.solver.build_lr_scheduler`.
- Overwrite it if you'd like a different scheduler.
- """
- return build_lr_scheduler(cfg, optimizer)
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- add_deeplab_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- if args.eval_only:
- model = Trainer.build_model(cfg)
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- res = Trainer.test(cfg, model)
- return res
-
- trainer = Trainer(cfg)
- trainer.resume_or_load(resume=args.resume)
- return trainer.train()
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/spaces/catontheturntable/Ghibli-Diffusion/README.md b/spaces/catontheturntable/Ghibli-Diffusion/README.md
deleted file mode 100644
index c0ea0069dd242b579faa9197e55fa1ffe7d77ec0..0000000000000000000000000000000000000000
--- a/spaces/catontheturntable/Ghibli-Diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ghibli Diffusion
-emoji: 🚀
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/Ghibli-Diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ceckenrode/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/README.md b/spaces/ceckenrode/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/README.md
deleted file mode 100644
index ee0ebeda09ed5698e3c821d7a761031c7163e404..0000000000000000000000000000000000000000
--- a/spaces/ceckenrode/Easy-Button-Zero-Shot-Text-Classifier-facebook-bart-large-mnli/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Easy Button Zero Shot Text Classifier Facebook Bart Large Mnli
-emoji: 👁
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cgunadi/CDSS_Demo/app.py b/spaces/cgunadi/CDSS_Demo/app.py
deleted file mode 100644
index e393a4fda1274a9f87d6d483a653d1b4353de7f4..0000000000000000000000000000000000000000
--- a/spaces/cgunadi/CDSS_Demo/app.py
+++ /dev/null
@@ -1,140 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Sat Sep 17 22:46:12 2022
-
-@author: conny
-"""
-
-import os
-os.environ['KMP_DUPLICATE_LIB_OK']='True'
-
-import plotly.express as px
-
-import streamlit as st
-from streamlit_option_menu import option_menu
-
-st.set_page_config(layout="wide")
-
-from transformers import pipeline
-
-import pandas as pd
-
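-# Cache the heavy transformer pipelines across Streamlit reruns;
-# allow_output_mutation stops Streamlit from hashing the returned (unhashable)
-# pipeline objects on every rerun.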
-@st.cache(allow_output_mutation=True)
-def init_text_summarization_model():
- MODEL = 'facebook/bart-large-cnn'
- pipe = pipeline("summarization", model=MODEL)
- return pipe
-
-@st.cache(allow_output_mutation=True)
-def init_zsl_topic_classification():
- MODEL = 'facebook/bart-large-mnli'
- pipe = pipeline("zero-shot-classification", model=MODEL)
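-    # NLI-based zero-shot classification: each candidate label is slotted into
-    # the hypothesis template and scored as an entailment hypothesis.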
- template = "This text is about {}."
- return pipe, template
-
-# Model initialization
-pipeline_summarization = init_text_summarization_model()
-pipeline_zsl, template = init_zsl_topic_classification()
-
-st.header('Customer Review Analysis')
-
-# Review text box
-default_review = \
- """I attempted to attend the Bank Alpha Eastbank branch last Friday to open a Kids Savings Account for my 3 year old son. I was informed by the Bank Alpha staffer that I was "too close to closing time" and that "I'd have to come back another time". It was about 20 minutes prior to close and there was no one else in the branch except the staff. I did say that I had my son's birth certificate and a Medicare card to establish his identity. Still, no dice. Come back later. No worries, can do.
- I returned today (Monday) to the same branch at 1130, with more than enough time for the account opening to occur. I confirmed with another Bank Alpha staffer that I had my son's birth certificate and Medicare card. However, he went out the back "just to check something". Upon coming back, I was informed that they would not be able to open the account for me today as they required my son, a 3 year old, to be present. The staffer on Friday failed to mention this to me. Equally, the Bank Alpha website for the Kids Savings Account does not list the physical presence of the child as a requirement for opening said account.
- I have never come across a bank so committed to not providing services to prospective customers as Bank Alpha. As a result, my son won't be banking with Bank Alpha, and I probably won't be recommending the use of Bank Alpha to any family or friends either."""
-review = st.text_area("Paste/write a review here..", value=default_review, height=250)
-
-tabs = option_menu(menu_title=None,
- options=[
- "Text Summarization",
- "Zero-Shot-Learning",
- ],
- default_index=0,
- orientation='horizontal'
- )
-
-### Text Summarization
-if tabs == 'Text Summarization':
- button = st.button('Summarize review')
-
- if button:
- # Text summarization inference
- with st.spinner("Summarizing review..."):
- summary_text = pipeline_summarization(review, max_length=130, min_length=30, do_sample=False)
-
- # Show output
- st.write(summary_text[0]['summary_text'])
-
-### Zero-Shot-Learning
-elif tabs == 'Zero-Shot-Learning':
- col_product, col_topic = st.columns(2)
-
- # Set product classes
- products = col_product.multiselect(
- label='Available Products and Services:',
- options=[
- 'Bank Account',
- 'Credit Card',
- 'Home Loan',
- 'Insurance',
- ],
- default=[
- 'Bank Account',
- 'Credit Card',
- 'Home Loan',
- 'Insurance',
- ]
- )
-    product_is_multi_label = col_product.checkbox("Can have more than one class", value=True)
-
- # Set topic classes
- topics = col_topic.multiselect(
- label="Possible Review Topics:",
- options=[
- "Excellent Customer Service",
- "Great Product Feature",
- "Poor Service",
- "Unclear Procedure",
- "Other"
- ],
- default=[
- "Excellent Customer Service",
- "Great Product Feature",
- "Poor Service",
- "Unclear Procedure",
- ]
- )
-    topic_is_multi_label = col_topic.checkbox("Can have more than one class", value=False)
-
- button = st.button('Classify')
-
- if button:
- # ZSL inference
- with st.spinner("Identifying product/service and classifying review..."):
- product_classification_output = pipeline_zsl(review, products, hypothesis_template=template, multi_label=product_is_multi_label)
- topic_classification_output = pipeline_zsl(review, topics, hypothesis_template=template, multi_label=topic_is_multi_label)
-
- # Show output
- col_output_product, col_output_topic = st.columns(2)
-
- data = {
- 'Product': product_classification_output['labels'],
- 'Scores': product_classification_output['scores']
- }
- df = pd.DataFrame(data)
- df = df.sort_values(by='Scores', ascending=True)
- fig = px.bar(df, x='Scores', y='Product', orientation='h')
- col_output_product.plotly_chart(fig, use_container_width=True)
-
- data = {
- 'Topic': topic_classification_output['labels'],
- 'Scores': topic_classification_output['scores']
- }
- df = pd.DataFrame(data)
- df = df.sort_values(by='Scores', ascending=True)
- fig = px.bar(df, x='Scores', y='Topic', orientation='h')
- col_output_topic.plotly_chart(fig, use_container_width=True)
-
-
\ No newline at end of file
diff --git a/spaces/chenyangqi/FateZero/FateZero/ckpt/download.sh b/spaces/chenyangqi/FateZero/FateZero/ckpt/download.sh
deleted file mode 100644
index d679f787e4b64653007e99a07c5b59cf2418288f..0000000000000000000000000000000000000000
--- a/spaces/chenyangqi/FateZero/FateZero/ckpt/download.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-# download from Hugging Face; takes about 20 GB of disk space
-git lfs install
-git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
diff --git a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/models/unet_3d_blocks.py b/spaces/chenyangqi/FateZero/FateZero/video_diffusion/models/unet_3d_blocks.py
deleted file mode 100644
index 9e6285cf1416c7e1be444cc0be4b4575c7eedb0b..0000000000000000000000000000000000000000
--- a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/models/unet_3d_blocks.py
+++ /dev/null
@@ -1,631 +0,0 @@
-# code mostly taken from https://github.com/huggingface/diffusers
-import torch
-from torch import nn
-
-from .attention import SpatioTemporalTransformerModel
-from .resnet import DownsamplePseudo3D, ResnetBlockPseudo3D, UpsamplePseudo3D
-
-
-def get_down_block(
- down_block_type,
- num_layers,
- in_channels,
- out_channels,
- temb_channels,
- add_downsample,
- resnet_eps,
- resnet_act_fn,
- attn_num_head_channels,
- resnet_groups=None,
- cross_attention_dim=None,
- downsample_padding=None,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- resnet_time_scale_shift="default",
- model_config: dict={}
-):
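-    # Accept block-type names with or without the legacy "UNetRes" prefix.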
- down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type
- if down_block_type == "DownBlockPseudo3D":
- return DownBlockPseudo3D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- downsample_padding=downsample_padding,
- resnet_time_scale_shift=resnet_time_scale_shift,
- model_config=model_config
- )
- elif down_block_type == "CrossAttnDownBlockPseudo3D":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlockPseudo3D")
- return CrossAttnDownBlockPseudo3D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- downsample_padding=downsample_padding,
- cross_attention_dim=cross_attention_dim,
- attn_num_head_channels=attn_num_head_channels,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- model_config=model_config
- )
- raise ValueError(f"{down_block_type} does not exist.")
-
-
-def get_up_block(
- up_block_type,
- num_layers,
- in_channels,
- out_channels,
- prev_output_channel,
- temb_channels,
- add_upsample,
- resnet_eps,
- resnet_act_fn,
- attn_num_head_channels,
- resnet_groups=None,
- cross_attention_dim=None,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- resnet_time_scale_shift="default",
- model_config: dict={}
-):
- up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type
- if up_block_type == "UpBlockPseudo3D":
- return UpBlockPseudo3D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- resnet_time_scale_shift=resnet_time_scale_shift,
- model_config=model_config
- )
- elif up_block_type == "CrossAttnUpBlockPseudo3D":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlockPseudo3D")
- return CrossAttnUpBlockPseudo3D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- resnet_groups=resnet_groups,
- cross_attention_dim=cross_attention_dim,
- attn_num_head_channels=attn_num_head_channels,
- dual_cross_attention=dual_cross_attention,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- resnet_time_scale_shift=resnet_time_scale_shift,
- model_config=model_config
- )
- raise ValueError(f"{up_block_type} does not exist.")
-
-
-class UNetMidBlockPseudo3DCrossAttn(nn.Module):
- def __init__(
- self,
- in_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- output_scale_factor=1.0,
- cross_attention_dim=1280,
- dual_cross_attention=False,
- use_linear_projection=False,
- upcast_attention=False,
- model_config: dict={}
- ):
- super().__init__()
-
- self.has_cross_attention = True
- self.attn_num_head_channels = attn_num_head_channels
- resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
-
- # there is always at least one resnet
- resnets = [
- ResnetBlockPseudo3D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- ]
- attentions = []
-
- for _ in range(num_layers):
- if dual_cross_attention:
- raise NotImplementedError
- attentions.append(
- SpatioTemporalTransformerModel(
- attn_num_head_channels,
- in_channels // attn_num_head_channels,
- in_channels=in_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- upcast_attention=upcast_attention,
- model_config=model_config
- )
- )
- resnets.append(
- ResnetBlockPseudo3D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None):
- # TODO(Patrick, William) - attention_mask is currently not used. Implement once used
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
- hidden_states = resnet(hidden_states, temb)
-
- return hidden_states
-
-
-class CrossAttnDownBlockPseudo3D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- cross_attention_dim=1280,
- output_scale_factor=1.0,
- downsample_padding=1,
- add_downsample=True,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- model_config: dict={}
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.has_cross_attention = True
- self.attn_num_head_channels = attn_num_head_channels
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlockPseudo3D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- )
- if dual_cross_attention:
- raise NotImplementedError
- attentions.append(
- SpatioTemporalTransformerModel(
- attn_num_head_channels,
- out_channels // attn_num_head_channels,
- in_channels=out_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- model_config=model_config
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- DownsamplePseudo3D(
- out_channels,
- use_conv=True,
- out_channels=out_channels,
- padding=downsample_padding,
- name="op",
- model_config=model_config
- )
- ]
- )
- else:
- self.downsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None):
- # TODO(Patrick, William) - attention mask is not used
- output_states = ()
-
- for resnet, attn in zip(self.resnets, self.attentions):
- if self.training and self.gradient_checkpointing:
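-                # Recompute activations during the backward pass instead of
-                # storing them, trading extra compute for lower memory use.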
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
-
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class DownBlockPseudo3D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_downsample=True,
- downsample_padding=1,
- model_config: dict={}
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlockPseudo3D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- DownsamplePseudo3D(
- out_channels,
- use_conv=True,
- out_channels=out_channels,
- padding=downsample_padding,
- name="op",
- model_config=model_config
- )
- ]
- )
- else:
- self.downsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(self, hidden_states, temb=None):
- output_states = ()
-
- for resnet in self.resnets:
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class CrossAttnUpBlockPseudo3D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- prev_output_channel: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- cross_attention_dim=1280,
- output_scale_factor=1.0,
- add_upsample=True,
- dual_cross_attention=False,
- use_linear_projection=False,
- only_cross_attention=False,
- upcast_attention=False,
- model_config: dict={},
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.has_cross_attention = True
- self.attn_num_head_channels = attn_num_head_channels
- self.model_config = model_config
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlockPseudo3D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- )
- if dual_cross_attention:
- raise NotImplementedError
- attentions.append(
- SpatioTemporalTransformerModel(
- attn_num_head_channels,
- out_channels // attn_num_head_channels,
- in_channels=out_channels,
- num_layers=1,
- cross_attention_dim=cross_attention_dim,
- norm_num_groups=resnet_groups,
- use_linear_projection=use_linear_projection,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
- model_config=model_config
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList(
- [UpsamplePseudo3D(out_channels, use_conv=True, out_channels=out_channels, model_config=model_config)]
- )
- else:
- self.upsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states,
- res_hidden_states_tuple,
- temb=None,
- encoder_hidden_states=None,
- upsample_size=None,
- attention_mask=None,
- ):
- # TODO(Patrick, William) - attention mask is not used
- for resnet, attn in zip(self.resnets, self.attentions):
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module, return_dict=None):
- def custom_forward(*inputs):
- if return_dict is not None:
- return module(*inputs, return_dict=return_dict)
- else:
- return module(*inputs)
-
- return custom_forward
-
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(attn, return_dict=False),
- hidden_states,
- encoder_hidden_states,
- )[0]
- else:
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
-
-
-class UpBlockPseudo3D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_upsample=True,
- model_config: dict={},
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlockPseudo3D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- model_config=model_config
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList(
- [UpsamplePseudo3D(out_channels, use_conv=True, out_channels=out_channels, model_config=model_config)]
- )
- else:
- self.upsamplers = None
-
- self.gradient_checkpointing = False
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- if self.training and self.gradient_checkpointing:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs)
-
- return custom_forward
-
- hidden_states = torch.utils.checkpoint.checkpoint(
- create_custom_forward(resnet), hidden_states, temb
- )
- else:
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states, upsample_size)
-
- return hidden_states
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/pdf2txt.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/pdf2txt.py
deleted file mode 100644
index f17b5db582c3b7824c1d885b786b7eefaa5672be..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/pdf2txt.py
+++ /dev/null
@@ -1,317 +0,0 @@
-#!/Users/chuan_hd/Documents/workspace/Others/ChatGPT/LawAssistantChatBot/.venv/bin/python
-"""A command line tool for extracting text and images from PDF and
-output it to plain text, html, xml or tags."""
-import argparse
-import logging
-import sys
-from typing import Any, Container, Iterable, List, Optional
-
-import pdfminer.high_level
-from pdfminer.layout import LAParams
-from pdfminer.utils import AnyIO
-
-logging.basicConfig()
-
-OUTPUT_TYPES = ((".htm", "html"), (".html", "html"), (".xml", "xml"), (".tag", "tag"))
-
-
-def float_or_disabled(x: str) -> Optional[float]:
- if x.lower().strip() == "disabled":
- return None
- try:
- return float(x)
- except ValueError:
- raise argparse.ArgumentTypeError("invalid float value: {}".format(x))
-
-
-def extract_text(
- files: Iterable[str] = [],
- outfile: str = "-",
- laparams: Optional[LAParams] = None,
- output_type: str = "text",
- codec: str = "utf-8",
- strip_control: bool = False,
- maxpages: int = 0,
- page_numbers: Optional[Container[int]] = None,
- password: str = "",
- scale: float = 1.0,
- rotation: int = 0,
- layoutmode: str = "normal",
- output_dir: Optional[str] = None,
- debug: bool = False,
- disable_caching: bool = False,
- **kwargs: Any
-) -> AnyIO:
- if not files:
- raise ValueError("Must provide files to work upon!")
-
- if output_type == "text" and outfile != "-":
- for override, alttype in OUTPUT_TYPES:
- if outfile.endswith(override):
- output_type = alttype
-
- if outfile == "-":
- outfp: AnyIO = sys.stdout
- if sys.stdout.encoding is not None:
- codec = "utf-8"
- else:
- outfp = open(outfile, "wb")
-
- for fname in files:
- with open(fname, "rb") as fp:
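-            # Hand every local name (parsed options included) to pdfminer;
-            # extract_text_to_fp consumes the keywords it understands and its
-            # **kwargs absorbs the rest.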
- pdfminer.high_level.extract_text_to_fp(fp, **locals())
- return outfp
-
-
-def create_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(description=__doc__, add_help=True)
- parser.add_argument(
- "files",
- type=str,
- default=None,
- nargs="+",
- help="One or more paths to PDF files.",
- )
-
- parser.add_argument(
- "--version",
- "-v",
- action="version",
- version="pdfminer.six v{}".format(pdfminer.__version__),
- )
- parser.add_argument(
- "--debug",
- "-d",
- default=False,
- action="store_true",
- help="Use debug logging level.",
- )
- parser.add_argument(
- "--disable-caching",
- "-C",
- default=False,
- action="store_true",
-        help="If caching of resources, such as fonts, should be disabled.",
- )
-
- parse_params = parser.add_argument_group(
- "Parser", description="Used during PDF parsing"
- )
- parse_params.add_argument(
- "--page-numbers",
- type=int,
- default=None,
- nargs="+",
-        help="A space-separated list of page numbers to parse.",
- )
- parse_params.add_argument(
- "--pagenos",
- "-p",
- type=str,
- help="A comma-separated list of page numbers to parse. "
-        "Included for legacy applications; use --page-numbers "
- "for more idiomatic argument entry.",
- )
- parse_params.add_argument(
- "--maxpages",
- "-m",
- type=int,
- default=0,
- help="The maximum number of pages to parse.",
- )
- parse_params.add_argument(
- "--password",
- "-P",
- type=str,
- default="",
- help="The password to use for decrypting PDF file.",
- )
- parse_params.add_argument(
- "--rotation",
- "-R",
- default=0,
- type=int,
- help="The number of degrees to rotate the PDF "
- "before other types of processing.",
- )
-
- la_params = LAParams() # will be used for defaults
- la_param_group = parser.add_argument_group(
- "Layout analysis", description="Used during layout analysis."
- )
- la_param_group.add_argument(
- "--no-laparams",
- "-n",
- default=False,
- action="store_true",
- help="If layout analysis parameters should be ignored.",
- )
- la_param_group.add_argument(
- "--detect-vertical",
- "-V",
- default=la_params.detect_vertical,
- action="store_true",
- help="If vertical text should be considered during layout analysis",
- )
- la_param_group.add_argument(
- "--line-overlap",
- type=float,
- default=la_params.line_overlap,
- help="If two characters have more overlap than this they "
- "are considered to be on the same line. The overlap is specified "
- "relative to the minimum height of both characters.",
- )
- la_param_group.add_argument(
- "--char-margin",
- "-M",
- type=float,
- default=la_params.char_margin,
- help="If two characters are closer together than this margin they "
- "are considered to be part of the same line. The margin is "
- "specified relative to the width of the character.",
- )
- la_param_group.add_argument(
- "--word-margin",
- "-W",
- type=float,
- default=la_params.word_margin,
- help="If two characters on the same line are further apart than this "
- "margin then they are considered to be two separate words, and "
- "an intermediate space will be added for readability. The margin "
- "is specified relative to the width of the character.",
- )
- la_param_group.add_argument(
- "--line-margin",
- "-L",
- type=float,
- default=la_params.line_margin,
- help="If two lines are close together they are considered to "
- "be part of the same paragraph. The margin is specified "
- "relative to the height of a line.",
- )
- la_param_group.add_argument(
- "--boxes-flow",
- "-F",
- type=float_or_disabled,
- default=la_params.boxes_flow,
- help="Specifies how much a horizontal and vertical position of a "
- "text matters when determining the order of lines. The value "
- "should be within the range of -1.0 (only horizontal position "
- "matters) to +1.0 (only vertical position matters). You can also "
- "pass `disabled` to disable advanced layout analysis, and "
- "instead return text based on the position of the bottom left "
- "corner of the text box.",
- )
- la_param_group.add_argument(
- "--all-texts",
- "-A",
- default=la_params.all_texts,
- action="store_true",
- help="If layout analysis should be performed on text in figures.",
- )
-
- output_params = parser.add_argument_group(
- "Output", description="Used during output generation."
- )
- output_params.add_argument(
- "--outfile",
- "-o",
- type=str,
- default="-",
- help="Path to file where output is written. "
- 'Or "-" (default) to write to stdout.',
- )
- output_params.add_argument(
- "--output_type",
- "-t",
- type=str,
- default="text",
- help="Type of output to generate {text,html,xml,tag}.",
- )
- output_params.add_argument(
- "--codec",
- "-c",
- type=str,
- default="utf-8",
- help="Text encoding to use in output file.",
- )
- output_params.add_argument(
- "--output-dir",
- "-O",
- default=None,
- help="The output directory to put extracted images in. If not given, "
- "images are not extracted.",
- )
- output_params.add_argument(
- "--layoutmode",
- "-Y",
- default="normal",
- type=str,
- help="Type of layout to use when generating html "
-        "{normal,exact,loose}. If normal, each line is"
- " positioned separately in the html. If exact"
- ", each character is positioned separately in"
- " the html. If loose, same result as normal "
- "but with an additional newline after each "
- "text line. Only used when output_type is html.",
- )
- output_params.add_argument(
- "--scale",
- "-s",
- type=float,
- default=1.0,
- help="The amount of zoom to use when generating html file. "
- "Only used when output_type is html.",
- )
- output_params.add_argument(
- "--strip-control",
- "-S",
- default=False,
- action="store_true",
-        help="Remove control statements from text. "
- "Only used when output_type is xml.",
- )
-
- return parser
-
-
-def parse_args(args: Optional[List[str]]) -> argparse.Namespace:
- parsed_args = create_parser().parse_args(args=args)
-
- # Propagate parsed layout parameters to LAParams object
- if parsed_args.no_laparams:
- parsed_args.laparams = None
- else:
- parsed_args.laparams = LAParams(
- line_overlap=parsed_args.line_overlap,
- char_margin=parsed_args.char_margin,
- line_margin=parsed_args.line_margin,
- word_margin=parsed_args.word_margin,
- boxes_flow=parsed_args.boxes_flow,
- detect_vertical=parsed_args.detect_vertical,
- all_texts=parsed_args.all_texts,
- )
-
- if parsed_args.page_numbers:
- parsed_args.page_numbers = {x - 1 for x in parsed_args.page_numbers}
-
- if parsed_args.pagenos:
- parsed_args.page_numbers = {int(x) - 1 for x in parsed_args.pagenos.split(",")}
-
- if parsed_args.output_type == "text" and parsed_args.outfile != "-":
- for override, alttype in OUTPUT_TYPES:
- if parsed_args.outfile.endswith(override):
- parsed_args.output_type = alttype
-
- return parsed_args
-
-
-def main(args: Optional[List[str]] = None) -> int:
- parsed_args = parse_args(args)
- outfp = extract_text(**vars(parsed_args))
- outfp.close()
- return 0
-
-
-if __name__ == "__main__":
- sys.exit(main())
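# Illustration (not part of the original file): a minimal sketch of how the
# parser above resolves arguments; "example.pdf" is a hypothetical path.
args = parse_args(["example.pdf", "--page-numbers", "1", "3", "-t", "html"])
print(args.page_numbers)  # {0, 2} - 1-based CLI pages become 0-based indices
print(args.laparams)      # LAParams populated from the layout-analysis defaults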
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql
deleted file mode 100644
index 01e4b222af541efb9022d2eeb69e39239faecb34..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/migrations/00002-migration-2.sqlite.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-CREATE TABLE table2 (
- name TEXT PRIMARY KEY
-);
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/npconv.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/npconv.py
deleted file mode 100644
index df99550d348a89dd4086050358591ac94ad50467..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/npconv.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from clickhouse_connect.driver.options import np
-
-from clickhouse_connect.driver.types import ByteSource
-
-
-def read_numpy_array(source: ByteSource, np_type: str, num_rows: int):
-    # Interpret the next itemsize * num_rows bytes of the source directly as
-    # a NumPy array of the requested dtype, without an intermediate copy.
-    dtype = np.dtype(np_type)
-    buffer = source.read_bytes(dtype.itemsize * num_rows)
-    return np.frombuffer(buffer, dtype, num_rows)
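# Illustration (not part of the original module): a rough sketch of the
# zero-copy read that read_numpy_array performs; FakeSource is a hypothetical
# stand-in for the driver's ByteSource protocol.
import numpy as np

class FakeSource:
    def __init__(self, data: bytes):
        self._data, self._pos = data, 0

    def read_bytes(self, n: int) -> bytes:
        chunk = self._data[self._pos:self._pos + n]
        self._pos += n
        return chunk

src = FakeSource(np.arange(4, dtype="<i4").tobytes())
print(read_numpy_array(src, "<i4", 4))  # [0 1 2 3], read without an extra copy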
diff --git a/spaces/cihyFjudo/fairness-paper-search/Forza Motorsport 4 2011 PC Windows Full Game Cracked RAR Password The Complete Review and Rating.md b/spaces/cihyFjudo/fairness-paper-search/Forza Motorsport 4 2011 PC Windows Full Game Cracked RAR Password The Complete Review and Rating.md
deleted file mode 100644
index 6d9117ba405b0b4a7e3dcc18cd11410379a8d0cd..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Forza Motorsport 4 2011 PC Windows Full Game Cracked RAR Password The Complete Review and Rating.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Girls Pilot Script Pdf Lena Dunham A Comedy About the Experiences of a Group of Girls in Their Early 20s.md b/spaces/cihyFjudo/fairness-paper-search/Girls Pilot Script Pdf Lena Dunham A Comedy About the Experiences of a Group of Girls in Their Early 20s.md
deleted file mode 100644
index 56ed93199d8d2038ac3961ffdba9deef5b58e698..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Girls Pilot Script Pdf Lena Dunham A Comedy About the Experiences of a Group of Girls in Their Early 20s.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Complete and accurate reporting of programme preparation, implementation and evaluation processes in the field of sexual and reproductive health (SRH) is essential to understand the impact of SRH programmes, as well as to guide their replication and scale-up. To provide an overview of existing reporting tools and identify core items used in programme reporting with a focus on programme preparation, implementation and evaluation processes. A systematic review was completed for the period 2000-2014. Reporting guidelines, checklists and tools, irrespective of study design, applicable for reporting on programmes targeting SRH outcomes, were included. Two independent reviewers screened the title and abstract of all records. Full texts were assessed in duplicate, followed by data extraction on the focus, content area, year of publication, validation and description of reporting items. Data was synthesized using an iterative thematic approach, where items related to programme preparation, implementation and evaluation in each tool were extracted and aggregated into a consolidated list. Out of the 3,656 records screened for title and abstracts, full texts were retrieved for 182 articles, out of which 108 were excluded. Seventy-four full text articles corresponding to 45 reporting tools were retained for synthesis. The majority of tools were developed for reporting on intervention research (n = 15), randomized controlled trials (n = 8) and systematic reviews (n = 7). We identified a total of 50 reporting items, across three main domains and corresponding sub-domains: programme preparation (objective/focus, design, piloting); programme implementation (content, timing/duration/location, providers/staff, participants, delivery, implementation outcomes), and programme evaluation (process evaluation, implementation barriers/facilitators, outcome/impact evaluation). Over the past decade a wide range of tools have been developed to improve the reporting of health research
"
-
- index_list = []
- dir_index = filepath.iterdir()
- for _file in sorted(dir_index):
- # show file url as relative to static path
- rel_path = _file.relative_to(self._directory).as_posix()
- file_url = self._prefix + "/" + rel_path
-
- # if file is a directory, add '/' to the end of the name
- if _file.is_dir():
- file_name = f"{_file.name}/"
- else:
- file_name = _file.name
-
-            index_list.append(
-                '<li><a href="{url}">{name}</a></li>'.format(
-                    url=file_url, name=file_name
-                )
-            )
-        ul = "<ul>\n{}\n</ul>".format("\n".join(index_list))
-        body = f"<body>\n{h1}\n{ul}\n</body>"
-
-        head_str = f"<head>\n<title>{index_of}</title>\n</head>"
-        html = f"<html>\n{head_str}\n{body}\n</html>"
-
- return html
-
- def __repr__(self) -> str:
- name = "'" + self.name + "'" if self.name is not None else ""
-        return "<StaticResource {name} {path} -> {directory!r}>".format(
- name=name, path=self._prefix, directory=self._directory
- )
-
-
-class PrefixedSubAppResource(PrefixResource):
- def __init__(self, prefix: str, app: "Application") -> None:
- super().__init__(prefix)
- self._app = app
- for resource in app.router.resources():
- resource.add_prefix(prefix)
-
- def add_prefix(self, prefix: str) -> None:
- super().add_prefix(prefix)
- for resource in self._app.router.resources():
- resource.add_prefix(prefix)
-
- def url_for(self, *args: str, **kwargs: str) -> URL:
- raise RuntimeError(".url_for() is not supported " "by sub-application root")
-
- def get_info(self) -> _InfoDict:
- return {"app": self._app, "prefix": self._prefix}
-
- async def resolve(self, request: Request) -> _Resolve:
- if (
- not request.url.raw_path.startswith(self._prefix2)
- and request.url.raw_path != self._prefix
- ):
- return None, set()
- match_info = await self._app.router.resolve(request)
- match_info.add_app(self._app)
- if isinstance(match_info.http_exception, HTTPMethodNotAllowed):
- methods = match_info.http_exception.allowed_methods
- else:
- methods = set()
- return match_info, methods
-
- def __len__(self) -> int:
- return len(self._app.router.routes())
-
- def __iter__(self) -> Iterator[AbstractRoute]:
- return iter(self._app.router.routes())
-
- def __repr__(self) -> str:
-        return "<PrefixedSubAppResource {prefix} -> {app!r}>".format(
- prefix=self._prefix, app=self._app
- )
-
-
-class AbstractRuleMatching(abc.ABC):
- @abc.abstractmethod # pragma: no branch
- async def match(self, request: Request) -> bool:
- """Return bool if the request satisfies the criteria"""
-
- @abc.abstractmethod # pragma: no branch
- def get_info(self) -> _InfoDict:
- """Return a dict with additional info useful for introspection"""
-
- @property
- @abc.abstractmethod # pragma: no branch
- def canonical(self) -> str:
- """Return a str"""
-
-
-class Domain(AbstractRuleMatching):
-    re_part = re.compile(r"(?!-)[a-z\d-]{1,63}(?<!-)")
-
-    def __init__(self, domain: str) -> None:
- super().__init__()
- self._domain = self.validation(domain)
-
- @property
- def canonical(self) -> str:
- return self._domain
-
- def validation(self, domain: str) -> str:
- if not isinstance(domain, str):
- raise TypeError("Domain must be str")
- domain = domain.rstrip(".").lower()
- if not domain:
- raise ValueError("Domain cannot be empty")
- elif "://" in domain:
- raise ValueError("Scheme not supported")
- url = URL("http://" + domain)
- assert url.raw_host is not None
- if not all(self.re_part.fullmatch(x) for x in url.raw_host.split(".")):
- raise ValueError("Domain not valid")
- if url.port == 80:
- return url.raw_host
- return f"{url.raw_host}:{url.port}"
-
- async def match(self, request: Request) -> bool:
- host = request.headers.get(hdrs.HOST)
- if not host:
- return False
- return self.match_domain(host)
-
- def match_domain(self, host: str) -> bool:
- return host.lower() == self._domain
-
- def get_info(self) -> _InfoDict:
- return {"domain": self._domain}
-
-
-class MaskDomain(Domain):
-    re_part = re.compile(r"(?!-)[a-z\d\*-]{1,63}(?<!-)")
-
-    def __init__(self, domain: str) -> None:
- super().__init__(domain)
- mask = self._domain.replace(".", r"\.").replace("*", ".*")
- self._mask = re.compile(mask)
-
- @property
- def canonical(self) -> str:
- return self._mask.pattern
-
- def match_domain(self, host: str) -> bool:
- return self._mask.fullmatch(host) is not None
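# Illustration (not part of the original module): how the two matchers above
# behave, assuming Domain and MaskDomain are imported from
# aiohttp.web_urldispatcher.
assert Domain("example.com").match_domain("EXAMPLE.com")            # host is lowercased
assert MaskDomain("*.example.com").match_domain("api.example.com")
assert not MaskDomain("*.example.com").match_domain("example.com")  # needs a subdomain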
-
-
-class MatchedSubAppResource(PrefixedSubAppResource):
- def __init__(self, rule: AbstractRuleMatching, app: "Application") -> None:
- AbstractResource.__init__(self)
- self._prefix = ""
- self._app = app
- self._rule = rule
-
- @property
- def canonical(self) -> str:
- return self._rule.canonical
-
- def get_info(self) -> _InfoDict:
- return {"app": self._app, "rule": self._rule}
-
- async def resolve(self, request: Request) -> _Resolve:
- if not await self._rule.match(request):
- return None, set()
- match_info = await self._app.router.resolve(request)
- match_info.add_app(self._app)
- if isinstance(match_info.http_exception, HTTPMethodNotAllowed):
- methods = match_info.http_exception.allowed_methods
- else:
- methods = set()
- return match_info, methods
-
- def __repr__(self) -> str:
-        return "<MatchedSubAppResource -> {app!r}>" "".format(app=self._app)
-
-
-class ResourceRoute(AbstractRoute):
- """A route with resource"""
-
- def __init__(
- self,
- method: str,
- handler: Union[Handler, Type[AbstractView]],
- resource: AbstractResource,
- *,
- expect_handler: Optional[_ExpectHandler] = None,
- ) -> None:
- super().__init__(
- method, handler, expect_handler=expect_handler, resource=resource
- )
-
- def __repr__(self) -> str:
-        return "<ResourceRoute [{method}] {resource} -> {handler!r}".format(
- method=self.method, resource=self._resource, handler=self.handler
- )
-
- @property
- def name(self) -> Optional[str]:
- if self._resource is None:
- return None
- return self._resource.name
-
- def url_for(self, *args: str, **kwargs: str) -> URL:
- """Construct url for route with additional params."""
- assert self._resource is not None
- return self._resource.url_for(*args, **kwargs)
-
- def get_info(self) -> _InfoDict:
- assert self._resource is not None
- return self._resource.get_info()
-
-
-class SystemRoute(AbstractRoute):
- def __init__(self, http_exception: HTTPException) -> None:
- super().__init__(hdrs.METH_ANY, self._handle)
- self._http_exception = http_exception
-
- def url_for(self, *args: str, **kwargs: str) -> URL:
- raise RuntimeError(".url_for() is not allowed for SystemRoute")
-
- @property
- def name(self) -> Optional[str]:
- return None
-
- def get_info(self) -> _InfoDict:
- return {"http_exception": self._http_exception}
-
- async def _handle(self, request: Request) -> StreamResponse:
- raise self._http_exception
-
- @property
- def status(self) -> int:
- return self._http_exception.status
-
- @property
- def reason(self) -> str:
- return self._http_exception.reason
-
- def __repr__(self) -> str:
-        return "<SystemRoute {self.status}: {self.reason}>".format(self=self)
-
-
-class View(AbstractView):
- async def _iter(self) -> StreamResponse:
- if self.request.method not in hdrs.METH_ALL:
- self._raise_allowed_methods()
- method: Callable[[], Awaitable[StreamResponse]] = getattr(
- self, self.request.method.lower(), None
- )
- if method is None:
- self._raise_allowed_methods()
- resp = await method()
- return resp
-
- def __await__(self) -> Generator[Any, None, StreamResponse]:
- return self._iter().__await__()
-
- def _raise_allowed_methods(self) -> None:
- allowed_methods = {m for m in hdrs.METH_ALL if hasattr(self, m.lower())}
- raise HTTPMethodNotAllowed(self.request.method, allowed_methods)
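# Illustration (not part of the original module): a class-based view wired
# through add_view(); assumes aiohttp's public "web" namespace.
from aiohttp import web

class HelloView(web.View):
    async def get(self) -> web.StreamResponse:
        return web.Response(text=f"Hello, {self.request.match_info['name']}")
    # Verbs without a matching lower-case method here go through
    # _raise_allowed_methods() and produce a 405 response.

app = web.Application()
app.router.add_view("/{name}", HelloView)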
-
-
-class ResourcesView(Sized, Iterable[AbstractResource], Container[AbstractResource]):
- def __init__(self, resources: List[AbstractResource]) -> None:
- self._resources = resources
-
- def __len__(self) -> int:
- return len(self._resources)
-
- def __iter__(self) -> Iterator[AbstractResource]:
- yield from self._resources
-
- def __contains__(self, resource: object) -> bool:
- return resource in self._resources
-
-
-class RoutesView(Sized, Iterable[AbstractRoute], Container[AbstractRoute]):
- def __init__(self, resources: List[AbstractResource]):
- self._routes: List[AbstractRoute] = []
- for resource in resources:
- for route in resource:
- self._routes.append(route)
-
- def __len__(self) -> int:
- return len(self._routes)
-
- def __iter__(self) -> Iterator[AbstractRoute]:
- yield from self._routes
-
- def __contains__(self, route: object) -> bool:
- return route in self._routes
-
-
-class UrlDispatcher(AbstractRouter, Mapping[str, AbstractResource]):
-
- NAME_SPLIT_RE = re.compile(r"[.:-]")
-
- def __init__(self) -> None:
- super().__init__()
- self._resources: List[AbstractResource] = []
- self._named_resources: Dict[str, AbstractResource] = {}
-
- async def resolve(self, request: Request) -> UrlMappingMatchInfo:
- method = request.method
- allowed_methods: Set[str] = set()
-
- for resource in self._resources:
- match_dict, allowed = await resource.resolve(request)
- if match_dict is not None:
- return match_dict
- else:
- allowed_methods |= allowed
-
- if allowed_methods:
- return MatchInfoError(HTTPMethodNotAllowed(method, allowed_methods))
- else:
- return MatchInfoError(HTTPNotFound())
-
- def __iter__(self) -> Iterator[str]:
- return iter(self._named_resources)
-
- def __len__(self) -> int:
- return len(self._named_resources)
-
- def __contains__(self, resource: object) -> bool:
- return resource in self._named_resources
-
- def __getitem__(self, name: str) -> AbstractResource:
- return self._named_resources[name]
-
- def resources(self) -> ResourcesView:
- return ResourcesView(self._resources)
-
- def routes(self) -> RoutesView:
- return RoutesView(self._resources)
-
- def named_resources(self) -> Mapping[str, AbstractResource]:
- return MappingProxyType(self._named_resources)
-
- def register_resource(self, resource: AbstractResource) -> None:
- assert isinstance(
- resource, AbstractResource
- ), f"Instance of AbstractResource class is required, got {resource!r}"
- if self.frozen:
- raise RuntimeError("Cannot register a resource into frozen router.")
-
- name = resource.name
-
- if name is not None:
- parts = self.NAME_SPLIT_RE.split(name)
- for part in parts:
- if keyword.iskeyword(part):
- raise ValueError(
- f"Incorrect route name {name!r}, "
- "python keywords cannot be used "
- "for route name"
- )
- if not part.isidentifier():
- raise ValueError(
- "Incorrect route name {!r}, "
- "the name should be a sequence of "
- "python identifiers separated "
-                        "by dash, dot or colon".format(name)
- )
- if name in self._named_resources:
- raise ValueError(
- "Duplicate {!r}, "
- "already handled by {!r}".format(name, self._named_resources[name])
- )
- self._named_resources[name] = resource
- self._resources.append(resource)
-
- def add_resource(self, path: str, *, name: Optional[str] = None) -> Resource:
- if path and not path.startswith("/"):
- raise ValueError("path should be started with / or be empty")
- # Reuse last added resource if path and name are the same
- if self._resources:
- resource = self._resources[-1]
- if resource.name == name and resource.raw_match(path):
- return cast(Resource, resource)
- if not ("{" in path or "}" in path or ROUTE_RE.search(path)):
- resource = PlainResource(_requote_path(path), name=name)
- self.register_resource(resource)
- return resource
- resource = DynamicResource(path, name=name)
- self.register_resource(resource)
- return resource
-
- def add_route(
- self,
- method: str,
- path: str,
- handler: Union[Handler, Type[AbstractView]],
- *,
- name: Optional[str] = None,
- expect_handler: Optional[_ExpectHandler] = None,
- ) -> AbstractRoute:
- resource = self.add_resource(path, name=name)
- return resource.add_route(method, handler, expect_handler=expect_handler)
-
- def add_static(
- self,
- prefix: str,
- path: PathLike,
- *,
- name: Optional[str] = None,
- expect_handler: Optional[_ExpectHandler] = None,
- chunk_size: int = 256 * 1024,
- show_index: bool = False,
- follow_symlinks: bool = False,
- append_version: bool = False,
- ) -> AbstractResource:
- """Add static files view.
-
- prefix - url prefix
- path - folder with files
-
- """
- assert prefix.startswith("/")
- if prefix.endswith("/"):
- prefix = prefix[:-1]
- resource = StaticResource(
- prefix,
- path,
- name=name,
- expect_handler=expect_handler,
- chunk_size=chunk_size,
- show_index=show_index,
- follow_symlinks=follow_symlinks,
- append_version=append_version,
- )
- self.register_resource(resource)
- return resource
-
- def add_head(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method HEAD."""
- return self.add_route(hdrs.METH_HEAD, path, handler, **kwargs)
-
- def add_options(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method OPTIONS."""
- return self.add_route(hdrs.METH_OPTIONS, path, handler, **kwargs)
-
- def add_get(
- self,
- path: str,
- handler: Handler,
- *,
- name: Optional[str] = None,
- allow_head: bool = True,
- **kwargs: Any,
- ) -> AbstractRoute:
- """Shortcut for add_route with method GET.
-
- If allow_head is true, another
- route is added allowing head requests to the same endpoint.
- """
- resource = self.add_resource(path, name=name)
- if allow_head:
- resource.add_route(hdrs.METH_HEAD, handler, **kwargs)
- return resource.add_route(hdrs.METH_GET, handler, **kwargs)
-
- def add_post(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method POST."""
- return self.add_route(hdrs.METH_POST, path, handler, **kwargs)
-
- def add_put(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method PUT."""
- return self.add_route(hdrs.METH_PUT, path, handler, **kwargs)
-
- def add_patch(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method PATCH."""
- return self.add_route(hdrs.METH_PATCH, path, handler, **kwargs)
-
- def add_delete(self, path: str, handler: Handler, **kwargs: Any) -> AbstractRoute:
- """Shortcut for add_route with method DELETE."""
- return self.add_route(hdrs.METH_DELETE, path, handler, **kwargs)
-
- def add_view(
- self, path: str, handler: Type[AbstractView], **kwargs: Any
- ) -> AbstractRoute:
- """Shortcut for add_route with ANY methods for a class-based view."""
- return self.add_route(hdrs.METH_ANY, path, handler, **kwargs)
-
- def freeze(self) -> None:
- super().freeze()
- for resource in self._resources:
- resource.freeze()
-
- def add_routes(self, routes: Iterable[AbstractRouteDef]) -> List[AbstractRoute]:
- """Append routes to route table.
-
- Parameter should be a sequence of RouteDef objects.
-
- Returns a list of registered AbstractRoute instances.
- """
- registered_routes = []
- for route_def in routes:
- registered_routes.extend(route_def.register(self))
- return registered_routes
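# Illustration (not part of the original module): registration and reverse-URL
# lookup through the dispatcher; assumes aiohttp's public "web" namespace.
from aiohttp import web

async def get_item(request: web.Request) -> web.Response:
    return web.Response(text=request.match_info["id"])

app = web.Application()
app.router.add_get("/items/{id}", get_item, name="item")
print(app.router["item"].url_for(id="42"))  # /items/42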
-
-
-def _quote_path(value: str) -> str:
- if YARL_VERSION < (1, 6):
- value = value.replace("%", "%25")
- return URL.build(path=value, encoded=False).raw_path
-
-
-def _unquote_path(value: str) -> str:
- return URL.build(path=value, encoded=True).path
-
-
-def _requote_path(value: str) -> str:
- # Quote non-ascii characters and other characters which must be quoted,
- # but preserve existing %-sequences.
- result = _quote_path(value)
- if "%" in value:
- result = result.replace("%25", "%")
- return result
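# Illustration (not part of the original module): the quoting helpers above
# round-trip percent-encoding (exact behaviour depends on the yarl version).
print(_quote_path("/static/a b"))      # /static/a%20b
print(_unquote_path("/static/a%20b"))  # /static/a b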
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/pointInsidePen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/pointInsidePen.py
deleted file mode 100644
index 8a579ae4c93f824b5ce3a5e80097aeffd5f5933d..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/pointInsidePen.py
+++ /dev/null
@@ -1,192 +0,0 @@
-"""fontTools.pens.pointInsidePen -- Pen implementing "point inside" testing
-for shapes.
-"""
-
-from fontTools.pens.basePen import BasePen
-from fontTools.misc.bezierTools import solveQuadratic, solveCubic
-
-
-__all__ = ["PointInsidePen"]
-
-
-class PointInsidePen(BasePen):
-
- """This pen implements "point inside" testing: to test whether
- a given point lies inside the shape (black) or outside (white).
- Instances of this class can be recycled, as long as the
- setTestPoint() method is used to set the new point to test.
-
- Typical usage:
-
- pen = PointInsidePen(glyphSet, (100, 200))
- outline.draw(pen)
- isInside = pen.getResult()
-
- Both the even-odd algorithm and the non-zero-winding-rule
- algorithm are implemented. The latter is the default, specify
- True for the evenOdd argument of __init__ or setTestPoint
- to use the even-odd algorithm.
- """
-
- # This class implements the classical "shoot a ray from the test point
- # to infinity and count how many times it intersects the outline" (as well
- # as the non-zero variant, where the counter is incremented if the outline
- # intersects the ray in one direction and decremented if it intersects in
- # the other direction).
- # I found an amazingly clear explanation of the subtleties involved in
- # implementing this correctly for polygons here:
- # http://graphics.cs.ucdavis.edu/~okreylos/TAship/Spring2000/PointInPolygon.html
- # I extended the principles outlined on that page to curves.
-
- def __init__(self, glyphSet, testPoint, evenOdd=False):
- BasePen.__init__(self, glyphSet)
- self.setTestPoint(testPoint, evenOdd)
-
- def setTestPoint(self, testPoint, evenOdd=False):
- """Set the point to test. Call this _before_ the outline gets drawn."""
- self.testPoint = testPoint
- self.evenOdd = evenOdd
- self.firstPoint = None
- self.intersectionCount = 0
-
- def getWinding(self):
- if self.firstPoint is not None:
- # always make sure the sub paths are closed; the algorithm only works
- # for closed paths.
- self.closePath()
- return self.intersectionCount
-
- def getResult(self):
- """After the shape has been drawn, getResult() returns True if the test
- point lies within the (black) shape, and False if it doesn't.
- """
- winding = self.getWinding()
- if self.evenOdd:
- result = winding % 2
- else: # non-zero
- result = self.intersectionCount != 0
- return not not result
-
- def _addIntersection(self, goingUp):
- if self.evenOdd or goingUp:
- self.intersectionCount += 1
- else:
- self.intersectionCount -= 1
-
- def _moveTo(self, point):
- if self.firstPoint is not None:
- # always make sure the sub paths are closed; the algorithm only works
- # for closed paths.
- self.closePath()
- self.firstPoint = point
-
- def _lineTo(self, point):
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = point
-
- if x1 < x and x2 < x:
- return
- if y1 < y and y2 < y:
- return
- if y1 >= y and y2 >= y:
- return
-
- dx = x2 - x1
- dy = y2 - y1
- t = (y - y1) / dy
- ix = dx * t + x1
- if ix < x:
- return
- self._addIntersection(y2 > y1)
-
- def _curveToOne(self, bcp1, bcp2, point):
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = bcp1
- x3, y3 = bcp2
- x4, y4 = point
-
- if x1 < x and x2 < x and x3 < x and x4 < x:
- return
- if y1 < y and y2 < y and y3 < y and y4 < y:
- return
- if y1 >= y and y2 >= y and y3 >= y and y4 >= y:
- return
-
- dy = y1
- cy = (y2 - dy) * 3.0
- by = (y3 - y2) * 3.0 - cy
- ay = y4 - dy - cy - by
- solutions = sorted(solveCubic(ay, by, cy, dy - y))
- solutions = [t for t in solutions if -0.0 <= t <= 1.0]
- if not solutions:
- return
-
- dx = x1
- cx = (x2 - dx) * 3.0
- bx = (x3 - x2) * 3.0 - cx
- ax = x4 - dx - cx - bx
-
- above = y1 >= y
- lastT = None
- for t in solutions:
- if t == lastT:
- continue
- lastT = t
- t2 = t * t
- t3 = t2 * t
-
- direction = 3 * ay * t2 + 2 * by * t + cy
- incomingGoingUp = outgoingGoingUp = direction > 0.0
- if direction == 0.0:
- direction = 6 * ay * t + 2 * by
- outgoingGoingUp = direction > 0.0
- incomingGoingUp = not outgoingGoingUp
- if direction == 0.0:
- direction = ay
- incomingGoingUp = outgoingGoingUp = direction > 0.0
-
- xt = ax * t3 + bx * t2 + cx * t + dx
- if xt < x:
- continue
-
- if t in (0.0, -0.0):
- if not outgoingGoingUp:
- self._addIntersection(outgoingGoingUp)
- elif t == 1.0:
- if incomingGoingUp:
- self._addIntersection(incomingGoingUp)
- else:
- if incomingGoingUp == outgoingGoingUp:
- self._addIntersection(outgoingGoingUp)
- # else:
- # we're not really intersecting, merely touching
-
- def _qCurveToOne_unfinished(self, bcp, point):
- # XXX need to finish this, for now doing it through a cubic
- # (BasePen implements _qCurveTo in terms of a cubic) will
- # have to do.
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = bcp
- x3, y3 = point
- c = y1
- b = (y2 - c) * 2.0
- a = y3 - c - b
- solutions = sorted(solveQuadratic(a, b, c - y))
- solutions = [
- t for t in solutions if ZERO_MINUS_EPSILON <= t <= ONE_PLUS_EPSILON
- ]
- if not solutions:
- return
- # XXX
-
- def _closePath(self):
- if self._getCurrentPoint() != self.firstPoint:
- self.lineTo(self.firstPoint)
- self.firstPoint = None
-
- def _endPath(self):
- """Insideness is not defined for open contours."""
- raise NotImplementedError
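# Illustration (not part of the original module): test a point against a
# plain square; glyphSet may be None when the outline has no components.
if __name__ == "__main__":
    pen = PointInsidePen(None, (50, 50))
    pen.moveTo((0, 0))
    pen.lineTo((100, 0))
    pen.lineTo((100, 100))
    pen.lineTo((0, 100))
    pen.closePath()
    print(pen.getResult())  # True - (50, 50) lies inside the square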
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avcodec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avcodec.h
deleted file mode 100644
index 1e91b9cb532de6cfd4b803f4a6ecb67e154686a0..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avcodec.h
+++ /dev/null
@@ -1,3242 +0,0 @@
-/*
- * copyright (c) 2001 Fabrice Bellard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_AVCODEC_H
-#define AVCODEC_AVCODEC_H
-
-/**
- * @file
- * @ingroup libavc
- * Libavcodec external API header
- */
-
-#include "libavutil/samplefmt.h"
-#include "libavutil/attributes.h"
-#include "libavutil/avutil.h"
-#include "libavutil/buffer.h"
-#include "libavutil/dict.h"
-#include "libavutil/frame.h"
-#include "libavutil/log.h"
-#include "libavutil/pixfmt.h"
-#include "libavutil/rational.h"
-
-#include "codec.h"
-#include "codec_desc.h"
-#include "codec_par.h"
-#include "codec_id.h"
-#include "defs.h"
-#include "packet.h"
-#include "version_major.h"
-#ifndef HAVE_AV_CONFIG_H
-/* When included as part of the ffmpeg build, only include the major version
- * to avoid unnecessary rebuilds. When included externally, keep including
- * the full version information. */
-#include "version.h"
-#endif
-
-/**
- * @defgroup libavc libavcodec
- * Encoding/Decoding Library
- *
- * @{
- *
- * @defgroup lavc_decoding Decoding
- * @{
- * @}
- *
- * @defgroup lavc_encoding Encoding
- * @{
- * @}
- *
- * @defgroup lavc_codec Codecs
- * @{
- * @defgroup lavc_codec_native Native Codecs
- * @{
- * @}
- * @defgroup lavc_codec_wrappers External library wrappers
- * @{
- * @}
- * @defgroup lavc_codec_hwaccel Hardware Accelerators bridge
- * @{
- * @}
- * @}
- * @defgroup lavc_internal Internal
- * @{
- * @}
- * @}
- */
-
-/**
- * @ingroup libavc
- * @defgroup lavc_encdec send/receive encoding and decoding API overview
- * @{
- *
- * The avcodec_send_packet()/avcodec_receive_frame()/avcodec_send_frame()/
- * avcodec_receive_packet() functions provide an encode/decode API, which
- * decouples input and output.
- *
- * The API is very similar for encoding/decoding and audio/video, and works as
- * follows:
- * - Set up and open the AVCodecContext as usual.
- * - Send valid input:
- * - For decoding, call avcodec_send_packet() to give the decoder raw
- * compressed data in an AVPacket.
- * - For encoding, call avcodec_send_frame() to give the encoder an AVFrame
- * containing uncompressed audio or video.
- *
- * In both cases, it is recommended that AVPackets and AVFrames are
- * refcounted, or libavcodec might have to copy the input data. (libavformat
- * always returns refcounted AVPackets, and av_frame_get_buffer() allocates
- * refcounted AVFrames.)
- * - Receive output in a loop. Periodically call one of the avcodec_receive_*()
- * functions and process their output:
- * - For decoding, call avcodec_receive_frame(). On success, it will return
- * an AVFrame containing uncompressed audio or video data.
- * - For encoding, call avcodec_receive_packet(). On success, it will return
- * an AVPacket with a compressed frame.
- *
- * Repeat this call until it returns AVERROR(EAGAIN) or an error. The
- * AVERROR(EAGAIN) return value means that new input data is required to
- * return new output. In this case, continue with sending input. For each
- * input frame/packet, the codec will typically return 1 output frame/packet,
- * but it can also be 0 or more than 1.
- *
- * At the beginning of decoding or encoding, the codec might accept multiple
- * input frames/packets without returning a frame, until its internal buffers
- * are filled. This situation is handled transparently if you follow the steps
- * outlined above.
- *
- * In theory, sending input can result in EAGAIN - this should happen only if
- * not all output was received. You can use this to structure alternative decode
- * or encode loops other than the one suggested above. For example, you could
- * try sending new input on each iteration, and try to receive output if that
- * returns EAGAIN.
- *
- * End of stream situations. These require "flushing" (aka draining) the codec,
- * as the codec might buffer multiple frames or packets internally for
- * performance or out of necessity (consider B-frames).
- * This is handled as follows:
- * - Instead of valid input, send NULL to the avcodec_send_packet() (decoding)
- * or avcodec_send_frame() (encoding) functions. This will enter draining
- * mode.
- * - Call avcodec_receive_frame() (decoding) or avcodec_receive_packet()
- * (encoding) in a loop until AVERROR_EOF is returned. The functions will
- * not return AVERROR(EAGAIN), unless you forgot to enter draining mode.
- * - Before decoding can be resumed again, the codec has to be reset with
- * avcodec_flush_buffers().
- *
- * Using the API as outlined above is highly recommended. But it is also
- * possible to call functions outside of this rigid schema. For example, you can
- * call avcodec_send_packet() repeatedly without calling
- * avcodec_receive_frame(). In this case, avcodec_send_packet() will succeed
- * until the codec's internal buffer has been filled up (which is typically of
- * size 1 per output frame, after initial input), and then reject input with
- * AVERROR(EAGAIN). Once it starts rejecting input, you have no choice but to
- * read at least some output.
- *
- * Not all codecs will follow a rigid and predictable dataflow; the only
- * guarantee is that an AVERROR(EAGAIN) return value on a send/receive call on
- * one end implies that a receive/send call on the other end will succeed, or
- * at least will not fail with AVERROR(EAGAIN). In general, no codec will
- * permit unlimited buffering of input or output.
- *
- * A codec is not allowed to return AVERROR(EAGAIN) for both sending and receiving. This
- * would be an invalid state, which could put the codec user into an endless
- * loop. The API has no concept of time either: it cannot happen that trying to
- * do avcodec_send_packet() results in AVERROR(EAGAIN), but a repeated call 1 second
- * later accepts the packet (with no other receive/flush API calls involved).
- * The API is a strict state machine, and the passage of time is not supposed
- * to influence it. Some timing-dependent behavior might still be deemed
- * acceptable in certain cases. But it must never result in both send/receive
- * returning EAGAIN at the same time at any point. It must also absolutely be
- * avoided that the current state is "unstable" and can "flip-flop" between
- * the send/receive APIs allowing progress. For example, it's not allowed that
- * the codec randomly decides that it actually wants to consume a packet now
- * instead of returning a frame, after it just returned AVERROR(EAGAIN) on an
- * avcodec_send_packet() call.
- * @}
- */
-
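-/*
- * Illustration (not part of this header): a minimal decode-loop sketch of
- * the state machine described above. Error handling is abbreviated, and
- * dec_ctx, pkt and frame are assumed to be allocated and opened already.
- *
- *     int ret = avcodec_send_packet(dec_ctx, pkt);   // pkt == NULL drains
- *     if (ret < 0)
- *         return ret;
- *     while (ret >= 0) {
- *         ret = avcodec_receive_frame(dec_ctx, frame);
- *         if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
- *             break;                   // need more input, or fully drained
- *         if (ret < 0)
- *             return ret;              // a real decoding error
- *         // ... consume the decoded frame ...
- *         av_frame_unref(frame);
- *     }
- */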
-/**
- * @defgroup lavc_core Core functions/structures.
- * @ingroup libavc
- *
- * Basic definitions, functions for querying libavcodec capabilities,
- * allocating core structures, etc.
- * @{
- */
-
-/**
- * @ingroup lavc_encoding
- * minimum encoding buffer size
- * Used to avoid some checks during header writing.
- */
-#define AV_INPUT_BUFFER_MIN_SIZE 16384
-
-/**
- * @ingroup lavc_encoding
- */
-typedef struct RcOverride{
- int start_frame;
- int end_frame;
- int qscale; // If this is 0 then quality_factor will be used instead.
- float quality_factor;
-} RcOverride;
-
-/* encoding support
- These flags can be passed in AVCodecContext.flags before initialization.
- Note: Not everything is supported yet.
-*/
-
-/**
- * Allow decoders to produce frames with data planes that are not aligned
- * to CPU requirements (e.g. due to cropping).
- */
-#define AV_CODEC_FLAG_UNALIGNED (1 << 0)
-/**
- * Use fixed qscale.
- */
-#define AV_CODEC_FLAG_QSCALE (1 << 1)
-/**
- * 4 MV per MB allowed / advanced prediction for H.263.
- */
-#define AV_CODEC_FLAG_4MV (1 << 2)
-/**
- * Output even those frames that might be corrupted.
- */
-#define AV_CODEC_FLAG_OUTPUT_CORRUPT (1 << 3)
-/**
- * Use qpel MC.
- */
-#define AV_CODEC_FLAG_QPEL (1 << 4)
-/**
- * Don't output frames whose parameters differ from first
- * decoded frame in stream.
- */
-#define AV_CODEC_FLAG_DROPCHANGED (1 << 5)
-/**
- * Request the encoder to output reconstructed frames, i.e.\ frames that would
- * be produced by decoding the encoded bitstream. These frames may be retrieved
- * by calling avcodec_receive_frame() immediately after a successful call to
- * avcodec_receive_packet().
- *
- * Should only be used with encoders flagged with the
- * @ref AV_CODEC_CAP_ENCODER_RECON_FRAME capability.
- *
- * @note
- * Each reconstructed frame returned by the encoder corresponds to the last
- * encoded packet, i.e. the frames are returned in coded order rather than
- * presentation order.
- *
- * @note
- * Frame parameters (like pixel format or dimensions) do not have to match the
- * AVCodecContext values. Make sure to use the values from the returned frame.
- */
-#define AV_CODEC_FLAG_RECON_FRAME (1 << 6)
-/**
- * @par decoding
- * Request the decoder to propagate each packet's AVPacket.opaque and
- * AVPacket.opaque_ref to its corresponding output AVFrame.
- *
- * @par encoding:
- * Request the encoder to propagate each frame's AVFrame.opaque and
- * AVFrame.opaque_ref values to its corresponding output AVPacket.
- *
- * @par
- * May only be set on encoders that have the
- * @ref AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE capability flag.
- *
- * @note
- * While in typical cases one input frame produces exactly one output packet
- * (perhaps after a delay), in general the mapping of frames to packets is
- * M-to-N, so
- * - Any number of input frames may be associated with any given output packet.
- * This includes zero - e.g. some encoders may output packets that carry only
- * metadata about the whole stream.
- * - A given input frame may be associated with any number of output packets.
- * Again this includes zero - e.g. some encoders may drop frames under certain
- * conditions.
- * .
- * This implies that when using this flag, the caller must NOT assume that
- * - a given input frame's opaques will necessarily appear on some output packet;
- * - every output packet will have some non-NULL opaque value.
- * .
- * When an output packet contains multiple frames, the opaque values will be
- * taken from the first of those.
- *
- * @note
- * The converse holds for decoders, with frames and packets switched.
- */
-#define AV_CODEC_FLAG_COPY_OPAQUE (1 << 7)
-/**
- * Signal to the encoder that the values of AVFrame.duration are valid and
- * should be used (typically for transferring them to output packets).
- *
- * If this flag is not set, frame durations are ignored.
- */
-#define AV_CODEC_FLAG_FRAME_DURATION (1 << 8)
-/**
- * Use internal 2pass ratecontrol in first pass mode.
- */
-#define AV_CODEC_FLAG_PASS1 (1 << 9)
-/**
- * Use internal 2pass ratecontrol in second pass mode.
- */
-#define AV_CODEC_FLAG_PASS2 (1 << 10)
-/**
- * loop filter.
- */
-#define AV_CODEC_FLAG_LOOP_FILTER (1 << 11)
-/**
- * Only decode/encode grayscale.
- */
-#define AV_CODEC_FLAG_GRAY (1 << 13)
-/**
- * error[?] variables will be set during encoding.
- */
-#define AV_CODEC_FLAG_PSNR (1 << 15)
-/**
- * Use interlaced DCT.
- */
-#define AV_CODEC_FLAG_INTERLACED_DCT (1 << 18)
-/**
- * Force low delay.
- */
-#define AV_CODEC_FLAG_LOW_DELAY (1 << 19)
-/**
- * Place global headers in extradata instead of every keyframe.
- */
-#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
-/**
- * Use only bitexact stuff (except (I)DCT).
- */
-#define AV_CODEC_FLAG_BITEXACT (1 << 23)
-/* Fx : Flag for H.263+ extra options */
-/**
- * H.263 advanced intra coding / MPEG-4 AC prediction
- */
-#define AV_CODEC_FLAG_AC_PRED (1 << 24)
-/**
- * interlaced motion estimation
- */
-#define AV_CODEC_FLAG_INTERLACED_ME (1 << 29)
-#define AV_CODEC_FLAG_CLOSED_GOP (1U << 31)
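-/*
- * Illustration (not part of this header): flags are OR-ed together on the
- * codec context before avcodec_open2(), e.g.
- *
- *     ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER | AV_CODEC_FLAG_LOW_DELAY;
- */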
-
-/**
- * Allow non spec compliant speedup tricks.
- */
-#define AV_CODEC_FLAG2_FAST (1 << 0)
-/**
- * Skip bitstream encoding.
- */
-#define AV_CODEC_FLAG2_NO_OUTPUT (1 << 2)
-/**
- * Place global headers at every keyframe instead of in extradata.
- */
-#define AV_CODEC_FLAG2_LOCAL_HEADER (1 << 3)
-
-/**
- * Input bitstream might be truncated at packet boundaries
- * instead of only at frame boundaries.
- */
-#define AV_CODEC_FLAG2_CHUNKS (1 << 15)
-/**
- * Discard cropping information from SPS.
- */
-#define AV_CODEC_FLAG2_IGNORE_CROP (1 << 16)
-
-/**
- * Show all frames before the first keyframe
- */
-#define AV_CODEC_FLAG2_SHOW_ALL (1 << 22)
-/**
- * Export motion vectors through frame side data
- */
-#define AV_CODEC_FLAG2_EXPORT_MVS (1 << 28)
-/**
- * Do not skip samples and export skip information as frame side data
- */
-#define AV_CODEC_FLAG2_SKIP_MANUAL (1 << 29)
-/**
- * Do not reset ASS ReadOrder field on flush (subtitles decoding)
- */
-#define AV_CODEC_FLAG2_RO_FLUSH_NOOP (1 << 30)
-/**
- * Generate/parse ICC profiles on encode/decode, as appropriate for the type of
- * file. No effect on codecs which cannot contain embedded ICC profiles, or
- * when compiled without support for lcms2.
- */
-#define AV_CODEC_FLAG2_ICC_PROFILES (1U << 31)
-
-/* Exported side data.
- These flags can be passed in AVCodecContext.export_side_data before initialization.
-*/
-/**
- * Export motion vectors through frame side data
- */
-#define AV_CODEC_EXPORT_DATA_MVS (1 << 0)
-/**
- * Export encoder Producer Reference Time through packet side data
- */
-#define AV_CODEC_EXPORT_DATA_PRFT (1 << 1)
-/**
- * Decoding only.
- * Export the AVVideoEncParams structure through frame side data.
- */
-#define AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS (1 << 2)
-/**
- * Decoding only.
- * Do not apply film grain, export it instead.
- */
-#define AV_CODEC_EXPORT_DATA_FILM_GRAIN (1 << 3)
-
-/**
- * The decoder will keep a reference to the frame and may reuse it later.
- */
-#define AV_GET_BUFFER_FLAG_REF (1 << 0)
-
-/**
- * The encoder will keep a reference to the packet and may reuse it later.
- */
-#define AV_GET_ENCODE_BUFFER_FLAG_REF (1 << 0)
-
-struct AVCodecInternal;
-
-/**
- * main external API structure.
- * New fields can be added to the end with minor version bumps.
- * Removal, reordering and changes to existing fields require a major
- * version bump.
- * You can use AVOptions (av_opt* / av_set/get*()) to access these fields from user
- * applications.
- * The name string for AVOptions options matches the associated command line
- * parameter name and can be found in libavcodec/options_table.h
- * The AVOption/command line parameter names differ in some cases from the C
- * structure field names for historic reasons or brevity.
- * sizeof(AVCodecContext) must not be used outside libav*.
- */
-typedef struct AVCodecContext {
- /**
- * information on struct for av_log
- * - set by avcodec_alloc_context3
- */
- const AVClass *av_class;
- int log_level_offset;
-
- enum AVMediaType codec_type; /* see AVMEDIA_TYPE_xxx */
- const struct AVCodec *codec;
- enum AVCodecID codec_id; /* see AV_CODEC_ID_xxx */
-
- /**
- * fourcc (LSB first, so "ABCD" -> ('D'<<24) + ('C'<<16) + ('B'<<8) + 'A').
- * This is used to work around some encoder bugs.
- * A demuxer should set this to what is stored in the field used to identify the codec.
- * If there are multiple such fields in a container then the demuxer should choose the one
- * which maximizes the information about the used codec.
- * If the codec tag field in a container is larger than 32 bits then the demuxer should
- * remap the longer ID to 32 bits with a table or other structure. Alternatively a new
- * extra_codec_tag + size could be added but for this a clear advantage must be demonstrated
- * first.
- * - encoding: Set by user, if not then the default based on codec_id will be used.
- * - decoding: Set by user, will be converted to uppercase by libavcodec during init.
- */
- unsigned int codec_tag;
-
- void *priv_data;
-
- /**
- * Private context used for internal data.
- *
- * Unlike priv_data, this is not codec-specific. It is used in general
- * libavcodec functions.
- */
- struct AVCodecInternal *internal;
-
- /**
- * Private data of the user, can be used to carry app specific stuff.
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- void *opaque;
-
- /**
- * the average bitrate
- * - encoding: Set by user; unused for constant quantizer encoding.
- * - decoding: Set by user, may be overwritten by libavcodec
- * if this info is available in the stream
- */
- int64_t bit_rate;
-
- /**
- * number of bits the bitstream is allowed to diverge from the reference.
- * the reference can be CBR (for CBR pass1) or VBR (for pass2)
- * - encoding: Set by user; unused for constant quantizer encoding.
- * - decoding: unused
- */
- int bit_rate_tolerance;
-
- /**
- * Global quality for codecs which cannot change it per frame.
- * This should be proportional to MPEG-1/2/4 qscale.
- * - encoding: Set by user.
- * - decoding: unused
- */
- int global_quality;
-
- /**
- * - encoding: Set by user.
- * - decoding: unused
- */
- int compression_level;
-#define FF_COMPRESSION_DEFAULT -1
-
- /**
- * AV_CODEC_FLAG_*.
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int flags;
-
- /**
- * AV_CODEC_FLAG2_*
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int flags2;
-
- /**
- * some codecs need / can use extradata like Huffman tables.
- * MJPEG: Huffman tables
- * rv10: additional flags
- * MPEG-4: global headers (they can be in the bitstream or here)
- * The allocated memory should be AV_INPUT_BUFFER_PADDING_SIZE bytes larger
- * than extradata_size to avoid problems if it is read with the bitstream reader.
- * The bytewise contents of extradata must not depend on the architecture or CPU endianness.
- * Must be allocated with the av_malloc() family of functions.
- * - encoding: Set/allocated/freed by libavcodec.
- * - decoding: Set/allocated/freed by user.
- */
- uint8_t *extradata;
- int extradata_size;
-
- /**
- * This is the fundamental unit of time (in seconds) in terms
- * of which frame timestamps are represented. For fixed-fps content,
- * timebase should be 1/framerate and timestamp increments should be
- * identically 1.
- * This often, but not always, is the inverse of the frame rate or field rate
- * for video. 1/time_base is not the average frame rate if the frame rate is not
- * constant.
- *
- * Like containers, elementary streams also can store timestamps, 1/time_base
- * is the unit in which these timestamps are specified.
- * As example of such codec time base see ISO/IEC 14496-2:2001(E)
- * vop_time_increment_resolution and fixed_vop_rate
- * (fixed_vop_rate == 0 implies that it is different from the framerate)
- *
- * - encoding: MUST be set by user.
- * - decoding: unused.
- */
- AVRational time_base;
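-
- /*
- * A sketch of the fixed-fps case described above, for an allocated encoder
- * context "enc_ctx" (the name is illustrative):
- * @code
- * enc_ctx->time_base = (AVRational){1, 25}; // 25 fps CFR content
- * enc_ctx->framerate = (AVRational){25, 1};
- * // frame->pts then increments by exactly 1 per submitted frame
- * @endcode
- */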
-
- /**
- * For some codecs, the time base is closer to the field rate than the frame rate.
- * Most notably, H.264 and MPEG-2 specify time_base as half of frame duration
- * if no telecine is used ...
- *
- * Set to time_base ticks per frame. Default 1, e.g., H.264/MPEG-2 set it to 2.
- */
- int ticks_per_frame;
-
- /**
- * Codec delay.
- *
- * Encoding: Number of frames delay there will be from the encoder input to
- * the decoder output. (we assume the decoder matches the spec)
- * Decoding: Number of frames delay in addition to what a standard decoder
- * as specified in the spec would produce.
- *
- * Video:
- * Number of frames the decoded output will be delayed relative to the
- * encoded input.
- *
- * Audio:
- * For encoding, this field is unused (see initial_padding).
- *
- * For decoding, this is the number of samples the decoder needs to
- * output before the decoder's output is valid. When seeking, you should
- * start decoding this many samples prior to your desired seek point.
- *
- * - encoding: Set by libavcodec.
- * - decoding: Set by libavcodec.
- */
- int delay;
-
-
- /* video only */
- /**
- * picture width / height.
- *
- * @note Those fields may not match the values of the last
- * AVFrame output by avcodec_receive_frame() due to frame
- * reordering.
- *
- * - encoding: MUST be set by user.
- * - decoding: May be set by the user before opening the decoder if known e.g.
- * from the container. Some decoders will require the dimensions
- * to be set by the caller. During decoding, the decoder may
- * overwrite those values as required while parsing the data.
- */
- int width, height;
-
- /**
- * Bitstream width / height, may be different from width/height e.g. when
- * the decoded frame is cropped before being output or lowres is enabled.
- *
- * @note Those fields may not match the values of the last
- * AVFrame output by avcodec_receive_frame() due to frame
- * reordering.
- *
- * - encoding: unused
- * - decoding: May be set by the user before opening the decoder if known
- * e.g. from the container. During decoding, the decoder may
- * overwrite those values as required while parsing the data.
- */
- int coded_width, coded_height;
-
- /**
- * the number of pictures in a group of pictures, or 0 for intra_only
- * - encoding: Set by user.
- * - decoding: unused
- */
- int gop_size;
-
- /**
- * Pixel format, see AV_PIX_FMT_xxx.
- * May be set by the demuxer if known from headers.
- * May be overridden by the decoder if it knows better.
- *
- * @note This field may not match the value of the last
- * AVFrame output by avcodec_receive_frame() due to frame
- * reordering.
- *
- * - encoding: Set by user.
- * - decoding: Set by user if known, overridden by libavcodec while
- * parsing the data.
- */
- enum AVPixelFormat pix_fmt;
-
- /**
- * If non-NULL, 'draw_horiz_band' is called by the libavcodec
- * decoder to draw a horizontal band. It improves cache usage. Not
- * all codecs can do that. You must check the codec capabilities
- * beforehand.
- * When multithreading is used, it may be called from multiple threads
- * at the same time; threads might draw different parts of the same AVFrame,
- * or multiple AVFrames, and there is no guarantee that slices will be drawn
- * in order.
- * The function is also used by hardware acceleration APIs.
- * It is called at least once during frame decoding to pass
- * the data needed for hardware render.
- * In that mode instead of pixel data, AVFrame points to
- * a structure specific to the acceleration API. The application
- * reads the structure and can change some fields to indicate progress
- * or mark state.
- * - encoding: unused
- * - decoding: Set by user.
- * @param height the height of the slice
- * @param y the y position of the slice
- * @param type 1->top field, 2->bottom field, 3->frame
- * @param offset offset into the AVFrame.data from which the slice should be read
- */
- void (*draw_horiz_band)(struct AVCodecContext *s,
- const AVFrame *src, int offset[AV_NUM_DATA_POINTERS],
- int y, int type, int height);
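-
- /*
- * A minimal callback matching the signature above (a sketch; a real
- * renderer would copy or draw the band instead of just noting it):
- * @code
- * static void my_draw_horiz_band(struct AVCodecContext *s, const AVFrame *src,
- *                                int offset[AV_NUM_DATA_POINTERS],
- *                                int y, int type, int height)
- * {
- *     // rows [y, y + height) of src are decoded and may be presented now
- * }
- * // before avcodec_open2(): avctx->draw_horiz_band = my_draw_horiz_band;
- * @endcode
- */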
-
- /**
- * Callback to negotiate the pixel format. Decoding only, may be set by the
- * caller before avcodec_open2().
- *
- * Called by some decoders to select the pixel format that will be used for
- * the output frames. This is mainly used to set up hardware acceleration,
- * then the provided format list contains the corresponding hwaccel pixel
- * formats alongside the "software" one. The software pixel format may also
- * be retrieved from \ref sw_pix_fmt.
- *
- * This callback will be called when the coded frame properties (such as
- * resolution, pixel format, etc.) change and more than one output format is
- * supported for those new properties. If a hardware pixel format is chosen
- * and initialization for it fails, the callback may be called again
- * immediately.
- *
- * This callback may be called from different threads if the decoder is
- * multi-threaded, but not from more than one thread simultaneously.
- *
- * @param fmt list of formats which may be used in the current
- * configuration, terminated by AV_PIX_FMT_NONE.
- * @warning Behavior is undefined if the callback returns a value other
- * than one of the formats in fmt or AV_PIX_FMT_NONE.
- * @return the chosen format or AV_PIX_FMT_NONE
- */
- enum AVPixelFormat (*get_format)(struct AVCodecContext *s, const enum AVPixelFormat * fmt);
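-
- /*
- * A sketch of a get_format callback that prefers one hardware format and
- * otherwise returns the first offered format; the choice of
- * AV_PIX_FMT_VAAPI is purely illustrative:
- * @code
- * static enum AVPixelFormat my_get_format(struct AVCodecContext *s,
- *                                         const enum AVPixelFormat *fmt)
- * {
- *     for (const enum AVPixelFormat *p = fmt; *p != AV_PIX_FMT_NONE; p++)
- *         if (*p == AV_PIX_FMT_VAAPI)
- *             return *p;   // take the hardware path when offered
- *     // real code usually picks the software entry (often last in the list)
- *     return fmt[0];
- * }
- * @endcode
- */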
-
- /**
- * maximum number of B-frames between non-B-frames
- * Note: The output will be delayed by max_b_frames+1 relative to the input.
- * - encoding: Set by user.
- * - decoding: unused
- */
- int max_b_frames;
-
- /**
- * qscale factor between IP and B-frames
- * If > 0 then the last P-frame quantizer will be used (q= lastp_q*factor+offset).
- * If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset).
- * - encoding: Set by user.
- * - decoding: unused
- */
- float b_quant_factor;
-
- /**
- * qscale offset between IP and B-frames
- * - encoding: Set by user.
- * - decoding: unused
- */
- float b_quant_offset;
-
- /**
- * Size of the frame reordering buffer in the decoder.
- * For MPEG-2 it is 1 IPB or 0 low delay IP.
- * - encoding: Set by libavcodec.
- * - decoding: Set by libavcodec.
- */
- int has_b_frames;
-
- /**
- * qscale factor between P- and I-frames
- * If > 0 then the last P-frame quantizer will be used (q = lastp_q * factor + offset).
- * If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset).
- * - encoding: Set by user.
- * - decoding: unused
- */
- float i_quant_factor;
-
- /**
- * qscale offset between P and I-frames
- * - encoding: Set by user.
- * - decoding: unused
- */
- float i_quant_offset;
-
- /**
- * luminance masking (0-> disabled)
- * - encoding: Set by user.
- * - decoding: unused
- */
- float lumi_masking;
-
- /**
- * temporary complexity masking (0-> disabled)
- * - encoding: Set by user.
- * - decoding: unused
- */
- float temporal_cplx_masking;
-
- /**
- * spatial complexity masking (0-> disabled)
- * - encoding: Set by user.
- * - decoding: unused
- */
- float spatial_cplx_masking;
-
- /**
- * p block masking (0-> disabled)
- * - encoding: Set by user.
- * - decoding: unused
- */
- float p_masking;
-
- /**
- * darkness masking (0-> disabled)
- * - encoding: Set by user.
- * - decoding: unused
- */
- float dark_masking;
-
-#if FF_API_SLICE_OFFSET
- /**
- * slice count
- * - encoding: Set by libavcodec.
- * - decoding: Set by user (or 0).
- */
- attribute_deprecated
- int slice_count;
-
- /**
- * slice offsets in the frame in bytes
- * - encoding: Set/allocated by libavcodec.
- * - decoding: Set/allocated by user (or NULL).
- */
- attribute_deprecated
- int *slice_offset;
-#endif
-
- /**
- * sample aspect ratio (0 if unknown)
- * That is the width of a pixel divided by the height of the pixel.
- * Numerator and denominator must be relatively prime and smaller than 256 for some video standards.
- * - encoding: Set by user.
- * - decoding: Set by libavcodec.
- */
- AVRational sample_aspect_ratio;
-
- /**
- * motion estimation comparison function
- * - encoding: Set by user.
- * - decoding: unused
- */
- int me_cmp;
- /**
- * subpixel motion estimation comparison function
- * - encoding: Set by user.
- * - decoding: unused
- */
- int me_sub_cmp;
- /**
- * macroblock comparison function (not supported yet)
- * - encoding: Set by user.
- * - decoding: unused
- */
- int mb_cmp;
- /**
- * interlaced DCT comparison function
- * - encoding: Set by user.
- * - decoding: unused
- */
- int ildct_cmp;
-#define FF_CMP_SAD 0
-#define FF_CMP_SSE 1
-#define FF_CMP_SATD 2
-#define FF_CMP_DCT 3
-#define FF_CMP_PSNR 4
-#define FF_CMP_BIT 5
-#define FF_CMP_RD 6
-#define FF_CMP_ZERO 7
-#define FF_CMP_VSAD 8
-#define FF_CMP_VSSE 9
-#define FF_CMP_NSSE 10
-#define FF_CMP_W53 11
-#define FF_CMP_W97 12
-#define FF_CMP_DCTMAX 13
-#define FF_CMP_DCT264 14
-#define FF_CMP_MEDIAN_SAD 15
-#define FF_CMP_CHROMA 256
-
- /**
- * ME diamond size & shape
- * - encoding: Set by user.
- * - decoding: unused
- */
- int dia_size;
-
- /**
- * amount of previous MV predictors (2a+1 x 2a+1 square)
- * - encoding: Set by user.
- * - decoding: unused
- */
- int last_predictor_count;
-
- /**
- * motion estimation prepass comparison function
- * - encoding: Set by user.
- * - decoding: unused
- */
- int me_pre_cmp;
-
- /**
- * ME prepass diamond size & shape
- * - encoding: Set by user.
- * - decoding: unused
- */
- int pre_dia_size;
-
- /**
- * subpel ME quality
- * - encoding: Set by user.
- * - decoding: unused
- */
- int me_subpel_quality;
-
- /**
- * maximum motion estimation search range in subpel units
- * If 0 then no limit.
- *
- * - encoding: Set by user.
- * - decoding: unused
- */
- int me_range;
-
- /**
- * slice flags
- * - encoding: unused
- * - decoding: Set by user.
- */
- int slice_flags;
-#define SLICE_FLAG_CODED_ORDER 0x0001 ///< draw_horiz_band() is called in coded order instead of display
-#define SLICE_FLAG_ALLOW_FIELD 0x0002 ///< allow draw_horiz_band() with field slices (MPEG-2 field pics)
-#define SLICE_FLAG_ALLOW_PLANE 0x0004 ///< allow draw_horiz_band() with 1 component at a time (SVQ1)
-
- /**
- * macroblock decision mode
- * - encoding: Set by user.
- * - decoding: unused
- */
- int mb_decision;
-#define FF_MB_DECISION_SIMPLE 0 ///< uses mb_cmp
-#define FF_MB_DECISION_BITS 1 ///< chooses the one which needs the fewest bits
-#define FF_MB_DECISION_RD 2 ///< rate distortion
-
- /**
- * custom intra quantization matrix
- * Must be allocated with the av_malloc() family of functions, and will be freed in
- * avcodec_free_context().
- * - encoding: Set/allocated by user, freed by libavcodec. Can be NULL.
- * - decoding: Set/allocated/freed by libavcodec.
- */
- uint16_t *intra_matrix;
-
- /**
- * custom inter quantization matrix
- * Must be allocated with the av_malloc() family of functions, and will be freed in
- * avcodec_free_context().
- * - encoding: Set/allocated by user, freed by libavcodec. Can be NULL.
- * - decoding: Set/allocated/freed by libavcodec.
- */
- uint16_t *inter_matrix;
-
- /**
- * precision of the intra DC coefficient - 8
- * - encoding: Set by user.
- * - decoding: Set by libavcodec
- */
- int intra_dc_precision;
-
- /**
- * Number of macroblock rows at the top which are skipped.
- * - encoding: unused
- * - decoding: Set by user.
- */
- int skip_top;
-
- /**
- * Number of macroblock rows at the bottom which are skipped.
- * - encoding: unused
- * - decoding: Set by user.
- */
- int skip_bottom;
-
- /**
- * minimum MB Lagrange multiplier
- * - encoding: Set by user.
- * - decoding: unused
- */
- int mb_lmin;
-
- /**
- * maximum MB Lagrange multiplier
- * - encoding: Set by user.
- * - decoding: unused
- */
- int mb_lmax;
-
- /**
- * - encoding: Set by user.
- * - decoding: unused
- */
- int bidir_refine;
-
- /**
- * minimum GOP size
- * - encoding: Set by user.
- * - decoding: unused
- */
- int keyint_min;
-
- /**
- * number of reference frames
- * - encoding: Set by user.
- * - decoding: Set by lavc.
- */
- int refs;
-
- /**
- * Note: Value depends upon the compare function used for fullpel ME.
- * - encoding: Set by user.
- * - decoding: unused
- */
- int mv0_threshold;
-
- /**
- * Chromaticity coordinates of the source primaries.
- * - encoding: Set by user
- * - decoding: Set by libavcodec
- */
- enum AVColorPrimaries color_primaries;
-
- /**
- * Color Transfer Characteristic.
- * - encoding: Set by user
- * - decoding: Set by libavcodec
- */
- enum AVColorTransferCharacteristic color_trc;
-
- /**
- * YUV colorspace type.
- * - encoding: Set by user
- * - decoding: Set by libavcodec
- */
- enum AVColorSpace colorspace;
-
- /**
- * MPEG vs JPEG YUV range.
- * - encoding: Set by user to override the default output color range value,
- * If not specified, libavcodec sets the color range depending on the
- * output format.
- * - decoding: Set by libavcodec, can be set by the user to propagate the
- * color range to components reading from the decoder context.
- */
- enum AVColorRange color_range;
-
- /**
- * This defines the location of chroma samples.
- * - encoding: Set by user
- * - decoding: Set by libavcodec
- */
- enum AVChromaLocation chroma_sample_location;
-
- /**
- * Number of slices.
- * Indicates number of picture subdivisions. Used for parallelized
- * decoding.
- * - encoding: Set by user
- * - decoding: unused
- */
- int slices;
-
- /** Field order
- * - encoding: set by libavcodec
- * - decoding: Set by user.
- */
- enum AVFieldOrder field_order;
-
- /* audio only */
- int sample_rate; ///< samples per second
-
-#if FF_API_OLD_CHANNEL_LAYOUT
- /**
- * number of audio channels
- * @deprecated use ch_layout.nb_channels
- */
- attribute_deprecated
- int channels;
-#endif
-
- /**
- * audio sample format
- * - encoding: Set by user.
- * - decoding: Set by libavcodec.
- */
- enum AVSampleFormat sample_fmt; ///< sample format
-
- /* The following data should not be initialized. */
- /**
- * Number of samples per channel in an audio frame.
- *
- * - encoding: set by libavcodec in avcodec_open2(). Each submitted frame
- * except the last must contain exactly frame_size samples per channel.
- * May be 0 when the codec has AV_CODEC_CAP_VARIABLE_FRAME_SIZE set, in which
- * case the frame size is not restricted.
- * - decoding: may be set by some decoders to indicate constant frame size
- */
- int frame_size;
-
-#if FF_API_AVCTX_FRAME_NUMBER
- /**
- * Frame counter, set by libavcodec.
- *
- * - decoding: total number of frames returned from the decoder so far.
- * - encoding: total number of frames passed to the encoder so far.
- *
- * @note the counter is not incremented if encoding/decoding resulted in
- * an error.
- * @deprecated use frame_num instead
- */
- attribute_deprecated
- int frame_number;
-#endif
-
- /**
- * number of bytes per packet if constant and known or 0
- * Used by some WAV based audio codecs.
- */
- int block_align;
-
- /**
- * Audio cutoff bandwidth (0 means "automatic")
- * - encoding: Set by user.
- * - decoding: unused
- */
- int cutoff;
-
-#if FF_API_OLD_CHANNEL_LAYOUT
- /**
- * Audio channel layout.
- * - encoding: set by user.
- * - decoding: set by user, may be overwritten by libavcodec.
- * @deprecated use ch_layout
- */
- attribute_deprecated
- uint64_t channel_layout;
-
- /**
- * Request decoder to use this channel layout if it can (0 for default)
- * - encoding: unused
- * - decoding: Set by user.
- * @deprecated use "downmix" codec private option
- */
- attribute_deprecated
- uint64_t request_channel_layout;
-#endif
-
- /**
- * Type of service that the audio stream conveys.
- * - encoding: Set by user.
- * - decoding: Set by libavcodec.
- */
- enum AVAudioServiceType audio_service_type;
-
- /**
- * desired sample format
- * - encoding: Not used.
- * - decoding: Set by user.
- * Decoder will decode to this format if it can.
- */
- enum AVSampleFormat request_sample_fmt;
-
- /**
- * This callback is called at the beginning of each frame to get data
- * buffer(s) for it. There may be one contiguous buffer for all the data or
- * there may be a buffer per each data plane or anything in between. What
- * this means is, you may set however many entries in buf[] you feel necessary.
- * Each buffer must be reference-counted using the AVBuffer API (see description
- * of buf[] below).
- *
- * The following fields will be set in the frame before this callback is
- * called:
- * - format
- * - width, height (video only)
- * - sample_rate, channel_layout, nb_samples (audio only)
- * Their values may differ from the corresponding values in
- * AVCodecContext. This callback must use the frame values, not the codec
- * context values, to calculate the required buffer size.
- *
- * This callback must fill the following fields in the frame:
- * - data[]
- * - linesize[]
- * - extended_data:
- * * if the data is planar audio with more than 8 channels, then this
- * callback must allocate and fill extended_data to contain all pointers
- * to all data planes. data[] must hold as many pointers as it can.
- * extended_data must be allocated with av_malloc() and will be freed in
- * av_frame_unref().
- * * otherwise extended_data must point to data
- * - buf[] must contain one or more pointers to AVBufferRef structures. Each of
- * the frame's data and extended_data pointers must be contained in these. That
- * is, one AVBufferRef for each allocated chunk of memory, not necessarily one
- * AVBufferRef per data[] entry. See: av_buffer_create(), av_buffer_alloc(),
- * and av_buffer_ref().
- * - extended_buf and nb_extended_buf must be allocated with av_malloc() by
- * this callback and filled with the extra buffers if there are more
- * buffers than buf[] can hold. extended_buf will be freed in
- * av_frame_unref().
- *
- * If AV_CODEC_CAP_DR1 is not set then get_buffer2() must call
- * avcodec_default_get_buffer2() instead of providing buffers allocated by
- * some other means.
- *
- * Each data plane must be aligned to the maximum required by the target
- * CPU.
- *
- * @see avcodec_default_get_buffer2()
- *
- * Video:
- *
- * If AV_GET_BUFFER_FLAG_REF is set in flags then the frame may be reused
- * (read and/or written to if it is writable) later by libavcodec.
- *
- * avcodec_align_dimensions2() should be used to find the required width and
- * height, as they normally need to be rounded up to the next multiple of 16.
- *
- * Some decoders do not support linesizes changing between frames.
- *
- * If frame multithreading is used, this callback may be called from a
- * different thread, but not from more than one at once. Does not need to be
- * reentrant.
- *
- * @see avcodec_align_dimensions2()
- *
- * Audio:
- *
- * Decoders request a buffer of a particular size by setting
- * AVFrame.nb_samples prior to calling get_buffer2(). The decoder may,
- * however, utilize only part of the buffer by setting AVFrame.nb_samples
- * to a smaller value in the output frame.
- *
- * As a convenience, av_samples_get_buffer_size() and
- * av_samples_fill_arrays() in libavutil may be used by custom get_buffer2()
- * functions to find the required data size and to fill data pointers and
- * linesize. In AVFrame.linesize, only linesize[0] may be set for audio
- * since all planes must be the same size.
- *
- * @see av_samples_get_buffer_size(), av_samples_fill_arrays()
- *
- * - encoding: unused
- * - decoding: Set by libavcodec, user can override.
- */
- int (*get_buffer2)(struct AVCodecContext *s, AVFrame *frame, int flags);
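-
- /*
- * The least invasive override instruments the call and delegates the
- * actual allocation to the default implementation (a sketch):
- * @code
- * static int my_get_buffer2(struct AVCodecContext *s, AVFrame *frame, int flags)
- * {
- *     // room for counting/tagging allocations here, then:
- *     return avcodec_default_get_buffer2(s, frame, flags);
- * }
- * // before avcodec_open2(): avctx->get_buffer2 = my_get_buffer2;
- * @endcode
- */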
-
- /* - encoding parameters */
- float qcompress; ///< amount of qscale change between easy & hard scenes (0.0-1.0)
- float qblur; ///< amount of qscale smoothing over time (0.0-1.0)
-
- /**
- * minimum quantizer
- * - encoding: Set by user.
- * - decoding: unused
- */
- int qmin;
-
- /**
- * maximum quantizer
- * - encoding: Set by user.
- * - decoding: unused
- */
- int qmax;
-
- /**
- * maximum quantizer difference between frames
- * - encoding: Set by user.
- * - decoding: unused
- */
- int max_qdiff;
-
- /**
- * decoder bitstream buffer size
- * - encoding: Set by user.
- * - decoding: unused
- */
- int rc_buffer_size;
-
- /**
- * ratecontrol override, see RcOverride
- * - encoding: Allocated/set/freed by user.
- * - decoding: unused
- */
- int rc_override_count;
- RcOverride *rc_override;
-
- /**
- * maximum bitrate
- * - encoding: Set by user.
- * - decoding: Set by user, may be overwritten by libavcodec.
- */
- int64_t rc_max_rate;
-
- /**
- * minimum bitrate
- * - encoding: Set by user.
- * - decoding: unused
- */
- int64_t rc_min_rate;
-
- /**
- * Ratecontrol attempt to use, at maximum, <value> of what can be used without an underflow.
- * - encoding: Set by user.
- * - decoding: unused.
- */
- float rc_max_available_vbv_use;
-
- /**
- * Ratecontrol attempt to use, at least, <value> times the amount needed to prevent a vbv overflow.
- * - encoding: Set by user.
- * - decoding: unused.
- */
- float rc_min_vbv_overflow_use;
-
- /**
- * Number of bits which should be loaded into the rc buffer before decoding starts.
- * - encoding: Set by user.
- * - decoding: unused
- */
- int rc_initial_buffer_occupancy;
-
- /**
- * trellis RD quantization
- * - encoding: Set by user.
- * - decoding: unused
- */
- int trellis;
-
- /**
- * pass1 encoding statistics output buffer
- * - encoding: Set by libavcodec.
- * - decoding: unused
- */
- char *stats_out;
-
- /**
- * pass2 encoding statistics input buffer
- * Concatenated stuff from stats_out of pass1 should be placed here.
- * - encoding: Allocated/set/freed by user.
- * - decoding: unused
- */
- char *stats_in;
-
- /**
- * Work around bugs in encoders which sometimes cannot be detected automatically.
- * - encoding: Set by user
- * - decoding: Set by user
- */
- int workaround_bugs;
-#define FF_BUG_AUTODETECT 1 ///< autodetection
-#define FF_BUG_XVID_ILACE 4
-#define FF_BUG_UMP4 8
-#define FF_BUG_NO_PADDING 16
-#define FF_BUG_AMV 32
-#define FF_BUG_QPEL_CHROMA 64
-#define FF_BUG_STD_QPEL 128
-#define FF_BUG_QPEL_CHROMA2 256
-#define FF_BUG_DIRECT_BLOCKSIZE 512
-#define FF_BUG_EDGE 1024
-#define FF_BUG_HPEL_CHROMA 2048
-#define FF_BUG_DC_CLIP 4096
-#define FF_BUG_MS 8192 ///< Work around various bugs in Microsoft's broken decoders.
-#define FF_BUG_TRUNCATED 16384
-#define FF_BUG_IEDGE 32768
-
- /**
- * strictly follow the standard (MPEG-4, ...).
- * - encoding: Set by user.
- * - decoding: Set by user.
- * Setting this to STRICT or higher means the encoder and decoder will
- * generally do stupid things, whereas setting it to unofficial or lower
- * will mean the encoder might produce output that is not supported by all
- * spec-compliant decoders. Decoders don't differentiate between normal,
- * unofficial and experimental (that is, they always try to decode things
- * when they can) unless they are explicitly asked to behave stupidly
- * (=strictly conform to the specs)
- * This may only be set to one of the FF_COMPLIANCE_* values in defs.h.
- */
- int strict_std_compliance;
-
- /**
- * error concealment flags
- * - encoding: unused
- * - decoding: Set by user.
- */
- int error_concealment;
-#define FF_EC_GUESS_MVS 1
-#define FF_EC_DEBLOCK 2
-#define FF_EC_FAVOR_INTER 256
-
- /**
- * debug
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int debug;
-#define FF_DEBUG_PICT_INFO 1
-#define FF_DEBUG_RC 2
-#define FF_DEBUG_BITSTREAM 4
-#define FF_DEBUG_MB_TYPE 8
-#define FF_DEBUG_QP 16
-#define FF_DEBUG_DCT_COEFF 0x00000040
-#define FF_DEBUG_SKIP 0x00000080
-#define FF_DEBUG_STARTCODE 0x00000100
-#define FF_DEBUG_ER 0x00000400
-#define FF_DEBUG_MMCO 0x00000800
-#define FF_DEBUG_BUGS 0x00001000
-#define FF_DEBUG_BUFFERS 0x00008000
-#define FF_DEBUG_THREADS 0x00010000
-#define FF_DEBUG_GREEN_MD 0x00800000
-#define FF_DEBUG_NOMC 0x01000000
-
- /**
- * Error recognition; may misdetect some more or less valid parts as errors.
- * This is a bitfield of the AV_EF_* values defined in defs.h.
- *
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int err_recognition;
-
-#if FF_API_REORDERED_OPAQUE
- /**
- * opaque 64-bit number (generally a PTS) that will be reordered and
- * output in AVFrame.reordered_opaque
- * - encoding: Set by libavcodec to the reordered_opaque of the input
- * frame corresponding to the last returned packet. Only
- * supported by encoders with the
- * AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE capability.
- * - decoding: Set by user.
- *
- * @deprecated Use AV_CODEC_FLAG_COPY_OPAQUE instead
- */
- attribute_deprecated
- int64_t reordered_opaque;
-#endif
-
- /**
- * Hardware accelerator in use
- * - encoding: unused.
- * - decoding: Set by libavcodec
- */
- const struct AVHWAccel *hwaccel;
-
- /**
- * Legacy hardware accelerator context.
- *
- * For some hardware acceleration methods, the caller may use this field to
- * signal hwaccel-specific data to the codec. The struct pointed to by this
- * pointer is hwaccel-dependent and defined in the respective header. Please
- * refer to the FFmpeg HW accelerator documentation to know how to fill
- * this.
- *
- * In most cases this field is optional - the necessary information may also
- * be provided to libavcodec through @ref hw_frames_ctx or @ref
- * hw_device_ctx (see avcodec_get_hw_config()). However, in some cases it
- * may be the only method of signalling some (optional) information.
- *
- * The struct and its contents are owned by the caller.
- *
- * - encoding: May be set by the caller before avcodec_open2(). Must remain
- * valid until avcodec_free_context().
- * - decoding: May be set by the caller in the get_format() callback.
- * Must remain valid until the next get_format() call,
- * or avcodec_free_context() (whichever comes first).
- */
- void *hwaccel_context;
-
- /**
- * error
- * - encoding: Set by libavcodec if flags & AV_CODEC_FLAG_PSNR.
- * - decoding: unused
- */
- uint64_t error[AV_NUM_DATA_POINTERS];
-
- /**
- * DCT algorithm, see FF_DCT_* below
- * - encoding: Set by user.
- * - decoding: unused
- */
- int dct_algo;
-#define FF_DCT_AUTO 0
-#define FF_DCT_FASTINT 1
-#define FF_DCT_INT 2
-#define FF_DCT_MMX 3
-#define FF_DCT_ALTIVEC 5
-#define FF_DCT_FAAN 6
-
- /**
- * IDCT algorithm, see FF_IDCT_* below.
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int idct_algo;
-#define FF_IDCT_AUTO 0
-#define FF_IDCT_INT 1
-#define FF_IDCT_SIMPLE 2
-#define FF_IDCT_SIMPLEMMX 3
-#define FF_IDCT_ARM 7
-#define FF_IDCT_ALTIVEC 8
-#define FF_IDCT_SIMPLEARM 10
-#define FF_IDCT_XVID 14
-#define FF_IDCT_SIMPLEARMV5TE 16
-#define FF_IDCT_SIMPLEARMV6 17
-#define FF_IDCT_FAAN 20
-#define FF_IDCT_SIMPLENEON 22
-#if FF_API_IDCT_NONE
-// formerly used by xvmc
-#define FF_IDCT_NONE 24
-#endif
-#define FF_IDCT_SIMPLEAUTO 128
-
- /**
- * bits per sample/pixel from the demuxer (needed for huffyuv).
- * - encoding: Set by libavcodec.
- * - decoding: Set by user.
- */
- int bits_per_coded_sample;
-
- /**
- * Bits per sample/pixel of internal libavcodec pixel/sample format.
- * - encoding: set by user.
- * - decoding: set by libavcodec.
- */
- int bits_per_raw_sample;
-
- /**
- * low resolution decoding, 1-> 1/2 size, 2->1/4 size
- * - encoding: unused
- * - decoding: Set by user.
- */
- int lowres;
-
- /**
- * thread count
- * is used to decide how many independent tasks should be passed to execute()
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- int thread_count;
-
- /**
- * Which multithreading methods to use.
- * Use of FF_THREAD_FRAME will increase decoding delay by one frame per thread,
- * so clients which cannot provide future frames should not use it.
- *
- * - encoding: Set by user, otherwise the default is used.
- * - decoding: Set by user, otherwise the default is used.
- */
- int thread_type;
-#define FF_THREAD_FRAME 1 ///< Decode more than one frame at once
-#define FF_THREAD_SLICE 2 ///< Decode more than one part of a single frame at once
-
- /**
- * Which multithreading methods are in use by the codec.
- * - encoding: Set by libavcodec.
- * - decoding: Set by libavcodec.
- */
- int active_thread_type;
-
- /**
- * The codec may call this to execute several independent things.
- * It will return only after finishing all tasks.
- * The user may replace this with some multithreaded implementation,
- * the default implementation will execute the parts serially.
- * @param count the number of things to execute
- * - encoding: Set by libavcodec, user can override.
- * - decoding: Set by libavcodec, user can override.
- */
- int (*execute)(struct AVCodecContext *c, int (*func)(struct AVCodecContext *c2, void *arg), void *arg2, int *ret, int count, int size);
-
- /**
- * The codec may call this to execute several independent things.
- * It will return only after finishing all tasks.
- * The user may replace this with some multithreaded implementation,
- * the default implementation will execute the parts serially.
- * @param c context passed also to func
- * @param count the number of things to execute
- * @param arg2 argument passed unchanged to func
- * @param ret return values of executed functions, must have space for "count" values. May be NULL.
- * @param func function that will be called count times, with jobnr from 0 to count-1.
- * threadnr will be in the range 0 to c->thread_count-1 < MAX_THREADS and so that no
- * two instances of func executing at the same time will have the same threadnr.
- * @return always 0 currently, but code should handle a future improvement where when any call to func
- * returns < 0 no further calls to func may be done and < 0 is returned.
- * - encoding: Set by libavcodec, user can override.
- * - decoding: Set by libavcodec, user can override.
- */
- int (*execute2)(struct AVCodecContext *c, int (*func)(struct AVCodecContext *c2, void *arg, int jobnr, int threadnr), void *arg2, int *ret, int count);
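-
- /*
- * The simplest valid replacement is a serial loop; threadnr 0 for every
- * job is permitted because no two jobs ever run concurrently (a sketch):
- * @code
- * static int serial_execute2(struct AVCodecContext *c,
- *                            int (*func)(struct AVCodecContext *c2, void *arg,
- *                                        int jobnr, int threadnr),
- *                            void *arg2, int *ret, int count)
- * {
- *     for (int i = 0; i < count; i++) {
- *         int r = func(c, arg2, i, 0);
- *         if (ret)
- *             ret[i] = r;
- *     }
- *     return 0;
- * }
- * @endcode
- */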
-
- /**
- * noise vs. sse weight for the nsse comparison function
- * - encoding: Set by user.
- * - decoding: unused
- */
- int nsse_weight;
-
- /**
- * profile
- * - encoding: Set by user.
- * - decoding: Set by libavcodec.
- */
- int profile;
-#define FF_PROFILE_UNKNOWN -99
-#define FF_PROFILE_RESERVED -100
-
-#define FF_PROFILE_AAC_MAIN 0
-#define FF_PROFILE_AAC_LOW 1
-#define FF_PROFILE_AAC_SSR 2
-#define FF_PROFILE_AAC_LTP 3
-#define FF_PROFILE_AAC_HE 4
-#define FF_PROFILE_AAC_HE_V2 28
-#define FF_PROFILE_AAC_LD 22
-#define FF_PROFILE_AAC_ELD 38
-#define FF_PROFILE_MPEG2_AAC_LOW 128
-#define FF_PROFILE_MPEG2_AAC_HE 131
-
-#define FF_PROFILE_DNXHD 0
-#define FF_PROFILE_DNXHR_LB 1
-#define FF_PROFILE_DNXHR_SQ 2
-#define FF_PROFILE_DNXHR_HQ 3
-#define FF_PROFILE_DNXHR_HQX 4
-#define FF_PROFILE_DNXHR_444 5
-
-#define FF_PROFILE_DTS 20
-#define FF_PROFILE_DTS_ES 30
-#define FF_PROFILE_DTS_96_24 40
-#define FF_PROFILE_DTS_HD_HRA 50
-#define FF_PROFILE_DTS_HD_MA 60
-#define FF_PROFILE_DTS_EXPRESS 70
-#define FF_PROFILE_DTS_HD_MA_X 61
-#define FF_PROFILE_DTS_HD_MA_X_IMAX 62
-
-
-#define FF_PROFILE_EAC3_DDP_ATMOS 30
-
-#define FF_PROFILE_TRUEHD_ATMOS 30
-
-#define FF_PROFILE_MPEG2_422 0
-#define FF_PROFILE_MPEG2_HIGH 1
-#define FF_PROFILE_MPEG2_SS 2
-#define FF_PROFILE_MPEG2_SNR_SCALABLE 3
-#define FF_PROFILE_MPEG2_MAIN 4
-#define FF_PROFILE_MPEG2_SIMPLE 5
-
-#define FF_PROFILE_H264_CONSTRAINED (1<<9) // 8+1; constraint_set1_flag
-#define FF_PROFILE_H264_INTRA (1<<11) // 8+3; constraint_set3_flag
-
-#define FF_PROFILE_H264_BASELINE 66
-#define FF_PROFILE_H264_CONSTRAINED_BASELINE (66|FF_PROFILE_H264_CONSTRAINED)
-#define FF_PROFILE_H264_MAIN 77
-#define FF_PROFILE_H264_EXTENDED 88
-#define FF_PROFILE_H264_HIGH 100
-#define FF_PROFILE_H264_HIGH_10 110
-#define FF_PROFILE_H264_HIGH_10_INTRA (110|FF_PROFILE_H264_INTRA)
-#define FF_PROFILE_H264_MULTIVIEW_HIGH 118
-#define FF_PROFILE_H264_HIGH_422 122
-#define FF_PROFILE_H264_HIGH_422_INTRA (122|FF_PROFILE_H264_INTRA)
-#define FF_PROFILE_H264_STEREO_HIGH 128
-#define FF_PROFILE_H264_HIGH_444 144
-#define FF_PROFILE_H264_HIGH_444_PREDICTIVE 244
-#define FF_PROFILE_H264_HIGH_444_INTRA (244|FF_PROFILE_H264_INTRA)
-#define FF_PROFILE_H264_CAVLC_444 44
-
-#define FF_PROFILE_VC1_SIMPLE 0
-#define FF_PROFILE_VC1_MAIN 1
-#define FF_PROFILE_VC1_COMPLEX 2
-#define FF_PROFILE_VC1_ADVANCED 3
-
-#define FF_PROFILE_MPEG4_SIMPLE 0
-#define FF_PROFILE_MPEG4_SIMPLE_SCALABLE 1
-#define FF_PROFILE_MPEG4_CORE 2
-#define FF_PROFILE_MPEG4_MAIN 3
-#define FF_PROFILE_MPEG4_N_BIT 4
-#define FF_PROFILE_MPEG4_SCALABLE_TEXTURE 5
-#define FF_PROFILE_MPEG4_SIMPLE_FACE_ANIMATION 6
-#define FF_PROFILE_MPEG4_BASIC_ANIMATED_TEXTURE 7
-#define FF_PROFILE_MPEG4_HYBRID 8
-#define FF_PROFILE_MPEG4_ADVANCED_REAL_TIME 9
-#define FF_PROFILE_MPEG4_CORE_SCALABLE 10
-#define FF_PROFILE_MPEG4_ADVANCED_CODING 11
-#define FF_PROFILE_MPEG4_ADVANCED_CORE 12
-#define FF_PROFILE_MPEG4_ADVANCED_SCALABLE_TEXTURE 13
-#define FF_PROFILE_MPEG4_SIMPLE_STUDIO 14
-#define FF_PROFILE_MPEG4_ADVANCED_SIMPLE 15
-
-#define FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_0 1
-#define FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_1 2
-#define FF_PROFILE_JPEG2000_CSTREAM_NO_RESTRICTION 32768
-#define FF_PROFILE_JPEG2000_DCINEMA_2K 3
-#define FF_PROFILE_JPEG2000_DCINEMA_4K 4
-
-#define FF_PROFILE_VP9_0 0
-#define FF_PROFILE_VP9_1 1
-#define FF_PROFILE_VP9_2 2
-#define FF_PROFILE_VP9_3 3
-
-#define FF_PROFILE_HEVC_MAIN 1
-#define FF_PROFILE_HEVC_MAIN_10 2
-#define FF_PROFILE_HEVC_MAIN_STILL_PICTURE 3
-#define FF_PROFILE_HEVC_REXT 4
-#define FF_PROFILE_HEVC_SCC 9
-
-#define FF_PROFILE_VVC_MAIN_10 1
-#define FF_PROFILE_VVC_MAIN_10_444 33
-
-#define FF_PROFILE_AV1_MAIN 0
-#define FF_PROFILE_AV1_HIGH 1
-#define FF_PROFILE_AV1_PROFESSIONAL 2
-
-#define FF_PROFILE_MJPEG_HUFFMAN_BASELINE_DCT 0xc0
-#define FF_PROFILE_MJPEG_HUFFMAN_EXTENDED_SEQUENTIAL_DCT 0xc1
-#define FF_PROFILE_MJPEG_HUFFMAN_PROGRESSIVE_DCT 0xc2
-#define FF_PROFILE_MJPEG_HUFFMAN_LOSSLESS 0xc3
-#define FF_PROFILE_MJPEG_JPEG_LS 0xf7
-
-#define FF_PROFILE_SBC_MSBC 1
-
-#define FF_PROFILE_PRORES_PROXY 0
-#define FF_PROFILE_PRORES_LT 1
-#define FF_PROFILE_PRORES_STANDARD 2
-#define FF_PROFILE_PRORES_HQ 3
-#define FF_PROFILE_PRORES_4444 4
-#define FF_PROFILE_PRORES_XQ 5
-
-#define FF_PROFILE_ARIB_PROFILE_A 0
-#define FF_PROFILE_ARIB_PROFILE_C 1
-
-#define FF_PROFILE_KLVA_SYNC 0
-#define FF_PROFILE_KLVA_ASYNC 1
-
- /**
- * level
- * - encoding: Set by user.
- * - decoding: Set by libavcodec.
- */
- int level;
-#define FF_LEVEL_UNKNOWN -99
-
- /**
- * Skip loop filtering for selected frames.
- * - encoding: unused
- * - decoding: Set by user.
- */
- enum AVDiscard skip_loop_filter;
-
- /**
- * Skip IDCT/dequantization for selected frames.
- * - encoding: unused
- * - decoding: Set by user.
- */
- enum AVDiscard skip_idct;
-
- /**
- * Skip decoding for selected frames.
- * - encoding: unused
- * - decoding: Set by user.
- */
- enum AVDiscard skip_frame;
-
- /**
- * Header containing style information for text subtitles.
- * For SUBTITLE_ASS subtitle type, it should contain the whole ASS
- * [Script Info] and [V4+ Styles] section, plus the [Events] line and
- * the Format line following. It shouldn't include any Dialogue line.
- * - encoding: Set/allocated/freed by user (before avcodec_open2())
- * - decoding: Set/allocated/freed by libavcodec (by avcodec_open2())
- */
- uint8_t *subtitle_header;
- int subtitle_header_size;
-
- /**
- * Audio only. The number of "priming" samples (padding) inserted by the
- * encoder at the beginning of the audio. I.e. this number of leading
- * decoded samples must be discarded by the caller to get the original audio
- * without leading padding.
- *
- * - decoding: unused
- * - encoding: Set by libavcodec. The timestamps on the output packets are
- * adjusted by the encoder so that they always refer to the
- * first sample of the data actually contained in the packet,
- * including any added padding. E.g. if the timebase is
- * 1/samplerate and the timestamp of the first input sample is
- * 0, the timestamp of the first output packet will be
- * -initial_padding.
- */
- int initial_padding;
-
- /**
- * - decoding: For codecs that store a framerate value in the compressed
- * bitstream, the decoder may export it here. { 0, 1} when
- * unknown.
- * - encoding: May be used to signal the framerate of CFR content to an
- * encoder.
- */
- AVRational framerate;
-
- /**
- * Nominal unaccelerated pixel format, see AV_PIX_FMT_xxx.
- * - encoding: unused.
- * - decoding: Set by libavcodec before calling get_format()
- */
- enum AVPixelFormat sw_pix_fmt;
-
- /**
- * Timebase in which pkt_dts/pts and AVPacket.dts/pts are.
- * - encoding unused.
- * - decoding set by user.
- */
- AVRational pkt_timebase;
-
- /**
- * AVCodecDescriptor
- * - encoding: unused.
- * - decoding: set by libavcodec.
- */
- const AVCodecDescriptor *codec_descriptor;
-
- /**
- * Current statistics for PTS correction.
- * - decoding: maintained and used by libavcodec, not intended to be used by user apps
- * - encoding: unused
- */
- int64_t pts_correction_num_faulty_pts; ///< Number of incorrect PTS values so far
- int64_t pts_correction_num_faulty_dts; ///< Number of incorrect DTS values so far
- int64_t pts_correction_last_pts; ///< PTS of the last frame
- int64_t pts_correction_last_dts; ///< DTS of the last frame
-
- /**
- * Character encoding of the input subtitles file.
- * - decoding: set by user
- * - encoding: unused
- */
- char *sub_charenc;
-
- /**
- * Subtitles character encoding mode. Formats or codecs might be adjusting
- * this setting (if they are doing the conversion themselves for instance).
- * - decoding: set by libavcodec
- * - encoding: unused
- */
- int sub_charenc_mode;
-#define FF_SUB_CHARENC_MODE_DO_NOTHING -1 ///< do nothing (demuxer outputs a stream supposed to be already in UTF-8, or the codec is bitmap for instance)
-#define FF_SUB_CHARENC_MODE_AUTOMATIC 0 ///< libavcodec will select the mode itself
-#define FF_SUB_CHARENC_MODE_PRE_DECODER 1 ///< the AVPacket data needs to be recoded to UTF-8 before being fed to the decoder, requires iconv
-#define FF_SUB_CHARENC_MODE_IGNORE 2 ///< neither convert the subtitles, nor check them for valid UTF-8
-
- /**
- * Skip processing alpha if supported by codec.
- * Note that if the format uses pre-multiplied alpha (common with VP6,
- * and recommended due to better video quality/compression)
- * the image will look as if alpha-blended onto a black background.
- * However for formats that do not use pre-multiplied alpha
- * there might be serious artefacts (though e.g. libswscale currently
- * assumes pre-multiplied alpha anyway).
- *
- * - decoding: set by user
- * - encoding: unused
- */
- int skip_alpha;
-
- /**
- * Number of samples to skip after a discontinuity
- * - decoding: unused
- * - encoding: set by libavcodec
- */
- int seek_preroll;
-
- /**
- * custom chroma intra quantization matrix
- * - encoding: Set by user, can be NULL.
- * - decoding: unused.
- */
- uint16_t *chroma_intra_matrix;
-
- /**
- * dump format separator.
- * can be ", " or "\n " or anything else
- * - encoding: Set by user.
- * - decoding: Set by user.
- */
- uint8_t *dump_separator;
-
- /**
- * ',' separated list of allowed decoders.
- * If NULL then all are allowed
- * - encoding: unused
- * - decoding: set by user
- */
- char *codec_whitelist;
-
- /**
- * Properties of the stream that gets decoded
- * - encoding: unused
- * - decoding: set by libavcodec
- */
- unsigned properties;
-#define FF_CODEC_PROPERTY_LOSSLESS 0x00000001
-#define FF_CODEC_PROPERTY_CLOSED_CAPTIONS 0x00000002
-#define FF_CODEC_PROPERTY_FILM_GRAIN 0x00000004
-
- /**
- * Additional data associated with the entire coded stream.
- *
- * - decoding: unused
- * - encoding: may be set by libavcodec after avcodec_open2().
- */
- AVPacketSideData *coded_side_data;
- int nb_coded_side_data;
-
- /**
- * A reference to the AVHWFramesContext describing the input (for encoding)
- * or output (decoding) frames. The reference is set by the caller and
- * afterwards owned (and freed) by libavcodec - it should never be read by
- * the caller after being set.
- *
- * - decoding: This field should be set by the caller from the get_format()
- * callback. The previous reference (if any) will always be
- * unreffed by libavcodec before the get_format() call.
- *
- * If the default get_buffer2() is used with a hwaccel pixel
- * format, then this AVHWFramesContext will be used for
- * allocating the frame buffers.
- *
- * - encoding: For hardware encoders configured to use a hwaccel pixel
- * format, this field should be set by the caller to a reference
- * to the AVHWFramesContext describing input frames.
- * AVHWFramesContext.format must be equal to
- * AVCodecContext.pix_fmt.
- *
- * This field should be set before avcodec_open2() is called.
- */
- AVBufferRef *hw_frames_ctx;
-
- /**
- * Audio only. The amount of padding (in samples) appended by the encoder to
- * the end of the audio. I.e. this number of decoded samples must be
- * discarded by the caller from the end of the stream to get the original
- * audio without any trailing padding.
- *
- * - decoding: unused
- * - encoding: unused
- */
- int trailing_padding;
-
- /**
- * The number of pixels per image to maximally accept.
- *
- * - decoding: set by user
- * - encoding: set by user
- */
- int64_t max_pixels;
-
- /**
- * A reference to the AVHWDeviceContext describing the device which will
- * be used by a hardware encoder/decoder. The reference is set by the
- * caller and afterwards owned (and freed) by libavcodec.
- *
- * This should be used if either the codec device does not require
- * hardware frames or any that are used are to be allocated internally by
- * libavcodec. If the user wishes to supply any of the frames used as
- * encoder input or decoder output then hw_frames_ctx should be used
- * instead. When hw_frames_ctx is set in get_format() for a decoder, this
- * field will be ignored while decoding the associated stream segment, but
- * may again be used on a following one after another get_format() call.
- *
- * For both encoders and decoders this field should be set before
- * avcodec_open2() is called and must not be written to thereafter.
- *
- * Note that some decoders may require this field to be set initially in
- * order to support hw_frames_ctx at all - in that case, all frames
- * contexts used must be created on the same device.
- */
- AVBufferRef *hw_device_ctx;
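-
- /*
- * A sketch of the usual handoff; AV_HWDEVICE_TYPE_VAAPI is illustrative
- * and any type supported by av_hwdevice_ctx_create() works the same way:
- * @code
- * #include <libavutil/hwcontext.h>
- *
- * AVBufferRef *dev = NULL;
- * if (av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0) >= 0)
- *     avctx->hw_device_ctx = dev; // ownership of the reference passes to avctx
- * // ... then avcodec_open2(avctx, codec, NULL);
- * @endcode
- */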
-
- /**
- * Bit set of AV_HWACCEL_FLAG_* flags, which affect hardware accelerated
- * decoding (if active).
- * - encoding: unused
- * - decoding: Set by user (either before avcodec_open2(), or in the
- * AVCodecContext.get_format callback)
- */
- int hwaccel_flags;
-
- /**
- * Video decoding only. Certain video codecs support cropping, meaning that
- * only a sub-rectangle of the decoded frame is intended for display. This
- * option controls how cropping is handled by libavcodec.
- *
- * When set to 1 (the default), libavcodec will apply cropping internally.
- * I.e. it will modify the output frame width/height fields and offset the
- * data pointers (only by as much as possible while preserving alignment, or
- * by the full amount if the AV_CODEC_FLAG_UNALIGNED flag is set) so that
- * the frames output by the decoder refer only to the cropped area. The
- * crop_* fields of the output frames will be zero.
- *
- * When set to 0, the width/height fields of the output frames will be set
- * to the coded dimensions and the crop_* fields will describe the cropping
- * rectangle. Applying the cropping is left to the caller.
- *
- * @warning When hardware acceleration with opaque output frames is used,
- * libavcodec is unable to apply cropping from the top/left border.
- *
- * @note when this option is set to zero, the width/height fields of the
- * AVCodecContext and output AVFrames have different meanings. The codec
- * context fields store display dimensions (with the coded dimensions in
- * coded_width/height), while the frame fields store the coded dimensions
- * (with the display dimensions being determined by the crop_* fields).
- */
- int apply_cropping;
-
- /**
- * Video decoding only. Sets the number of extra hardware frames which
- * the decoder will allocate for use by the caller. This must be set
- * before avcodec_open2() is called.
- *
- * Some hardware decoders require all frames that they will use for
- * output to be defined in advance before decoding starts. For such
- * decoders, the hardware frame pool must therefore be of a fixed size.
- * The extra frames set here are on top of any number that the decoder
- * needs internally in order to operate normally (for example, frames
- * used as reference pictures).
- */
- int extra_hw_frames;
-
- /**
- * The percentage of damaged samples to discard a frame.
- *
- * - decoding: set by user
- * - encoding: unused
- */
- int discard_damaged_percentage;
-
- /**
- * The number of samples per frame to maximally accept.
- *
- * - decoding: set by user
- * - encoding: set by user
- */
- int64_t max_samples;
-
- /**
- * Bit set of AV_CODEC_EXPORT_DATA_* flags, which affects the kind of
- * metadata exported in frame, packet, or coded stream side data by
- * decoders and encoders.
- *
- * - decoding: set by user
- * - encoding: set by user
- */
- int export_side_data;
-
- /**
- * This callback is called at the beginning of each packet to get a data
- * buffer for it.
- *
- * The following field will be set in the packet before this callback is
- * called:
- * - size
- * This callback must use the above value to calculate the required buffer size,
- * which must be padded by at least AV_INPUT_BUFFER_PADDING_SIZE bytes.
- *
- * In some specific cases, the encoder may not use the entire buffer allocated by this
- * callback. This will be reflected in the size value in the packet once returned by
- * avcodec_receive_packet().
- *
- * This callback must fill the following fields in the packet:
- * - data: alignment requirements for AVPacket apply, if any. Some architectures and
- * encoders may benefit from having aligned data.
- * - buf: must contain a pointer to an AVBufferRef structure. The packet's
- * data pointer must be contained in it. See: av_buffer_create(), av_buffer_alloc(),
- * and av_buffer_ref().
- *
- * If AV_CODEC_CAP_DR1 is not set then get_encode_buffer() must call
- * avcodec_default_get_encode_buffer() instead of providing a buffer allocated by
- * some other means.
- *
- * The flags field may contain a combination of AV_GET_ENCODE_BUFFER_FLAG_ flags.
- * They may be used for example to hint what use the buffer may get after being
- * created.
- * Implementations of this callback may ignore flags they don't understand.
- * If AV_GET_ENCODE_BUFFER_FLAG_REF is set in flags then the packet may be reused
- * (read and/or written to if it is writable) later by libavcodec.
- *
- * This callback must be thread-safe, as when frame threading is used, it may
- * be called from multiple threads simultaneously.
- *
- * @see avcodec_default_get_encode_buffer()
- *
- * - encoding: Set by libavcodec, user can override.
- * - decoding: unused
- */
- int (*get_encode_buffer)(struct AVCodecContext *s, AVPacket *pkt, int flags);
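-
- /*
- * As with get_buffer2(), the least invasive override delegates to the
- * default allocator (a sketch; anything more elaborate must respect the
- * padding and AVBufferRef rules above):
- * @code
- * static int my_get_encode_buffer(struct AVCodecContext *s, AVPacket *pkt, int flags)
- * {
- *     return avcodec_default_get_encode_buffer(s, pkt, flags);
- * }
- * @endcode
- */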
-
- /**
- * Audio channel layout.
- * - encoding: must be set by the caller, to one of AVCodec.ch_layouts.
- * - decoding: may be set by the caller if known e.g. from the container.
- * The decoder can then override during decoding as needed.
- */
- AVChannelLayout ch_layout;
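-
- /*
- * Sketch: configuring a stereo layout on a not-yet-opened audio encoder
- * context "enc_ctx" (the name is illustrative):
- * @code
- * av_channel_layout_default(&enc_ctx->ch_layout, 2); // native stereo layout
- * // or duplicate an existing layout:
- * // av_channel_layout_copy(&enc_ctx->ch_layout, &src->ch_layout);
- * @endcode
- */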
-
- /**
- * Frame counter, set by libavcodec.
- *
- * - decoding: total number of frames returned from the decoder so far.
- * - encoding: total number of frames passed to the encoder so far.
- *
- * @note the counter is not incremented if encoding/decoding resulted in
- * an error.
- */
- int64_t frame_num;
-} AVCodecContext;
-
-/**
- * @defgroup lavc_hwaccel AVHWAccel
- *
- * @note Nothing in this structure should be accessed by the user. At some
- * point in future it will not be externally visible at all.
- *
- * @{
- */
-typedef struct AVHWAccel {
- /**
- * Name of the hardware accelerated codec.
- * The name is globally unique among encoders and among decoders (but an
- * encoder and a decoder can share the same name).
- */
- const char *name;
-
- /**
- * Type of codec implemented by the hardware accelerator.
- *
- * See AVMEDIA_TYPE_xxx
- */
- enum AVMediaType type;
-
- /**
- * Codec implemented by the hardware accelerator.
- *
- * See AV_CODEC_ID_xxx
- */
- enum AVCodecID id;
-
- /**
- * Supported pixel format.
- *
- * Only hardware accelerated formats are supported here.
- */
- enum AVPixelFormat pix_fmt;
-
- /**
- * Hardware accelerated codec capabilities.
- * see AV_HWACCEL_CODEC_CAP_*
- */
- int capabilities;
-
- /*****************************************************************
- * No fields below this line are part of the public API. They
- * may not be used outside of libavcodec and can be changed and
- * removed at will.
- * New public fields should be added right above.
- *****************************************************************
- */
-
- /**
- * Allocate a custom buffer
- */
- int (*alloc_frame)(AVCodecContext *avctx, AVFrame *frame);
-
- /**
- * Called at the beginning of each frame or field picture.
- *
- * Meaningful frame information (codec specific) is guaranteed to
- * be parsed at this point. This function is mandatory.
- *
- * Note that buf can be NULL along with buf_size set to 0.
- * Otherwise, this means the whole frame is available at this point.
- *
- * @param avctx the codec context
- * @param buf the frame data buffer base
- * @param buf_size the size of the frame in bytes
- * @return zero if successful, a negative value otherwise
- */
- int (*start_frame)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size);
-
- /**
- * Callback for parameter data (SPS/PPS/VPS etc).
- *
- * Useful for hardware decoders which keep persistent state about the
- * video parameters, and need to receive any changes to update that state.
- *
- * @param avctx the codec context
- * @param type the nal unit type
- * @param buf the nal unit data buffer
- * @param buf_size the size of the nal unit in bytes
- * @return zero if successful, a negative value otherwise
- */
- int (*decode_params)(AVCodecContext *avctx, int type, const uint8_t *buf, uint32_t buf_size);
-
- /**
- * Callback for each slice.
- *
- * Meaningful slice information (codec specific) is guaranteed to
- * be parsed at this point. This function is mandatory.
- *
- * @param avctx the codec context
- * @param buf the slice data buffer base
- * @param buf_size the size of the slice in bytes
- * @return zero if successful, a negative value otherwise
- */
- int (*decode_slice)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size);
-
- /**
- * Called at the end of each frame or field picture.
- *
- * The whole picture is parsed at this point and can now be sent
- * to the hardware accelerator. This function is mandatory.
- *
- * @param avctx the codec context
- * @return zero if successful, a negative value otherwise
- */
- int (*end_frame)(AVCodecContext *avctx);
-
- /**
- * Size of per-frame hardware accelerator private data.
- *
- * Private data is allocated with av_mallocz() before
- * AVCodecContext.get_buffer() and deallocated after
- * AVCodecContext.release_buffer().
- */
- int frame_priv_data_size;
-
- /**
- * Initialize the hwaccel private data.
- *
- * This will be called from ff_get_format(), after hwaccel and
- * hwaccel_context are set and the hwaccel private data in AVCodecInternal
- * is allocated.
- */
- int (*init)(AVCodecContext *avctx);
-
- /**
- * Uninitialize the hwaccel private data.
- *
- * This will be called from get_format() or avcodec_close(), after hwaccel
- * and hwaccel_context are already uninitialized.
- */
- int (*uninit)(AVCodecContext *avctx);
-
- /**
- * Size of the private data to allocate in
- * AVCodecInternal.hwaccel_priv_data.
- */
- int priv_data_size;
-
- /**
- * Internal hwaccel capabilities.
- */
- int caps_internal;
-
- /**
- * Fill the given hw_frames context with current codec parameters. Called
- * from get_format. Refer to avcodec_get_hw_frames_parameters() for
- * details.
- *
- * This CAN be called before AVHWAccel.init is called, and you must assume
- * that avctx->hwaccel_priv_data is invalid.
- */
- int (*frame_params)(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx);
-} AVHWAccel;
-
-/**
- * HWAccel is experimental and is thus avoided in favor of non experimental
- * codecs
- */
-#define AV_HWACCEL_CODEC_CAP_EXPERIMENTAL 0x0200
-
-/**
- * Hardware acceleration should be used for decoding even if the codec level
- * used is unknown or higher than the maximum supported level reported by the
- * hardware driver.
- *
- * It's generally a good idea to pass this flag unless you have a specific
- * reason not to, as hardware tends to under-report supported levels.
- */
-#define AV_HWACCEL_FLAG_IGNORE_LEVEL (1 << 0)
-
-/**
- * Hardware acceleration can output YUV pixel formats with a different chroma
- * sampling than 4:2:0 and/or other than 8 bits per component.
- */
-#define AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH (1 << 1)
-
-/**
- * Hardware acceleration should still be attempted for decoding when the
- * codec profile does not match the reported capabilities of the hardware.
- *
- * For example, this can be used to try to decode baseline profile H.264
- * streams in hardware - it will often succeed, because many streams marked
- * as baseline profile actually conform to constrained baseline profile.
- *
- * @warning If the stream is actually not supported then the behaviour is
- * undefined, and may include returning entirely incorrect output
- * while indicating success.
- */
-#define AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH (1 << 2)
-
-/**
- * Some hardware decoders (namely nvdec) can either output direct decoder
- * surfaces, or make an on-device copy and return said copy.
- * There is a hard limit on how many decoder surfaces there can be, and it
- * cannot be accurately guessed ahead of time.
- * For some processing chains, this can be okay, but others will run into the
- * limit and in turn produce very confusing errors that require fine tuning of
- * more or less obscure options by the user, or in extreme cases cannot be
- * resolved at all without inserting an avfilter that forces a copy.
- *
- * Thus, the hwaccel will by default make a copy for safety and resilience.
- * If a user really wants to minimize the number of copies, they can set this
- * flag and ensure their processing chain does not exhaust the surface pool.
- */
-#define AV_HWACCEL_FLAG_UNSAFE_OUTPUT (1 << 3)
-
-/**
- * @}
- */
-
-enum AVSubtitleType {
- SUBTITLE_NONE,
-
- SUBTITLE_BITMAP, ///< A bitmap, pict will be set
-
- /**
- * Plain text, the text field must be set by the decoder and is
- * authoritative. ass and pict fields may contain approximations.
- */
- SUBTITLE_TEXT,
-
- /**
- * Formatted text, the ass field must be set by the decoder and is
- * authoritative. pict and text fields may contain approximations.
- */
- SUBTITLE_ASS,
-};
-
-#define AV_SUBTITLE_FLAG_FORCED 0x00000001
-
-typedef struct AVSubtitleRect {
- int x; ///< top left corner of pict, undefined when pict is not set
- int y; ///< top left corner of pict, undefined when pict is not set
- int w; ///< width of pict, undefined when pict is not set
- int h; ///< height of pict, undefined when pict is not set
- int nb_colors; ///< number of colors in pict, undefined when pict is not set
-
- /**
- * data+linesize for the bitmap of this subtitle.
- * Can be set for text/ass as well once they are rendered.
- */
- uint8_t *data[4];
- int linesize[4];
-
- enum AVSubtitleType type;
-
- char *text; ///< 0 terminated plain UTF-8 text
-
- /**
- * 0 terminated ASS/SSA compatible event line.
- * The presentation of this is unaffected by the other values in this
- * struct.
- */
- char *ass;
-
- int flags;
-} AVSubtitleRect;
-
-typedef struct AVSubtitle {
- uint16_t format; /* 0 = graphics */
- uint32_t start_display_time; /* relative to packet pts, in ms */
- uint32_t end_display_time; /* relative to packet pts, in ms */
- unsigned num_rects;
- AVSubtitleRect **rects;
- int64_t pts; ///< Same as packet pts, in AV_TIME_BASE
-} AVSubtitle;
-
-/**
- * Return the LIBAVCODEC_VERSION_INT constant.
- */
-unsigned avcodec_version(void);
-
-/**
- * Return the libavcodec build-time configuration.
- */
-const char *avcodec_configuration(void);
-
-/**
- * Return the libavcodec license.
- */
-const char *avcodec_license(void);
-
-/**
- * Allocate an AVCodecContext and set its fields to default values. The
- * resulting struct should be freed with avcodec_free_context().
- *
- * @param codec if non-NULL, allocate private data and initialize defaults
- * for the given codec. It is illegal to then call avcodec_open2()
- * with a different codec.
- * If NULL, then the codec-specific defaults won't be initialized,
- * which may result in suboptimal default settings (this is
- * important mainly for encoders, e.g. libx264).
- *
- * @return An AVCodecContext filled with default values or NULL on failure.
- */
-AVCodecContext *avcodec_alloc_context3(const AVCodec *codec);
-
-/**
- * Free the codec context and everything associated with it and write NULL to
- * the provided pointer.
- */
-void avcodec_free_context(AVCodecContext **avctx);
-
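-/*
- * Illustrative sketch: the usual allocate/use/free lifecycle implied by
- * the two functions above. codec is a placeholder obtained by the caller
- * and error handling is abbreviated.
- *
- * @code
- * AVCodecContext *ctx = avcodec_alloc_context3(codec);
- * if (!ctx)
- *     return AVERROR(ENOMEM);
- * // ... set options, call avcodec_open2(), decode or encode ...
- * avcodec_free_context(&ctx); // frees everything and sets ctx to NULL
- * @endcode
- */
-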
-/**
- * Get the AVClass for AVCodecContext. It can be used in combination with
- * AV_OPT_SEARCH_FAKE_OBJ for examining options.
- *
- * @see av_opt_find().
- */
-const AVClass *avcodec_get_class(void);
-
-/**
- * Get the AVClass for AVSubtitleRect. It can be used in combination with
- * AV_OPT_SEARCH_FAKE_OBJ for examining options.
- *
- * @see av_opt_find().
- */
-const AVClass *avcodec_get_subtitle_rect_class(void);
-
-/**
- * Fill the parameters struct based on the values from the supplied codec
- * context. Any allocated fields in par are freed and replaced with duplicates
- * of the corresponding fields in codec.
- *
- * @return >= 0 on success, a negative AVERROR code on failure
- */
-int avcodec_parameters_from_context(AVCodecParameters *par,
- const AVCodecContext *codec);
-
-/**
- * Fill the codec context based on the values from the supplied codec
- * parameters. Any allocated fields in codec that have a corresponding field in
- * par are freed and replaced with duplicates of the corresponding field in par.
- * Fields in codec that do not have a counterpart in par are not touched.
- *
- * @return >= 0 on success, a negative AVERROR code on failure.
- */
-int avcodec_parameters_to_context(AVCodecContext *codec,
- const AVCodecParameters *par);
-
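-/*
- * Illustrative sketch: the two functions above are mirror images. A
- * demuxing application typically copies stream parameters into a decoder
- * context, while a muxing application copies encoder settings back into
- * the stream. dec_ctx, enc_ctx, in_stream and out_stream are placeholders
- * and return-value checks are omitted.
- *
- * @code
- * // demuxing: AVStream -> decoder context
- * avcodec_parameters_to_context(dec_ctx, in_stream->codecpar);
- *
- * // muxing: encoder context -> AVStream
- * avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);
- * @endcode
- */
-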
-/**
- * Initialize the AVCodecContext to use the given AVCodec. Prior to using this
- * function the context has to be allocated with avcodec_alloc_context3().
- *
- * The functions avcodec_find_decoder_by_name(), avcodec_find_encoder_by_name(),
- * avcodec_find_decoder() and avcodec_find_encoder() provide an easy way for
- * retrieving a codec.
- *
- * Depending on the codec, you might need to set options in the codec context
- * also for decoding (e.g. width, height, or the pixel or audio sample format in
- * the case the information is not available in the bitstream, as when decoding
- * raw audio or video).
- *
- * Options in the codec context can be set either by setting them in the options
- * AVDictionary, or by setting the values in the context itself, directly or by
- * using the av_opt_set() API before calling this function.
- *
- * Example:
- * @code
- * AVDictionary *opts = NULL;
- * av_dict_set(&opts, "b", "2.5M", 0);
- * codec = avcodec_find_decoder(AV_CODEC_ID_H264);
- * if (!codec)
- * exit(1);
- *
- * context = avcodec_alloc_context3(codec);
- *
- * if (avcodec_open2(context, codec, &opts) < 0)
- * exit(1);
- * @endcode
- *
- * In the case AVCodecParameters are available (e.g. when demuxing a stream
- * using libavformat, and accessing the AVStream contained in the demuxer), the
- * codec parameters can be copied to the codec context using
- * avcodec_parameters_to_context(), as in the following example:
- *
- * @code
- * AVStream *stream = ...;
- * context = avcodec_alloc_context3(codec);
- * if (avcodec_parameters_to_context(context, stream->codecpar) < 0)
- * exit(1);
- * if (avcodec_open2(context, codec, NULL) < 0)
- * exit(1);
- * @endcode
- *
- * @note Always call this function before using decoding routines (such as
- * @ref avcodec_receive_frame()).
- *
- * @param avctx The context to initialize.
- * @param codec The codec to open this context for. If a non-NULL codec has been
- * previously passed to avcodec_alloc_context3() for this context,
- * then this parameter MUST be either NULL or
- * equal to the previously passed codec.
- * @param options A dictionary filled with AVCodecContext and codec-private
- * options, which are set on top of the options already set in
- * avctx, can be NULL. On return this object will be filled with
- * options that were not found in the avctx codec context.
- *
- * @return zero on success, a negative value on error
- * @see avcodec_alloc_context3(), avcodec_find_decoder(), avcodec_find_encoder(),
- * av_dict_set(), av_opt_set(), av_opt_find(), avcodec_parameters_to_context()
- */
-int avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options);
-
-/**
- * Close a given AVCodecContext and free all the data associated with it
- * (but not the AVCodecContext itself).
- *
- * Calling this function on an AVCodecContext that hasn't been opened will free
- * the codec-specific data allocated in avcodec_alloc_context3() with a non-NULL
- * codec. Subsequent calls will do nothing.
- *
- * @note Do not use this function. Use avcodec_free_context() to destroy a
- * codec context (either open or closed). Opening and closing a codec context
- * multiple times is not supported anymore -- use multiple codec contexts
- * instead.
- */
-int avcodec_close(AVCodecContext *avctx);
-
-/**
- * Free all allocated data in the given subtitle struct.
- *
- * @param sub AVSubtitle to free.
- */
-void avsubtitle_free(AVSubtitle *sub);
-
-/**
- * @}
- */
-
-/**
- * @addtogroup lavc_decoding
- * @{
- */
-
-/**
- * The default callback for AVCodecContext.get_buffer2(). It is made public so
- * it can be called by custom get_buffer2() implementations for decoders without
- * AV_CODEC_CAP_DR1 set.
- */
-int avcodec_default_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags);
-
-/**
- * The default callback for AVCodecContext.get_encode_buffer(). It is made public so
- * it can be called by custom get_encode_buffer() implementations for encoders without
- * AV_CODEC_CAP_DR1 set.
- */
-int avcodec_default_get_encode_buffer(AVCodecContext *s, AVPacket *pkt, int flags);
-
-/**
- * Modify width and height values so that they will result in a memory
- * buffer that is acceptable for the codec if you do not use any horizontal
- * padding.
- *
- * May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.
- */
-void avcodec_align_dimensions(AVCodecContext *s, int *width, int *height);
-
-/**
- * Modify width and height values so that they will result in a memory
- * buffer that is acceptable for the codec if you also ensure that all
- * line sizes are a multiple of the respective linesize_align[i].
- *
- * May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.
- */
-void avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height,
- int linesize_align[AV_NUM_DATA_POINTERS]);
-
-#ifdef FF_API_AVCODEC_CHROMA_POS
-/**
- * Converts AVChromaLocation to swscale x/y chroma position.
- *
- * The positions represent the chroma (0,0) position in a coordinates system
- * with luma (0,0) representing the origin and luma(1,1) representing 256,256
- *
- * @param xpos horizontal chroma sample position
- * @param ypos vertical chroma sample position
- * @deprecated Use av_chroma_location_enum_to_pos() instead.
- */
- attribute_deprecated
-int avcodec_enum_to_chroma_pos(int *xpos, int *ypos, enum AVChromaLocation pos);
-
-/**
- * Converts swscale x/y chroma position to AVChromaLocation.
- *
- * The positions represent the chroma (0,0) position in a coordinates system
- * with luma (0,0) representing the origin and luma(1,1) representing 256,256
- *
- * @param xpos horizontal chroma sample position
- * @param ypos vertical chroma sample position
- * @deprecated Use av_chroma_location_pos_to_enum() instead.
- */
- attribute_deprecated
-enum AVChromaLocation avcodec_chroma_pos_to_enum(int xpos, int ypos);
-#endif
-
-/**
- * Decode a subtitle message.
- * Return a negative value on error, otherwise return the number of bytes used.
- * If no subtitle could be decompressed, got_sub_ptr is zero.
- * Otherwise, the subtitle is stored in *sub.
- * Note that AV_CODEC_CAP_DR1 is not available for subtitle codecs. This is for
- * simplicity, because the performance difference is expected to be negligible
- * and reusing a get_buffer written for video codecs would probably perform badly
- * due to a potentially very different allocation pattern.
- *
- * Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input
- * and output. This means that for some packets they will not immediately
- * produce decoded output and need to be flushed at the end of decoding to get
- * all the decoded data. Flushing is done by calling this function with packets
- * with avpkt->data set to NULL and avpkt->size set to 0 until it stops
- * returning subtitles. It is safe to flush even those decoders that are not
- * marked with AV_CODEC_CAP_DELAY, in which case no subtitles will be returned.
- *
- * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()
- * before packets may be fed to the decoder.
- *
- * @param avctx the codec context
- * @param[out] sub The preallocated AVSubtitle in which the decoded subtitle will be stored,
- * must be freed with avsubtitle_free if *got_sub_ptr is set.
- * @param[in,out] got_sub_ptr Zero if no subtitle could be decompressed; otherwise it is nonzero.
- * @param[in] avpkt The input AVPacket containing the input buffer.
- */
-int avcodec_decode_subtitle2(AVCodecContext *avctx, AVSubtitle *sub,
- int *got_sub_ptr, const AVPacket *avpkt);
-
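-/*
- * Illustrative sketch: decoding a single subtitle packet with the
- * function above and releasing the result. avctx and pkt are
- * placeholders set up by the caller.
- *
- * @code
- * AVSubtitle sub;
- * int got_sub = 0;
- * if (avcodec_decode_subtitle2(avctx, &sub, &got_sub, pkt) >= 0 && got_sub) {
- *     // ... use sub.rects[0] .. sub.rects[sub.num_rects - 1] ...
- *     avsubtitle_free(&sub);
- * }
- * @endcode
- */
-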
-/**
- * Supply raw packet data as input to a decoder.
- *
- * Internally, this call will copy relevant AVCodecContext fields, which can
- * influence decoding per-packet, and apply them when the packet is actually
- * decoded. (For example AVCodecContext.skip_frame, which might direct the
- * decoder to drop the frame contained by the packet sent with this function.)
- *
- * @warning The input buffer, avpkt->data must be AV_INPUT_BUFFER_PADDING_SIZE
- * larger than the actual read bytes because some optimized bitstream
- * readers read 32 or 64 bits at once and could read over the end.
- *
- * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()
- * before packets may be fed to the decoder.
- *
- * @param avctx codec context
- * @param[in] avpkt The input AVPacket. Usually, this will be a single video
- * frame, or several complete audio frames.
- * Ownership of the packet remains with the caller, and the
- * decoder will not write to the packet. The decoder may create
- * a reference to the packet data (or copy it if the packet is
- * not reference-counted).
- * Unlike with older APIs, the packet is always fully consumed,
- * and if it contains multiple frames (e.g. some audio codecs),
- * will require you to call avcodec_receive_frame() multiple
- * times afterwards before you can send a new packet.
- * It can be NULL (or an AVPacket with data set to NULL and
- * size set to 0); in this case, it is considered a flush
- * packet, which signals the end of the stream. Sending the
- * first flush packet will return success. Subsequent ones are
- * unnecessary and will return AVERROR_EOF. If the decoder
- * still has frames buffered, it will return them after sending
- * a flush packet.
- *
- * @retval 0 success
- * @retval AVERROR(EAGAIN) input is not accepted in the current state - user
- * must read output with avcodec_receive_frame() (once
- * all output is read, the packet should be resent,
- * and the call will not fail with EAGAIN).
- * @retval AVERROR_EOF the decoder has been flushed, and no new packets can be
- * sent to it (also returned if more than 1 flush
- * packet is sent)
- * @retval AVERROR(EINVAL) codec not opened, it is an encoder, or requires flush
- * @retval AVERROR(ENOMEM) failed to add packet to internal queue, or similar
- * @retval "another negative error code" legitimate decoding errors
- */
-int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
-
-/**
- * Return decoded output data from a decoder or encoder (when the
- * @ref AV_CODEC_FLAG_RECON_FRAME flag is used).
- *
- * @param avctx codec context
- * @param frame This will be set to a reference-counted video or audio
- * frame (depending on the decoder type) allocated by the
- * codec. Note that the function will always call
- * av_frame_unref(frame) before doing anything else.
- *
- * @retval 0 success, a frame was returned
- * @retval AVERROR(EAGAIN) output is not available in this state - user must
- * try to send new input
- * @retval AVERROR_EOF the codec has been fully flushed, and there will be
- * no more output frames
- * @retval AVERROR(EINVAL) codec not opened, or it is an encoder without the
- * @ref AV_CODEC_FLAG_RECON_FRAME flag enabled
- * @retval AVERROR_INPUT_CHANGED current decoded frame has changed parameters with
- * respect to first decoded frame. Applicable when flag
- * AV_CODEC_FLAG_DROPCHANGED is set.
- * @retval "other negative error code" legitimate decoding errors
- */
-int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);
-
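-/*
- * Illustrative sketch: the canonical send/receive decoding loop built
- * from the two functions above. avctx, pkt and frame are placeholders
- * assumed to be set up by the caller; EAGAIN handling on the send side
- * and the final NULL-packet flush are abbreviated.
- *
- * @code
- * if (avcodec_send_packet(avctx, pkt) < 0)
- *     return -1;
- * for (;;) {
- *     int ret = avcodec_receive_frame(avctx, frame);
- *     if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
- *         break;      // need more input, or decoder fully flushed
- *     if (ret < 0)
- *         return ret; // a legitimate decoding error
- *     // ... use frame; it is unreferenced by the next receive call ...
- * }
- * @endcode
- */
-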
-/**
- * Supply a raw video or audio frame to the encoder. Use avcodec_receive_packet()
- * to retrieve buffered output packets.
- *
- * @param avctx codec context
- * @param[in] frame AVFrame containing the raw audio or video frame to be encoded.
- * Ownership of the frame remains with the caller, and the
- * encoder will not write to the frame. The encoder may create
- * a reference to the frame data (or copy it if the frame is
- * not reference-counted).
- * It can be NULL, in which case it is considered a flush
- * packet. This signals the end of the stream. If the encoder
- * still has packets buffered, it will return them after this
- * call. Once flushing mode has been entered, additional flush
- * packets are ignored, and sending frames will return
- * AVERROR_EOF.
- *
- * For audio:
- * If AV_CODEC_CAP_VARIABLE_FRAME_SIZE is set, then each frame
- * can have any number of samples.
- * If it is not set, frame->nb_samples must be equal to
- * avctx->frame_size for all frames except the last.
- * The final frame may be smaller than avctx->frame_size.
- * @retval 0 success
- * @retval AVERROR(EAGAIN) input is not accepted in the current state - user must
- * read output with avcodec_receive_packet() (once all
- * output is read, the packet should be resent, and the
- * call will not fail with EAGAIN).
- * @retval AVERROR_EOF the encoder has been flushed, and no new frames can
- * be sent to it
- * @retval AVERROR(EINVAL) codec not opened, it is a decoder, or requires flush
- * @retval AVERROR(ENOMEM) failed to add packet to internal queue, or similar
- * @retval "another negative error code" legitimate encoding errors
- */
-int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
-
-/**
- * Read encoded data from the encoder.
- *
- * @param avctx codec context
- * @param avpkt This will be set to a reference-counted packet allocated by the
- * encoder. Note that the function will always call
- * av_packet_unref(avpkt) before doing anything else.
- * @retval 0 success
- * @retval AVERROR(EAGAIN) output is not available in the current state - user must
- * try to send input
- * @retval AVERROR_EOF the encoder has been fully flushed, and there will be no
- * more output packets
- * @retval AVERROR(EINVAL) codec not opened, or it is a decoder
- * @retval "another negative error code" legitimate encoding errors
- */
-int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);
-
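-/*
- * Illustrative sketch: the matching send/receive encoding loop for the
- * two functions above. avctx, frame and pkt are placeholders; EAGAIN
- * handling on the send side is abbreviated, and passing frame == NULL
- * enters flushing mode.
- *
- * @code
- * if (avcodec_send_frame(avctx, frame) < 0)
- *     return -1;
- * for (;;) {
- *     int ret = avcodec_receive_packet(avctx, pkt);
- *     if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
- *         break;
- *     if (ret < 0)
- *         return ret; // a legitimate encoding error
- *     // ... write pkt, then av_packet_unref(pkt) ...
- * }
- * @endcode
- */
-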
-/**
- * Create and return a AVHWFramesContext with values adequate for hardware
- * decoding. This is meant to get called from the get_format callback, and is
- * a helper for preparing a AVHWFramesContext for AVCodecContext.hw_frames_ctx.
- * This API is for decoding with certain hardware acceleration modes/APIs only.
- *
- * The returned AVHWFramesContext is not initialized. The caller must do this
- * with av_hwframe_ctx_init().
- *
- * Calling this function is not a requirement, but makes it simpler to avoid
- * codec or hardware API specific details when manually allocating frames.
- *
- * Alternatively to this, an API user can set AVCodecContext.hw_device_ctx,
- * which sets up AVCodecContext.hw_frames_ctx fully automatically, and makes
- * it unnecessary to call this function or having to care about
- * AVHWFramesContext initialization at all.
- *
- * There are a number of requirements for calling this function:
- *
- * - It must be called from get_format with the same avctx parameter that was
- * passed to get_format. Calling it outside of get_format is not allowed, and
- * can trigger undefined behavior.
- * - The function is not always supported (see description of return values).
- * Even if this function returns successfully, hwaccel initialization could
- * fail later. (The degree to which implementations check whether the stream
- * is actually supported varies. Some do this check only after the user's
- * get_format callback returns.)
- * - The hw_pix_fmt must be one of the choices suggested by get_format. If the
- * user decides to use a AVHWFramesContext prepared with this API function,
- * the user must return the same hw_pix_fmt from get_format.
- * - The device_ref passed to this function must support the given hw_pix_fmt.
- * - After calling this API function, it is the user's responsibility to
- * initialize the AVHWFramesContext (returned by the out_frames_ref parameter),
- * and to set AVCodecContext.hw_frames_ctx to it. If done, this must be done
- * before returning from get_format (this is implied by the normal
- * AVCodecContext.hw_frames_ctx API rules).
- * - The AVHWFramesContext parameters may change every time get_format is
- * called. Also, AVCodecContext.hw_frames_ctx is reset before get_format. So
- * you are inherently required to go through this process again on every
- * get_format call.
- * - It is perfectly possible to call this function without actually using
- * the resulting AVHWFramesContext. One use-case might be trying to reuse a
- * previously initialized AVHWFramesContext, and calling this API function
- * only to test whether the required frame parameters have changed.
- * - Fields that use dynamically allocated values of any kind must not be set
- * by the user unless setting them is explicitly allowed by the documentation.
- * If the user sets AVHWFramesContext.free and AVHWFramesContext.user_opaque,
- * the new free callback must call the potentially set previous free callback.
- * This API call may set any dynamically allocated fields, including the free
- * callback.
- *
- * The function will set at least the following fields on AVHWFramesContext
- * (potentially more, depending on hwaccel API):
- *
- * - All fields set by av_hwframe_ctx_alloc().
- * - Set the format field to hw_pix_fmt.
- * - Set the sw_format field to the most suited and most versatile format. (An
- * implication is that this will prefer generic formats over opaque formats
- * with arbitrary restrictions, if possible.)
- * - Set the width/height fields to the coded frame size, rounded up to the
- * API-specific minimum alignment.
- * - Only _if_ the hwaccel requires a pre-allocated pool: set the initial_pool_size
- * field to the number of maximum reference surfaces possible with the codec,
- * plus 1 surface for the user to work (meaning the user can safely reference
- * at most 1 decoded surface at a time), plus additional buffering introduced
- * by frame threading. If the hwaccel does not require pre-allocation, the
- * field is left to 0, and the decoder will allocate new surfaces on demand
- * during decoding.
- * - Possibly AVHWFramesContext.hwctx fields, depending on the underlying
- * hardware API.
- *
- * Essentially, out_frames_ref returns the same as av_hwframe_ctx_alloc(), but
- * with basic frame parameters set.
- *
- * The function is stateless, and does not change the AVCodecContext or the
- * device_ref AVHWDeviceContext.
- *
- * @param avctx The context which is currently calling get_format, and which
- * implicitly contains all state needed for filling the returned
- * AVHWFramesContext properly.
- * @param device_ref A reference to the AVHWDeviceContext describing the device
- * which will be used by the hardware decoder.
- * @param hw_pix_fmt The hwaccel format you are going to return from get_format.
- * @param out_frames_ref On success, set to a reference to an _uninitialized_
- * AVHWFramesContext, created from the given device_ref.
- * Fields will be set to values required for decoding.
- * Not changed if an error is returned.
- * @return zero on success, a negative value on error. The following error codes
- * have special semantics:
- * AVERROR(ENOENT): the decoder does not support this functionality. Setup
- * is always manual, or it is a decoder which does not
- * support setting AVCodecContext.hw_frames_ctx at all,
- * or it is a software format.
- * AVERROR(EINVAL): it is known that hardware decoding is not supported for
- * this configuration, or the device_ref is not supported
- * for the hwaccel referenced by hw_pix_fmt.
- */
-int avcodec_get_hw_frames_parameters(AVCodecContext *avctx,
- AVBufferRef *device_ref,
- enum AVPixelFormat hw_pix_fmt,
- AVBufferRef **out_frames_ref);
-
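-/*
- * Illustrative sketch: hypothetical use of the function above from
- * inside a get_format callback. The surrounding callback, device_ref
- * and the chosen hw_pix_fmt are assumptions supplied by the application.
- *
- * @code
- * AVBufferRef *frames_ref = NULL;
- * if (avcodec_get_hw_frames_parameters(avctx, device_ref, hw_pix_fmt,
- *                                      &frames_ref) >= 0) {
- *     if (av_hwframe_ctx_init(frames_ref) >= 0)
- *         avctx->hw_frames_ctx = frames_ref; // hand over the reference
- *     else
- *         av_buffer_unref(&frames_ref);
- * }
- * @endcode
- */
-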
-
-
-/**
- * @defgroup lavc_parsing Frame parsing
- * @{
- */
-
-enum AVPictureStructure {
- AV_PICTURE_STRUCTURE_UNKNOWN, ///< unknown
- AV_PICTURE_STRUCTURE_TOP_FIELD, ///< coded as top field
- AV_PICTURE_STRUCTURE_BOTTOM_FIELD, ///< coded as bottom field
- AV_PICTURE_STRUCTURE_FRAME, ///< coded as frame
-};
-
-typedef struct AVCodecParserContext {
- void *priv_data;
- const struct AVCodecParser *parser;
- int64_t frame_offset; /* offset of the current frame */
- int64_t cur_offset; /* current offset
- (incremented by each av_parser_parse()) */
- int64_t next_frame_offset; /* offset of the next frame */
- /* video info */
- int pict_type; /* XXX: Put it back in AVCodecContext. */
- /**
- * This field is used for proper frame duration computation in lavf.
- * It signals, how much longer the frame duration of the current frame
- * is compared to normal frame duration.
- *
- * frame_duration = (1 + repeat_pict) * time_base
- *
- * It is used by codecs like H.264 to display telecined material.
- */
- int repeat_pict; /* XXX: Put it back in AVCodecContext. */
- int64_t pts; /* pts of the current frame */
- int64_t dts; /* dts of the current frame */
-
- /* private data */
- int64_t last_pts;
- int64_t last_dts;
- int fetch_timestamp;
-
-#define AV_PARSER_PTS_NB 4
- int cur_frame_start_index;
- int64_t cur_frame_offset[AV_PARSER_PTS_NB];
- int64_t cur_frame_pts[AV_PARSER_PTS_NB];
- int64_t cur_frame_dts[AV_PARSER_PTS_NB];
-
- int flags;
-#define PARSER_FLAG_COMPLETE_FRAMES 0x0001
-#define PARSER_FLAG_ONCE 0x0002
-/// Set if the parser has a valid file offset
-#define PARSER_FLAG_FETCHED_OFFSET 0x0004
-#define PARSER_FLAG_USE_CODEC_TS 0x1000
-
- int64_t offset; ///< byte offset from starting packet start
- int64_t cur_frame_end[AV_PARSER_PTS_NB];
-
- /**
- * Set by parser to 1 for key frames and 0 for non-key frames.
- * It is initialized to -1, so if the parser doesn't set this flag,
- * old-style fallback using AV_PICTURE_TYPE_I picture type as key frames
- * will be used.
- */
- int key_frame;
-
- // Timestamp generation support:
- /**
- * Synchronization point for start of timestamp generation.
- *
- * Set to >0 for sync point, 0 for no sync point and <0 for undefined
- * (default).
- *
- * For example, this corresponds to presence of H.264 buffering period
- * SEI message.
- */
- int dts_sync_point;
-
- /**
- * Offset of the current timestamp against last timestamp sync point in
- * units of AVCodecContext.time_base.
- *
- * Set to INT_MIN when dts_sync_point is unused. Otherwise, it must
- * contain a valid timestamp offset.
- *
- * Note that the timestamp of sync point has usually a nonzero
- * dts_ref_dts_delta, which refers to the previous sync point. Offset of
- * the next frame after timestamp sync point will be usually 1.
- *
- * For example, this corresponds to H.264 cpb_removal_delay.
- */
- int dts_ref_dts_delta;
-
- /**
- * Presentation delay of current frame in units of AVCodecContext.time_base.
- *
- * Set to INT_MIN when dts_sync_point is unused. Otherwise, it must
- * contain valid non-negative timestamp delta (presentation time of a frame
- * must not lie in the past).
- *
- * This delay represents the difference between decoding and presentation
- * time of the frame.
- *
- * For example, this corresponds to H.264 dpb_output_delay.
- */
- int pts_dts_delta;
-
- /**
- * Position of the packet in file.
- *
- * Analogous to cur_frame_pts/dts
- */
- int64_t cur_frame_pos[AV_PARSER_PTS_NB];
-
- /**
- * Byte position of currently parsed frame in stream.
- */
- int64_t pos;
-
- /**
- * Previous frame byte position.
- */
- int64_t last_pos;
-
- /**
- * Duration of the current frame.
- * For audio, this is in units of 1 / AVCodecContext.sample_rate.
- * For all other types, this is in units of AVCodecContext.time_base.
- */
- int duration;
-
- enum AVFieldOrder field_order;
-
- /**
- * Indicate whether a picture is coded as a frame, top field or bottom field.
- *
- * For example, H.264 field_pic_flag equal to 0 corresponds to
- * AV_PICTURE_STRUCTURE_FRAME. An H.264 picture with field_pic_flag
- * equal to 1 and bottom_field_flag equal to 0 corresponds to
- * AV_PICTURE_STRUCTURE_TOP_FIELD.
- */
- enum AVPictureStructure picture_structure;
-
- /**
- * Picture number incremented in presentation or output order.
- * This field may be reinitialized at the first picture of a new sequence.
- *
- * For example, this corresponds to H.264 PicOrderCnt.
- */
- int output_picture_number;
-
- /**
- * Dimensions of the decoded video intended for presentation.
- */
- int width;
- int height;
-
- /**
- * Dimensions of the coded video.
- */
- int coded_width;
- int coded_height;
-
- /**
- * The format of the coded data; corresponds to enum AVPixelFormat for video
- * and enum AVSampleFormat for audio.
- *
- * Note that a decoder can have considerable freedom in how exactly it
- * decodes the data, so the format reported here might be different from the
- * one returned by a decoder.
- */
- int format;
-} AVCodecParserContext;
-
-typedef struct AVCodecParser {
- int codec_ids[7]; /* several codec IDs are permitted */
- int priv_data_size;
- int (*parser_init)(AVCodecParserContext *s);
- /* This callback never returns an error; a negative value means that
- * the frame start was in a previous packet. */
- int (*parser_parse)(AVCodecParserContext *s,
- AVCodecContext *avctx,
- const uint8_t **poutbuf, int *poutbuf_size,
- const uint8_t *buf, int buf_size);
- void (*parser_close)(AVCodecParserContext *s);
- int (*split)(AVCodecContext *avctx, const uint8_t *buf, int buf_size);
-} AVCodecParser;
-
-/**
- * Iterate over all registered codec parsers.
- *
- * @param opaque a pointer where libavcodec will store the iteration state. Must
- * point to NULL to start the iteration.
- *
- * @return the next registered codec parser or NULL when the iteration is
- * finished
- */
-const AVCodecParser *av_parser_iterate(void **opaque);
-
-AVCodecParserContext *av_parser_init(int codec_id);
-
-/**
- * Parse a packet.
- *
- * @param s parser context.
- * @param avctx codec context.
- * @param poutbuf set to pointer to parsed buffer or NULL if not yet finished.
- * @param poutbuf_size set to size of parsed buffer or zero if not yet finished.
- * @param buf input buffer.
- * @param buf_size buffer size in bytes without the padding. I.e. the full buffer
- *                 size is assumed to be buf_size + AV_INPUT_BUFFER_PADDING_SIZE.
- *                 To signal EOF, this should be 0 (so that the last frame
- *                 can be output).
- * @param pts input presentation timestamp.
- * @param dts input decoding timestamp.
- * @param pos input byte position in stream.
- * @return the number of bytes of the input bitstream used.
- *
- * Example:
- * @code
- * while (in_len) {
- *     len = av_parser_parse2(myparser, avctx, &data, &size,
- *                            in_data, in_len,
- *                            pts, dts, pos);
- *     in_data += len;
- *     in_len  -= len;
- *
- *     if (size)
- *         decode_frame(data, size);
- * }
- * @endcode
- */
-int av_parser_parse2(AVCodecParserContext *s,
- AVCodecContext *avctx,
- uint8_t **poutbuf, int *poutbuf_size,
- const uint8_t *buf, int buf_size,
- int64_t pts, int64_t dts,
- int64_t pos);
-
-void av_parser_close(AVCodecParserContext *s);
-
-/**
- * @}
- * @}
- */
-
-/**
- * @addtogroup lavc_encoding
- * @{
- */
-
-int avcodec_encode_subtitle(AVCodecContext *avctx, uint8_t *buf, int buf_size,
- const AVSubtitle *sub);
-
-
-/**
- * @}
- */
-
-/**
- * @defgroup lavc_misc Utility functions
- * @ingroup libavc
- *
- * Miscellaneous utility functions related to both encoding and decoding
- * (or neither).
- * @{
- */
-
-/**
- * @defgroup lavc_misc_pixfmt Pixel formats
- *
- * Functions for working with pixel formats.
- * @{
- */
-
-/**
- * Return a value representing the fourCC code associated to the
- * pixel format pix_fmt, or 0 if no associated fourCC code can be
- * found.
- */
-unsigned int avcodec_pix_fmt_to_codec_tag(enum AVPixelFormat pix_fmt);
-
-/**
- * Find the best pixel format to convert to given a certain source pixel
- * format. When converting from one pixel format to another, information loss
- * may occur. For example, when converting from RGB24 to GRAY, the color
- * information will be lost. Similarly, other losses occur when converting from
- * some formats to other formats. avcodec_find_best_pix_fmt_of_2() searches which of
- * the given pixel formats should be used to suffer the least amount of loss.
- * The pixel formats from which it chooses one, are determined by the
- * pix_fmt_list parameter.
- *
- *
- * @param[in] pix_fmt_list AV_PIX_FMT_NONE terminated array of pixel formats to choose from
- * @param[in] src_pix_fmt source pixel format
- * @param[in] has_alpha Whether the source pixel format alpha channel is used.
- * @param[out] loss_ptr Combination of flags informing you what kind of losses will occur.
- * @return The best pixel format to convert to or -1 if none was found.
- */
-enum AVPixelFormat avcodec_find_best_pix_fmt_of_list(const enum AVPixelFormat *pix_fmt_list,
- enum AVPixelFormat src_pix_fmt,
- int has_alpha, int *loss_ptr);
-
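-/*
- * Illustrative sketch: picking the least lossy conversion target from a
- * candidate list with the function above.
- *
- * @code
- * static const enum AVPixelFormat candidates[] = {
- *     AV_PIX_FMT_YUV420P, AV_PIX_FMT_RGB24, AV_PIX_FMT_GRAY8,
- *     AV_PIX_FMT_NONE
- * };
- * int loss = 0;
- * enum AVPixelFormat best =
- *     avcodec_find_best_pix_fmt_of_list(candidates, AV_PIX_FMT_RGBA,
- *                                       1, &loss); // has_alpha = 1
- * @endcode
- */
-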
-enum AVPixelFormat avcodec_default_get_format(struct AVCodecContext *s, const enum AVPixelFormat * fmt);
-
-/**
- * @}
- */
-
-void avcodec_string(char *buf, int buf_size, AVCodecContext *enc, int encode);
-
-int avcodec_default_execute(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2),void *arg, int *ret, int count, int size);
-int avcodec_default_execute2(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2, int, int),void *arg, int *ret, int count);
-//FIXME func typedef
-
-/**
- * Fill AVFrame audio data and linesize pointers.
- *
- * The buffer buf must be a preallocated buffer with a size big enough
- * to contain the specified samples amount. The filled AVFrame data
- * pointers will point to this buffer.
- *
- * AVFrame extended_data channel pointers are allocated if necessary for
- * planar audio.
- *
- * @param frame the AVFrame
- * frame->nb_samples must be set prior to calling the
- * function. This function fills in frame->data,
- * frame->extended_data, frame->linesize[0].
- * @param nb_channels channel count
- * @param sample_fmt sample format
- * @param buf buffer to use for frame data
- * @param buf_size size of buffer
- * @param align plane size sample alignment (0 = default)
- * @return >=0 on success, negative error code on failure
- * @todo return the size in bytes required to store the samples in
- * case of success, at the next libavutil bump
- */
-int avcodec_fill_audio_frame(AVFrame *frame, int nb_channels,
- enum AVSampleFormat sample_fmt, const uint8_t *buf,
- int buf_size, int align);
-
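-/*
- * Illustrative sketch: wrapping an existing interleaved 16-bit stereo
- * sample buffer in an AVFrame without copying. buf and buf_size are
- * placeholders, and the sample count is an arbitrary example value.
- *
- * @code
- * AVFrame *frame = av_frame_alloc();
- * if (!frame)
- *     return AVERROR(ENOMEM);
- * frame->nb_samples = 1024;
- * if (avcodec_fill_audio_frame(frame, 2, AV_SAMPLE_FMT_S16,
- *                              buf, buf_size, 0) < 0) {
- *     // buf too small or parameters inconsistent
- * }
- * @endcode
- */
-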
-/**
- * Reset the internal codec state / flush internal buffers. Should be called
- * e.g. when seeking or when switching to a different stream.
- *
- * @note for decoders, this function just releases any references the decoder
- * might keep internally, but the caller's references remain valid.
- *
- * @note for encoders, this function will only do something if the encoder
- * declares support for AV_CODEC_CAP_ENCODER_FLUSH. When called, the encoder
- * will drain any remaining packets, and can then be re-used for a different
- * stream (as opposed to sending a null frame which will leave the encoder
- * in a permanent EOF state after draining). This can be desirable if the
- * cost of tearing down and replacing the encoder instance is high.
- */
-void avcodec_flush_buffers(AVCodecContext *avctx);
-
-/**
- * Return audio frame duration.
- *
- * @param avctx codec context
- * @param frame_bytes size of the frame, or 0 if unknown
- * @return frame duration, in samples, if known. 0 if not able to
- * determine.
- */
-int av_get_audio_frame_duration(AVCodecContext *avctx, int frame_bytes);
-
-/* memory */
-
-/**
- * Same behaviour as av_fast_malloc(), but the buffer has an additional
- * AV_INPUT_BUFFER_PADDING_SIZE bytes at the end which will always be 0.
- *
- * In addition, the whole buffer will be 0-initialized initially and after
- * resizes, so that no uninitialized data will ever appear.
- */
-void av_fast_padded_malloc(void *ptr, unsigned int *size, size_t min_size);
-
-/**
- * Same behaviour as av_fast_padded_malloc(), except that the buffer will
- * always be 0-initialized after the call.
- */
-void av_fast_padded_mallocz(void *ptr, unsigned int *size, size_t min_size);
-
-/**
- * @return a positive value if s is open (i.e. avcodec_open2() was called on it
- * with no corresponding avcodec_close()), 0 otherwise.
- */
-int avcodec_is_open(AVCodecContext *s);
-
-/**
- * @}
- */
-
-#endif /* AVCODEC_AVCODEC_H */
diff --git a/spaces/congsaPfin/Manga-OCR/Vbw Felgen Gutachten Pdf Downloadl.md b/spaces/congsaPfin/Manga-OCR/Vbw Felgen Gutachten Pdf Downloadl.md
deleted file mode 100644
index 2f50d8bec6e0d3139cfda56116ee12924381fb7c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/Vbw Felgen Gutachten Pdf Downloadl.md
+++ /dev/null
@@ -1,86 +0,0 @@
-## Vbw Felgen Gutachten Pdf Downloadl
-
-**LINK ⚙⚙⚙ [https://urlcod.com/2txiNn](https://urlcod.com/2txiNn)**
-
-# How to Download a PDF of VBW Sport Felgen Gutachten
-
-
-
-VBW Sport Felgen are among the most popular wheels for quads and ATVs. They have a sporty and stylish look that enhances the appearance of any vehicle. But before you can install them on your quad or ATV, you need to get a Teilegutachten (part certificate) from the manufacturer or the seller. A Teilegutachten is a document that proves that the wheels are compatible with your vehicle and comply with the legal requirements for road use. Without a Teilegutachten, you may not be able to register your vehicle or pass the technical inspection.
-
-
-
-So how can you download a PDF of VBW Sport Felgen Gutachten? Here are some steps you can follow:
-
-
-
-1. Find out the exact model and size of your VBW Sport Felgen. You can check the label on the wheel or the invoice from the seller. For example, if you have VBW Sport A3 wheels in 15x7 and 15x8 sizes with 4x110 bolt pattern, you need to look for the corresponding Teilegutachten.
-
-2. Go to the website of the seller or the manufacturer of your VBW Sport Felgen. For example, if you bought them from Quad World, you can go to [this page](https://www.quad-world.de/quad-atv-reifen-felgen-alufelgen/quad-atv-felgen-alufelgen/vbw-felgen-quad-atv-tires-felgensatz-felgensaetze/) [^5^] and browse through the categories until you find your wheel model and size.
-
-3. Click on the product image or name to open the product page. There you should see a link or a button that says "Teilegutachten" or "Download Teilegutachten". Click on it to open or save the PDF file of the Teilegutachten. You may need a PDF reader software like Adobe Acrobat Reader to view or print the file.
-
-4. If you cannot find the Teilegutachten on the website, you can contact the seller or the manufacturer by phone or email and ask them to send you a copy of it. You may need to provide some information like your order number, your vehicle model and year, and your wheel model and size.
-
-
-
-Once you have downloaded or received the PDF of VBW Sport Felgen Gutachten, you can print it out and keep it in your vehicle. You may need to show it to the authorities or the technical inspector when they ask for it. The Teilegutachten will also help you to find out which tires are suitable for your wheels and which modifications are allowed or required for your vehicle.
-
-
-
-We hope this article was helpful for you. If you have any questions or comments, please feel free to contact us.
-
-
-
-## How to Install VBW Sport Felgen on Your Quad
-
-
-
-After you have downloaded the PDF of VBW Sport Felgen Gutachten and checked that your wheels and tires are compatible with your vehicle, you can proceed to install them on your quad. Here are some steps you can follow:
-
-
-
-1. Make sure your vehicle is parked on a flat and stable surface. Use a jack or a stand to lift your vehicle and secure it with wheel chocks or blocks. Remove the lug nuts and take off the old wheels from your vehicle.
-
-2. Clean the wheel hub and the brake disc or drum with a cloth or a brush. Remove any dirt, rust, or grease that may interfere with the fitment of the new wheels.
-
-3. Align the new wheel with the wheel hub and the bolt pattern. Make sure the valve stem is facing outwards and the wheel is centered on the hub. Push the wheel onto the hub until it sits flush against the brake disc or drum.
-
-4. Hand-tighten the lug nuts in a criss-cross pattern. Do not use an impact wrench or a power tool to avoid damaging the threads or over-tightening the lug nuts. Use a torque wrench to tighten the lug nuts to the specified torque value. You can find the torque value in the Teilegutachten or in your vehicle's manual.
-
-5. Repeat the same steps for the other wheels. Lower your vehicle and remove the jack or the stand. Check the air pressure of your tires and adjust it if necessary. You can find the recommended air pressure in the Teilegutachten or on the tire sidewall.
-
-
-
-Congratulations! You have successfully installed VBW Sport Felgen on your quad. Before you hit the road, make sure you test drive your vehicle and check for any vibrations, noises, or steering issues. If you notice any problems, stop immediately and inspect your wheels and tires. You may need to re-torque the lug nuts, balance the wheels, or align the steering.
-
-
-
-We hope this article was helpful for you. If you have any questions or comments, please feel free to contact us.
-
-
-
-
-
-
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Chess APK 3D The Most Realistic and Enjoyable Chess Game Available.md b/spaces/congsaPfin/Manga-OCR/logs/Chess APK 3D The Most Realistic and Enjoyable Chess Game Available.md
deleted file mode 100644
index 03d8ce6ec60649848dc7357ac949ce851748fb47..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Chess APK 3D The Most Realistic and Enjoyable Chess Game Available.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Chess APK 3D: How to Play Chess in Three Dimensions on Your Android Device
-
Introduction
-
Chess is one of the oldest and most popular board games in the world. It is a game of strategy, logic, and skill that can be enjoyed by anyone, regardless of age or background. But what if you could play chess in a more immersive and realistic way, with a full 3D board that can rotate, shift, and feel like a real chessboard?
-
That's exactly what chess apk 3d offers. Chess apk 3d is an Android game that lets you play chess in three dimensions on your mobile device. It is not the kind of 3D chess played on Star Trek - nobody really plays that. It is a normal chess game, but with stunning 3D graphics and sound effects that make you feel like you are playing on a real wooden board.
In this article, we will show you how to download, install, and play chess apk 3d on your Android device. We will also give you some tips and tricks to improve your game and have more fun. Let's get started!
-
How to download and install chess apk 3d
-
Chess apk 3d is not available on the Google Play Store, so you will need to download and install it manually from an external source. Here are the steps you need to follow:
-
Step 1: Find a reliable source for the apk file
-
An apk file is a package file that contains all the data and code needed to run an Android app. You can find many websites that offer free apk files for various games and apps, but not all of them are safe and trustworthy. Some of them may contain malware or viruses that can harm your device or steal your personal information.
-
Therefore, you need to be careful when choosing a source for the chess apk 3d file. One of the best sources we recommend is APKCombo, which is a reputable website that provides original and verified apk files for thousands of Android games and apps. You can download the chess apk 3d file from APKCombo by clicking [here](^1^).
-
Step 2: Enable unknown sources on your device settings
-
By default, Android devices do not allow installing apps from unknown sources, meaning sources other than the Google Play Store. This is a security measure to prevent installing malicious or harmful apps. However, if you trust the source of the apk file, you can enable unknown sources on your device settings to allow installing it.
-
To do this, go to your device settings and look for the security or privacy option. Then, find the option that says "unknown sources" or "install unknown apps" and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device or data. Tap OK or Allow to proceed.
-
Step 3: Download and install the apk file
-
Now that you have enabled unknown sources on your device settings, you can download and install the chess apk 3d file from APKCombo. To do this, go to the website and tap on the download button next to the chess apk 3d icon. You may see a pop-up message that asks you to confirm the download. Tap OK or Download to start the download process. Once the download is complete, you will see a notification that says "Download complete" or "Open". Tap on it to open the apk file and start the installation process. You may see another pop-up message that asks you to confirm the installation. Tap Install or Next to install the app on your device. When the installation is done, you will see a message that says "App installed" or "Open". Tap on it to launch the app and enjoy playing chess apk 3d.
-
How to play chess apk 3d
-
Chess apk 3d is easy to play and has a user-friendly interface. Here are the steps you need to follow:
-
Step 1: Choose your game mode and difficulty level
-
When you open the app, you will see a main menu with four options: Play, Settings, Help, and Exit. Tap on Play to start a new game. You will see another menu with three options: Single Player, Two Players, and Online. Tap on Single Player if you want to play against the computer, Two Players if you want to play with a friend on the same device, or Online if you want to play with other players around the world. You will also see a slider that lets you choose the difficulty level of the game, from Easy to Hard. Choose the level that suits your skill and preference.
-
Step 2: Rotate, shift, and zoom the 3D board as you like
-
Once you start a game, you will see a 3D chess board on your screen. You can rotate, shift, and zoom the board as you like by using your fingers. To rotate the board, swipe left or right on the screen. To shift the board, swipe up or down on the screen. To zoom in or out, pinch in or out on the screen. You can also use the buttons on the bottom of the screen to adjust the view of the board.
-
Step 3: Make your moves and enjoy the realistic graphics and sounds
-
To make a move, tap on the piece you want to move and then tap on the square you want to move it to. The app will show you all the possible moves for each piece with green dots. If you make an illegal move, the app will warn you with a red dot. If you capture an enemy piece, it will be removed from the board and placed on a tray at the side of the screen. You can also see your captured pieces on your tray.
-
The app will also show you some useful information on the top of the screen, such as your score, your time, and your turn. You can also pause or resume the game by tapping on the pause button at the top right corner of the screen. You can also undo or redo your moves by tapping on the undo or redo buttons at the bottom right corner of the screen. If you need some help, you can tap on the hint button at the bottom left corner of the screen and the app will suggest a good move for you.
-
As you play, you will enjoy the realistic graphics and sounds of the app. The 3D board and pieces look like they are made of wood and have shadows and reflections. The app also plays different sounds for each move, such as moving, capturing, checking, castling, and promoting. You can also hear some background music and sound effects that create a relaxing and immersive atmosphere.
-
Tips and tricks for playing chess apk 3d
-
Chess apk 3d is a fun and challenging game that can improve your chess skills and knowledge. Here are some tips and tricks to help you play better and have more fun:
-
Tip 1: Use the undo and hint buttons if you need help
-
If you make a mistake or get stuck, don't worry. You can use the undo and hint buttons to help you out. The undo button lets you take back your last move or several moves if you want. The hint button gives you a suggestion for your next move based on the best possible move for your situation. You can use these buttons as many times as you want, but be careful not to rely on them too much. They are meant to help you learn and improve, not to play for you.
-
Tip 2: Customize your chess set and board colors to suit your preference
-
If you want to change the look of your game, you can customize your chess set and board colors to suit your preference. To do this, go to the settings menu and tap on the customize option. You will see a list of different chess sets and board colors that you can choose from. You can also mix and match different sets and colors to create your own unique combination. You can preview your choice before applying it to your game.
-
Tip 3: Challenge yourself by playing against different A.I. levels or online players
-
If you want to test your skills and have more fun, you can challenge yourself by playing against different A.I. levels or online players. The app has four A.I. levels: Easy, Medium, Hard, and Expert. Each level has a different playing style and strength that will give you a different challenge. You can choose the level that matches your skill or try a higher level to learn from your mistakes and improve your game.
-
You can also play online with other players around the world who have the same app. To do this, go to the online menu and tap on the play option. You will see a list of available players that you can challenge or join their game. You can also create your own game and wait for someone to join you. You can chat with your opponent during the game by using the chat button at the bottom of the screen.
-
Conclusion
-
Chess apk 3d is an amazing Android game that lets you play chess in three dimensions on your mobile device. It has stunning 3D graphics and sound effects that make you feel like you are playing on a real wooden board. It is easy to download, install, and play, and has a user-friendly interface that lets you rotate, shift, and zoom the board as you like. It also has different game modes, difficulty levels, and customization options that let you choose how you want to play.
-
If you love chess or want to learn how to play it, chess apk 3d is a must-have app for you. It will not only entertain you but also improve your chess skills and knowledge. Download it now from APKCombo and enjoy playing chess in three dimensions on your Android device!
-
FAQs
-
Here are some frequently asked questions about chess apk 3d:
-
-
Q: Is chess apk 3d free?
A: Yes, chess apk 3d is free to download and play.
-
Q: Does chess apk 3d require an internet connection?
A: No, chess apk 3d does not require an internet connection to play. You can play offline with the computer or with a friend on the same device. However, if you want to play online with other players, you will need an internet connection.
-
Q: Is chess apk 3d safe to download and install?
A: Yes, chess apk 3d is safe to download and install if you get it from a reliable source like APKCombo. APKCombo provides original and verified apk files for thousands of Android games and apps. However, you should always be careful when downloading and installing apps from unknown sources and enable unknown sources on your device settings only when you trust the source.
-
Q: How can I improve my chess skills with chess apk 3d?
A: Chess apk 3d can help you improve your chess skills by letting you play against different A.I. levels or online players that can challenge you and teach you new strategies and tactics. You can also use the undo and hint buttons to learn from your mistakes and get some guidance. You can also read some chess books or watch some chess videos online to learn more about the game.
-
Q: What are the benefits of playing chess?
A: Playing chess has many benefits for your brain and your health. Chess can improve your memory, concentration, creativity, problem-solving, logical thinking, analytical thinking, decision-making, planning, and mental agility. Chess can also reduce stress, anxiety, depression, and dementia. Chess can also make you more confident, disciplined, patient, and resilient.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Unlimited Fun with Lego Junior Mod Apk Latest Version.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Unlimited Fun with Lego Junior Mod Apk Latest Version.md
deleted file mode 100644
index 1cdefae030319e923e24d80b5a6974ca45b1b97a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Unlimited Fun with Lego Junior Mod Apk Latest Version.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
Download Lego Junior Mod Apk: A Fun and Creative Game for Kids
-
If you are looking for a game that can stimulate your child's imagination and creativity, you might want to try Lego Junior. Lego Junior is a game app that allows children to create their own Lego vehicles and minifigures, and then cruise around Lego landscapes. It is designed for children ages 4 to 7, and it is easy to play and learn. In this article, we will tell you more about Lego Junior, and how you can download Lego Junior Mod Apk, which is a modified version of the game that offers more features and fun.
-
What is Lego Junior?
-
Lego Junior is a game app that is based on the popular Lego bricks, which are plastic building blocks that can be connected in various ways to create different models. Lego Junior is a part of the Lego Juniors series, which is a line of Lego sets that are aimed at younger children who are ready to move on from Duplo blocks, but not yet ready for more complex Lego sets. Lego Juniors sets are easy to build, and they feature familiar themes such as city, princess, superhero, and dinosaur.
Lego Junior has many features that make it an engaging and educational game for kids. Some of these features are:
-
-
No in-app purchases or ads: The game is completely free to play, and it does not have any third-party advertising or links to external websites. You don't have to worry about your child spending money or being exposed to inappropriate content.
-
New levels and models: The game updates regularly with new levels and models that offer more variety and challenge. Your child can build helicopters, trucks, castles, fire stations, and more.
-
Minifigure selector: Your child can choose from different minifigures to drive their vehicles, such as mechanics, princesses, pilots, and even Batman. They can also mix and match different parts of the minifigures to create their own characters.
-
Bright and colorful graphics: The game has bright and colorful graphics that are appealing to children. The game also has animations and sound effects that make it more lively and fun.
-
Virtual building with Lego bricks: The game allows your child to build their own vehicles with virtual Lego bricks. They can choose from different shapes, colors, and sizes of bricks, and then drag and drop them onto the vehicle template. They can also add stickers and accessories to customize their vehicles.
-
Inspiration for real-life play: The game provides inspiration for your child to play with real Lego bricks and sets. The game also shows some real-life play scenarios that you can talk to your child about, such as why the princess is driving a police car with legs instead of wheels.
-
-
Gameplay of Lego Junior
-
The gameplay of Lego Junior is simple and intuitive. Your child can follow these steps to play the game:
-
-
Select a minifigure to drive your vehicle.
-
Select a vehicle template to build your vehicle.
-
Add bricks, stickers, and accessories to your vehicle.
-
Cruise around the Lego landscape with your vehicle.
-
Collect coins and bricks along the way.
-
Unlock new levels, models, and parts with the coins and bricks you collected.
-
-
What is Lego Junior Mod Apk?
-
Lego Junior Mod Apk is a modified version of the original Lego Junior game app. It is not an official product of Lego or its affiliates. It is created by independent developers who modify the original game files to add or change some features. Some of the benefits of Lego Junior Mod Apk are:
-
Benefits of Lego Junior Mod Apk
-
-
Unlimited coins and bricks: With Lego Junior Mod Apk, you don't have to worry about running out of coins and bricks to unlock new levels, models, and parts. You can enjoy the game without any limitations or restrictions.
-
All levels and models unlocked: With Lego Junior Mod Apk, you can access all the levels and models that are available in the game. You don't have to wait for the game to update or complete certain tasks to unlock them. You can explore and play with any vehicle or minifigure you want.
-
No ads or pop-ups: With Lego Junior Mod Apk, you can play the game without any interruptions or distractions from ads or pop-ups. You can have a smooth and enjoyable gaming experience.
-
-
How to Download and Install Lego Junior Mod Apk
-
If you want to download and install Lego Junior Mod Apk, you need to follow these steps:
-
-
Make sure you have the original Lego Junior game app installed on your device. You can download it from the Google Play Store or the App Store.
-
Go to a trusted website that provides Lego Junior Mod Apk files. You can search for "Lego Junior Mod Apk" in your browser, or use this link: .
-
Download the Lego Junior Mod Apk file from the website. Make sure you check the file size and the version before downloading (a sketch for verifying the downloaded file's checksum follows this list).
-
Enable the installation of apps from unknown sources on your device. You can do this by going to your device settings, security, and then allowing unknown sources.
-
Locate the downloaded Lego Junior Mod Apk file on your device storage, and tap on it to install it.
-
Wait for the installation process to finish, and then launch the game from your app drawer.
-
Enjoy playing Lego Junior Mod Apk with unlimited coins, bricks, levels, and models.
-
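Step 3 above suggests checking the downloaded file before you install it. As a minimal sketch of how that check can be automated, the following Python snippet computes the SHA-256 checksum of a downloaded APK so you can compare it with the checksum published by the download site. The file name used here is a placeholder, not the real download name.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream the file so large APKs don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name; use the APK you actually downloaded.
print(sha256_of_file("lego-junior-mod.apk"))
```

If the digest printed here does not match the one the site publishes, the file was corrupted or tampered with and should not be installed.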
-
Conclusion
-
Lego Junior is a fun and creative game for kids that allows them to build their own Lego vehicles and minifigures, and then cruise around Lego landscapes. It is a free game that does not have any in-app purchases or ads, and it updates regularly with new levels and models. However, if you want to have more features and fun, you can download Lego Junior Mod Apk, which is a modified version of the game that offers unlimited coins, bricks, levels, and models. You can download Lego Junior Mod Apk from a trusted website, and install it on your device by following some simple steps. Lego Junior Mod Apk is a great way to enhance your child's imagination and creativity with Lego bricks.
-
FAQs
-
Here are some frequently asked questions about Lego Junior Mod Apk:
-
-
Is Lego Junior Mod Apk safe to use?
-
Lego Junior Mod Apk is safe to use as long as you download it from a trusted website that does not contain any viruses or malware. You should also scan the file with an antivirus app before installing it on your device.
-
Does Lego Junior Mod Apk require root access?
-
No, Lego Junior Mod Apk does not require root access to work. You can install it on any Android device without rooting it.
-
-
Can I play Lego Junior Mod Apk online with other players?
-
No, Lego Junior Mod Apk is an offline game that does not support online multiplayer mode. You can only play it solo or with your child.
-
Can I update Lego Junior Mod Apk when the original game updates?
-
No, Lego Junior Mod Apk does not update automatically when the original game updates. You need to download and install the latest version of Lego Junior Mod Apk from the website every time the original game updates.
-
Can I uninstall Lego Junior Mod Apk if I don't like it?
-
Yes, you can uninstall Lego Junior Mod Apk anytime you want. You just need to go to your device settings, apps, and then uninstall it like any other app. You can also reinstall the original Lego Junior game app from the Google Play Store or the App Store.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get the Best Cheats for Resident Evil 4 with Dolphin Emulator (Download Link Included).md b/spaces/congsaPfin/Manga-OCR/logs/Get the Best Cheats for Resident Evil 4 with Dolphin Emulator (Download Link Included).md
deleted file mode 100644
index 87ba288fae8631a06acdb51cf9e213b4889bd219..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Get the Best Cheats for Resident Evil 4 with Dolphin Emulator (Download Link Included).md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
How to Download Cheats for Resident Evil 4 on Dolphin Emulator
-
Resident Evil 4 is one of the most popular and acclaimed survival horror games ever made. It has been released on various platforms, including GameCube, PlayStation 2, Wii, PC, and more. But did you know that you can also play Resident Evil 4 on your PC with enhanced graphics, performance, and features using Dolphin Emulator? And not only that, but you can also use cheat codes to make the game easier, more fun, or more challenging. In this article, we will show you how to download cheats for Resident Evil 4 on Dolphin Emulator and enjoy this classic game like never before.
Resident Evil 4 is a survival horror game developed and published by Capcom for the GameCube in 2005. Players control special agent Leon S. Kennedy, who is on a mission to rescue the US president's daughter, Ashley Graham, who has been kidnapped by a religious cult in rural Spain. Along the way, Leon faces hordes of infected villagers, monstrous creatures, and sinister enemies. The game features a third-person perspective, an over-the-shoulder camera, a dynamic aiming system, an inventory system, and various weapons and items to use. The game also has several modes, such as the main story mode, the Mercenaries mode, the Separate Ways mode, and the Assignment Ada mode.
-
Why is it considered one of the best survival horror games of all time?
-
Resident Evil 4 is widely regarded as one of the best survival horror games of all time because of its innovative gameplay, immersive atmosphere, thrilling action, memorable characters, and compelling story. The game received critical acclaim from critics and fans alike, winning numerous awards and selling over 10 million copies worldwide. The game also influenced many other games in the genre, such as Dead Space, Gears of War, The Last of Us, and more. Resident Evil 4 is a masterpiece that deserves to be played by every fan of survival horror.
-
What is Dolphin Emulator?
-
A free and open-source emulator for GameCube and Wii games
-
Dolphin Emulator is a free and open-source video game console emulator for GameCube and Wii that runs on Windows, Linux, macOS, Android, Xbox One, Xbox Series X and Series S. It allows PC gamers to enjoy games for these two consoles in full HD (1080p) with several enhancements: compatibility with all PC controllers, turbo speed, networked multiplayer, and even more. Dolphin Emulator was first released in 2003 as freeware for Windows, and since then it has been updated regularly by a team of developers and contributors. Dolphin Emulator is the best way to play GameCube and Wii games on your PC.
-
How to download and install Dolphin Emulator on your PC
-
Downloading and installing Dolphin Emulator on your PC is very easy and straightforward. Here are the steps you need to follow:
-
-
-
Go to the official website of Dolphin Emulator at [https://dolphin-emu.org/] and click on the Download button.
-
Select the version of Dolphin Emulator that matches your operating system and click on the Download button again.
-
Once the download is complete, extract the zip file to a folder of your choice.
-
Open the folder and double-click on the Dolphin.exe file to launch the emulator.
-
Follow the instructions on the screen to configure the emulator settings, such as graphics, audio, controller, and more.
-
Congratulations, you have successfully installed Dolphin Emulator on your PC!
-
-
How to Download Cheat Codes for Resident Evil 4
-
The benefits of using cheat codes in Resident Evil 4
-
Cheat codes are special commands or codes that can alter or enhance the gameplay of a video game. Some cheat codes can give you unlimited health, ammo, money, or weapons, while others can unlock hidden features, modes, or characters. Using cheat codes in Resident Evil 4 can make the game more enjoyable and fun for you, especially if you are stuck on a difficult level, want to explore more of the game's content, or just want to have some fun. Cheat codes can also help you to complete the game faster, get higher scores, or challenge yourself with harder settings.
-
The sources of cheat codes for Resident Evil 4
-
There are many sources of cheat codes for Resident Evil 4 online, but not all of them are reliable or safe. Some cheat codes may not work properly, may contain viruses or malware, or may damage your game files. Therefore, you should always be careful and cautious when downloading cheat codes from unknown or untrusted websites. Here are some of the best and safest sources of cheat codes for Resident Evil 4 that we recommend:
-
-
[https://www.ign.com/cheats/games/resident-evil-4-gamecube-490935]: This is one of the most popular and reputable gaming websites that offers a comprehensive list of cheat codes for Resident Evil 4 for GameCube. You can find cheat codes for various aspects of the game, such as weapons, items, costumes, modes, and more.
-
[https://www.supercheats.com/gamecube/residentevil4cheats.htm]: This is another well-known and trusted gaming website that provides a large collection of cheat codes for Resident Evil 4 for GameCube. You can find cheat codes for different features of the game, such as health, ammo, money, enemies, and more.
-
[https://gamehacking.org/game/10086]: This is a dedicated website for game hacking and cheating that offers a wide range of cheat codes for Resident Evil 4 for GameCube. You can find cheat codes for various aspects of the game, such as graphics, sound, gameplay, and more.
-
-
How to apply cheat codes to Resident Evil 4 using Dolphin Emulator
-
Applying cheat codes to Resident Evil 4 using Dolphin Emulator is very easy and simple. Here are the steps you need to follow:
-
-
Download the cheat codes file for Resident Evil 4 from one of the sources mentioned above and save it to your PC.
-
Open Dolphin Emulator and right-click on the Resident Evil 4 game icon in the main menu.
-
Select Properties from the drop-down menu and then click on the Edit Config button.
-
A text file will open in your default text editor. Scroll down to the bottom of the file and paste the cheat codes that you want to use under the [ActionReplay] section (an illustrative sketch of this step follows this list).
-
Save and close the text file and then click on OK in the Properties window.
-
Launch Resident Evil 4 from Dolphin Emulator and enjoy playing with cheat codes!
-
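To make step 4 more concrete, here is a minimal Python sketch of what pasting a cheat entry amounts to: it appends a hypothetical Action Replay entry (a "$"-prefixed cheat name followed by its code lines) under an [ActionReplay] section in a per-game settings file. The file name "GAMEID.ini" and the hex values below are placeholders for illustration, not a working cheat; substitute the real codes you downloaded from the sources above.

```python
from pathlib import Path

# Placeholder path: "GAMEID.ini" stands in for the per-game settings
# file that matches your Resident Evil 4 disc.
ini_path = Path("GameSettings") / "GAMEID.ini"

# Hypothetical Action Replay entry: a "$"-prefixed cheat name followed
# by its code lines. These hex values are illustrative only.
entry = (
    "\n[ActionReplay]\n"
    "$Example Cheat\n"
    "01234567 89ABCDEF\n"
)

ini_path.parent.mkdir(parents=True, exist_ok=True)
with ini_path.open("a", encoding="utf-8") as f:
    f.write(entry)
print(f"Appended Action Replay entry to {ini_path}")
```

Editing the file by hand, as described in the steps above, achieves exactly the same result; the sketch just shows the format the entry needs to take.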
-
Conclusion
-
A summary of the main points and a call to action
-
In this article, we have shown you how to download cheats for Resident Evil 4 on Dolphin Emulator and play this amazing survival horror game on your PC with enhanced graphics, performance, and features. We have also explained what Resident Evil 4 is, what Dolphin Emulator is, how to download and install Dolphin Emulator on your PC, how to download cheat codes for Resident Evil 4, and how to apply cheat codes to Resident Evil 4 using Dolphin Emulator. We hope that you have found this article helpful and informative. If you are a fan of survival horror games, you should definitely try Resident Evil 4 with cheat codes and experience this classic game in a new way. You can download Dolphin Emulator and cheat codes for Resident Evil 4 from the links provided in this article. Have fun and stay safe!
-
FAQs
-
Q1: Is Resident Evil 4 Remake available on Dolphin Emulator?
-
A1: No, Resident Evil 4 Remake is not available on Dolphin Emulator. Resident Evil 4 Remake is a rumored project by Capcom that is supposed to be a modern remake of Resident Evil 4 for PlayStation 5, Xbox Series X and Series S, and PC. However, there is no official confirmation or release date for Resident Evil 4 Remake as of now. Therefore, you cannot play Resident Evil 4 Remake on Dolphin Emulator.
-
Q2: Can I use cheat codes for Resident Evil 4 on other platforms?
-
A2: Yes, you can use cheat codes for Resident Evil 4 on other platforms, such as PlayStation 2, Wii, PC, and more. However, the cheat codes may vary depending on the platform and the version of the game. You may need to use different methods or tools to apply cheat codes to Resident Evil 4 on other platforms, such as cheat devices, trainers, patches, mods, or console commands. You can search online for the specific cheat codes and instructions for your platform of choice.
-
Q3: Are there any risks or drawbacks of using cheat codes for Resident Evil 4?
-
A3: There are some risks or drawbacks of using cheat codes for Resident Evil 4 that you should be aware of before using them. Some of them are:
-
-
Cheat codes may cause glitches, bugs, crashes, or errors in the game that may affect your gameplay or progress.
-
Cheat codes may make the game too easy or too hard for you, which may reduce your enjoyment or satisfaction.
-
Cheat codes may interfere with the game's achievements, trophies, leaderboards, or online features that may affect your reputation or rewards.
-
Cheat codes may violate the game's terms of service or code of conduct that may result in penalties or bans from the game or its online services.
-
-
Therefore, you should use cheat codes for Resident Evil 4 at your own risk and discretion. You should also backup your game files and save data before using cheat codes in case anything goes wrong.
-
Q4: What are some of the best cheat codes for Resident Evil 4?
-
A4: Some of the best cheat codes for Resident Evil 4 are:
-
-
Infinite Health: This cheat code will make Leon and Ashley invincible to any damage from enemies or traps.
-
Infinite Ammo: This cheat code will give Leon unlimited ammo for all his weapons and grenades.
-
Infinite Money: This cheat code will give Leon unlimited money (pesetas) that he can use to buy or upgrade weapons and items from the merchant.
-
Unlock All Weapons: This cheat code will unlock all the weapons in the game, including the special weapons like the Chicago Typewriter, the Handcannon, the Infinite Rocket Launcher, and more.
-
Unlock All Costumes: This cheat code will unlock all the costumes in the game, including the alternate costumes for Leon and Ashley, such as the R.P.D. uniform, the mafia suit, the knight armor, and more.
-
-
Q5: Where can I find more information and tips about Resident Evil 4?
-
A5: You can find more information and tips about Resident Evil 4 from various sources online, such as:
-
-
[https://www.residentevil.com/]: This is the official website of Resident Evil that offers news, updates, media, merchandise, and more about the franchise.
-
[https://www.residentevil.fandom.com/wiki/Resident_Evil_4]: This is a fan-made wiki that provides detailed information, guides, trivia, and more about Resident Evil 4 and other games in the series.
-
[https://www.youtube.com/watch?v=Zbq7BnsQhrw]: This is a video by IGN that shows a walkthrough of Resident Evil 4 with commentary and tips.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/My Singing Monsters Elmas Hilesi Apk Canavarlarnzla Elenirken Para Kazann.md b/spaces/congsaPfin/Manga-OCR/logs/My Singing Monsters Elmas Hilesi Apk Canavarlarnzla Elenirken Para Kazann.md
deleted file mode 100644
index 6e0e80b7e294187a18b7d8172cc329daa59533f0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/My Singing Monsters Elmas Hilesi Apk Canavarlarnzla Elenirken Para Kazann.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
My Singing Monsters Elmas Hilesi APK: How to Get Unlimited Diamonds and Coins
-
Do you love playing My Singing Monsters, the musical simulation game where you can collect, breed, and listen to hundreds of adorable monsters? Do you wish you had more diamonds and coins to unlock new monsters, islands, decorations, and features? If you answered yes, then you might be interested in My Singing Monsters Elmas Hilesi APK ("elmas hilesi" is Turkish for "diamond cheat"), a modded version of the game that gives you unlimited resources and access to everything. In this article, we will tell you everything you need to know about this amazing app, including its features, how to download and install it, how to use it, and some frequently asked questions. Let's get started!
-
What is My Singing Monsters and why is it popular?
-
My Singing Monsters is a free-to-play game developed by Big Blue Bubble Inc. that combines music, creativity, and strategy. In this game, you can create your own monster paradise by collecting over 250 cute and funny monsters, each with their own unique personality and musical talent. You can breed them, feed them, listen to them sing, and watch your song evolve as you upgrade them. You can also design and build your own islands with cool decorations and catchy music, and share your creation with your friends. You can also join tribes, explore maps, complete challenges, earn rewards, and discover new updates and events year-round.
My Singing Monsters is popular because it is a fun and engaging game for players of all ages. It has awesome graphics and character animation, lush islands, diverse monsters, and catchy songs. It also allows players to express their creativity and musical skills by composing their own melodies with Composer Island. It has millions of fans around the world who love playing this game every day.
-
What are diamonds and coins and why are they important?
-
Diamonds and coins are the main currencies in My Singing Monsters. Diamonds are used to speed up processes such as breeding, hatching, feeding, upgrading structures, removing obstacles, buying rare monsters, keys, decorations, island skins, costumes, etc. Coins are used to buy common monsters, food, bakeries, castles, etc.
-
-
Diamonds and coins are important because they allow you to progress faster in the game and enjoy more features. However, they are not easy to obtain. You can earn them by completing tasks, watching ads, tapping the Free Currency button in the Market Menu, mining them on Plant Island or Mirror Plant Island (requires a small in-app purchase), or buying them with real money. However, these methods are slow or expensive. That's why many players look for alternative ways to get more diamonds and coins for free.
-
What is My Singing Monsters Elmas Hilesi APK and what does it offer?
-
My Singing Monsters Elmas Hilesi APK is a modified version of the original game that offers unlimited diamonds and coins for free. It also gives you access to all the monsters and islands in the game, without any restrictions or limitations. You can enjoy the full potential of the game with this app, and create your own musical masterpiece with your favorite monsters. You don't need to root or jailbreak your device to use this app, and it is easy to install and use. It is also safe and secure, as it does not contain any viruses or malware that could harm your device or account.
-
Features of My Singing Monsters Elmas Hilesi APK
-
My Singing Monsters Elmas Hilesi APK has many amazing features that make it the best modded version of the game. Here are some of them:
-
-
Unlimited diamonds and coins: You can get as many diamonds and coins as you want with this app, without spending any real money or wasting any time. You can use them to buy anything you need in the game, such as rare monsters, decorations, island skins, costumes, etc. You can also speed up any process, such as breeding, hatching, feeding, upgrading, etc. You can enjoy the game without any interruptions or delays.
-
Access to all monsters and islands: You can unlock and collect all the monsters and islands in the game with this app, without any level or currency requirements. You can explore all the different types of monsters, such as natural, ethereal, seasonal, legendary, etc. You can also visit all the islands, such as Plant Island, Cold Island, Air Island, Water Island, Earth Island, Fire Island, Light Island, Psychic Island, Faerie Island, Bone Island, Shugabush Island, Wublin Island, Celestial Island, Tribal Island, Composer Island, etc. You can create your own unique song with each island and monster combination.
-
No root or jailbreak required: You don't need to root or jailbreak your device to use this app. It works on both Android and iOS devices without any modifications. You can install it easily and safely on your device without any risk of damaging it or losing your warranty.
-
Easy to install and use: You can download and install this app in a few simple steps that we will explain later in this article. You don't need any technical skills or knowledge to use this app. It has a user-friendly interface and a simple design that makes it easy to navigate and operate.
-
Safe and secure: This app is safe and secure to use on your device and account. It does not contain any viruses or malware that could harm your device or steal your data. It also has an anti-ban system that protects your account from being detected or banned by the game developers. You can use this app without any worries or fears.
-
-
How to download and install My Singing Monsters Elmas Hilesi APK
-
If you want to download and install My Singing Monsters Elmas Hilesi APK on your device, you need to follow these steps:
-
-
First of all, you need to uninstall the original version of My Singing Monsters from your device if you have it installed. This is to avoid any conflicts or errors between the two versions.
-
Next, you need to enable the option of "Unknown Sources" on your device settings. This is to allow your device to install apps from sources other than the official app store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Then, you need to download the My Singing Monsters Elmas Hilesi APK file from a trusted source on the internet. You can use the link below to download it directly on your device.
-
After downloading the file, you need to locate it on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for a few seconds until the installation is complete.
-
Finally, you can launch the app from your device menu and enjoy unlimited diamonds and coins and access to all monsters and islands in My Singing Monsters.
-
-
Here is a table that summarizes the steps for downloading and installing My Singing Monsters Elmas Hilesi APK:
-
| Step | Action |
| --- | --- |
| 1 | Uninstall the original version of My Singing Monsters |
| 2 | Enable the Unknown Sources option in your device settings |
| 3 | Download the My Singing Monsters Elmas Hilesi APK file from a trusted source |
| 4 | Locate and tap on the file to start the installation |
| 5 | Launch the app and enjoy unlimited resources and access |
-
-
How to use My Singing Monsters Elmas Hilesi APK
-
Using My Singing Monsters Elmas Hilesi APK is very easy and fun. You can use it just like the original version of the game, but with more advantages and possibilities. Here are some tips on how to use it:
-
-
How to breed, feed, and listen to your monsters: You can breed your monsters by placing two compatible monsters on the Breeding Structure and waiting for them to produce an egg. You can speed up the process by using diamonds. You can feed your monsters by tapping on them and dragging food from the Bakery to their mouths. You can listen to your monsters by tapping on them and adjusting their volume with the slider. You can also mute or unmute them by tapping on the speaker icon.
-
How to decorate your islands and create your own songs: You can decorate your islands by tapping on the Market Menu and buying decorations, island skins, costumes, etc. with diamonds or coins. You can place them anywhere on your island by dragging them from the inventory. You can also move, rotate, or sell them by tapping on them and selecting the appropriate option. You can create your own songs by using Composer Island, where you can place notes on a grid and assign them to different monsters. You can also edit, save, load, or share your songs with other players.
-
How to join tribes and friends and explore maps: You can join tribes by tapping on the Social Menu and selecting the Tribes option. You can either create your own tribe or join an existing one. You can also invite or accept invitations from other players. You can contribute to your tribe by placing a monster on the Tribal Island and feeding it regularly. You can also chat with your tribe members and earn rewards based on your tribe's performance. You can add friends by tapping on the Social Menu and selecting the Friends option. You can either enter a friend code or use Facebook to connect with other players. You can visit your friends' islands, send them gifts, or rate their songs. You can explore maps by tapping on the Map Menu and selecting an island. You can see all the available islands in the game, including the ones you have not unlocked yet. You can also see the monsters that inhabit each island and their breeding combinations.
-
-
Conclusion
-
My Singing Monsters Elmas Hilesi APK is a great app for fans of My Singing Monsters who want to enjoy unlimited diamonds and coins and access to all monsters and islands in the game. It is a modded version of the original game that offers many amazing features, such as no root or jailbreak required, easy to install and use, safe and secure, etc. It is a fun and engaging app that allows you to create your own monster paradise and musical masterpiece with your favorite monsters.
-
If you want to download and install My Singing Monsters Elmas Hilesi APK on your device, you can follow the steps we explained in this article. You don't need any technical skills or knowledge to use this app. It has a user-friendly interface and a simple design that makes it easy to navigate and operate.
-
So what are you waiting for? Download My Singing Monsters Elmas Hilesi APK today and enjoy unlimited resources and access in My Singing Monsters!
-
FAQs
-
Here are some frequently asked questions about My Singing Monsters Elmas Hilesi APK:
-
-
What are the minimum requirements for My Singing Monsters Elmas Hilesi APK?
-
You need an Android device with version 4.1 or higher or an iOS device with version 9.0 or higher to use this app. You also need at least 100 MB of free space on your device storage.
-
Is My Singing Monsters Elmas Hilesi APK legal and safe?
-
This app is not legal or endorsed by the game developers, as it violates their terms of service and intellectual property rights. However, it is safe to use on your device and account, as it does not contain any viruses or malware that could harm your device or steal your data. It also has an anti-ban system that protects your account from being detected or banned by the game developers.
-
Can I update My Singing Monsters Elmas Hilesi APK?
-
You can update this app whenever there is a new version available on the internet. However, you need to uninstall the previous version first before installing the new version. You can also backup your progress before updating, in case you encounter any issues or errors.
-
Will I lose my progress if I uninstall My Singing Monsters Elmas Hilesi APK?
-
You will not lose your progress if you uninstall this app, as long as you have synced your account with Facebook or Google Play Games. You can restore your progress by logging in with the same account on the original version of the game or on another device. However, you will lose the unlimited diamonds and coins and access to all monsters and islands that you gained from this app.
-
Where can I find more information about My Singing Monsters Elmas Hilesi APK?
-
You can find more information about this app by visiting its official website or by contacting its developers via email or social media. You can also read reviews and feedback from other users who have used this app and share your own experience and opinion.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/PUBG MOBILE LITE Everything you need to know about the game features and modes.md b/spaces/congsaPfin/Manga-OCR/logs/PUBG MOBILE LITE Everything you need to know about the game features and modes.md
deleted file mode 100644
index dd24a5d04ea1275dee1a71b08591e67fc5cc60bf..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/PUBG MOBILE LITE Everything you need to know about the game features and modes.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
How to Download the PUBG Mobile Lite Game
-
PUBG Mobile Lite is a popular battle royale game optimized for low-end devices. In this game, you land on an island with 60 players and collect weapons, vehicles, and supplies. Your goal is to eliminate your enemies and be the last one standing. In this article, we will show you how to download and install the PUBG Mobile Lite game. You will also learn how to play it, along with some tips and tricks.
-
What Is PUBG Mobile Lite?
-
PUBG Mobile Lite is a lighter version of PUBG Mobile that keeps the original game's gameplay and graphics while consuming less storage space and RAM. Matches last 10 minutes and feature 60 players. The game runs on Unreal Engine 4, which provides realistic and immersive gameplay, and offers features such as an HD map, high-definition audio, 3D sound effects, a fair gaming environment, teaming up with friends, voice chat, and clan modes.
60 players drop onto a 2km x 2km island rich in resources and duke it out for survival in a shrinking battlefield.
60 players drop onto a 2km x 2km island rich in resources and duke it out for survival in a shrinking battlefield.
-
Search for weapons, vehicles, and supplies to aid you in the battle.
-
Prepare to land and fight to be the last one standing.
-
Supports 12 languages: English, Spanish, Portuguese, Russian, Turkish, Indonesian, Thai, Simplified Chinese, Traditional Chinese, Arabic, German, and French.
-
Advanced anti-cheat system to ensure all PUBG Mobile Lite players can enjoy a fair gaming experience.
-
Arena Warehouse: intense 4 vs 4 battle with endless respawns for thrilling matches.
-
Team up with friends using local team up, room cards and clan modes.
-
The amazing Unreal Engine 4 creates realistic and immersive gameplay on an expansive HD map.
-
High definition audio and 3D sound effects bring you into the firefights like never before.
-
Invite friends to play and create a winning strategy together using voice chat.
-
Set up ambushes and surprise your enemies.
-
Revive your teammates in the heat of battle and fight for your clan's dominance.
-
-
PUBG Mobile Lite Requirements
-
To play the PUBG Mobile Lite game, your device needs to meet these minimum requirements:
-
-
Android version: 4.1 or above
-
Free space: 600 MB or more
-
RAM: 1 GB or more
-
-
How Do You Download the PUBG Mobile Lite Game?
-
You have three options for downloading the PUBG Mobile Lite game:
-
Download from the Google Play Store
-
To download the PUBG Mobile Lite game from the Google Play Store, follow these steps:
-
-
Open the Google Play Store app and type PUBG Mobile Lite in the search box.
-
Select the PUBG Mobile Lite game and tap the Install button.
-
Wait for the game to finish downloading, then tap the Open button.
-
-
Download from an APK File
-
To download the PUBG Mobile Lite game as an APK file, follow these steps:
-
-
Download the PUBG Mobile Lite APK file from a trusted website. You can also download it from .
-
Open the downloaded APK file and tap the Install button.
-
If you are prompted to allow installation from unknown sources, go to Settings and enable Unknown sources.
-
Wait for the game to finish installing, then tap the Open button.
-
-
Download from the Uptodown App Store
-
To download the PUBG Mobile Lite game from the Uptodown App Store, follow these steps:
-
-
Download and install the Uptodown App Store app. You can also download it from .
-
Open the Uptodown App Store app and type PUBG Mobile Lite in the search box.
-
Select the PUBG Mobile Lite game and tap the Download button.
-
Wait for the game to finish downloading, then tap the Install button.
-
Wait for the game to finish installing, then tap the Open button.
-
-
How Do You Install the PUBG Mobile Lite Game?
-
To install the PUBG Mobile Lite game, follow the steps described above. If you downloaded it from the Google Play Store, it is installed automatically and no extra step is needed. If you downloaded it as an APK file or from the Uptodown App Store, follow these steps to install it:
-
Install from the Google Play Store
-
To install the PUBG Mobile Lite game from the Google Play Store, follow these steps:
-
-
Open the Google Play Store app and search for the PUBG Mobile Lite game.
-
Select the PUBG Mobile Lite game and tap the Install button.
-
Wait for the game to finish installing, then tap the Open button.
-
-
Install from an APK File
-
To install the PUBG Mobile Lite game from an APK file, follow these steps (an adb-based alternative for installing from a computer is sketched after the list):
-
-
-
Open the downloaded APK file and tap the Install button.
-
If you are prompted to allow installation from unknown sources, go to Settings and enable Unknown sources.
-
Wait for the game to finish installing, then tap the Open button.
-
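If you prefer to install the APK from a computer instead of opening it on the phone, a common alternative is Android's adb tool. The sketch below wraps the standard `adb install` command in Python; it assumes adb is on your PATH and USB debugging is enabled on the device, and the file name is a placeholder for the APK you actually downloaded.

```python
import subprocess
import sys

def install_apk(apk_path: str) -> None:
    """Install an APK on a connected Android device via adb.

    Assumes adb is on PATH and USB debugging is enabled on the device.
    """
    # "-r" reinstalls the app if it already exists, keeping its data.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.exit(f"Install failed: {result.stderr.strip()}")
    print(result.stdout.strip())

# Placeholder file name; use the APK you actually downloaded.
install_apk("pubg-mobile-lite.apk")
```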
-
Install from the Uptodown App Store
-
To install the PUBG Mobile Lite game from the Uptodown App Store, follow these steps:
-
-
Open the Uptodown App Store app and search for the PUBG Mobile Lite game.
-
Select the PUBG Mobile Lite game and tap the Download button.
-
Wait for the game to finish downloading, then tap the Install button.
-
Wait for the game to finish installing, then tap the Open button.
-
-
How Do You Play the PUBG Mobile Lite Game?
-
To play the PUBG Mobile Lite game, follow these steps:
-
Learn About the Game Modes
-
PUBG Mobile Lite offers two game modes: Classic and Arcade. In Classic mode, you land on an island with 60 players and must be the last one standing. You can play solo, duo, or in a squad. In Arcade mode, you play a 4 vs 4 Warehouse match with unlimited respawns. You can team up with your friends or with random players. Select the game mode that matches your preference.
-
Customize the Game Settings
-
PUBG Mobile Lite lets you customize the game settings. Depending on your device and internet connection, you can adjust the graphics, frame rate, sound, controls, sensitivity, auto pick-up, quick chat, language, and more. You can also change your character, outfit, parachute, vehicle, weapon skins, etc. You can view your profile, achievements, statistics, leaderboards, and so on. To access the game settings, tap the settings icon on the lobby screen.
-
Game Tips and Tricks
-
Here are some tips and tricks you can follow while playing PUBG Mobile Lite:
-
-
Land in a safe and loot-rich area. Avoid hotspots where many players land and fight.
-
Use the mini-map and the compass to locate enemies, vehicles, and safe zones.
-
Always stay in cover and move carefully. Avoid open fields and roads where you can be easily spotted.
-
Use the right weapon for the right situation. For close range, use shotguns, SMGs, or pistols. For medium range, use assault rifles, DMRs, or LMGs. For long range, use sniper rifles or crossbows.
-
Use attachments and scopes to improve your weapon's performance and accuracy.
-
Use grenades, molotovs, and smoke bombs to damage, distract, or conceal your enemies.
-
Use vehicles to travel faster and run over enemies. But be careful of vehicle damage and noise.
-
Use healing items, boosters, and armor to restore your health and increase your survivability.
-
Communicate with your teammates using voice chat or quick chat. Share loot, information, and strategies with them.
-
Play smart and strategically. Don't rush into fights without a plan. Use the terrain, buildings, and vehicles to your advantage.
-
-
Conclusion
-
Downloading and installing the PUBG Mobile Lite game is very easy. You can download the game through any of the options above, depending on your device and internet connection. Playing PUBG Mobile Lite is also a lot of fun: use your skills and strategies to defeat your enemies and win the chicken dinner. The game offers a realistic and immersive gameplay experience that will never let you get bored. So download the PUBG Mobile Lite game now and start playing.
-
FAQs
-
-
Q: Is the PUBG Mobile Lite game free or paid?
-
A: PUBG Mobile Lite is completely free. You don't have to pay anything to download or play the game.
-
Q: What game modes are available in PUBG Mobile Lite?
-
A: PUBG Mobile Lite offers a Classic mode and an Arcade mode. In Classic mode, you land on an island with 60 players and must be the last one standing. In Arcade mode, you play a 4 vs 4 Warehouse match with unlimited respawns.
-
Q: What weapons are available in PUBG Mobile Lite?
-
A: PUBG Mobile Lite offers weapons such as shotguns, SMGs, pistols, assault rifles, DMRs, LMGs, sniper rifles, crossbows, grenades, molotovs, and smoke bombs.
-
Q: What vehicles are available in PUBG Mobile Lite?
-
A: PUBG Mobile Lite offers vehicles such as motorcycles, scooters, buggies, jeeps, vans, trucks, boats, and gliders.
-
Q: How can you play PUBG Mobile Lite with friends?
-
A: PUBG Mobile Lite offers options such as local team-up, room cards, and clan modes that let you team up with your friends. You can invite your friends or join them, and communicate with them using voice chat or quick chat.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/An overview of digital libraries and data warehousing Technologies architectures and standards.md b/spaces/contluForse/HuggingGPT/assets/An overview of digital libraries and data warehousing Technologies architectures and standards.md
deleted file mode 100644
index 3381b7d735e4edbdf21f5d6e1d4ff592d5d5ceae..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/An overview of digital libraries and data warehousing Technologies architectures and standards.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
When creating and maintaining data repositories, there are many hardware and software decisions to make. Before you get there, establishing some data warehousing best practices will inform the technical decisions and keep the data repository useful:
-
Data integration and data management are critical to cloud data warehousing. You need a comprehensive data management solution in order to discover relevant data across your organization, migrate it to your cloud data warehouse, and keep the cloud data warehouse updated with fresh and trustworthy data on a regular basis. To accommodate data that comes from sources outside the company, your integration and data management solution needs to be able to handle any data type (structured, semi-structured, or unstructured), any user, any data source, and any data integration pattern.
Cloud data warehouse modernization has helped businesses around the globe become more efficient and agile and prepare for the demands of the digital age. Here are just two examples of how companies have benefited from migrating to a cloud data warehouse:
-
"3," plus anywhere access to attendance data. Attendance is available in "real time." School does "period by period" attendance. Attendance is input into a "device" in "real time." Monthly registers are processed centrally, the data are maintained at the classroom level, data are available at the district level. Special education data are integrated with the LEA's student management system. The special education application is web based. The LEA has a library management system that is web based. The district's library holdings are available to community libraries. "2," plus district access to attendance data. Attendance is available by mid-morning. School has ability to do "period by period" attendance but does not use that functionality. Teachers complete bubble sheets that are later scanned for attendance. Monthly registers are processed at the building level, the data are maintained at the building level, data are available at the district level. The special education application is centralized. The LEA has a curriculum management application. The LEA has a library management system that is district based. Student attendance is computerized, with building access to data. Computerized attendance is input and available at the end of the day or week. Teachers take attendance manually and information is then entered into system. Monthly registers are processed at the building level, the data are maintained at the building level, online data are only available at the building level. The special education application is online and school based. The LEA has a library management system that is school based. Student attendance is manually processed with attendance cards. Attendance is available on manual records at the end of the day. Monthly registers are processed manually and available at the building and district levels in "hard copy." Technology Integration - Administrative Usage Rubric Objectives
-
HUMAN RESOURCE MANAGEMENT "3," plus staff attendance is available in "real time." Data processed at the building level is integrated into the payroll system. The LEA utilizes a substitute tracking system to identify and assign appropriate substitutes as needed. The LEA utilizes a certificate tracking system to verify staff certification, identify appropriately certified staff to fill needs, or to identify subjects that need resources. Staff members with remote access to their payroll/benefits data, district policies, and/or attendance/sick time/vacation records can interact to initiate changes to benefits status, tax deductions, etc. "2," plus staff attendance is available by mid-morning. Attendance is available by mid-morning. Data are available online at the building and district levels. The LEA utilizes a position control application to manage and fill vacancies without going over budget. Staff members have remote access to their payroll/benefits data, district policies, and/or attendance/sick time/vacation records. Staff attendance is computerized, with building access to online data for payroll purposes. Attendance is available online at the end of the day or week. Biweekly and/or monthly staff attendance data system generated for payroll purposes is available online at the building and in "hard copy" at the district level. Staff attendance is manually processed for payroll purposes. Attendance is available on manual records at the end of the day. Biweekly and/or monthly staff attendance data are processed manually for payroll purposes and available at the building and district levels in "hard copy." TRANSPORTATION MANAGEMENT "3," plus the LEA is responsible for maintaining vehicle inspection data and bus driver certification/basic data. Special education data are integrated with the LEA's transportation application. "2," plus the data for each building are available at the district level. The LEA is responsible for transportation of students, and there is a database with bus routing information and transported student basic data at the building level. The LEA is not responsible for transportation of students Technology Integration - Administrative Usage Rubric Objectives4321 FOOD SERVICE "3," plus the LEA's food service department utilizes a point-of-sale cafeteria application. "2," plus the LEA uses direct certifi-cation food service eligibility infor-mation available from the state. The food service department utilizes a point-of-sale cafeteria application. The LEA has an in-house or out-sourced food service program. There is a student database with food service eligibility identified. The LEA has no food service program, or the LEA has an in-house or out-sourced food service program with manual records of food service eligibility identified. ACCESS "3," plus the LEA's building access security control systems' information feeds student/staff attendance databases. Anytime anywhere access to student, financial, human resource, transportation, and staff data. "2," plus district access to student, financial, human resource, transportation, and staff data. The LEA has building access security control systems. Teacher/building administrative access to student, financial, human resource, transportation, and staff data. No access to online student, financial, human resource, transportation, and staff data. 
FINANCIAL MANAGEMENT "3," plus the warehousing and accounts payable/receivable systems feed the fixed assets system "2," plus building administrators and office staff utilize online budget development, purchase orders/requisitions, and/or action forms/ board resolutions. The LEA does a physical inventory of fixed assets at least annually. Building staff have online access to budget and purchasing information. The LEA's fixed assets are computerized and the LEA does a physical inventory of fixed assets at least every five years. The LEA maintains manual systems, or the LEA utilizes manual systems at the building level that are then input to centralized systems at the central office. The LEA keeps manual fixed assets records and does not do a physical inventory at least every five years. Top
-
Web application architecture keeps evolving to meet the digital business requirements and changing IT infrastructure environment. Technologies such as Artificial Intelligence, Analytics, Automation, Advanced Robotics, Edge Computing, Blockchain, Internet of Things (IoT), and APIs are redefining what is possible in many industries. Increasing complexity in infrastructure, application, and data size requires new architecture approaches. Most of enterprises are adopting a multicloud approach by using one or more cloud providers. Enterprises are consuming cloud services by either using private, public, or hybrid with SaaS, PaaS, or IaaS models.
-
-
In a data warehouse or OLAP system, the data is stored in a format that supports the efficient creation of data mining reports. The data structure in a data warehouse follows a denormalized schema. In terms of performance, data warehouses are quite fast at answering analytical queries.
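To make the denormalized-schema point concrete, here is a minimal star-schema sketch using Python's built-in sqlite3 module: one wide fact table of measurements surrounded by small dimension tables, so analytical queries aggregate the fact table with only a few joins. All table and column names here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables: small, descriptive lookup tables.
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")

# Fact table: one wide, denormalized table of measures keyed to dimensions.
cur.execute(
    "CREATE TABLE fact_sales ("
    " date_id INTEGER REFERENCES dim_date(date_id),"
    " product_id INTEGER REFERENCES dim_product(product_id),"
    " units INTEGER, revenue REAL)"
)

cur.execute("INSERT INTO dim_date VALUES (1, 2023, 1)")
cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 10, 99.9)")

# A typical analytical query: aggregate the fact table, joining only the
# dimensions needed for grouping.
cur.execute(
    "SELECT d.year, p.category, SUM(f.revenue)"
    " FROM fact_sales f"
    " JOIN dim_date d ON f.date_id = d.date_id"
    " JOIN dim_product p ON f.product_id = p.product_id"
    " GROUP BY d.year, p.category"
)
print(cur.fetchall())
```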
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Call Of Duty Ghosts Bots Offline [BETTER] Crack Download and Install COD in Minutes.md b/spaces/contluForse/HuggingGPT/assets/Call Of Duty Ghosts Bots Offline [BETTER] Crack Download and Install COD in Minutes.md
deleted file mode 100644
index 6c28c55f31f0797531b7df557f4721185fe78f71..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Call Of Duty Ghosts Bots Offline [BETTER] Crack Download and Install COD in Minutes.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Features: The mod is fully compatible with offline play, good for LAN with friends or just playing alone (only if your client supports offline/LAN).
The mod is also compatible with every game client, as long as the client's testclient handling works properly.
A clean and simple menu: you can edit every bot DVAR in-game.
Everything can be customized, ideal for both personal use and dedicated servers.
This mod does not edit ANY stock .gsc files, meaning EVERY other mod is compatible with it. The mod doesn't add anything unnecessary; what you see is what you get.
Adds AI clients to multiplayer games to simulate playing with real players (essentially Combat Training for MW2).
-Bots move around the maps. (all normal maps, most to all custom maps)
-Bots play all gamemodes/objectives; they capture flags, plant and defuse bombs, etc. (all normal modes, most custom modes)
-Bots have animations, move their legs, and don't slide.
-Bots use all killstreaks, including the AC130 and chopper gunner.
-Bots target killstreaks, using Stingers and other weapons to take out all killstreaks. (even sentry guns)
-Bots can capture and steal care packages.
-Bots target equipment, and can even camp TIs.
-Bots can camp randomly or when about to use the laptop.
-Bots can follow others of their own will.
-Bots have smooth and realistic aim.
-Bots respond smartly to their surroundings; they will come to you if you shoot, use a UAV, etc.
-Bots use all deathstreaks, perks, and weapons; perks actually do something, and bots use guns tactically (shotgun up close, etc.).
-Bots' difficulty level can be customized and is accurate. (hard is hard, easy is easy, etc.)
-Bots each have different classes, traits, and difficulty, and remember it all.
-Bots switch between primaries and secondaries.
-Bots can throw grenades and place claymores and TIs; they even use grenades and tubes in preset map locations.
-Bots use grenade launchers and shotgun attachments.
-Bots trip claymores indefinitely.
-Bots can melee people and sentry guns.
-Bots can run!
-Bots can climb ladders!
-Bots have footstep sounds!
-Bots detect smoke grenades, stun grenades, flashes, and airstrike slowdowns.
-Bots can watch killcams.
-Bots talk, reacting to whatever they are doing or whatever happened to them.
-Bots will remember their class, killstreak, skill, and traits, even in multiround-based gametypes.
-Bots can rage quit.
-Bots can throw back grenades.
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/lr_updater.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/lr_updater.py
deleted file mode 100644
index b9851d2ca3c4e60b95ad734c19a2484b9ca7c708..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/lr_updater.py
+++ /dev/null
@@ -1,670 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from math import cos, pi
-
-import annotator.mmpkg.mmcv as mmcv
-from .hook import HOOKS, Hook
-
-
-class LrUpdaterHook(Hook):
- """LR Scheduler in MMCV.
-
- Args:
- by_epoch (bool): LR changes epoch by epoch
- warmup (string): Type of warmup used. It can be None (use no warmup),
- 'constant', 'linear' or 'exp'
- warmup_iters (int): The number of iterations or epochs that warmup
- lasts
- warmup_ratio (float): LR used at the beginning of warmup equals to
- warmup_ratio * initial_lr
- warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters
- means the number of epochs that warmup lasts, otherwise means the
- number of iterations that warmup lasts
- """
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.1,
- warmup_by_epoch=False):
- # validate the "warmup" argument
- if warmup is not None:
- if warmup not in ['constant', 'linear', 'exp']:
- raise ValueError(
- f'"{warmup}" is not a supported type for warming up, valid'
- ' types are "constant", "linear" and "exp"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_ratio" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
- self.warmup_by_epoch = warmup_by_epoch
-
- if self.warmup_by_epoch:
- self.warmup_epochs = self.warmup_iters
- self.warmup_iters = None
- else:
- self.warmup_epochs = None
-
- self.base_lr = [] # initial lr for all param groups
- self.regular_lr = [] # expected lr if no warming up is performed
-
- def _set_lr(self, runner, lr_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, lr in zip(optim.param_groups, lr_groups[k]):
- param_group['lr'] = lr
- else:
- for param_group, lr in zip(runner.optimizer.param_groups,
- lr_groups):
- param_group['lr'] = lr
-
- def get_lr(self, runner, base_lr):
- raise NotImplementedError
-
- def get_regular_lr(self, runner):
- if isinstance(runner.optimizer, dict):
- lr_groups = {}
- for k in runner.optimizer.keys():
- _lr_group = [
- self.get_lr(runner, _base_lr)
- for _base_lr in self.base_lr[k]
- ]
- lr_groups.update({k: _lr_group})
-
- return lr_groups
- else:
- return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr]
-
- def get_warmup_lr(self, cur_iters):
-
- def _get_warmup_lr(cur_iters, regular_lr):
- if self.warmup == 'constant':
- warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_lr = [_lr * (1 - k) for _lr in regular_lr]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_lr = [_lr * k for _lr in regular_lr]
- return warmup_lr
-
- if isinstance(self.regular_lr, dict):
- lr_groups = {}
- for key, regular_lr in self.regular_lr.items():
- lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr)
- return lr_groups
- else:
- return _get_warmup_lr(cur_iters, self.regular_lr)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- group.setdefault('initial_lr', group['lr'])
- _base_lr = [
- group['initial_lr'] for group in optim.param_groups
- ]
- self.base_lr.update({k: _base_lr})
- else:
- for group in runner.optimizer.param_groups:
- group.setdefault('initial_lr', group['lr'])
- self.base_lr = [
- group['initial_lr'] for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if self.warmup_iters is None:
- epoch_len = len(runner.data_loader)
- self.warmup_iters = self.warmup_epochs * epoch_len
-
- if not self.by_epoch:
- return
-
- self.regular_lr = self.get_regular_lr(runner)
- self._set_lr(runner, self.regular_lr)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_lr = self.get_regular_lr(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
-
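For intuition, here is a standalone sketch (not part of the original hook) that mirrors the `_get_warmup_lr` arithmetic above for a single parameter group; the numbers are illustrative:

```python
def warmup_lr(regular_lr, cur_iter, warmup_iters, warmup_ratio, mode):
    # Mirrors LrUpdaterHook.get_warmup_lr for one param group.
    if mode == "constant":
        return regular_lr * warmup_ratio
    if mode == "linear":
        # Ramps from warmup_ratio * lr at iter 0 up to lr at warmup_iters.
        k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
        return regular_lr * (1 - k)
    if mode == "exp":
        # Geometric ramp: warmup_ratio**1 at iter 0, warmup_ratio**0 at the end.
        return regular_lr * warmup_ratio ** (1 - cur_iter / warmup_iters)
    raise ValueError(f"unknown warmup mode: {mode}")

print(warmup_lr(0.1, 0, 500, 0.1, "linear"))    # ~0.01
print(warmup_lr(0.1, 250, 500, 0.1, "linear"))  # ~0.055
print(warmup_lr(0.1, 500, 500, 0.1, "linear"))  # 0.1
```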
-
-@HOOKS.register_module()
-class FixedLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, **kwargs):
- super(FixedLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- return base_lr
-
-
-@HOOKS.register_module()
-class StepLrUpdaterHook(LrUpdaterHook):
- """Step LR scheduler with min_lr clipping.
-
- Args:
- step (int | list[int]): Step to decay the LR. If an int value is given,
- regard it as the decay interval. If a list is given, decay LR at
- these steps.
- gamma (float, optional): Decay LR ratio. Default: 0.1.
- min_lr (float, optional): Minimum LR value to keep. If LR after decay
- is lower than `min_lr`, it will be clipped to this value. If None
- is given, we don't perform lr clipping. Default: None.
- """
-
- def __init__(self, step, gamma=0.1, min_lr=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_lr = min_lr
- super(StepLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- lr = base_lr * (self.gamma**exp)
- if self.min_lr is not None:
- # clip to a minimum value
- lr = max(lr, self.min_lr)
- return lr
-
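A quick standalone check of the decay logic above (milestones and rates chosen for illustration):

```python
def step_lr(base_lr, progress, step, gamma=0.1, min_lr=None):
    # Mirrors StepLrUpdaterHook.get_lr for an int or list "step".
    if isinstance(step, int):
        exp = progress // step
    else:
        exp = len(step)
        for i, s in enumerate(step):
            if progress < s:
                exp = i
                break
    lr = base_lr * gamma ** exp
    return lr if min_lr is None else max(lr, min_lr)

print(step_lr(0.1, 5, step=[8, 11]))                # 0.1    (before epoch 8)
print(step_lr(0.1, 9, step=[8, 11]))                # ~0.01  (after epoch 8)
print(step_lr(0.1, 12, step=[8, 11]))               # ~0.001 (after epoch 11)
print(step_lr(0.1, 12, step=[8, 11], min_lr=5e-3))  # 0.005  (clipped)
```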
-
-@HOOKS.register_module()
-class ExpLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, **kwargs):
- self.gamma = gamma
- super(ExpLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * self.gamma**progress
-
-
-@HOOKS.register_module()
-class PolyLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, power=1., min_lr=0., **kwargs):
- self.power = power
- self.min_lr = min_lr
- super(PolyLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- coeff = (1 - progress / max_progress)**self.power
- return (base_lr - self.min_lr) * coeff + self.min_lr
-
-
-@HOOKS.register_module()
-class InvLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, power=1., **kwargs):
- self.gamma = gamma
- self.power = power
- super(InvLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * (1 + self.gamma * progress)**(-self.power)
-
-
-@HOOKS.register_module()
-class CosineAnnealingLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook):
- """Flat + Cosine lr schedule.
-
- Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501
-
- Args:
- start_percent (float): When to start annealing the learning rate
- after the percentage of the total training steps.
- The value should be in range [0, 1).
- Default: 0.75
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- start_percent=0.75,
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- if start_percent < 0 or start_percent > 1 or not isinstance(
- start_percent, float):
- raise ValueError(
- 'expected float between 0 and 1 start_percent, but '
- f'got {start_percent}')
- self.start_percent = start_percent
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- start = round(runner.max_epochs * self.start_percent)
- progress = runner.epoch - start
- max_progress = runner.max_epochs - start
- else:
- start = round(runner.max_iters * self.start_percent)
- progress = runner.iter - start
- max_progress = runner.max_iters - start
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- if progress < 0:
- return base_lr
- else:
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class CosineRestartLrUpdaterHook(LrUpdaterHook):
- """Cosine annealing with restarts learning rate scheme.
-
- Args:
- periods (list[int]): Periods for each cosine annealing cycle.
- restart_weights (list[float], optional): Restart weights at each
- restart iteration. Default: [1].
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- periods,
- restart_weights=[1],
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.periods = periods
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- self.restart_weights = restart_weights
- assert (len(self.periods) == len(self.restart_weights)
- ), 'periods and restart_weights should have the same length.'
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs)
-
- self.cumulative_periods = [
- sum(self.periods[0:i + 1]) for i in range(0, len(self.periods))
- ]
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- else:
- progress = runner.iter
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- idx = get_position_from_periods(progress, self.cumulative_periods)
- current_weight = self.restart_weights[idx]
- nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1]
- current_periods = self.periods[idx]
-
- alpha = min((progress - nearest_restart) / current_periods, 1)
- return annealing_cos(base_lr, target_lr, alpha, current_weight)
-
-
-def get_position_from_periods(iteration, cumulative_periods):
- """Get the position from a period list.
-
- It will return the index of the right-closest number in the period list.
- For example, the cumulative_periods = [100, 200, 300, 400],
- if iteration == 50, return 0;
- if iteration == 210, return 2;
- if iteration == 300, return 3.
-
- Args:
- iteration (int): Current iteration.
- cumulative_periods (list[int]): Cumulative period list.
-
- Returns:
- int: The position of the right-closest number in the period list.
- """
- for i, period in enumerate(cumulative_periods):
- if iteration < period:
- return i
- raise ValueError(f'Current iteration {iteration} exceeds '
- f'cumulative_periods {cumulative_periods}')
-
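Putting `get_position_from_periods` together with the restart logic above, a standalone sketch (periods chosen arbitrarily) shows how the cycle index and phase offset fall out:

```python
cumulative = [100, 300, 600]  # cumulative sums of periods [100, 200, 300]

for it in (50, 150, 299, 450):
    idx = next(i for i, p in enumerate(cumulative) if it < p)
    nearest_restart = 0 if idx == 0 else cumulative[idx - 1]
    print(it, idx, it - nearest_restart)
# 50  -> cycle 0, offset 50
# 150 -> cycle 1, offset 50
# 299 -> cycle 1, offset 199
# 450 -> cycle 2, offset 150
```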
-
-@HOOKS.register_module()
-class CyclicLrUpdaterHook(LrUpdaterHook):
- """Cyclic LR Scheduler.
-
- Implement the cyclical learning rate policy (CLR) described in
- https://arxiv.org/pdf/1506.01186.pdf
-
- Different from the original paper, we use cosine annealing rather than
- triangular policy inside a cycle. This improves the performance in the
- 3D detection area.
-
- Args:
- by_epoch (bool): Whether to update LR by epoch.
- target_ratio (tuple[float]): Relative ratio of the highest LR and the
- lowest LR to the initial LR.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of LR in
- the total cycle.
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing. Default: 'cos'.
- """
-
- def __init__(self,
- by_epoch=False,
- target_ratio=(10, 1e-4),
- cyclic_times=1,
- step_ratio_up=0.4,
- anneal_strategy='cos',
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.lr_phases = [] # init lr_phases
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
-
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicLrUpdaterHook, self).before_run(runner)
- # initiate lr_phases
- # total lr_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.lr_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.lr_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.lr_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return self.anneal_func(base_lr * start_ratio,
- base_lr * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleLrUpdaterHook(LrUpdaterHook):
- """One Cycle LR Scheduler.
-
- The 1cycle learning rate policy changes the learning rate after every
- batch. The one cycle learning rate policy is described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- Args:
- max_lr (float or list): Upper learning rate boundaries in the cycle
- for each parameter group.
- total_steps (int, optional): The total number of steps in the cycle.
- Note that if a value is not provided here, it will be the max_iter
- of runner. Default: None.
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- div_factor (float): Determines the initial learning rate via
- initial_lr = max_lr/div_factor
- Default: 25
- final_div_factor (float): Determines the minimum learning rate via
- min_lr = initial_lr/final_div_factor
- Default: 1e4
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
-
- def __init__(self,
- max_lr,
- total_steps=None,
- pct_start=0.3,
- anneal_strategy='cos',
- div_factor=25,
- final_div_factor=1e4,
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch = False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
- if not isinstance(max_lr, (numbers.Number, list, dict)):
- raise ValueError('the type of max_lr must be a number, list or '
- f'dict, but got {type(max_lr)}')
- self._max_lr = max_lr
- if total_steps is not None:
- if not isinstance(total_steps, int):
- raise ValueError('the type of total_steps must be int, but '
- f'got {type(total_steps)}')
- self.total_steps = total_steps
- # validate pct_start
- if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
- raise ValueError('expected float between 0 and 1 pct_start, but '
- f'got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.div_factor = div_factor
- self.final_div_factor = final_div_factor
- self.three_phase = three_phase
- self.lr_phases = [] # init lr_phases
- super(OneCycleLrUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if hasattr(self, 'total_steps'):
- total_steps = self.total_steps
- else:
- total_steps = runner.max_iters
- if total_steps < runner.max_iters:
- raise ValueError(
- 'The total steps must be greater than or equal to max '
- f'iterations {runner.max_iters} of runner, but total steps '
- f'is {total_steps}.')
-
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- _max_lr = format_param(k, optim, self._max_lr)
- self.base_lr[k] = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(optim.param_groups, self.base_lr[k]):
- group.setdefault('initial_lr', lr)
- else:
- k = type(runner.optimizer).__name__
- _max_lr = format_param(k, runner.optimizer, self._max_lr)
- self.base_lr = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(runner.optimizer.param_groups, self.base_lr):
- group.setdefault('initial_lr', lr)
-
- if self.three_phase:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append([
- float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1
- ])
- self.lr_phases.append(
- [total_steps - 1, 1, 1 / self.final_div_factor])
- else:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append(
- [total_steps - 1, self.div_factor, 1 / self.final_div_factor])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- start_iter = 0
- for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases):
- if curr_iter <= end_iter:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr,
- pct)
- break
- start_iter = end_iter
- return lr
-
-
-def annealing_cos(start, end, factor, weight=1):
- """Calculate annealing cos learning rate.
-
- Cosine anneal from `weight * start + (1 - weight) * end` to `end` as
- percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the cosine annealing.
- end (float): The ending learning rate of the cosine annealing.
- factor (float): The coefficient of `pi` when calculating the current
- percentage. Range from 0.0 to 1.0.
- weight (float, optional): The combination factor of `start` and `end`
- when calculating the actual starting learning rate. Default to 1.
- """
- cos_out = cos(pi * factor) + 1
- return end + 0.5 * weight * (start - end) * cos_out
-
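To sanity-check the formula, evaluating it at the endpoints and the midpoint (copying the function verbatim from above) shows the expected cosine interpolation from `start` to `end`:

```python
from math import cos, pi

def annealing_cos(start, end, factor, weight=1):
    cos_out = cos(pi * factor) + 1
    return end + 0.5 * weight * (start - end) * cos_out

print(annealing_cos(0.1, 0.0, 0.0))  # 0.1   (schedule start)
print(annealing_cos(0.1, 0.0, 0.5))  # ~0.05 (halfway point)
print(annealing_cos(0.1, 0.0, 1.0))  # 0.0   (fully annealed)
```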
-
-def annealing_linear(start, end, factor):
- """Calculate annealing linear learning rate.
-
- Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the linear annealing.
- end (float): The ending learning rate of the linear annealing.
- factor (float): The interpolation factor between `start` and `end`.
- Range from 0.0 to 1.0.
- """
- return start + (end - start) * factor
-
-
-def format_param(name, optim, param):
- if isinstance(param, numbers.Number):
- return [param] * len(optim.param_groups)
- elif isinstance(param, (list, tuple)): # multi param groups
- if len(param) != len(optim.param_groups):
- raise ValueError(f'expected {len(optim.param_groups)} '
- f'values for {name}, got {len(param)}')
- return param
- else: # multi optimizers
- if name not in param:
- raise KeyError(f'{name} is not found in {param.keys()}')
- return param[name]
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
deleted file mode 100644
index 48757e2b0156b2c1513b615d2a17e5aee5172ae7..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
+++ /dev/null
@@ -1,46 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#include <vector>
-
-#include <ATen/ATen.h>
-#include <ATen/cuda/CUDAContext.h>
-
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step)
-{
- AT_ERROR("Not implemented on CPU");
-}
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step)
-{
- AT_ERROR("Not implemented on CPU");
-}
-
diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sdf.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sdf.py
deleted file mode 100644
index e87e639eb94993c3e4068d6bd4d21f902aee7694..0000000000000000000000000000000000000000
--- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sdf.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import numpy as np
-
-
-def create_grid(resX, resY, resZ, b_min=np.array([0, 0, 0]), b_max=np.array([1, 1, 1]), transform=None):
- '''
- Create a dense grid of given resolution and bounding box
- :param resX: resolution along X axis
- :param resY: resolution along Y axis
- :param resZ: resolution along Z axis
- :param b_min: vec3 (x_min, y_min, z_min) bounding box corner
- :param b_max: vec3 (x_max, y_max, z_max) bounding box corner
- :return: [3, resX, resY, resZ] coordinates of the grid, and transform matrix from mesh index
- '''
- coords = np.mgrid[:resX, :resY, :resZ]
- coords = coords.reshape(3, -1)
- coords_matrix = np.eye(4)
- length = b_max - b_min
- coords_matrix[0, 0] = length[0] / resX
- coords_matrix[1, 1] = length[1] / resY
- coords_matrix[2, 2] = length[2] / resZ
- coords_matrix[0:3, 3] = b_min
- coords = np.matmul(coords_matrix[:3, :3], coords) + coords_matrix[:3, 3:4]
- if transform is not None:
- coords = np.matmul(transform[:3, :3], coords) + transform[:3, 3:4]
- coords_matrix = np.matmul(transform, coords_matrix)
- coords = coords.reshape(3, resX, resY, resZ)
- return coords, coords_matrix
-
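As a quick usage sketch (assuming `create_grid` above is in scope; the grid size and bounds are arbitrary), the returned matrix maps homogeneous grid indices back to world coordinates:

```python
import numpy as np

# An 8^3 grid spanning the cube [-1, 1]^3.
coords, mat = create_grid(8, 8, 8,
                          b_min=np.array([-1, -1, -1]),
                          b_max=np.array([1, 1, 1]))
print(coords.shape)        # (3, 8, 8, 8)
print(coords[:, 0, 0, 0])  # [-1. -1. -1.] -> b_min sits at index (0, 0, 0)
print(mat @ np.array([0, 0, 0, 1]))  # [-1. -1. -1.  1.]
```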
-
-def batch_eval(points, eval_func, num_samples=512 * 512 * 512):
- num_pts = points.shape[1]
- sdf = np.zeros(num_pts)
-
- num_batches = num_pts // num_samples
- for i in range(num_batches):
- sdf[i * num_samples:i * num_samples + num_samples] = eval_func(
- points[:, i * num_samples:i * num_samples + num_samples])
- if num_pts % num_samples:
- sdf[num_batches * num_samples:] = eval_func(points[:, num_batches * num_samples:])
-
- return sdf
-
-
-def eval_grid(coords, eval_func, num_samples=512 * 512 * 512):
- resolution = coords.shape[1:4]
- coords = coords.reshape([3, -1])
- sdf = batch_eval(coords, eval_func, num_samples=num_samples)
- return sdf.reshape(resolution)
-
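As a usage sketch (again assuming the functions above are in scope), here a unit-sphere signed distance function is evaluated over such a grid; `eval_func` receives points of shape [3, N] and must return N values:

```python
import numpy as np

# Unit-sphere SDF: negative inside the sphere, positive outside.
sphere_sdf = lambda pts: np.linalg.norm(pts, axis=0) - 1.0

coords, _ = create_grid(64, 64, 64,
                        b_min=np.array([-1.5, -1.5, -1.5]),
                        b_max=np.array([1.5, 1.5, 1.5]))
sdf = eval_grid(coords, sphere_sdf, num_samples=64 ** 3)
print(sdf.shape)                  # (64, 64, 64)
print(sdf.min() < 0 < sdf.max())  # True: the surface crosses zero in the box
```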
-
-def eval_grid_octree(coords, eval_func,
- init_resolution=64, threshold=0.01,
- num_samples=512 * 512 * 512):
- resolution = coords.shape[1:4]
-
- sdf = np.zeros(resolution)
-
- dirty = np.ones(resolution, dtype=bool)
- grid_mask = np.zeros(resolution, dtype=bool)
-
- reso = resolution[0] // init_resolution
-
- while reso > 0:
- # subdivide the grid
- grid_mask[0:resolution[0]:reso, 0:resolution[1]:reso, 0:resolution[2]:reso] = True
- # test samples in this iteration
- test_mask = np.logical_and(grid_mask, dirty)
- #print('step size:', reso, 'test sample size:', test_mask.sum())
- points = coords[:, test_mask]
-
- sdf[test_mask] = batch_eval(points, eval_func, num_samples=num_samples)
- dirty[test_mask] = False
-
- # do interpolation
- if reso <= 1:
- break
- for x in range(0, resolution[0] - reso, reso):
- for y in range(0, resolution[1] - reso, reso):
- for z in range(0, resolution[2] - reso, reso):
- # skip this cell if its center is already resolved
- if not dirty[x + reso // 2, y + reso // 2, z + reso // 2]:
- continue
- v0 = sdf[x, y, z]
- v1 = sdf[x, y, z + reso]
- v2 = sdf[x, y + reso, z]
- v3 = sdf[x, y + reso, z + reso]
- v4 = sdf[x + reso, y, z]
- v5 = sdf[x + reso, y, z + reso]
- v6 = sdf[x + reso, y + reso, z]
- v7 = sdf[x + reso, y + reso, z + reso]
- v = np.array([v0, v1, v2, v3, v4, v5, v6, v7])
- v_min = v.min()
- v_max = v.max()
- # this cell is all the same
- if (v_max - v_min) < threshold:
- sdf[x:x + reso, y:y + reso, z:z + reso] = (v_max + v_min) / 2
- dirty[x:x + reso, y:y + reso, z:z + reso] = False
- reso //= 2
-
- return sdf.reshape(resolution)
diff --git a/spaces/cstimson/SentenceSimilarityHeatmapAndClustering/README.md b/spaces/cstimson/SentenceSimilarityHeatmapAndClustering/README.md
deleted file mode 100644
index f7c26771097038883ea38e790b03272507251434..0000000000000000000000000000000000000000
--- a/spaces/cstimson/SentenceSimilarityHeatmapAndClustering/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SentenceSimilarityHeatmapAndClustering
-emoji: ⚡
-colorFrom: red
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/generate_facerender_batch.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/generate_facerender_batch.py
deleted file mode 100644
index d20775a82842d047889f5486e010558826b051ab..0000000000000000000000000000000000000000
--- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/generate_facerender_batch.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import os
-import numpy as np
-from PIL import Image
-from skimage import io, img_as_float32, transform
-import torch
-import scipy.io as scio
-from torchvision import transforms
-
-def get_facerender_data(coeff_path, pic_path, first_coeff_path, audio_path,
- batch_size, input_yaw_list=None, input_pitch_list=None, input_roll_list=None,
- expression_scale=1.0, still_mode = False, preprocess='crop'):
-
- semantic_radius = 13
- video_name = os.path.splitext(os.path.split(coeff_path)[-1])[0]
- txt_path = os.path.splitext(coeff_path)[0]
-
- transform = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True),
- ])
-
- data={}
- # img1 = Image.open(pic_path)
- # source_image = np.array(img1)
- # source_image = img_as_float32(source_image)
- # source_image = transform.resize(source_image, (256, 256, 3))
- # source_image = source_image.transpose((2, 0, 1))
-
- src_img_tensor = Image.open(pic_path)
- source_image= transform(src_img_tensor)
-
-
- source_image_ts = torch.FloatTensor(source_image).unsqueeze(0)
- source_image_ts = source_image_ts.repeat(batch_size, 1, 1, 1)
- data['source_image'] = source_image_ts
-
- source_semantics_dict = scio.loadmat(first_coeff_path)
-
- if preprocess.lower() != 'full':
- source_semantics = source_semantics_dict['coeff_3dmm'][:1,:70] #1 70
- else:
- source_semantics = source_semantics_dict['coeff_3dmm'][:1,:73] #1 73
-
- source_semantics_new = transform_semantic_1(source_semantics, semantic_radius)
- source_semantics_ts = torch.FloatTensor(source_semantics_new).unsqueeze(0)
- source_semantics_ts = source_semantics_ts.repeat(batch_size, 1, 1)
- data['source_semantics'] = source_semantics_ts
-
- # target
- generated_dict = scio.loadmat(coeff_path)
- generated_3dmm = generated_dict['coeff_3dmm']
- generated_3dmm[:, :64] = generated_3dmm[:, :64] * expression_scale
-
- if preprocess.lower() == 'full':
- generated_3dmm = np.concatenate([generated_3dmm, np.repeat(source_semantics[:,70:], generated_3dmm.shape[0], axis=0)], axis=1)
-
- if still_mode:
- generated_3dmm[:, 64:] = np.repeat(source_semantics[:, 64:], generated_3dmm.shape[0], axis=0)
-
- # with open(txt_path+'.txt', 'w') as f:
- # for coeff in generated_3dmm:
- # for i in coeff:
- # f.write(str(i)[:7] + ' '+'\t')
- # f.write('\n')
-
- target_semantics_list = []
- frame_num = generated_3dmm.shape[0]
- data['frame_num'] = frame_num
- for frame_idx in range(frame_num):
- target_semantics = transform_semantic_target(generated_3dmm, frame_idx, semantic_radius)
- target_semantics_list.append(target_semantics)
-
- remainder = frame_num%batch_size
- if remainder!=0:
- for _ in range(batch_size-remainder):
- target_semantics_list.append(target_semantics)
-
- target_semantics_np = np.array(target_semantics_list) #frame_num 70 semantic_radius*2+1
- target_semantics_np = target_semantics_np.reshape(batch_size, -1, target_semantics_np.shape[-2], target_semantics_np.shape[-1])
- data['target_semantics_list'] = torch.FloatTensor(target_semantics_np)
- data['video_name'] = video_name
- data['audio_path'] = audio_path
-
- if input_yaw_list is not None:
- yaw_c_seq = gen_camera_pose(input_yaw_list, frame_num, batch_size)
- data['yaw_c_seq'] = torch.FloatTensor(yaw_c_seq)
- if input_pitch_list is not None:
- pitch_c_seq = gen_camera_pose(input_pitch_list, frame_num, batch_size)
- data['pitch_c_seq'] = torch.FloatTensor(pitch_c_seq)
- if input_roll_list is not None:
- roll_c_seq = gen_camera_pose(input_roll_list, frame_num, batch_size)
- data['roll_c_seq'] = torch.FloatTensor(roll_c_seq)
-
- return data
-
-def transform_semantic_1(semantic, semantic_radius):
- semantic_list = [semantic for i in range(0, semantic_radius*2+1)]
- coeff_3dmm = np.concatenate(semantic_list, 0)
- return coeff_3dmm.transpose(1,0)
-
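A quick shape check of this helper (sizes follow the 70-dim coefficients and `semantic_radius = 13` used above; assuming the function is in scope):

```python
import numpy as np

# One 70-dim coefficient frame is tiled into a temporal window of
# semantic_radius * 2 + 1 = 27 identical frames, then transposed.
semantic = np.zeros((1, 70))
out = transform_semantic_1(semantic, semantic_radius=13)
print(out.shape)  # (70, 27)
```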
-def transform_semantic_target(coeff_3dmm, frame_index, semantic_radius):
- num_frames = coeff_3dmm.shape[0]
- seq = list(range(frame_index- semantic_radius, frame_index + semantic_radius+1))
- index = [ min(max(item, 0), num_frames-1) for item in seq ]
- coeff_3dmm_g = coeff_3dmm[index, :]
- return coeff_3dmm_g.transpose(1,0)
-
-def gen_camera_pose(camera_degree_list, frame_num, batch_size):
-
- new_degree_list = []
- if len(camera_degree_list) == 1:
- for _ in range(frame_num):
- new_degree_list.append(camera_degree_list[0])
- remainder = frame_num%batch_size
- if remainder!=0:
- for _ in range(batch_size-remainder):
- new_degree_list.append(new_degree_list[-1])
- new_degree_np = np.array(new_degree_list).reshape(batch_size, -1)
- return new_degree_np
-
- degree_sum = 0.
- for i, degree in enumerate(camera_degree_list[1:]):
- degree_sum += abs(degree-camera_degree_list[i])
-
- degree_per_frame = degree_sum/(frame_num-1)
- for i, degree in enumerate(camera_degree_list[1:]):
- degree_last = camera_degree_list[i]
- degree_step = degree_per_frame * abs(degree-degree_last)/(degree-degree_last)
- new_degree_list = new_degree_list + list(np.arange(degree_last, degree, degree_step))
- if len(new_degree_list) > frame_num:
- new_degree_list = new_degree_list[:frame_num]
- elif len(new_degree_list) < frame_num:
- for _ in range(frame_num-len(new_degree_list)):
- new_degree_list.append(new_degree_list[-1])
- print(len(new_degree_list))
- print(frame_num)
-
- remainder = frame_num%batch_size
- if remainder!=0:
- for _ in range(batch_size-remainder):
- new_degree_list.append(new_degree_list[-1])
- new_degree_np = np.array(new_degree_list).reshape(batch_size, -1)
- return new_degree_np
-
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/environment.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/environment.py
deleted file mode 100644
index ea04e8b44330fe22909a2c875c6601e33bd1ffc2..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/environment.py
+++ /dev/null
@@ -1,1667 +0,0 @@
-"""Classes for managing templates and their runtime and compile time
-options.
-"""
-import os
-import typing
-import typing as t
-import weakref
-from collections import ChainMap
-from functools import lru_cache
-from functools import partial
-from functools import reduce
-from types import CodeType
-
-from markupsafe import Markup
-
-from . import nodes
-from .compiler import CodeGenerator
-from .compiler import generate
-from .defaults import BLOCK_END_STRING
-from .defaults import BLOCK_START_STRING
-from .defaults import COMMENT_END_STRING
-from .defaults import COMMENT_START_STRING
-from .defaults import DEFAULT_FILTERS
-from .defaults import DEFAULT_NAMESPACE
-from .defaults import DEFAULT_POLICIES
-from .defaults import DEFAULT_TESTS
-from .defaults import KEEP_TRAILING_NEWLINE
-from .defaults import LINE_COMMENT_PREFIX
-from .defaults import LINE_STATEMENT_PREFIX
-from .defaults import LSTRIP_BLOCKS
-from .defaults import NEWLINE_SEQUENCE
-from .defaults import TRIM_BLOCKS
-from .defaults import VARIABLE_END_STRING
-from .defaults import VARIABLE_START_STRING
-from .exceptions import TemplateNotFound
-from .exceptions import TemplateRuntimeError
-from .exceptions import TemplatesNotFound
-from .exceptions import TemplateSyntaxError
-from .exceptions import UndefinedError
-from .lexer import get_lexer
-from .lexer import Lexer
-from .lexer import TokenStream
-from .nodes import EvalContext
-from .parser import Parser
-from .runtime import Context
-from .runtime import new_context
-from .runtime import Undefined
-from .utils import _PassArg
-from .utils import concat
-from .utils import consume
-from .utils import import_string
-from .utils import internalcode
-from .utils import LRUCache
-from .utils import missing
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
- from .bccache import BytecodeCache
- from .ext import Extension
- from .loaders import BaseLoader
-
-_env_bound = t.TypeVar("_env_bound", bound="Environment")
-
-
-# for direct template usage we have up to ten living environments
-@lru_cache(maxsize=10)
-def get_spontaneous_environment(cls: t.Type[_env_bound], *args: t.Any) -> _env_bound:
- """Return a new spontaneous environment. A spontaneous environment
- is used for templates created directly rather than through an
- existing environment.
-
- :param cls: Environment class to create.
- :param args: Positional arguments passed to environment.
- """
- env = cls(*args)
- env.shared = True
- return env
-
-
-def create_cache(
- size: int,
-) -> t.Optional[t.MutableMapping[t.Tuple[weakref.ref, str], "Template"]]:
- """Return the cache class for the given size."""
- if size == 0:
- return None
-
- if size < 0:
- return {}
-
- return LRUCache(size) # type: ignore
-
-
-def copy_cache(
- cache: t.Optional[t.MutableMapping],
-) -> t.Optional[t.MutableMapping[t.Tuple[weakref.ref, str], "Template"]]:
- """Create an empty copy of the given cache."""
- if cache is None:
- return None
-
- if type(cache) is dict:
- return {}
-
- return LRUCache(cache.capacity) # type: ignore
-
-
-def load_extensions(
- environment: "Environment",
- extensions: t.Sequence[t.Union[str, t.Type["Extension"]]],
-) -> t.Dict[str, "Extension"]:
- """Load the extensions from the list and bind it to the environment.
- Returns a dict of instantiated extensions.
- """
- result = {}
-
- for extension in extensions:
- if isinstance(extension, str):
- extension = t.cast(t.Type["Extension"], import_string(extension))
-
- result[extension.identifier] = extension(environment)
-
- return result
-
-
-def _environment_config_check(environment: "Environment") -> "Environment":
- """Perform a sanity check on the environment."""
- assert issubclass(
- environment.undefined, Undefined
- ), "'undefined' must be a subclass of 'jinja2.Undefined'."
- assert (
- environment.block_start_string
- != environment.variable_start_string
- != environment.comment_start_string
- ), "block, variable and comment start strings must be different."
- assert environment.newline_sequence in {
- "\r",
- "\r\n",
- "\n",
- }, "'newline_sequence' must be one of '\\n', '\\r\\n', or '\\r'."
- return environment
-
-
-class Environment:
- r"""The core component of Jinja is the `Environment`. It contains
- important shared variables like configuration, filters, tests,
- globals and others. Instances of this class may be modified if
- they are not shared and if no template was loaded so far.
- Modifications on environments after the first template was loaded
- will lead to surprising effects and undefined behavior.
-
- Here are the possible initialization parameters:
-
- `block_start_string`
- The string marking the beginning of a block. Defaults to ``'{%'``.
-
- `block_end_string`
- The string marking the end of a block. Defaults to ``'%}'``.
-
- `variable_start_string`
- The string marking the beginning of a print statement.
- Defaults to ``'{{'``.
-
- `variable_end_string`
- The string marking the end of a print statement. Defaults to
- ``'}}'``.
-
- `comment_start_string`
- The string marking the beginning of a comment. Defaults to ``'{#'``.
-
- `comment_end_string`
- The string marking the end of a comment. Defaults to ``'#}'``.
-
- `line_statement_prefix`
- If given and a string, this will be used as prefix for line based
- statements. See also :ref:`line-statements`.
-
- `line_comment_prefix`
- If given and a string, this will be used as prefix for line based
- comments. See also :ref:`line-statements`.
-
- .. versionadded:: 2.2
-
- `trim_blocks`
- If this is set to ``True`` the first newline after a block is
- removed (block, not variable tag!). Defaults to `False`.
-
- `lstrip_blocks`
- If this is set to ``True`` leading spaces and tabs are stripped
- from the start of a line to a block. Defaults to `False`.
-
- `newline_sequence`
- The sequence that starts a newline. Must be one of ``'\r'``,
- ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a
- useful default for Linux and OS X systems as well as web
- applications.
-
- `keep_trailing_newline`
- Preserve the trailing newline when rendering templates.
- The default is ``False``, which causes a single newline,
- if present, to be stripped from the end of the template.
-
- .. versionadded:: 2.7
-
- `extensions`
- List of Jinja extensions to use. This can either be import paths
- as strings or extension classes. For more information have a
- look at :ref:`the extensions documentation <jinja-extensions>`.
-
- `optimized`
- should the optimizer be enabled? Default is ``True``.
-
- `undefined`
- :class:`Undefined` or a subclass of it that is used to represent
- undefined values in the template.
-
- `finalize`
- A callable that can be used to process the result of a variable
- expression before it is output. For example one can convert
- ``None`` implicitly into an empty string here.
-
- `autoescape`
- If set to ``True`` the XML/HTML autoescaping feature is enabled by
- default. For more details about autoescaping see
- :class:`~markupsafe.Markup`. As of Jinja 2.4 this can also
- be a callable that is passed the template name and has to
- return ``True`` or ``False`` depending on whether autoescape should be
- enabled by default.
-
- .. versionchanged:: 2.4
- `autoescape` can now be a function
-
- `loader`
- The template loader for this environment.
-
- `cache_size`
- The size of the cache. Per default this is ``400`` which means
- that if more than 400 templates are loaded the loader will clean
- out the least recently used template. If the cache size is set to
- ``0`` templates are recompiled all the time, if the cache size is
- ``-1`` the cache will not be cleaned.
-
- .. versionchanged:: 2.8
- The cache size was increased to 400 from a low 50.
-
- `auto_reload`
- Some loaders load templates from locations where the template
- sources may change (i.e. file system or database). If
- ``auto_reload`` is set to ``True`` (default) every time a template is
- requested the loader checks if the source changed and if yes, it
- will reload the template. For higher performance it's possible to
- disable that.
-
- `bytecode_cache`
- If set to a bytecode cache object, this object will provide a
- cache for the internal Jinja bytecode so that templates don't
- have to be parsed if they were not changed.
-
- See :ref:`bytecode-cache` for more information.
-
- `enable_async`
- If set to true this enables async template execution which
- allows using async functions and generators.
- """
-
- #: if this environment is sandboxed. Modifying this variable won't make
- #: the environment sandboxed though. For a real sandboxed environment
- #: have a look at jinja2.sandbox. This flag alone controls the code
- #: generation by the compiler.
- sandboxed = False
-
- #: True if the environment is just an overlay
- overlayed = False
-
- #: the environment this environment is linked to if it is an overlay
- linked_to: t.Optional["Environment"] = None
-
- #: shared environments have this set to `True`. A shared environment
- #: must not be modified
- shared = False
-
- #: the class that is used for code generation. See
- #: :class:`~jinja2.compiler.CodeGenerator` for more information.
- code_generator_class: t.Type["CodeGenerator"] = CodeGenerator
-
- concat = "".join
-
- #: the context class that is used for templates. See
- #: :class:`~jinja2.runtime.Context` for more information.
- context_class: t.Type[Context] = Context
-
- template_class: t.Type["Template"]
-
- def __init__(
- self,
- block_start_string: str = BLOCK_START_STRING,
- block_end_string: str = BLOCK_END_STRING,
- variable_start_string: str = VARIABLE_START_STRING,
- variable_end_string: str = VARIABLE_END_STRING,
- comment_start_string: str = COMMENT_START_STRING,
- comment_end_string: str = COMMENT_END_STRING,
- line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX,
- line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX,
- trim_blocks: bool = TRIM_BLOCKS,
- lstrip_blocks: bool = LSTRIP_BLOCKS,
- newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE,
- keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE,
- extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (),
- optimized: bool = True,
- undefined: t.Type[Undefined] = Undefined,
- finalize: t.Optional[t.Callable[..., t.Any]] = None,
- autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False,
- loader: t.Optional["BaseLoader"] = None,
- cache_size: int = 400,
- auto_reload: bool = True,
- bytecode_cache: t.Optional["BytecodeCache"] = None,
- enable_async: bool = False,
- ):
- # !!Important notice!!
- # The constructor accepts quite a few arguments that should be
- # passed by keyword rather than position. However it's important to
- # not change the order of arguments because it's used at least
- # internally in those cases:
- # - spontaneous environments (i18n extension and Template)
- # - unittests
- # If parameter changes are required only add parameters at the end
- # and don't change the arguments (or the defaults!) of the arguments
- # existing already.
-
- # lexer / parser information
- self.block_start_string = block_start_string
- self.block_end_string = block_end_string
- self.variable_start_string = variable_start_string
- self.variable_end_string = variable_end_string
- self.comment_start_string = comment_start_string
- self.comment_end_string = comment_end_string
- self.line_statement_prefix = line_statement_prefix
- self.line_comment_prefix = line_comment_prefix
- self.trim_blocks = trim_blocks
- self.lstrip_blocks = lstrip_blocks
- self.newline_sequence = newline_sequence
- self.keep_trailing_newline = keep_trailing_newline
-
- # runtime information
- self.undefined: t.Type[Undefined] = undefined
- self.optimized = optimized
- self.finalize = finalize
- self.autoescape = autoescape
-
- # defaults
- self.filters = DEFAULT_FILTERS.copy()
- self.tests = DEFAULT_TESTS.copy()
- self.globals = DEFAULT_NAMESPACE.copy()
-
- # set the loader provided
- self.loader = loader
- self.cache = create_cache(cache_size)
- self.bytecode_cache = bytecode_cache
- self.auto_reload = auto_reload
-
- # configurable policies
- self.policies = DEFAULT_POLICIES.copy()
-
- # load extensions
- self.extensions = load_extensions(self, extensions)
-
- self.is_async = enable_async
- _environment_config_check(self)
-
- def add_extension(self, extension: t.Union[str, t.Type["Extension"]]) -> None:
- """Adds an extension after the environment was created.
-
- .. versionadded:: 2.5
- """
- self.extensions.update(load_extensions(self, [extension]))
-
- def extend(self, **attributes: t.Any) -> None:
- """Add the items to the instance of the environment if they do not exist
- yet. This is used by :ref:`extensions <writing-extensions>` to register
- callbacks and configuration values without breaking inheritance.
- """
- for key, value in attributes.items():
- if not hasattr(self, key):
- setattr(self, key, value)
-
- def overlay(
- self,
- block_start_string: str = missing,
- block_end_string: str = missing,
- variable_start_string: str = missing,
- variable_end_string: str = missing,
- comment_start_string: str = missing,
- comment_end_string: str = missing,
- line_statement_prefix: t.Optional[str] = missing,
- line_comment_prefix: t.Optional[str] = missing,
- trim_blocks: bool = missing,
- lstrip_blocks: bool = missing,
- newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = missing,
- keep_trailing_newline: bool = missing,
- extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = missing,
- optimized: bool = missing,
- undefined: t.Type[Undefined] = missing,
- finalize: t.Optional[t.Callable[..., t.Any]] = missing,
- autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = missing,
- loader: t.Optional["BaseLoader"] = missing,
- cache_size: int = missing,
- auto_reload: bool = missing,
- bytecode_cache: t.Optional["BytecodeCache"] = missing,
- enable_async: bool = False,
- ) -> "Environment":
- """Create a new overlay environment that shares all the data with the
- current environment except for cache and the overridden attributes.
- Extensions cannot be removed for an overlayed environment. An overlayed
- environment automatically gets all the extensions of the environment it
- is linked to plus optional extra extensions.
-
- Creating overlays should happen after the initial environment was set
- up completely. Not all attributes are truly linked, some are just
- copied over so modifications on the original environment may not shine
- through.
-
- .. versionchanged:: 3.1.2
- Added the ``newline_sequence``, ``keep_trailing_newline``,
- and ``enable_async`` parameters to match ``__init__``.
- """
- args = dict(locals())
- del args["self"], args["cache_size"], args["extensions"], args["enable_async"]
-
- rv = object.__new__(self.__class__)
- rv.__dict__.update(self.__dict__)
- rv.overlayed = True
- rv.linked_to = self
-
- for key, value in args.items():
- if value is not missing:
- setattr(rv, key, value)
-
- if cache_size is not missing:
- rv.cache = create_cache(cache_size)
- else:
- rv.cache = copy_cache(self.cache)
-
- rv.extensions = {}
- for key, value in self.extensions.items():
- rv.extensions[key] = value.bind(rv)
- if extensions is not missing:
- rv.extensions.update(load_extensions(rv, extensions))
-
- if enable_async is not missing:
- rv.is_async = enable_async
-
- return _environment_config_check(rv)
-
- @property
- def lexer(self) -> Lexer:
- """The lexer for this environment."""
- return get_lexer(self)
-
- def iter_extensions(self) -> t.Iterator["Extension"]:
- """Iterates over the extensions by priority."""
- return iter(sorted(self.extensions.values(), key=lambda x: x.priority))
-
- def getitem(
- self, obj: t.Any, argument: t.Union[str, t.Any]
- ) -> t.Union[t.Any, Undefined]:
- """Get an item or attribute of an object but prefer the item."""
- try:
- return obj[argument]
- except (AttributeError, TypeError, LookupError):
- if isinstance(argument, str):
- try:
- attr = str(argument)
- except Exception:
- pass
- else:
- try:
- return getattr(obj, attr)
- except AttributeError:
- pass
- return self.undefined(obj=obj, name=argument)
-
- def getattr(self, obj: t.Any, attribute: str) -> t.Any:
- """Get an item or attribute of an object but prefer the attribute.
- Unlike :meth:`getitem` the attribute *must* be a string.
- """
- try:
- return getattr(obj, attribute)
- except AttributeError:
- pass
- try:
- return obj[attribute]
- except (TypeError, LookupError, AttributeError):
- return self.undefined(obj=obj, name=attribute)
-
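To make the lookup-order difference concrete, a small sketch using the two public methods:

```python
from jinja2 import Environment

env = Environment()

class Point:
    x = 1

data = {"x": "item"}

# getitem prefers subscription and falls back to attribute access:
print(env.getitem(data, "x"))     # 'item'
print(env.getitem(Point(), "x"))  # 1 (no __getitem__, falls back to getattr)

# getattr prefers attributes and falls back to subscription:
print(env.getattr(data, "x"))     # 'item' (dict has no attribute 'x')
print(env.getattr(Point(), "x"))  # 1
```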
- def _filter_test_common(
- self,
- name: t.Union[str, Undefined],
- value: t.Any,
- args: t.Optional[t.Sequence[t.Any]],
- kwargs: t.Optional[t.Mapping[str, t.Any]],
- context: t.Optional[Context],
- eval_ctx: t.Optional[EvalContext],
- is_filter: bool,
- ) -> t.Any:
- if is_filter:
- env_map = self.filters
- type_name = "filter"
- else:
- env_map = self.tests
- type_name = "test"
-
- func = env_map.get(name) # type: ignore
-
- if func is None:
- msg = f"No {type_name} named {name!r}."
-
- if isinstance(name, Undefined):
- try:
- name._fail_with_undefined_error()
- except Exception as e:
- msg = f"{msg} ({e}; did you forget to quote the callable name?)"
-
- raise TemplateRuntimeError(msg)
-
- args = [value, *(args if args is not None else ())]
- kwargs = kwargs if kwargs is not None else {}
- pass_arg = _PassArg.from_obj(func)
-
- if pass_arg is _PassArg.context:
- if context is None:
- raise TemplateRuntimeError(
- f"Attempted to invoke a context {type_name} without context."
- )
-
- args.insert(0, context)
- elif pass_arg is _PassArg.eval_context:
- if eval_ctx is None:
- if context is not None:
- eval_ctx = context.eval_ctx
- else:
- eval_ctx = EvalContext(self)
-
- args.insert(0, eval_ctx)
- elif pass_arg is _PassArg.environment:
- args.insert(0, self)
-
- return func(*args, **kwargs)
-
- def call_filter(
- self,
- name: str,
- value: t.Any,
- args: t.Optional[t.Sequence[t.Any]] = None,
- kwargs: t.Optional[t.Mapping[str, t.Any]] = None,
- context: t.Optional[Context] = None,
- eval_ctx: t.Optional[EvalContext] = None,
- ) -> t.Any:
- """Invoke a filter on a value the same way the compiler does.
-
- This might return a coroutine if the filter is running from an
- environment in async mode and the filter supports async
- execution. It's your responsibility to await this if needed.
-
- .. versionadded:: 2.7
- """
- return self._filter_test_common(
- name, value, args, kwargs, context, eval_ctx, True
- )
-
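For example, built-in filters can be invoked directly, exactly as the compiler would invoke them:

```python
from jinja2 import Environment

env = Environment()
print(env.call_filter("upper", "hello"))          # 'HELLO'
print(env.call_filter("join", [1, 2, 3], ["-"]))  # '1-2-3'
```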
- def call_test(
- self,
- name: str,
- value: t.Any,
- args: t.Optional[t.Sequence[t.Any]] = None,
- kwargs: t.Optional[t.Mapping[str, t.Any]] = None,
- context: t.Optional[Context] = None,
- eval_ctx: t.Optional[EvalContext] = None,
- ) -> t.Any:
- """Invoke a test on a value the same way the compiler does.
-
- This might return a coroutine if the test is running from an
- environment in async mode and the test supports async execution.
- It's your responsibility to await this if needed.
-
- .. versionchanged:: 3.0
- Tests support ``@pass_context``, etc. decorators. Added
- the ``context`` and ``eval_ctx`` parameters.
-
- .. versionadded:: 2.7
- """
- return self._filter_test_common(
- name, value, args, kwargs, context, eval_ctx, False
- )
-
- @internalcode
- def parse(
- self,
- source: str,
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- ) -> nodes.Template:
- """Parse the sourcecode and return the abstract syntax tree. This
- tree of nodes is used by the compiler to convert the template into
- executable source- or bytecode. This is useful for debugging or to
- extract information from templates.
-
- If you are :ref:`developing Jinja extensions <writing-extensions>`
- this gives you a good overview of the node tree generated.
- """
- try:
- return self._parse(source, name, filename)
- except TemplateSyntaxError:
- self.handle_exception(source=source)
-
- def _parse(
- self, source: str, name: t.Optional[str], filename: t.Optional[str]
- ) -> nodes.Template:
- """Internal parsing function used by `parse` and `compile`."""
- return Parser(self, source, name, filename).parse()
-
- def lex(
- self,
- source: str,
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- ) -> t.Iterator[t.Tuple[int, str, str]]:
- """Lex the given sourcecode and return a generator that yields
- tokens as tuples in the form ``(lineno, token_type, value)``.
- This can be useful for :ref:`extension development <writing-extensions>`
- and debugging templates.
-
- This does not perform preprocessing. If you want the preprocessing
- of the extensions to be applied you have to filter source through
- the :meth:`preprocess` method.
- """
- source = str(source)
- try:
- return self.lexer.tokeniter(source, name, filename)
- except TemplateSyntaxError:
- self.handle_exception(source=source)
-
- def preprocess(
- self,
- source: str,
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- ) -> str:
- """Preprocesses the source with all extensions. This is automatically
- called for all parsing and compiling methods but *not* for :meth:`lex`
- because there you usually only want the actual source tokenized.
- """
- return reduce(
- lambda s, e: e.preprocess(s, name, filename),
- self.iter_extensions(),
- str(source),
- )
-
- def _tokenize(
- self,
- source: str,
- name: t.Optional[str],
- filename: t.Optional[str] = None,
- state: t.Optional[str] = None,
- ) -> TokenStream:
- """Called by the parser to do the preprocessing and filtering
- for all the extensions. Returns a :class:`~jinja2.lexer.TokenStream`.
- """
- source = self.preprocess(source, name, filename)
- stream = self.lexer.tokenize(source, name, filename, state)
-
- for ext in self.iter_extensions():
- stream = ext.filter_stream(stream) # type: ignore
-
- if not isinstance(stream, TokenStream):
- stream = TokenStream(stream, name, filename) # type: ignore
-
- return stream
-
- def _generate(
- self,
- source: nodes.Template,
- name: t.Optional[str],
- filename: t.Optional[str],
- defer_init: bool = False,
- ) -> str:
- """Internal hook that can be overridden to hook a different generate
- method in.
-
- .. versionadded:: 2.5
- """
- return generate( # type: ignore
- source,
- self,
- name,
- filename,
- defer_init=defer_init,
- optimized=self.optimized,
- )
-
- def _compile(self, source: str, filename: str) -> CodeType:
- """Internal hook that can be overridden to hook a different compile
- method in.
-
- .. versionadded:: 2.5
- """
- return compile(source, filename, "exec") # type: ignore
-
- @typing.overload
- def compile( # type: ignore
- self,
- source: t.Union[str, nodes.Template],
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- raw: "te.Literal[False]" = False,
- defer_init: bool = False,
- ) -> CodeType:
- ...
-
- @typing.overload
- def compile(
- self,
- source: t.Union[str, nodes.Template],
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- raw: "te.Literal[True]" = ...,
- defer_init: bool = False,
- ) -> str:
- ...
-
- @internalcode
- def compile(
- self,
- source: t.Union[str, nodes.Template],
- name: t.Optional[str] = None,
- filename: t.Optional[str] = None,
- raw: bool = False,
- defer_init: bool = False,
- ) -> t.Union[str, CodeType]:
- """Compile a node or template source code. The `name` parameter is
- the load name of the template after it was joined using
- :meth:`join_path` if necessary, not the filename on the file system.
- The `filename` parameter is the estimated filename of the template on
- the file system. If the template came from a database or memory this
- can be omitted.
-
- The return value of this method is a python code object. If the `raw`
- parameter is `True` the return value will be a string with python
- code equivalent to the bytecode returned otherwise. This method is
- mainly used internally.
-
- `defer_init` is used internally to aid the module code generator. It
- allows the generated code to be imported without the global
- environment variable being set.
-
- .. versionadded:: 2.4
- `defer_init` parameter added.
- """
- source_hint = None
- try:
- if isinstance(source, str):
- source_hint = source
- source = self._parse(source, name, filename)
- source = self._generate(source, name, filename, defer_init=defer_init)
- if raw:
- return source
- if filename is None:
- filename = "<template>"
- return self._compile(source, filename)
- except TemplateSyntaxError:
- self.handle_exception(source=source_hint)
-
- def compile_expression(
- self, source: str, undefined_to_none: bool = True
- ) -> "TemplateExpression":
- """A handy helper method that returns a callable that accepts keyword
- arguments that appear as variables in the expression. If called it
- returns the result of the expression.
-
- This is useful if applications want to use the same rules as Jinja
- in template "configuration files" or similar situations.
-
- Example usage:
-
- >>> env = Environment()
- >>> expr = env.compile_expression('foo == 42')
- >>> expr(foo=23)
- False
- >>> expr(foo=42)
- True
-
- By default the return value is converted to `None` if the
- expression returns an undefined value. This can be changed
- by setting `undefined_to_none` to `False`.
-
- >>> env.compile_expression('var')() is None
- True
- >>> env.compile_expression('var', undefined_to_none=False)()
- Undefined
-
- .. versionadded:: 2.1
- """
- parser = Parser(self, source, state="variable")
- try:
- expr = parser.parse_expression()
- if not parser.stream.eos:
- raise TemplateSyntaxError(
- "chunk after expression", parser.stream.current.lineno, None, None
- )
- expr.set_environment(self)
- except TemplateSyntaxError:
- self.handle_exception(source=source)
-
- body = [nodes.Assign(nodes.Name("result", "store"), expr, lineno=1)]
- template = self.from_string(nodes.Template(body, lineno=1))
- return TemplateExpression(template, undefined_to_none)
-
- def compile_templates(
- self,
- target: t.Union[str, os.PathLike],
- extensions: t.Optional[t.Collection[str]] = None,
- filter_func: t.Optional[t.Callable[[str], bool]] = None,
- zip: t.Optional[str] = "deflated",
- log_function: t.Optional[t.Callable[[str], None]] = None,
- ignore_errors: bool = True,
- ) -> None:
- """Finds all the templates the loader can find, compiles them
- and stores them in `target`. If `zip` is `None`, instead of in a
- zipfile, the templates will be stored in a directory.
- By default a deflate zip algorithm is used. To switch to
- the stored algorithm, `zip` can be set to ``'stored'``.
-
- `extensions` and `filter_func` are passed to :meth:`list_templates`.
- Each template returned will be compiled to the target folder or
- zipfile.
-
- By default template compilation errors are ignored. In case a
- log function is provided, errors are logged. If you want template
- syntax errors to abort the compilation you can set `ignore_errors`
- to `False` and you will get an exception on syntax errors.
-
- .. versionadded:: 2.4
- """
- from .loaders import ModuleLoader
-
- if log_function is None:
-
- def log_function(x: str) -> None:
- pass
-
- assert log_function is not None
- assert self.loader is not None, "No loader configured."
-
- def write_file(filename: str, data: str) -> None:
- if zip:
- info = ZipInfo(filename)
- info.external_attr = 0o755 << 16
- zip_file.writestr(info, data)
- else:
- with open(os.path.join(target, filename), "wb") as f:
- f.write(data.encode("utf8"))
-
- if zip is not None:
- from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED, ZIP_STORED
-
- zip_file = ZipFile(
- target, "w", dict(deflated=ZIP_DEFLATED, stored=ZIP_STORED)[zip]
- )
- log_function(f"Compiling into Zip archive {target!r}")
- else:
- if not os.path.isdir(target):
- os.makedirs(target)
- log_function(f"Compiling into folder {target!r}")
-
- try:
- for name in self.list_templates(extensions, filter_func):
- source, filename, _ = self.loader.get_source(self, name)
- try:
- code = self.compile(source, name, filename, True, True)
- except TemplateSyntaxError as e:
- if not ignore_errors:
- raise
- log_function(f'Could not compile "{name}": {e}')
- continue
-
- filename = ModuleLoader.get_module_filename(name)
-
- write_file(filename, code)
- log_function(f'Compiled "{name}" as {filename}')
- finally:
- if zip:
- zip_file.close()
-
- log_function("Finished compiling templates")
-
- def list_templates(
- self,
- extensions: t.Optional[t.Collection[str]] = None,
- filter_func: t.Optional[t.Callable[[str], bool]] = None,
- ) -> t.List[str]:
- """Returns a list of templates for this environment. This requires
- that the loader supports the loader's
- :meth:`~BaseLoader.list_templates` method.
-
- If there are other files in the template folder besides the
- actual templates, the returned list can be filtered. There are two
- ways: either `extensions` is set to a list of file extensions for
- templates, or a `filter_func` can be provided which is a callable that
- is passed a template name and should return `True` if it should end up
- in the result list.
-
- If the loader does not support that, a :exc:`TypeError` is raised.
-
- .. versionadded:: 2.4
- """
- assert self.loader is not None, "No loader configured."
- names = self.loader.list_templates()
-
- if extensions is not None:
- if filter_func is not None:
- raise TypeError(
- "either extensions or filter_func can be passed, but not both"
- )
-
- def filter_func(x: str) -> bool:
- return "." in x and x.rsplit(".", 1)[1] in extensions # type: ignore
-
- if filter_func is not None:
- names = [name for name in names if filter_func(name)]
-
- return names
-
- def handle_exception(self, source: t.Optional[str] = None) -> "te.NoReturn":
- """Exception handling helper. This is used internally to either raise
- rewritten exceptions or return a rendered traceback for the template.
- """
- from .debug import rewrite_traceback_stack
-
- raise rewrite_traceback_stack(source=source)
-
- def join_path(self, template: str, parent: str) -> str:
- """Join a template with the parent. By default all the lookups are
- relative to the loader root so this method returns the `template`
- parameter unchanged, but if the paths should be relative to the
- parent template, this function can be used to calculate the real
- template name.
-
- Subclasses may override this method and implement template path
- joining here.
- """
- return template
-
- @internalcode
- def _load_template(
- self, name: str, globals: t.Optional[t.MutableMapping[str, t.Any]]
- ) -> "Template":
- if self.loader is None:
- raise TypeError("no loader for this environment specified")
- cache_key = (weakref.ref(self.loader), name)
- if self.cache is not None:
- template = self.cache.get(cache_key)
- if template is not None and (
- not self.auto_reload or template.is_up_to_date
- ):
- # template.globals is a ChainMap, modifying it will only
- # affect the template, not the environment globals.
- if globals:
- template.globals.update(globals)
-
- return template
-
- template = self.loader.load(self, name, self.make_globals(globals))
-
- if self.cache is not None:
- self.cache[cache_key] = template
- return template
-
- @internalcode
- def get_template(
- self,
- name: t.Union[str, "Template"],
- parent: t.Optional[str] = None,
- globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
- ) -> "Template":
- """Load a template by name with :attr:`loader` and return a
- :class:`Template`. If the template does not exist a
- :exc:`TemplateNotFound` exception is raised.
-
- :param name: Name of the template to load. When loading
- templates from the filesystem, "/" is used as the path
- separator, even on Windows.
- :param parent: The name of the parent template importing this
- template. :meth:`join_path` can be used to implement name
- transformations with this.
- :param globals: Extend the environment :attr:`globals` with
- these extra variables available for all renders of this
- template. If the template has already been loaded and
- cached, its globals are updated with any new items.
-
- .. versionchanged:: 3.0
- If a template is loaded from cache, ``globals`` will update
- the template's globals instead of ignoring the new values.
-
- .. versionchanged:: 2.4
- If ``name`` is a :class:`Template` object it is returned
- unchanged.
- """
- if isinstance(name, Template):
- return name
- if parent is not None:
- name = self.join_path(name, parent)
-
- return self._load_template(name, globals)
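-
- # Typical lookup flow, as a sketch (the loader root is illustrative):
- #
- # from jinja2 import Environment, FileSystemLoader
- # env = Environment(loader=FileSystemLoader("templates"))
- # tmpl = env.get_template("index.html", globals={"site": "example"})
- # html = tmpl.render()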
-
- @internalcode
- def select_template(
- self,
- names: t.Iterable[t.Union[str, "Template"]],
- parent: t.Optional[str] = None,
- globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
- ) -> "Template":
- """Like :meth:`get_template`, but tries loading multiple names.
- If none of the names can be loaded a :exc:`TemplatesNotFound`
- exception is raised.
-
- :param names: List of template names to try loading in order.
- :param parent: The name of the parent template importing this
- template. :meth:`join_path` can be used to implement name
- transformations with this.
- :param globals: Extend the environment :attr:`globals` with
- these extra variables available for all renders of this
- template. If the template has already been loaded and
- cached, its globals are updated with any new items.
-
- .. versionchanged:: 3.0
- If a template is loaded from cache, ``globals`` will update
- the template's globals instead of ignoring the new values.
-
- .. versionchanged:: 2.11
- If ``names`` is :class:`Undefined`, an :exc:`UndefinedError`
- is raised instead. If no templates were found and ``names``
- contains :class:`Undefined`, the message is more helpful.
-
- .. versionchanged:: 2.4
- If ``names`` contains a :class:`Template` object it is
- returned unchanged.
-
- .. versionadded:: 2.3
- """
- if isinstance(names, Undefined):
- names._fail_with_undefined_error()
-
- if not names:
- raise TemplatesNotFound(
- message="Tried to select from an empty list of templates."
- )
-
- for name in names:
- if isinstance(name, Template):
- return name
- if parent is not None:
- name = self.join_path(name, parent)
- try:
- return self._load_template(name, globals)
- except (TemplateNotFound, UndefinedError):
- pass
- raise TemplatesNotFound(names) # type: ignore
-
- @internalcode
- def get_or_select_template(
- self,
- template_name_or_list: t.Union[
- str, "Template", t.List[t.Union[str, "Template"]]
- ],
- parent: t.Optional[str] = None,
- globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
- ) -> "Template":
- """Use :meth:`select_template` if an iterable of template names
- is given, or :meth:`get_template` if one name is given.
-
- .. versionadded:: 2.3
- """
- if isinstance(template_name_or_list, (str, Undefined)):
- return self.get_template(template_name_or_list, parent, globals)
- elif isinstance(template_name_or_list, Template):
- return template_name_or_list
- return self.select_template(template_name_or_list, parent, globals)
-
- def from_string(
- self,
- source: t.Union[str, nodes.Template],
- globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
- template_class: t.Optional[t.Type["Template"]] = None,
- ) -> "Template":
- """Load a template from a source string without using
- :attr:`loader`.
-
- :param source: Jinja source to compile into a template.
- :param globals: Extend the environment :attr:`globals` with
- these extra variables available for all renders of this
- template. If the template has already been loaded and
- cached, its globals are updated with any new items.
- :param template_class: Return an instance of this
- :class:`Template` class.
- """
- gs = self.make_globals(globals)
- cls = template_class or self.template_class
- return cls.from_code(self, self.compile(source), gs, None)
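-
- # Quick sketch (default Environment assumed):
- #
- # tmpl = env.from_string("Hello {{ name }}!", globals={"name": "World"})
- # tmpl.render() # -> "Hello World!"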
-
- def make_globals(
- self, d: t.Optional[t.MutableMapping[str, t.Any]]
- ) -> t.MutableMapping[str, t.Any]:
- """Make the globals map for a template. Any given template
- globals overlay the environment :attr:`globals`.
-
- Returns a :class:`collections.ChainMap`. This allows any changes
- to a template's globals to only affect that template, while
- changes to the environment's globals are still reflected.
- However, avoid modifying any globals after a template is loaded.
-
- :param d: Dict of template-specific globals.
-
- .. versionchanged:: 3.0
- Use :class:`collections.ChainMap` to always prevent mutating
- environment globals.
- """
- if d is None:
- d = {}
-
- return ChainMap(d, self.globals)
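-
- # Sketch of the ChainMap overlay described above (default Environment):
- #
- # env.globals["site"] = "example"
- # g = env.make_globals({"page": "home"})
- # g["site"] = "other" # writes land in the template layer only
- # assert env.globals["site"] == "example"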
-
-
-class Template:
- """A compiled template that can be rendered.
-
- Use the methods on :class:`Environment` to create or load templates.
- The environment is used to configure how templates are compiled and
- behave.
-
- It is also possible to create a template object directly. This is
- not usually recommended. The constructor takes most of the same
- arguments as :class:`Environment`. All templates created with the
- same environment arguments share the same ephemeral ``Environment``
- instance behind the scenes.
-
- A template object should be considered immutable. Modifications on
- the object are not supported.
- """
-
- #: Type of environment to create when creating a template directly
- #: rather than through an existing environment.
- environment_class: t.Type[Environment] = Environment
-
- environment: Environment
- globals: t.MutableMapping[str, t.Any]
- name: t.Optional[str]
- filename: t.Optional[str]
- blocks: t.Dict[str, t.Callable[[Context], t.Iterator[str]]]
- root_render_func: t.Callable[[Context], t.Iterator[str]]
- _module: t.Optional["TemplateModule"]
- _debug_info: str
- _uptodate: t.Optional[t.Callable[[], bool]]
-
- def __new__(
- cls,
- source: t.Union[str, nodes.Template],
- block_start_string: str = BLOCK_START_STRING,
- block_end_string: str = BLOCK_END_STRING,
- variable_start_string: str = VARIABLE_START_STRING,
- variable_end_string: str = VARIABLE_END_STRING,
- comment_start_string: str = COMMENT_START_STRING,
- comment_end_string: str = COMMENT_END_STRING,
- line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX,
- line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX,
- trim_blocks: bool = TRIM_BLOCKS,
- lstrip_blocks: bool = LSTRIP_BLOCKS,
- newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE,
- keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE,
- extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (),
- optimized: bool = True,
- undefined: t.Type[Undefined] = Undefined,
- finalize: t.Optional[t.Callable[..., t.Any]] = None,
- autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False,
- enable_async: bool = False,
- ) -> t.Any: # it returns a `Template`, but this breaks the sphinx build...
- env = get_spontaneous_environment(
- cls.environment_class, # type: ignore
- block_start_string,
- block_end_string,
- variable_start_string,
- variable_end_string,
- comment_start_string,
- comment_end_string,
- line_statement_prefix,
- line_comment_prefix,
- trim_blocks,
- lstrip_blocks,
- newline_sequence,
- keep_trailing_newline,
- frozenset(extensions),
- optimized,
- undefined, # type: ignore
- finalize,
- autoescape,
- None,
- 0,
- False,
- None,
- enable_async,
- )
- return env.from_string(source, template_class=cls)
-
- @classmethod
- def from_code(
- cls,
- environment: Environment,
- code: CodeType,
- globals: t.MutableMapping[str, t.Any],
- uptodate: t.Optional[t.Callable[[], bool]] = None,
- ) -> "Template":
- """Creates a template object from compiled code and the globals. This
- is used by the loaders and environment to create a template object.
- """
- namespace = {"environment": environment, "__file__": code.co_filename}
- exec(code, namespace)
- rv = cls._from_namespace(environment, namespace, globals)
- rv._uptodate = uptodate
- return rv
-
- @classmethod
- def from_module_dict(
- cls,
- environment: Environment,
- module_dict: t.MutableMapping[str, t.Any],
- globals: t.MutableMapping[str, t.Any],
- ) -> "Template":
- """Creates a template object from a module. This is used by the
- module loader to create a template object.
-
- .. versionadded:: 2.4
- """
- return cls._from_namespace(environment, module_dict, globals)
-
- @classmethod
- def _from_namespace(
- cls,
- environment: Environment,
- namespace: t.MutableMapping[str, t.Any],
- globals: t.MutableMapping[str, t.Any],
- ) -> "Template":
- t: "Template" = object.__new__(cls)
- t.environment = environment
- t.globals = globals
- t.name = namespace["name"]
- t.filename = namespace["__file__"]
- t.blocks = namespace["blocks"]
-
- # render function and module
- t.root_render_func = namespace["root"] # type: ignore
- t._module = None
-
- # debug and loader helpers
- t._debug_info = namespace["debug_info"]
- t._uptodate = None
-
- # store the reference
- namespace["environment"] = environment
- namespace["__jinja_template__"] = t
-
- return t
-
- def render(self, *args: t.Any, **kwargs: t.Any) -> str:
- """This method accepts the same arguments as the `dict` constructor:
- A dict, a dict subclass or some keyword arguments. If no arguments
- are given the context will be empty. These two calls do the same::
-
- template.render(knights='that say nih')
- template.render({'knights': 'that say nih'})
-
- This will return the rendered template as a string.
- """
- if self.environment.is_async:
- import asyncio
-
- close = False
-
- try:
- loop = asyncio.get_running_loop()
- except RuntimeError:
- loop = asyncio.new_event_loop()
- close = True
-
- try:
- return loop.run_until_complete(self.render_async(*args, **kwargs))
- finally:
- if close:
- loop.close()
-
- ctx = self.new_context(dict(*args, **kwargs))
-
- try:
- return self.environment.concat(self.root_render_func(ctx)) # type: ignore
- except Exception:
- self.environment.handle_exception()
-
- async def render_async(self, *args: t.Any, **kwargs: t.Any) -> str:
- """This works similar to :meth:`render` but returns a coroutine
- that when awaited returns the entire rendered template string. This
- requires the async feature to be enabled.
-
- Example usage::
-
- await template.render_async(knights='that say nih; asynchronously')
- """
- if not self.environment.is_async:
- raise RuntimeError(
- "The environment was not created with async mode enabled."
- )
-
- ctx = self.new_context(dict(*args, **kwargs))
-
- try:
- return self.environment.concat( # type: ignore
- [n async for n in self.root_render_func(ctx)] # type: ignore
- )
- except Exception:
- return self.environment.handle_exception()
-
- def stream(self, *args: t.Any, **kwargs: t.Any) -> "TemplateStream":
- """Works exactly like :meth:`generate` but returns a
- :class:`TemplateStream`.
- """
- return TemplateStream(self.generate(*args, **kwargs))
-
- def generate(self, *args: t.Any, **kwargs: t.Any) -> t.Iterator[str]:
- """For very large templates it can be useful to not render the whole
- template at once but evaluate each statement after another and yield
- piece for piece. This method basically does exactly that and returns
- a generator that yields one item after another as strings.
-
- It accepts the same arguments as :meth:`render`.
- """
- if self.environment.is_async:
- import asyncio
-
- async def to_list() -> t.List[str]:
- return [x async for x in self.generate_async(*args, **kwargs)]
-
- yield from asyncio.run(to_list())
- return
-
- ctx = self.new_context(dict(*args, **kwargs))
-
- try:
- yield from self.root_render_func(ctx) # type: ignore
- except Exception:
- yield self.environment.handle_exception()
-
- async def generate_async(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.AsyncIterator[str]:
- """An async version of :meth:`generate`. Works very similarly but
- returns an async iterator instead.
- """
- if not self.environment.is_async:
- raise RuntimeError(
- "The environment was not created with async mode enabled."
- )
-
- ctx = self.new_context(dict(*args, **kwargs))
-
- try:
- async for event in self.root_render_func(ctx): # type: ignore
- yield event
- except Exception:
- yield self.environment.handle_exception()
-
- def new_context(
- self,
- vars: t.Optional[t.Dict[str, t.Any]] = None,
- shared: bool = False,
- locals: t.Optional[t.Mapping[str, t.Any]] = None,
- ) -> Context:
- """Create a new :class:`Context` for this template. The vars
- provided will be passed to the template. By default the globals
- are added to the context. If shared is set to `True` the data
- is passed as is to the context without adding the globals.
-
- `locals` can be a dict of local variables for internal usage.
- """
- return new_context(
- self.environment, self.name, self.blocks, vars, shared, self.globals, locals
- )
-
- def make_module(
- self,
- vars: t.Optional[t.Dict[str, t.Any]] = None,
- shared: bool = False,
- locals: t.Optional[t.Mapping[str, t.Any]] = None,
- ) -> "TemplateModule":
- """This method works like the :attr:`module` attribute when called
- without arguments but it will evaluate the template on every call
- rather than caching it. It's also possible to provide
- a dict which is then used as context. The arguments are the same
- as for the :meth:`new_context` method.
- """
- ctx = self.new_context(vars, shared, locals)
- return TemplateModule(self, ctx)
-
- async def make_module_async(
- self,
- vars: t.Optional[t.Dict[str, t.Any]] = None,
- shared: bool = False,
- locals: t.Optional[t.Mapping[str, t.Any]] = None,
- ) -> "TemplateModule":
- """As template module creation can invoke template code for
- asynchronous executions this method must be used instead of the
- normal :meth:`make_module` one. Likewise the module attribute
- becomes unavailable in async mode.
- """
- ctx = self.new_context(vars, shared, locals)
- return TemplateModule(
- self, ctx, [x async for x in self.root_render_func(ctx)] # type: ignore
- )
-
- @internalcode
- def _get_default_module(self, ctx: t.Optional[Context] = None) -> "TemplateModule":
- """If a context is passed in, this means that the template was
- imported. Imported templates have access to the current
- template's globals by default, but they can only be accessed via
- the context during runtime.
-
- If there are new globals, we need to create a new module because
- the cached module is already rendered and will not have access
- to globals from the current context. This new module is not
- cached because the template can be imported elsewhere, and it
- should have access to only the current template's globals.
- """
- if self.environment.is_async:
- raise RuntimeError("Module is not available in async mode.")
-
- if ctx is not None:
- keys = ctx.globals_keys - self.globals.keys()
-
- if keys:
- return self.make_module({k: ctx.parent[k] for k in keys})
-
- if self._module is None:
- self._module = self.make_module()
-
- return self._module
-
- async def _get_default_module_async(
- self, ctx: t.Optional[Context] = None
- ) -> "TemplateModule":
- if ctx is not None:
- keys = ctx.globals_keys - self.globals.keys()
-
- if keys:
- return await self.make_module_async({k: ctx.parent[k] for k in keys})
-
- if self._module is None:
- self._module = await self.make_module_async()
-
- return self._module
-
- @property
- def module(self) -> "TemplateModule":
- """The template as module. This is used for imports in the
- template runtime but is also useful if one wants to access
- exported template variables from the Python layer:
-
- >>> t = Template('{% macro foo() %}42{% endmacro %}23')
- >>> str(t.module)
- '23'
- >>> t.module.foo() == u'42'
- True
-
- This attribute is not available if async mode is enabled.
- """
- return self._get_default_module()
-
- def get_corresponding_lineno(self, lineno: int) -> int:
- """Return the source line number of a line number in the
- generated bytecode as they are not in sync.
- """
- for template_line, code_line in reversed(self.debug_info):
- if code_line <= lineno:
- return template_line
- return 1
-
- @property
- def is_up_to_date(self) -> bool:
- """If this variable is `False` there is a newer version available."""
- if self._uptodate is None:
- return True
- return self._uptodate()
-
- @property
- def debug_info(self) -> t.List[t.Tuple[int, int]]:
- """The debug info mapping."""
- if self._debug_info:
- return [
- tuple(map(int, x.split("="))) # type: ignore
- for x in self._debug_info.split("&")
- ]
-
- return []
-
- def __repr__(self) -> str:
- if self.name is None:
- name = f"memory:{id(self):x}"
- else:
- name = repr(self.name)
- return f"<{type(self).__name__} {name}>"
-
-
-class TemplateModule:
- """Represents an imported template. All the exported names of the
- template are available as attributes on this object. Additionally
- converting it into a string renders the contents.
- """
-
- def __init__(
- self,
- template: Template,
- context: Context,
- body_stream: t.Optional[t.Iterable[str]] = None,
- ) -> None:
- if body_stream is None:
- if context.environment.is_async:
- raise RuntimeError(
- "Async mode requires a body stream to be passed to"
- " a template module. Use the async methods of the"
- " API you are using."
- )
-
- body_stream = list(template.root_render_func(context)) # type: ignore
-
- self._body_stream = body_stream
- self.__dict__.update(context.get_exported())
- self.__name__ = template.name
-
- def __html__(self) -> Markup:
- return Markup(concat(self._body_stream))
-
- def __str__(self) -> str:
- return concat(self._body_stream)
-
- def __repr__(self) -> str:
- if self.__name__ is None:
- name = f"memory:{id(self):x}"
- else:
- name = repr(self.__name__)
- return f"<{type(self).__name__} {name}>"
-
-
-class TemplateExpression:
- """The :meth:`jinja2.Environment.compile_expression` method returns an
- instance of this object. It encapsulates the expression-like access
- to the template with an expression it wraps.
- """
-
- def __init__(self, template: Template, undefined_to_none: bool) -> None:
- self._template = template
- self._undefined_to_none = undefined_to_none
-
- def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Optional[t.Any]:
- context = self._template.new_context(dict(*args, **kwargs))
- consume(self._template.root_render_func(context)) # type: ignore
- rv = context.vars["result"]
- if self._undefined_to_none and isinstance(rv, Undefined):
- rv = None
- return rv
-
-
-class TemplateStream:
- """A template stream works pretty much like an ordinary python generator
- but it can buffer multiple items to reduce the number of total iterations.
- Per default the output is unbuffered which means that for every unbuffered
- instruction in the template one string is yielded.
-
- If buffering is enabled with a buffer size of 5, five items are combined
- into a new string. This is mainly useful if you are streaming
- big templates to a client via WSGI which flushes after each iteration.
- """
-
- def __init__(self, gen: t.Iterator[str]) -> None:
- self._gen = gen
- self.disable_buffering()
-
- def dump(
- self,
- fp: t.Union[str, t.IO],
- encoding: t.Optional[str] = None,
- errors: t.Optional[str] = "strict",
- ) -> None:
- """Dump the complete stream into a file or file-like object.
- By default strings are written; if you want to encode
- before writing, specify an `encoding`.
-
- Example usage::
-
- Template('Hello {{ name }}!').stream(name='foo').dump('hello.html')
- """
- close = False
-
- if isinstance(fp, str):
- if encoding is None:
- encoding = "utf-8"
-
- fp = open(fp, "wb")
- close = True
- try:
- if encoding is not None:
- iterable = (x.encode(encoding, errors) for x in self) # type: ignore
- else:
- iterable = self # type: ignore
-
- if hasattr(fp, "writelines"):
- fp.writelines(iterable)
- else:
- for item in iterable:
- fp.write(item)
- finally:
- if close:
- fp.close()
-
- def disable_buffering(self) -> None:
- """Disable the output buffering."""
- self._next = partial(next, self._gen)
- self.buffered = False
-
- def _buffered_generator(self, size: int) -> t.Iterator[str]:
- buf: t.List[str] = []
- c_size = 0
- push = buf.append
-
- while True:
- try:
- while c_size < size:
- c = next(self._gen)
- push(c)
- if c:
- c_size += 1
- except StopIteration:
- if not c_size:
- return
- yield concat(buf)
- del buf[:]
- c_size = 0
-
- def enable_buffering(self, size: int = 5) -> None:
- """Enable buffering. Buffer `size` items before yielding them."""
- if size <= 1:
- raise ValueError("buffer size too small")
-
- self.buffered = True
- self._next = partial(next, self._buffered_generator(size))
-
- def __iter__(self) -> "TemplateStream":
- return self
-
- def __next__(self) -> str:
- return self._next() # type: ignore
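-
- # Buffered streaming sketch (names here are illustrative):
- #
- # stream = Template("{% for i in range(10) %}{{ i }} {% endfor %}").stream()
- # stream.enable_buffering(size=3) # combine roughly three items per chunk
- # for chunk in stream:
- #     write_to_client(chunk) # hypothetical sink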
-
-
-# hook in default template class. if anyone reads this comment: ignore that
-# it's possible to use custom templates ;-)
-Environment.template_class = Template
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/smartquotes.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/smartquotes.py
deleted file mode 100644
index c98fbd71e7d2e644ca7c6ac95827962342326059..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/smartquotes.py
+++ /dev/null
@@ -1,202 +0,0 @@
-"""Convert straight quotation marks to typographic ones
-"""
-from __future__ import annotations
-
-import re
-from typing import Any
-
-from ..common.utils import charCodeAt, isMdAsciiPunct, isPunctChar, isWhiteSpace
-from ..token import Token
-from .state_core import StateCore
-
-QUOTE_TEST_RE = re.compile(r"['\"]")
-QUOTE_RE = re.compile(r"['\"]")
-APOSTROPHE = "\u2019" # ’
-
-
-def replaceAt(string: str, index: int, ch: str) -> str:
- # When the index is negative, the behavior differs from the js version,
- # but in practice the index will never be negative (hence the assert).
- assert index >= 0
- return string[:index] + ch + string[index + 1 :]
-
-
-def process_inlines(tokens: list[Token], state: StateCore) -> None:
- stack: list[dict[str, Any]] = []
-
- for i, token in enumerate(tokens):
- thisLevel = token.level
-
- j = 0
- for j in range(len(stack))[::-1]:
- if stack[j]["level"] <= thisLevel:
- break
- else:
- # When the loop is terminated without a "break".
- # Subtract 1 to get the same index as the js version.
- j -= 1
-
- stack = stack[: j + 1]
-
- if token.type != "text":
- continue
-
- text = token.content
- pos = 0
- maximum = len(text)
-
- while pos < maximum:
- goto_outer = False
- lastIndex = pos
- t = QUOTE_RE.search(text[lastIndex:])
- if not t:
- break
-
- canOpen = canClose = True
- pos = t.start(0) + lastIndex + 1
- isSingle = t.group(0) == "'"
-
- # Find previous character,
- # default to space if it's the beginning of the line
- lastChar: None | int = 0x20
-
- if t.start(0) + lastIndex - 1 >= 0:
- lastChar = charCodeAt(text, t.start(0) + lastIndex - 1)
- else:
- for j in range(i)[::-1]:
- if tokens[j].type == "softbreak" or tokens[j].type == "hardbreak":
- break
- # should skip all tokens except 'text', 'html_inline' or 'code_inline'
- if not tokens[j].content:
- continue
-
- lastChar = charCodeAt(tokens[j].content, len(tokens[j].content) - 1)
- break
-
- # Find next character,
- # default to space if it's the end of the line
- nextChar: None | int = 0x20
-
- if pos < maximum:
- nextChar = charCodeAt(text, pos)
- else:
- for j in range(i + 1, len(tokens)):
- # nextChar defaults to 0x20
- if tokens[j].type == "softbreak" or tokens[j].type == "hardbreak":
- break
- # should skip all tokens except 'text', 'html_inline' or 'code_inline'
- if not tokens[j].content:
- continue
-
- nextChar = charCodeAt(tokens[j].content, 0)
- break
-
- isLastPunctChar = lastChar is not None and (
- isMdAsciiPunct(lastChar) or isPunctChar(chr(lastChar))
- )
- isNextPunctChar = nextChar is not None and (
- isMdAsciiPunct(nextChar) or isPunctChar(chr(nextChar))
- )
-
- isLastWhiteSpace = lastChar is not None and isWhiteSpace(lastChar)
- isNextWhiteSpace = nextChar is not None and isWhiteSpace(nextChar)
-
- if isNextWhiteSpace: # noqa: SIM114
- canOpen = False
- elif isNextPunctChar and not (isLastWhiteSpace or isLastPunctChar):
- canOpen = False
-
- if isLastWhiteSpace: # noqa: SIM114
- canClose = False
- elif isLastPunctChar and not (isNextWhiteSpace or isNextPunctChar):
- canClose = False
-
- if nextChar == 0x22 and t.group(0) == '"': # 0x22: " # noqa: SIM102
- if (
- lastChar is not None and lastChar >= 0x30 and lastChar <= 0x39
- ): # 0x30: 0, 0x39: 9
- # special case: 1"" - count first quote as an inch
- canClose = canOpen = False
-
- if canOpen and canClose:
- # Replace quotes in the middle of punctuation sequence, but not
- # in the middle of the words, i.e.:
- #
- # 1. foo " bar " baz - not replaced
- # 2. foo-"-bar-"-baz - replaced
- # 3. foo"bar"baz - not replaced
- canOpen = isLastPunctChar
- canClose = isNextPunctChar
-
- if not canOpen and not canClose:
- # middle of word
- if isSingle:
- token.content = replaceAt(
- token.content, t.start(0) + lastIndex, APOSTROPHE
- )
- continue
-
- if canClose:
- # this could be a closing quote, rewind the stack to get a match
- for j in range(len(stack))[::-1]:
- item = stack[j]
- if stack[j]["level"] < thisLevel:
- break
- if item["single"] == isSingle and stack[j]["level"] == thisLevel:
- item = stack[j]
-
- if isSingle:
- openQuote = state.md.options.quotes[2]
- closeQuote = state.md.options.quotes[3]
- else:
- openQuote = state.md.options.quotes[0]
- closeQuote = state.md.options.quotes[1]
-
- # replace token.content *before* tokens[item.token].content,
- # because, if they are pointing at the same token, replaceAt
- # could mess up indices when quote length != 1
- token.content = replaceAt(
- token.content, t.start(0) + lastIndex, closeQuote
- )
- tokens[item["token"]].content = replaceAt(
- tokens[item["token"]].content, item["pos"], openQuote
- )
-
- pos += len(closeQuote) - 1
- if item["token"] == i:
- pos += len(openQuote) - 1
-
- text = token.content
- maximum = len(text)
-
- stack = stack[:j]
- goto_outer = True
- break
- if goto_outer:
- goto_outer = False
- continue
-
- if canOpen:
- stack.append(
- {
- "token": i,
- "pos": t.start(0) + lastIndex,
- "single": isSingle,
- "level": thisLevel,
- }
- )
- elif canClose and isSingle:
- token.content = replaceAt(
- token.content, t.start(0) + lastIndex, APOSTROPHE
- )
-
-
-def smartquotes(state: StateCore) -> None:
- if not state.md.options.typographer:
- return
-
- for token in state.tokens:
- if token.type != "inline" or not QUOTE_RE.search(token.content):
- continue
- if token.children is not None:
- process_inlines(token.children, state)
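-
-
-# A usage sketch: this rule only runs when the ``typographer`` option is
-# enabled (standard markdown-it-py API):
-#
-# from markdown_it import MarkdownIt
-# md = MarkdownIt("commonmark", {"typographer": True})
-# md.enable(["replacements", "smartquotes"])
-# md.render('"Smart" quotes') # -> <p>“Smart” quotes</p>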
diff --git a/spaces/declare-lab/tango/diffusers/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py b/spaces/declare-lab/tango/diffusers/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
deleted file mode 100644
index 22e4271eba3aa859e4220b6f69e81c06550e9548..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Conversion script for the NCSNPP checkpoints. """
-
-import argparse
-import json
-
-import torch
-
-from diffusers import ScoreSdeVePipeline, ScoreSdeVeScheduler, UNet2DModel
-
-
-def convert_ncsnpp_checkpoint(checkpoint, config):
- """
- Takes a state dict and the path to
- """
- new_model_architecture = UNet2DModel(**config)
- new_model_architecture.time_proj.W.data = checkpoint["all_modules.0.W"].data
- new_model_architecture.time_proj.weight.data = checkpoint["all_modules.0.W"].data
- new_model_architecture.time_embedding.linear_1.weight.data = checkpoint["all_modules.1.weight"].data
- new_model_architecture.time_embedding.linear_1.bias.data = checkpoint["all_modules.1.bias"].data
-
- new_model_architecture.time_embedding.linear_2.weight.data = checkpoint["all_modules.2.weight"].data
- new_model_architecture.time_embedding.linear_2.bias.data = checkpoint["all_modules.2.bias"].data
-
- new_model_architecture.conv_in.weight.data = checkpoint["all_modules.3.weight"].data
- new_model_architecture.conv_in.bias.data = checkpoint["all_modules.3.bias"].data
-
- new_model_architecture.conv_norm_out.weight.data = checkpoint[list(checkpoint.keys())[-4]].data
- new_model_architecture.conv_norm_out.bias.data = checkpoint[list(checkpoint.keys())[-3]].data
- new_model_architecture.conv_out.weight.data = checkpoint[list(checkpoint.keys())[-2]].data
- new_model_architecture.conv_out.bias.data = checkpoint[list(checkpoint.keys())[-1]].data
-
- module_index = 4
-
- def set_attention_weights(new_layer, old_checkpoint, index):
- new_layer.query.weight.data = old_checkpoint[f"all_modules.{index}.NIN_0.W"].data.T
- new_layer.key.weight.data = old_checkpoint[f"all_modules.{index}.NIN_1.W"].data.T
- new_layer.value.weight.data = old_checkpoint[f"all_modules.{index}.NIN_2.W"].data.T
-
- new_layer.query.bias.data = old_checkpoint[f"all_modules.{index}.NIN_0.b"].data
- new_layer.key.bias.data = old_checkpoint[f"all_modules.{index}.NIN_1.b"].data
- new_layer.value.bias.data = old_checkpoint[f"all_modules.{index}.NIN_2.b"].data
-
- new_layer.proj_attn.weight.data = old_checkpoint[f"all_modules.{index}.NIN_3.W"].data.T
- new_layer.proj_attn.bias.data = old_checkpoint[f"all_modules.{index}.NIN_3.b"].data
-
- new_layer.group_norm.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.weight"].data
- new_layer.group_norm.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.bias"].data
-
- def set_resnet_weights(new_layer, old_checkpoint, index):
- new_layer.conv1.weight.data = old_checkpoint[f"all_modules.{index}.Conv_0.weight"].data
- new_layer.conv1.bias.data = old_checkpoint[f"all_modules.{index}.Conv_0.bias"].data
- new_layer.norm1.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.weight"].data
- new_layer.norm1.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.bias"].data
-
- new_layer.conv2.weight.data = old_checkpoint[f"all_modules.{index}.Conv_1.weight"].data
- new_layer.conv2.bias.data = old_checkpoint[f"all_modules.{index}.Conv_1.bias"].data
- new_layer.norm2.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_1.weight"].data
- new_layer.norm2.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_1.bias"].data
-
- new_layer.time_emb_proj.weight.data = old_checkpoint[f"all_modules.{index}.Dense_0.weight"].data
- new_layer.time_emb_proj.bias.data = old_checkpoint[f"all_modules.{index}.Dense_0.bias"].data
-
- if new_layer.in_channels != new_layer.out_channels or new_layer.up or new_layer.down:
- new_layer.conv_shortcut.weight.data = old_checkpoint[f"all_modules.{index}.Conv_2.weight"].data
- new_layer.conv_shortcut.bias.data = old_checkpoint[f"all_modules.{index}.Conv_2.bias"].data
-
- for i, block in enumerate(new_model_architecture.downsample_blocks):
- has_attentions = hasattr(block, "attentions")
- for j in range(len(block.resnets)):
- set_resnet_weights(block.resnets[j], checkpoint, module_index)
- module_index += 1
- if has_attentions:
- set_attention_weights(block.attentions[j], checkpoint, module_index)
- module_index += 1
-
- if hasattr(block, "downsamplers") and block.downsamplers is not None:
- set_resnet_weights(block.resnet_down, checkpoint, module_index)
- module_index += 1
- block.skip_conv.weight.data = checkpoint[f"all_modules.{module_index}.Conv_0.weight"].data
- block.skip_conv.bias.data = checkpoint[f"all_modules.{module_index}.Conv_0.bias"].data
- module_index += 1
-
- set_resnet_weights(new_model_architecture.mid_block.resnets[0], checkpoint, module_index)
- module_index += 1
- set_attention_weights(new_model_architecture.mid_block.attentions[0], checkpoint, module_index)
- module_index += 1
- set_resnet_weights(new_model_architecture.mid_block.resnets[1], checkpoint, module_index)
- module_index += 1
-
- for i, block in enumerate(new_model_architecture.up_blocks):
- has_attentions = hasattr(block, "attentions")
- for j in range(len(block.resnets)):
- set_resnet_weights(block.resnets[j], checkpoint, module_index)
- module_index += 1
- if has_attentions:
- set_attention_weights(
- block.attentions[0], checkpoint, module_index
- ) # why can there only be a single attention layer for up?
- module_index += 1
-
- if hasattr(block, "resnet_up") and block.resnet_up is not None:
- block.skip_norm.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data
- block.skip_norm.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data
- module_index += 1
- block.skip_conv.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data
- block.skip_conv.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data
- module_index += 1
- set_resnet_weights(block.resnet_up, checkpoint, module_index)
- module_index += 1
-
- new_model_architecture.conv_norm_out.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data
- new_model_architecture.conv_norm_out.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data
- module_index += 1
- new_model_architecture.conv_out.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data
- new_model_architecture.conv_out.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data
-
- return new_model_architecture.state_dict()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--checkpoint_path",
- default="/Users/arthurzucker/Work/diffusers/ArthurZ/diffusion_pytorch_model.bin",
- type=str,
- required=False,
- help="Path to the checkpoint to convert.",
- )
-
- parser.add_argument(
- "--config_file",
- default="/Users/arthurzucker/Work/diffusers/ArthurZ/config.json",
- type=str,
- required=False,
- help="The config json file corresponding to the architecture.",
- )
-
- parser.add_argument(
- "--dump_path",
- default="/Users/arthurzucker/Work/diffusers/ArthurZ/diffusion_model_new.pt",
- type=str,
- required=False,
- help="Path to the output model.",
- )
-
- args = parser.parse_args()
-
- checkpoint = torch.load(args.checkpoint_path, map_location="cpu")
-
- with open(args.config_file) as f:
- config = json.loads(f.read())
-
- converted_checkpoint = convert_ncsnpp_checkpoint(
- checkpoint,
- config,
- )
-
- if "sde" in config:
- del config["sde"]
-
- model = UNet2DModel(**config)
- model.load_state_dict(converted_checkpoint)
-
- try:
- scheduler = ScoreSdeVeScheduler.from_config("/".join(args.checkpoint_path.split("/")[:-1]))
-
- pipe = ScoreSdeVePipeline(unet=model, scheduler=scheduler)
- pipe.save_pretrained(args.dump_path)
- except: # noqa: E722
- model.save_pretrained(args.dump_path)
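-
- # Example invocation (paths are illustrative):
- #
- # python convert_ncsnpp_original_checkpoint_to_diffusers.py \
- #     --checkpoint_path ./diffusion_pytorch_model.bin \
- #     --config_file ./config.json \
- #     --dump_path ./converted_model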
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/.ipynb_checkpoints/pipeline_utils-checkpoint.py b/spaces/declare-lab/tango/diffusers/src/diffusers/.ipynb_checkpoints/pipeline_utils-checkpoint.py
deleted file mode 100644
index 5c0c2337dc048dd9ef164ac5cb92e4bf5e62d764..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/.ipynb_checkpoints/pipeline_utils-checkpoint.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
-# NOTE: This file is deprecated and will be removed in a future version.
- # It only exists so that temporarily `from diffusers.pipelines import DiffusionPipeline` works
-
-from .pipelines import DiffusionPipeline, ImagePipelineOutput # noqa: F401
diff --git a/spaces/declare-lab/tango/diffusers/tests/__init__.py b/spaces/declare-lab/tango/diffusers/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_image_variation.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_image_variation.py
deleted file mode 100644
index b4eabb9e3a0e18dd71a445bb8960b27d8699daac..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_image_variation.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import torch
-
-from diffusers import VersatileDiffusionImageVariationPipeline
-from diffusers.utils.testing_utils import load_image, require_torch_gpu, slow, torch_device
-
-
-torch.backends.cuda.matmul.allow_tf32 = False
-
-
-class VersatileDiffusionImageVariationPipelineFastTests(unittest.TestCase):
- pass
-
-
-@slow
-@require_torch_gpu
-class VersatileDiffusionImageVariationPipelineIntegrationTests(unittest.TestCase):
- def test_inference_image_variations(self):
- pipe = VersatileDiffusionImageVariationPipeline.from_pretrained("shi-labs/versatile-diffusion")
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- image_prompt = load_image(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/versatile_diffusion/benz.jpg"
- )
- generator = torch.manual_seed(0)
- image = pipe(
- image=image_prompt,
- generator=generator,
- guidance_scale=7.5,
- num_inference_steps=50,
- output_type="numpy",
- ).images
-
- image_slice = image[0, 253:256, 253:256, -1]
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.0441, 0.0469, 0.0507, 0.0575, 0.0632, 0.0650, 0.0865, 0.0909, 0.0945])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/index-5df855a0.js b/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/index-5df855a0.js
deleted file mode 100644
index 6610a47518dfd517d272c0f65134740f53596116..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/static/cy_aps/assets/index-5df855a0.js
+++ /dev/null
@@ -1 +0,0 @@
-import{c as T,Y as re,Z as ce,r as ae,_ as ie,d as le,m as ue,f as de,p as fe,q as me,$ as I,v as ge,a0 as R}from"./vue-e0bc46a9.js";import{c as pe,l as he,a as ve,b as H,M as J,t as ye,d as be,C as Se,S as N,o as j,e as P,f as we}from"./vendor-4cd7d240.js";import"./__commonjsHelpers__-042e6b4d.js";(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const s of document.querySelectorAll('link[rel="modulepreload"]'))n(s);new MutationObserver(s=>{for(const o of s)if(o.type==="childList")for(const a of o.addedNodes)a.tagName==="LINK"&&a.rel==="modulepreload"&&n(a)}).observe(document,{childList:!0,subtree:!0});function r(s){const o={};return s.integrity&&(o.integrity=s.integrity),s.referrerPolicy&&(o.referrerPolicy=s.referrerPolicy),s.crossOrigin==="use-credentials"?o.credentials="include":s.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function n(s){if(s.ep)return;s.ep=!0;const o=r(s);fetch(s.href,o)}})();const _e={},Ce=new Proxy(_e,{get(e,t){return e[t]||t}}),ke={},Le=new Proxy(ke,{get(e,t){return!t.toString().startsWith("__v_")&&e[t],e[t]}}),Oe={"zh-CN":he,"en-US":ve},E={"zh-CN":"zh-CN","en-US":"en-US"};let x="lang";const Re="zh-CN";try{x=localStorage==null?void 0:localStorage.getItem(x)}catch{console.warn("[Client]:localStorage is not available.")}const Ee=Re,_=pe({locale:E[Ee],fallbackLocale:E["zh-CN"],messages:{"zh-CN":Ce,"en-US":Le}}),st=(e,t={})=>`${_.global.t(e,t)}`,xe=e=>{if(E[e]){_.global.locale=e;try{localStorage==null||localStorage.setItem(x,e)}catch{console.warn("[Client]:localStorage is not available.")}window.location.reload()}},Ae=()=>{const e=T(()=>_.global.locale);return{setLang:xe,lang:e}},qe=T(()=>Oe[_.global.locale]),y={error:-1,success:[0,200,204],needAuthorization:401,notFound:404,notAllowed:[403,1403],needRequest:202,needMessage:[208,1e3]},U=()=>{let e=()=>{};const t=new Promise(r=>{e=r});return[e,t]},{CancelToken:Y}=H,D=Math.random().toString().slice(2),Te=1*60*1e3,Me="/api";let k;const Ie=()=>{k==null||k();const{close:e}=J.error("登录失效,请重新登录!");k=e};let Z=Y.source();const Ne=()=>{Z=Y.source()},je=H.create({baseURL:Me,timeout:Te}),z=e=>y.success.includes(+e),Q=async e=>{const{getToken:t,logout:r}=ee(),{lang:n}=Ae(),s=Z;e.cancelToken=e.cancelToken||s.token;let o=!1;e.headers=e.headers||{},t()&&(e.headers.Authorization=t()),e.headers.lang=n.value;const a=i=>{const{data:c}=i,[u,h]=U(),f=new FileReader;return f.readAsText(c,"utf-8"),f.onload=()=>{var M;try{const C=JSON.parse(f.result);if(C){u(C);return}}catch(C){console.log(C)}const se=(M=/filename[^;=\n]*=((['"]).*?\2|[^;\n]*)/.exec(i.headers["content-disposition"]))==null?void 0:M[1],ne=new Blob([c],{type:i.headers["content-type"]});u({code:y.success[0],data:{name:se,blob:ne},message:"",isRequestSuccess:!0})},h},l=(i,c)=>{var h;const{code:u}=i.data;(u===y.needAuthorization||((h=c==null?void 0:c.response)==null?void 0:h.status)===401)&&(o=!0,Ie(),s.cancel(D),Ne(),r())},m=e.responseType==="blob";function d(i){o||J.error(i||this.message)}try{const i=await je.request(e);let{data:c}=i;return m&&(c=await a(i)),l(i),c.message=c.message||c.msg||"",c.isRequestSuccess=z(c.code),c.showRequestErrorMessage=d.bind(c),c}catch(i){if(i.message===D){const[,f]=U();return f}const c=i.response;if(!c){const f={code:y.error,message:i.message,isRequestSuccess:!1};return f.showRequestErrorMessage=d.bind(f),f}let{data:u}=c;if(m&&(u=await a(c)),typeof u=="string"){const f={code:y.error,message:i.message||c.statusText,isRequestSuccess:!1};return 
f.showRequestErrorMessage=d.bind(f),f}l(c,i),u&&(u.message=u.message||u.msg||i.message||"",u.isRequestSuccess=z(u.code),u.showRequestErrorMessage=d.bind(u));const h={code:y.error,message:i.message,isRequestSuccess:!1};return h.showRequestErrorMessage=d.bind(h),Object.assign(h,u)}},nt=e=>Q({url:"/v1/user/login",method:"post",data:e}),Pe=()=>Q({url:"/v1/user/detail"});var b=(e=>(e.home="/static/index.html",e.login="/login",e.register="/register",e.notFound="/404",e.app="/app",e.appConfig="/app/config/:id",e.appCreate="/appCreate",e.library="/library",e.knowledge="/library/knowledge",e.history="/library/history",e.config="/config",e))(b||{}),A=(e=>(e.Admin="admin",e.SuperAdmin="super_admin",e.User="user",e))(A||{});const Ue="modulepreload",De=function(e){return"https://public-frontend-1300249583.cos-website.ap-nanjing.myqcloud.com/"+e},B={},O=function(t,r,n){if(!r||r.length===0)return t();const s=document.getElementsByTagName("link");return Promise.all(r.map(o=>{if(o=De(o),o in B)return;B[o]=!0;const a=o.endsWith(".css"),l=a?'[rel="stylesheet"]':"";if(!!n)for(let i=s.length-1;i>=0;i--){const c=s[i];if(c.href===o&&(!a||c.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${o}"]${l}`))return;const d=document.createElement("link");if(d.rel=a?"stylesheet":Ue,a||(d.as="script",d.crossOrigin=""),d.href=o,document.head.appendChild(d),a)return new Promise((i,c)=>{d.addEventListener("load",i),d.addEventListener("error",()=>c(new Error(`Unable to preload CSS for ${o}`)))})})).then(()=>t())},ze=[{path:b.login,name:b.login,component:()=>O(()=>import("./login-684ccf2f.js"),["cy_aps/assets/login-684ccf2f.js","cy_aps/assets/vue-e0bc46a9.js","cy_aps/assets/vendor-4cd7d240.js","cy_aps/assets/__commonjsHelpers__-042e6b4d.js"])},{path:b.home,name:b.home,component:()=>O(()=>import("./home-0791050d.js"),["cy_aps/assets/home-0791050d.js","cy_aps/assets/vue-e0bc46a9.js","cy_aps/assets/vendor-4cd7d240.js","cy_aps/assets/__commonjsHelpers__-042e6b4d.js"]),meta:{showSideBar:!0,needLogin:!0}},{path:"/:pathMatch(.*)*",component:()=>O(()=>import("./home-0791050d.js"),["cy_aps/assets/home-0791050d.js","cy_aps/assets/vue-e0bc46a9.js","cy_aps/assets/vendor-4cd7d240.js","cy_aps/assets/__commonjsHelpers__-042e6b4d.js"])}],Be="/",X=re({history:ce(Be),routes:ze}),S=ae(),q="token",Fe=()=>localStorage.getItem(q)||"",F=e=>e?localStorage.setItem(q,e):localStorage.removeItem(q),ee=()=>{const e=async()=>{const{data:n,showRequestErrorMessage:s,isRequestSuccess:o}=await Pe();if(!o){s();return}S.value=n},t=async()=>{F(),S.value=void 0,X.push({name:b.login})},r=T(()=>{var n,s;return((n=S.value)==null?void 0:n.user_role)===A.Admin||((s=S.value)==null?void 0:s.user_role)===A.SuperAdmin});return{getUser:e,user:S,isAdmin:r,logout:t,setToken:F,getToken:Fe}},v=ie(null);function $e(){ee();const e=n=>{var o,a;if(!v.value)return;const s=()=>{var l;n(),(l=v.value)==null||l.off("connect",s)};(o=v.value)!=null&&o.connected?n():(a=v.value)==null||a.on("connect",s)};return{socket:v,onAndAutoOff:n=>{const s=Object.entries(n).reduce((o,[a,l])=>(o[a]=m=>{l(m)},o),{});ye(async()=>{for(const[o,a]of Object.entries(s))e(()=>{v.value.on(o,a)})}),be(async()=>{var o;for(const[a,l]of Object.entries(s))(o=v.value)==null||o.off(a,l)})},emit:(n,...s)=>{e(()=>{var o;(o=v.value)==null||o.emit(n,...s)})}}}const Ve=le({__name:"App",setup(e){return $e(),(t,r)=>{const n=ue("router-view");return de(),fe(I(Se),{locale:I(qe)},{default:me(()=>[ge(n)]),_:1},8,["locale"])}}});const te=(e,t)=>Object.prototype.toString.call(e)===t,Ke=e=>te(e,"[object 
Boolean]"),oe=e=>te(e,"[object Object]"),rt=e=>{var t,r;if((t=navigator==null?void 0:navigator.clipboard)!=null&&t.writeText)(r=navigator==null?void 0:navigator.clipboard)==null||r.writeText(e);else{const n=document.createElement("textarea");n.value=e,n.style.position="absolute",n.style.opacity="0",n.style.left="-999999px",n.style.top="-999999px",document.body.appendChild(n),n.focus(),n.select(),document.execCommand("copy")}},ct=e=>new TextDecoder("utf-8").decode(e),We=(e,t)=>oe(e)?t.every(r=>Object.hasOwn(e,r)):!1,Ge=e=>oe(e)&&We(e,["loading"]),$=e=>{const t={loading:!0,text:""};return Ke(e)?t.loading=e:Ge(e)?(t.loading=e.loading,t.text=e.text||""):(console.warn("please check v-loading binding, should be boolean or { loading: boolean; text: string; }"),t.loading=!!e),t},V="loadingDirectiveElement",K="fullScreen",w="posRelative",g=Symbol("vLoadingDirective"),p=Symbol("loadingSpinApp"),He={mounted:(e,t)=>{e.classList.remove(w);const n=window.getComputedStyle(e).position;if(e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete e[p]),!t.value)return;const{loading:s,text:o}=$(t.value);if(!s)return;const a=t.arg==="fullScreen",l=document.createElement("div");l.classList.add(V),a&&l.classList.add(K);const m=R(N,{tip:o});m.mount(l),n==="static"&&e.classList.add(w),e[g]=l,e[p]=m,e.append(l)},updated:(e,t)=>{e.classList.remove(w);const n=window.getComputedStyle(e).position;if(e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete e[p]),!t.value)return;const{loading:s,text:o}=$(t.value);if(!s)return;const a=t.arg==="fullScreen",l=document.createElement("div");l.classList.add(V),a&&l.classList.add(K);const m=R(N,{tip:o});m.mount(l),n==="static"&&e.classList.add(w),e[g]=l,e[p]=m,e.append(l)},unmounted:e=>{e.classList.remove(w),e[g]&&(e[g].remove(),delete e[g]),e[p]&&(e[p].unmount(),delete e[p])}},W=e=>typeof e=="function",L=Symbol("clickOutside"),G=e=>()=>{e()},Je={mounted:(e,t)=>{if(!W(t.value)){console.warn("v-clickoutside binding should be function");return}const r=G(t.value);e[L]=r,j("clickoutside",e,r)},updated:(e,t)=>{let r=e[L];if(r&&P("clickoutside",e,r),!W(t.value)){console.warn("v-clickoutside binding should be function");return}r=G(t.value),e[L]=r,j("clickoutside",e,r)},unmounted:e=>{const t=e[L];t&&P("clickoutside",e,t)}},Ye=e=>{e.directive("loading",He),e.directive("clickoutside",Je)},Ze={install:Ye};const Qe=R(Ve);Qe.use(we()).use(X).use(Ze).use(_).mount("#app");export{rt as C,b as P,ct as U,nt as l,st as t,ee as u};
diff --git a/spaces/deprem-ml/deprem_satellite_test/utils/dataloader.py b/spaces/deprem-ml/deprem_satellite_test/utils/dataloader.py
deleted file mode 100644
index 2a300b6444c3439dd8368566e229947ec789a069..0000000000000000000000000000000000000000
--- a/spaces/deprem-ml/deprem_satellite_test/utils/dataloader.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import albumentations as albu
-import numpy as np
-import cv2
-import os
-os.environ['CUDA_VISIBLE_DEVICES'] = '0'
-
-
-class Dataset:
- def __init__(
- self,
- image_path,
- augmentation=None,
- preprocessing=None,
- ):
- self.pil_image = image_path
- self.augmentation = augmentation
- self.preprocessing = preprocessing
-
- def get(self):
- # pil image > numpy array
- image = np.array(self.pil_image)
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
- # apply augmentations
- if self.augmentation:
- sample = self.augmentation(image=image)
- image = sample['image']
-
- # apply preprocessing
- if self.preprocessing:
- sample = self.preprocessing(image=image)
- image = sample['image']
-
- return image
-
-
-def get_validation_augmentation():
- """Add paddings to make image shape divisible by 32"""
- test_transform = [
- albu.PadIfNeeded(384, 480)
- ]
- return albu.Compose(test_transform)
-
-
-def to_tensor(x, **kwargs):
- return x.transpose(2, 0, 1).astype('float32')
-
-
-def get_preprocessing(preprocessing_fn):
-
- _transform = [
- albu.Lambda(image=preprocessing_fn),
- albu.Lambda(image=to_tensor),
- ]
- return albu.Compose(_transform)
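
For reference, the deleted module above reduces to three steps: convert the PIL image to an RGB array, pad it so both sides are divisible by 32, and transpose it to CHW float32. A self-contained sketch of that same flow (the dummy input and the inlined helpers are stand-ins, not part of the original file):

```python
import albumentations as albu
import cv2
import numpy as np
from PIL import Image

# Recreate the deleted module's steps inline so the sketch runs standalone.
augmentation = albu.Compose([albu.PadIfNeeded(384, 480)])  # pad so H and W are divisible by 32

def to_tensor(x, **kwargs):
    # HWC uint8 -> CHW float32, as in the module above
    return x.transpose(2, 0, 1).astype("float32")

pil_image = Image.fromarray(np.zeros((375, 467, 3), dtype=np.uint8))  # dummy stand-in input

image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_BGR2RGB)  # PIL image -> RGB array
image = augmentation(image=image)["image"]                    # -> (384, 480, 3)
image = to_tensor(image)                                      # -> (3, 384, 480)
print(image.shape)
```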
diff --git a/spaces/dhanilka/illusion-image-ai/README.md b/spaces/dhanilka/illusion-image-ai/README.md
deleted file mode 100644
index c718f964ae87f4a1277bccb07f01568bcc834ec4..0000000000000000000000000000000000000000
--- a/spaces/dhanilka/illusion-image-ai/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: IllusionDiffusion
-emoji: 👁
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.44.3
-app_file: app.py
-pinned: false
-license: openrail
-hf_oauth: true
-disable_embedding: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub !EXCLUSIVE!.md b/spaces/diacanFperku/AutoGPT/CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub !EXCLUSIVE!.md
deleted file mode 100644
index 8be780d8032608984b305ce0be4357d8a86ec2e7..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub !EXCLUSIVE!.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
How to Download and Use CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub
-
Prezi is popular presentation software that allows you to create dynamic and interactive slideshows with zooming and panning effects. However, Prezi is not free and requires a subscription to use all its features. If you want to use Prezi for free and without any limitations, you might be interested in CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub, a program that can activate Prezi on your computer and let you use it without any restrictions.
-
What is CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub?
-
CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is a program that can crack Prezi and make it work as if you had a premium account. It can bypass Prezi's activation process and unlock all of its features, such as unlimited storage, offline access, a custom logo, privacy control, and more.
CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is very easy to use and does not require any installation or registration. You just need to download the software from our website, run it on your Windows PC, and follow the instructions on the screen. The software will automatically detect your Prezi version and apply the crack accordingly.
-
Why should you use CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub?
-
There are many reasons why you should use CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub to crack Prezi on your computer. Here are some of them:
-
-
It's free: You don't have to pay anything to use this software. You can download it from our website and use it as many times as you want.
-
It's safe: You don't have to worry about viruses or malware when using this software. We have scanned the file with multiple antivirus programs and found it to be clean and safe.
-
It's easy: You don't have to be a tech expert or have any special equipment to use this software. You just need your computer and an internet connection.
-
It's fast: You don't have to wait for hours or days to get your Prezi cracked by using this software. You can get it done in minutes.
-
It's reliable: You don't have to worry about getting errors or glitches by using this software. It works with any Prezi version and any Windows PC.
-
-
How to use CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub?
-
Using CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is very simple and straightforward. Here are the steps you need to follow:
-
-
Download CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub from our website and unzip it.
-
Run the CRACK_For_Prezi_5_2_8_Final_HelsEnBeRg.exe file and accept the terms and conditions.
-
Select your Prezi version from the drop-down menu and click on "Crack".
-
Wait for the software to finish cracking your Prezi and show you a confirmation message.
-
Congratulations! Your Prezi is now cracked and ready to use with all its features.
-
-
Frequently Asked Questions
-
Here are some of the most common questions and answers about CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub:
-
Is CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub legal?
-
No, CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is not legal as it violates the terms and conditions of Prezi and infringes its intellectual property rights. We do not encourage or endorse the use of this software for any illegal or unethical purposes.
-
Is CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub safe?
-
Yes, CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is safe to use as long as you download it from our official website and scan it with your antivirus program. We do not collect or store any of your personal data or information.
-
-
Will CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub work on Mac or Linux?
-
No, CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is only compatible with Windows PC.
-
Will CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub work with any Prezi version?
-
Yes, CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub will work with any Prezi version that is installed on your computer.
-
How can I get support for CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub?
-
If you have any questions or issues with CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub, you can contact us through our website or email us at support@crackforprezifinal.com . We will try to reply as soon as possible.
-
How to Download and Use Prezi After Cracking It
-
After you have cracked Prezi with CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub, you can download and use Prezi on your computer without any limitations. Here are the steps you need to follow:
-
-
Go to the official website of Prezi and sign up for a free account or log in with your existing account.
-
Download the Prezi desktop app for Windows and install it on your computer.
-
Launch the Prezi desktop app and log in with your account.
-
Create a new presentation or open an existing one.
-
Edit and customize your presentation as you wish. You can use all the features of Prezi, such as templates, themes, animations, transitions, images, videos, audio, etc.
-
Save and export your presentation as a PDF file or as a portable app that you can run on any computer without internet connection.
-
Share your presentation online or offline with your audience. You can also collaborate with other users and get feedback on your presentation.
-
-
That's it! You have successfully downloaded and used Prezi after cracking it with CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub!
-
Conclusion
-
Prezi is a great presentation software that can help you create dynamic and interactive slideshows that will impress your audience. However, Prezi is not free and requires a subscription to use all its features. If you want to use Prezi for free and without any limitations, you can use CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub, a software that can crack Prezi and make it work as if you have a premium account.
-
CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub is free, safe, easy, fast, and reliable. It can crack any Prezi version and any Windows PC. You just need to download the software from our website, run it on your computer, and follow the instructions on the screen. The software will automatically detect your Prezi version and apply the crack accordingly.
-
So what are you waiting for? Download CRACK For Prezi 5.2.8 Final [HelsEnBeRg].epub today and crack Prezi like a pro! You won't regret it!
-
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/DGS Ramsete III V968.md b/spaces/diacanFperku/AutoGPT/DGS Ramsete III V968.md
deleted file mode 100644
index bea66327fafae492fbf1f0357538fb554bbdec5d..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/DGS Ramsete III V968.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
I've been working on some data acquisition for this robot. I'm pretty much done. I'm still trying to figure out a way to get the velocity command to go to zero when the desired velocity goes to zero. I've read about the RAMSETE controller and it seems like it should be possible, but I don't know how to implement it.
-
I would like to use RAMSETE. I do not want to use pure pursuit. To be clear, none of these (PID, pure pursuit, RAMSETE, etc.) are magic bullets that instantly make your robot super reliable. They require some tuning, which requires understanding of what the algorithm is doing based on whatever data it needs.
-
Then I use pure pursuit. I use the value of the yaw and pitch calculated by pure pursuit, which will be different from the value calculated by RAMSETE. Both values will be used to move the robot around the path. After that, I just use pure pursuit to correct for the effect of the robot being off the path.
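
On the question above about the velocity command reaching zero: in the standard RAMSETE control law the gain scales with the reference velocities, so the commanded velocities fall to zero together with the reference. A minimal Python sketch of that law (b = 2.0 and zeta = 0.7 are the commonly cited default gains; treat them as starting points to tune, not guarantees):

```python
import math

def ramsete(v_ref, w_ref, e_x, e_y, e_theta, b=2.0, zeta=0.7):
    """One step of the RAMSETE unicycle controller.

    e_x, e_y, e_theta: pose error in the robot frame (m, m, rad).
    v_ref, w_ref: feedforward velocities from the trajectory.
    b, zeta: tuning gains.
    """
    # The gain scales with the reference speed, so the command
    # falls to zero as the trajectory comes to rest.
    k = 2.0 * zeta * math.sqrt(w_ref ** 2 + b * v_ref ** 2)
    sinc = 1.0 if abs(e_theta) < 1e-9 else math.sin(e_theta) / e_theta
    v = v_ref * math.cos(e_theta) + k * e_x
    w = w_ref + k * e_theta + b * v_ref * sinc * e_y
    return v, w

# At rest with zero error, the command is exactly (0.0, 0.0):
print(ramsete(v_ref=0.0, w_ref=0.0, e_x=0.0, e_y=0.0, e_theta=0.0))
```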
-
-
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Descargar Matrices Bordados Gratis Download ((NEW)).md b/spaces/diacanFperku/AutoGPT/Descargar Matrices Bordados Gratis Download ((NEW)).md
deleted file mode 100644
index a53bae767ea5e19e90ae9a8d23ebf9e1a8dc2b96..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Descargar Matrices Bordados Gratis Download ((NEW)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Download. View. Save ... Free punch file (ponchado). Free stitch embroidery designs ... 600 thousand embroidery matrices + a program to view the matrices.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Easyobdii Version 300 Crack Free.md b/spaces/diacanFperku/AutoGPT/Easyobdii Version 300 Crack Free.md
deleted file mode 100644
index 0591f511a40a10f49e78d6c5c661d334461bcae3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Easyobdii Version 300 Crack Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-elm327 apk cracked, If Carport or any other file download has a keygen or ... model year E-Series 2008 model year; EasyObdII.com produces software ... now been shipped since the 12th of last Full versions serial crack keygen warez torrent. ... Mb sd connect, ELM327 Bluetooth, T300 KEY Programmer, BMW icom ...
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (Dhoom 2 Mp4 Movie !!LINK!! Download).md b/spaces/diacanFperku/AutoGPT/HD Online Player (Dhoom 2 Mp4 Movie !!LINK!! Download).md
deleted file mode 100644
index fad76d12c80d185e6c8e6c379d65f6814fb31af3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/HD Online Player (Dhoom 2 Mp4 Movie !!LINK!! Download).md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Watch the Dhoom 2 movie on the free movie streaming website 111.90.159.132. Hello user, if this video is not playing in Firefox, please use Google Chrome. You can watch videos in HD quality.
-I hope you enjoy the video and don't forget to share with your friends and loved ones.
You can share it on Facebook, Twitter, VKontakte, Odnoklassniki, or Pinterest.
Dhoom 2 is the sequel to the Dhoom movie, and with it fans return to their favorite franchise.
The film was shown in cinemas in Russia in 2014.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Life Is Strange Nude Mods !LINK!.md b/spaces/diacanFperku/AutoGPT/Life Is Strange Nude Mods !LINK!.md
deleted file mode 100644
index 0a7fd620e42ed0ad9dbf73fcf3f295f7fdad974e..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Life Is Strange Nude Mods !LINK!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-March 25, 2021 - Products: AWRC Pro (Atelier Web Remote Commander Pro), AWRC Pro Portable, AW Trader Toolkit for MT4 and MT5, IP address; Other software: AWT, AWT Portable, AWT for MT4, AWT for MT5, AWT for Metastock, AWT Power Manager, AWT for Stocks, AWT for S&P, AWT for futures, AWT for S&P/EX, AWT for VXX, AWT for options (Net Call), AWT for options (Net Put), AWT for futures (Net Stop), AWT for options (Net Take), AWT for futures (Net Target).
-
-
-
diff --git a/spaces/fatiXbelha/sd/Call of Duty Mobile - The Ultimate FPS Experience on Android.md b/spaces/fatiXbelha/sd/Call of Duty Mobile - The Ultimate FPS Experience on Android.md
deleted file mode 100644
index 4255f6d1589ceefa6e367fba0d5f1fa6bdc2bbf7..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Call of Duty Mobile - The Ultimate FPS Experience on Android.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Call of Duty Mobile Android APK Download: Everything You Need to Know
-
Call of Duty Mobile is one of the most popular and successful mobile games in the world. It is a free-to-play first-person shooter game that brings the thrill and excitement of the Call of Duty franchise to your smartphone. You can play various multiplayer modes such as Team Deathmatch, Domination, and Kill-Confirmed on iconic maps such as Shipment, Raid, and Standoff, as well as 100-player Battle Royale mode on a huge map that includes locations from previous Call of Duty games. You can also customize your loadout, unlock and upgrade weapons, operators, scorestreaks, and more.
If you are an Android user and want to play Call of Duty Mobile on your device, you might be wondering how to download and install the game. In this article, we will show you how to get the APK file for Call of Duty Mobile, how to install and play the game on your Android device, and some features and tips to help you enjoy the game better.
-
How to Download the APK File for Call of Duty Mobile
-
The APK file is a package that contains all the files and data needed to install an app on an Android device. You can download the APK file for Call of Duty Mobile from various sources online, such as Uptodown, Google Play, or Garena. However, you should always be careful when downloading APK files from unknown or untrusted sources, as they might contain malware or viruses that can harm your device or steal your personal information.
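
One practical safeguard when sideloading from a third-party mirror is to compare the downloaded file's checksum against the one the source publishes, if it publishes one. A minimal sketch in Python (the file name and the expected hash below are placeholders, not real values for this game):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the hash published by the source you trust.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("codm.apk")
print("OK" if actual == expected else "Mismatch: " + actual)
```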
-
To download the APK file for Call of Duty Mobile, you need to have at least 2 GB of RAM, Android 5.1 or higher, and about 2.2 GB of free storage space on your device. You also need an internet connection to download the file and play the game online. Here are the steps to download the APK file for Call of Duty Mobile:
-
-
Go to one of the trusted sources mentioned above and search for Call of Duty Mobile.
-
Tap on the download button and wait for the file to be downloaded on your device.
-
You might need to enable the option to install apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Once the download is complete, locate the APK file on your device using a file manager app or your browser's downloads folder.
-
Tap on the APK file and follow the instructions to install it on your device.
-
-
How to Install and Play Call of Duty Mobile on Android Devices
-
After installing the APK file for Call of Duty Mobile, you are ready to play the game on your Android device. Here are the steps to install and play Call of Duty Mobile on Android devices:
-
-
Launch the game by tapping on its icon on your home screen or app drawer.
-
You might need to download some additional data or resources for the game depending on your device model and region. This can take up some more storage space on your device, so make sure you have enough before proceeding.
-
You will be asked to log in with your Facebook account, your Call of Duty Activision account, or play as a guest. We recommend logging in with one of these options so you can save your progress, access your profile across different devices, and get some rewards.
-
You will be taken to the main menu where you can choose between different game modes such as Multiplayer, Battle Royale, Zombies (if available), or Events. You can also access other features such as Loadout, Battle Pass, Store, Clan Wars, Settings, etc.
-
Select a game mode that you want to play and tap on Start Match. You will be matched with other players online based on your level, region, and preferences. You can also invite your friends to join you in a private lobby or join a clan for more social interaction.
-
Enjoy playing Call of Duty Mobile on your Android device!
-
-
Features and Gameplay of Call of Duty Mobile
-
Call of Duty Mobile is a game that offers a lot of features and gameplay options for players of all skill levels and preferences. Here are some of the main features and gameplay aspects of Call of Duty Mobile:
-
-
Multiplayer Mode: This is the core mode of the game where you can play various 5v5 or 10v10 matches with different objectives and rules. You can choose from different modes such as Team Deathmatch, Domination, Kill-Confirmed, Hardpoint, Search and Destroy, etc. You can also play ranked matches to climb the leaderboards and earn rewards.
-
Battle Royale Mode: This is the mode where you can play a 100-player survival match on a huge map that includes locations from previous Call of Duty games. You can play solo, duo, or squad mode with your friends or random players. You can also choose from different classes such as Medic, Scout, Ninja, Defender, etc. that have unique abilities and perks. You can loot weapons, armor, attachments, vehicles, and other items from the map or from airdrops. You have to survive the shrinking circle and eliminate other players to be the last one standing.
-
Zombies Mode: This is the mode where you can team up with up to four players to fight against hordes of zombies and bosses in different maps and scenarios. You can choose from different modes such as Survival, Raid, or Hardcore. You can also customize your loadout, perks, and skills to suit your playstyle. You have to survive waves of zombies, complete objectives, and earn points to buy weapons, ammo, and upgrades.
-
Events Mode: This is the mode where you can participate in various seasonal or limited-time events that offer exclusive rewards and challenges. You can play different modes such as Gun Game, Prop Hunt, Capture the Flag, etc. You can also earn event tokens or credits to exchange for event-specific items such as skins, weapons, operators, etc.
-
Loadout: This is the feature where you can customize your loadout for each game mode. You can choose from different primary and secondary weapons, attachments, operators, scorestreaks, perks, grenades, etc. You can also unlock and upgrade your weapons and operators by playing the game and completing missions.
-
Battle Pass: This is the feature where you can earn rewards by playing the game and completing tasks. You can buy the premium battle pass to get more rewards such as skins, weapons, operators, crates, etc. You can also level up your battle pass by earning XP and unlocking tiers.
-
Store: This is the feature where you can buy various items using real money or in-game currency. You can buy crates, bundles, battle pass, cod points, etc. You can also get some items for free by watching ads or completing offers.
-
Clan Wars: This is the feature where you can join or create a clan with other players and compete against other clans in different modes and maps. You can earn clan points by playing clan matches and completing clan tasks. You can also get clan rewards such as crates, skins, weapons, etc.
-
Settings: This is the feature where you can adjust various settings such as graphics, sound, controls, sensitivity, etc. to optimize your game performance and experience.
-
-
Tips and Tricks to Improve Your Skills and Win Matches
-
Call of Duty Mobile is a game that requires skill, strategy, and teamwork to win matches and rank up. Here are some tips and tricks to help you improve your skills and win matches:
-
-
Aim for the head: This is a basic but essential tip for any shooter game. Aiming for the head will deal more damage and kill your enemies faster than aiming for the body or limbs. You can practice your aim in the training mode or use aim assist if you are a beginner.
-
Use cover: This is another basic but important tip for any shooter game. Using cover will protect you from enemy fire and give you time to heal or reload. You can use walls, buildings, crates, vehicles, etc. as cover. You can also crouch or prone to reduce your visibility and exposure.
-
Move constantly: This is a tip that will help you avoid being an easy target for your enemies. Moving constantly will make it harder for them to hit you and give you an advantage in close-range combat. You can use sprinting, sliding, jumping, etc. to move faster and dodge bullets.
-
Communicate with your team: This is a tip that will help you work better with your team and coordinate your actions. You can use voice chat, text chat, or the ping system to communicate with your team. You can also use callouts, commands, or emojis to convey your messages. You can also join a clan or a Discord server to find more players to play with.
-
Know your weapons and loadout: This is a tip that will help you choose the best weapons and loadout for each game mode and map. You can use different weapons such as assault rifles, submachine guns, sniper rifles, shotguns, etc. depending on your playstyle and preference. You can also use different attachments, perks, scorestreaks, etc. to enhance your weapons and abilities. You can also experiment with different combinations and see what works best for you.
-
Know the maps and modes: This is a tip that will help you learn the layout and features of each map and mode. You can use the mini-map, compass, or radar to navigate the map and locate enemies, objectives, and items. You can also use the map knowledge to find the best spots, routes, angles, and strategies for each mode and map. You can also play the practice mode or watch some videos or guides to learn more about the maps and modes.
-
Adjust your settings: This is a tip that will help you optimize your game performance and experience. You can adjust your settings such as graphics, sound, controls, sensitivity, etc. to suit your device and preference. You can also use some tips such as turning off battery saver mode, closing background apps, using headphones, etc. to improve your game quality.
-
-
Conclusion: Summary and Recommendation
-
Call of Duty Mobile is a game that offers a lot of fun and excitement for mobile gamers. You can download and play it for free on your Android device using the APK file, enjoy modes such as Multiplayer, Battle Royale, Zombies, and Events with friends or other players online, customize your loadout, unlock and upgrade your weapons, operators, and scorestreaks, and improve your skills and win matches by following the tips and tricks above.
-
If you are looking for a mobile game that will keep you entertained and challenged for hours, we recommend trying Call of Duty Mobile on your Android device. It will not disappoint fans of the Call of Duty franchise or of shooter games in general, and its graphics, sound effects, and gameplay will make you feel like you are playing on a console or PC.
-
FAQs: Five Common Questions and Answers About Call of Duty Mobile
-
-
| Question | Answer |
| --- | --- |
| Is Call of Duty Mobile free to play? | Yes, Call of Duty Mobile is free to play on Android devices. However, you can buy some items such as COD Points, crates, bundles, and the Battle Pass using real money or in-game currency. |
| Is Call of Duty Mobile offline? | No, Call of Duty Mobile requires an internet connection to play online with other players or to download additional data or resources for the game. |
| Is Call of Duty Mobile cross-platform? | Yes, Call of Duty Mobile supports cross-platform play between Android and iOS devices. However, it does not support cross-play with the PC or console versions of the game. |
| Is Call of Duty Mobile compatible with my device? | To play Call of Duty Mobile on your Android device, you need at least 2 GB of RAM, Android 5.1 or higher, and about 2.2 GB of free storage space. |
| How can I contact Call of Duty Mobile support? | You can visit the official website, Facebook page, Twitter account, or Reddit community. You can also use the in-game feedback option or email codmobile@activision.com. |
-
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Genshin Impact v3.4 Full Data Tanpa Launcher Mudah dan Cepat.md b/spaces/fatiXbelha/sd/Download Genshin Impact v3.4 Full Data Tanpa Launcher Mudah dan Cepat.md
deleted file mode 100644
index 29aeb94ae8b849388bfe038822d6b9e180f6708a..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Genshin Impact v3.4 Full Data Tanpa Launcher Mudah dan Cepat.md
+++ /dev/null
@@ -1,260 +0,0 @@
-
-
Download Data Genshin Impact Versi Terbaru: A Guide for PC and Mobile Gamers
-
If you are a fan of anime-style open-world RPGs, you might have heard of Genshin Impact, a free-to-play game that has taken the gaming world by storm. In this article, we will tell you everything you need to know about how to download data genshin impact versi terbaru, or the latest version of the game, for both PC and mobile devices. We will also share some tips and tricks to help you enjoy this amazing game even more.
-
What is Genshin Impact?
-
Genshin Impact is a free-to-play action role-playing game developed by miHoYo, a Chinese studio that also created the popular Honkai Impact series. The game was released globally on September 28, 2020, for Windows, PlayStation 4, iOS, and Android platforms, and later on April 28, 2021, for PlayStation 5. The game is also planned to be released for Nintendo Switch in the future.
Genshin Impact is a game that lets you explore a vast and beautiful world called Teyvat, where you can find various landscapes, cultures, creatures, secrets, and mysteries. You can also interact with many characters that have their own personalities, stories, and abilities. You can play as one of four interchangeable characters in a party, each with their own elemental affinity and combat style. You can switch between them at any time during combat or exploration, creating dynamic and strategic gameplay.
-
A fantasy world with seven nations and diverse characters
-
The world of Teyvat is divided into seven nations, each based on a different real-world culture and element. They are Mondstadt (Anemo/wind), Liyue (Geo/earth), Inazuma (Electro/lightning), Sumeru (Dendro/nature), Fontaine (Hydro/water), Natlan (Pyro/fire), and Snezhnaya (Cryo/ice). Each nation has its own history, politics, religion, architecture, and aesthetics. You can travel between them using fast travel points or by gliding, swimming, climbing, or riding mounts.
-
The game also features a diverse cast of playable characters that can be obtained through various methods, such as completing quests, participating in events, or using a currency called primogems to make wishes. Each character has a unique design, voice, personality, backstory, and skill set. You can customize your characters by equipping them with different weapons and artifacts that enhance their stats and abilities. You can also unlock and upgrade their talents and constellations that grant them more power and effects.
-
A deep elemental combat system and character progression
-
One of the most exciting aspects of Genshin Impact is its elemental combat system, which allows you to use the power of the seven elements to unleash various attacks and effects. Each character has a normal attack, a charged attack, an elemental skill, and an elemental burst. You can combine these attacks with the environment and the enemies to create elemental reactions, such as burning, freezing, electro-charging, vaporizing, and more. These reactions can deal extra damage, inflict status effects, or provide buffs or debuffs.
-
The game also has a rich character progression system that lets you level up your characters, weapons, and artifacts using various materials and resources. You can also ascend your characters to increase their level cap and unlock new skills. You can also enhance your weapons and artifacts to improve their stats and effects. You can also refine your weapons to increase their passive abilities. You can also use resin, a limited resource that regenerates over time, to claim rewards from domains, bosses, and events.
-
Why download data genshin impact versi terbaru?
-
Genshin Impact is a game that is constantly updated with new content and features to keep the players engaged and entertained. The latest version of the game, v3.3, was released on November 24, 2021, and introduced many new additions and improvements to the game. Here are some of the reasons why you should download data genshin impact versi terbaru:
-
The latest version (v3.3) introduces new content and features
-
The v3.3 update of Genshin Impact brought many new things to the game, such as:
-
-
A new playable character: Arataki Itto, a Geo claymore user from Inazuma who is the leader of the Arataki Gang.
-
A new event: Shadows Amidst Snowstorms, which features a new boss fight against the Frostborn Miracle Resurgent Cryo Regisvine.
-
A new weapon banner: Epitome Invocation, which features two new weapons: the Summit Shaper (a 5-star sword) and the Dragonspine Spear (a 4-star polearm).
-
A new story quest: The Immovable God and the Eternal Euthymia, which continues the main storyline of Inazuma and reveals more secrets about the Electro Archon Raiden Shogun.
-
A new hangout event: Hangout Events Series II, which allows you to spend time with four characters: Bennett, Chongyun, Xinyan, and Noelle.
-
A new feature: Serenitea Pot Gardening, which allows you to plant and harvest various crops in your personal realm.
-
A new feature: Character Archive, which allows you to view information and stories about the characters you have met or obtained in the game.
-
Many other changes and optimizations, such as new achievements, new items, new enemies, new quests, new voice-overs, bug fixes, and more.
-
-
The benefits of downloading the full data game offline
-
If you want to enjoy the full experience of Genshin Impact without any interruptions or delays, you should consider downloading the full data game offline. This means that you will download all the game data files (including audio files) to your device before launching the game. This way, you will not have to wait for the game to download additional data while playing online. This will also save you bandwidth and data usage if you have a limited internet connection.
-
-
Some of the benefits of downloading the full data game offline are:
-
-
You will have faster loading times and smoother performance when playing the game.
-
You will not encounter any errors or crashes due to incomplete or corrupted data files.
-
You will not miss any important dialogues or cutscenes due to missing audio files.
-
You will be able to play the game even if your internet connection is unstable or unavailable.
-
-
The drawbacks of downloading through the launcher game online
-
If you choose to download through the launcher game online instead of downloading the full data game offline, you will have to download the game data files as you play the game online. This means that you will have to wait for the game to download the necessary data files before you can access certain areas, characters, or features of the game. This can cause some inconveniences and problems for your gameplay experience. Some of the drawbacks of downloading through the launcher game online are:
-
You will have slower loading times and lower performance when playing the game.
-
You will encounter errors or crashes if the game fails to download the required data files.
-
You will miss some dialogues or cutscenes if the game does not download the audio files in time.
-
You will not be able to play the game if your internet connection is interrupted or unavailable.
-
-
How to download data genshin impact versi terbaru for PC?
-
If you want to play Genshin Impact on your PC, you will need to download the game from the official website and install it on your device. You will also need to download the game data and audio files offline if you want to enjoy the full data game offline. Here are the steps to do so:
-
The system requirements and storage file size
-
Before you download and install Genshin Impact on your PC, you should check if your device meets the minimum or recommended system requirements for the game. These are:
-
-
-
| | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| OS | Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit | Windows 10 64-bit |
| CPU | Intel Core i5 or equivalent | Intel Core i7 or equivalent |
| RAM | 8 GB | 16 GB |
| GPU | NVIDIA GeForce GT 1030 or higher | NVIDIA GeForce GTX 1060 6 GB or higher |
| DirectX | Version 11 | Version 11 |
| Storage Space | 30 GB or more (depending on the version) | 30 GB or more (depending on the version) |
-
-
-
You should also make sure that you have enough storage space on your device to download and install the game data and audio files offline. The file size of the game data is about 21 GB, while the file size of the audio files varies depending on the language. For example, the file size of the English audio is about 2.5 GB, while the file size of the Japanese audio is about 3.5 GB. You can choose which audio languages you want to download based on your preference.
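
To budget that storage, you can simply add the chosen audio packs to the base data size. A tiny sketch using the figures quoted above (sizes for other languages would need to be filled in):

```python
# Sizes quoted in the paragraph above, in GB.
GAME_DATA_GB = 21.0
AUDIO_GB = {"English": 2.5, "Japanese": 3.5}

def download_size_gb(languages):
    """Total offline download size for the game data plus chosen audio packs."""
    return GAME_DATA_GB + sum(AUDIO_GB[lang] for lang in languages)

print(download_size_gb(["English"]))              # 23.5
print(download_size_gb(["English", "Japanese"]))  # 27.0
```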
-
The steps to download the game data and audio files
-
To download the game data and audio files offline, you will need to follow these steps:
-
-
Go to the official website of Genshin Impact and click on "Windows" under "Choose your platform". This will start downloading the launcher installer file.
-
Run the launcher installer file and follow the instructions to install the launcher on your device.
-
Launch the launcher and log in with your miHoYo account. If you don't have one, you can create one for free.
-
Click on "Game Pre-Installation" at the bottom left corner of the launcher. This will open a window where you can choose which game data and audio files you want to download offline.
-
Select "Full Data Game" under "Game Data" and check the boxes for the audio languages you want under "Audio". You can also change the download path if you want.
-
Click on "Confirm" and wait for the launcher to download all the selected files. This may take some time depending on your internet speed and file size.
-
Once the download is complete, you can close the window and exit the launcher.
-
-
The steps to install the game data offline using the launcher
-
To install the game data offline using the launcher, you will need to follow these steps:
-
Launch the launcher and log in with your miHoYo account.
-
Click on "Launch" at the bottom right corner of the launcher. This will start installing the game data offline on your device.
-
Wait for the installation to finish. This may take some time depending on your device and file size.
-
Once the installation is complete, you can click on "Start Game" to launch the game and enjoy playing it offline.
-
-
How to download data genshin impact versi terbaru for mobile?
-
If you want to play Genshin Impact on your mobile device, you will need to download the game from the official website or the app store and install it on your device. You will also need to update the game data online using the in-game option if you want to play the latest version of the game. Here are the steps to do so:
-
The supported devices and platforms
-
Before you download and install Genshin Impact on your mobile device, you should check if your device meets the minimum or recommended device requirements for the game. These are:
-
-
-
| | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| OS | Android 7.0 or iOS 9.0 or higher | Android 8.1 or iOS 10.0 or higher |
| CPU | Arm v8a 64-bit device | Qualcomm Snapdragon 845, Kirin 810 or higher |
| RAM | 3 GB or more | 4 GB or more |
| Storage Space | 8 GB or more (depending on the version) | 8 GB or more (depending on the version) |
-
-
-
You should also make sure that your device has enough storage space to download and install the game data online. The file size of the game data varies depending on the version and platform. For example, the file size of the game data for Android is about 6 GB, while the file size of the game data for iOS is about 9 GB. You can check the file size of the game data before downloading it in the app store or in the in-game option.
-
The steps to download the game from the official website or app store
-
To download the game from the official website or app store, you will need to follow these steps:
-
-
Go to the official website of Genshin Impact and click on "Android" or "iOS" under "Choose your platform". This will redirect you to the Google Play Store or the App Store where you can download the game.
-
Alternatively, you can also scan the QR code on the official website using your mobile device to download the game directly.
-
Tap on "Install" or "Get" and wait for the game to download and install on your device.
-
Once the installation is complete, you can tap on "Open" or "Play" to launch the game and start playing it online.
-
-
The steps to update the game data online using the in-game option
-
To update the game data online using the in-game option, you will need to follow these steps:
-
-
Launch the game and tap on "Update" at the bottom right corner of the screen. This will open a window where you can check the file size and progress of the game data update.
-
Tap on "Download Now" and wait for the game to download the latest game data online. This may take some time depending on your internet speed and file size.
-
Once the download is complete, you can tap on "Start Game" to launch the game and enjoy playing the latest version of the game online.
-
-
Tips and tricks for playing genshin impact versi terbaru
-
Genshin Impact is a game that offers a lot of fun and challenge for both new and veteran players. However, it can also be overwhelming and confusing at times, especially with the amount of content and mechanics that the game has. To help you get the most out of your gameplay experience, here are some tips and tricks that you should know when playing genshin impact versi terbaru:
-
How to use the elemental interactions and character switching
-
One of the key features of Genshin Impact is its elemental interactions, which allow you to create various effects and advantages by combining different elements. You can use your characters' elemental skills and bursts, as well as environmental objects and enemies, to trigger these interactions. For example, you can use a Pyro character to ignite a wooden shield or a grassy area, then use an Anemo character to spread the fire or create a fire tornado. You can also use a Hydro character to wet an enemy or a surface, then use an Electro character to electrocute them or create an electric field.
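
To make those pairings concrete, here is a small lookup sketch limited to the reactions this paragraph names; it is an illustration, not the game's full reaction matrix (real reactions also depend on application order, elemental gauges, and more):

```python
from typing import Optional

# Unordered element pairs -> reaction, limited to the reactions named above.
REACTIONS = {
    frozenset({"Pyro", "Hydro"}): "Vaporize",
    frozenset({"Hydro", "Cryo"}): "Frozen",
    frozenset({"Hydro", "Electro"}): "Electro-Charged",
    frozenset({"Pyro", "Dendro"}): "Burning",
}

def reaction(applied: str, trigger: str) -> Optional[str]:
    """Look up the reaction for two elements, ignoring application order."""
    return REACTIONS.get(frozenset({applied, trigger}))

print(reaction("Hydro", "Electro"))  # Electro-Charged
print(reaction("Anemo", "Geo"))      # None: no pair reaction in this table
```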
-
To make the most of these interactions, you should learn how to switch between your characters quickly and effectively. You can switch between your characters by tapping on their icons at the bottom right corner of the screen, or by swiping left or right on the screen. You can also hold down on a character's icon to activate their elemental burst directly, without switching to them. You should practice switching between your characters and using their skills and bursts in different situations and combinations, to find out what works best for you.
-
How to level up your characters, weapons, and artifacts
-
Another important feature of Genshin Impact is its character progression system, which allows you to level up your characters, weapons, and artifacts using various materials and resources. You can obtain these materials and resources from different sources, such as enemies, chests, domains, bosses, events, quests, shops, and more. You should always keep an eye out for these sources and collect as much as you can, as they are essential for improving your characters' power and performance.
-
To level up your characters, you will need to use character EXP materials, such as Wanderer's Advice, Adventurer's Experience, or Hero's Wit. You can obtain these materials from enemies, chests, quests, events, or by using resin to claim rewards from Ley Line Outcrops. You can also ascend your characters once they reach their level cap (20/40/50/60/70/80/90), by using specific ascension materials that vary depending on the character's element and rarity. You can obtain these materials from enemies, domains, bosses, or by using resin to claim rewards from Trounce Domains or Hypostatic Symphony. You can also unlock and upgrade your characters' talents, which are their normal attack, elemental skill, and elemental burst, by using specific talent level-up materials that vary depending on the character's element and day of the week. You can obtain these materials from enemies, domains, bosses, or by using resin to claim rewards from Forsaken Rift or Taishan Mansion.
To level up your weapons, you will need to use weapon EXP materials, such as Enhancement Ore, Fine Enhancement Ore, or Mystic Enhancement Ore. You can obtain these materials from enemies, chests, quests, events, or by crafting them using ore materials that you can mine from various locations in the world. You can also ascend your weapons once they reach their level cap (20/40/50/60/70/80/90), by using specific ascension materials that vary depending on the weapon's type and rarity. You can obtain these materials from enemies, domains, bosses, or by using resin to claim rewards from Hidden Palace of Lianshan Formula or Domain of Forgery. You can also refine your weapons to increase their passive abilities, by using duplicate copies of the same weapon.
-
To level up your artifacts, you will need to use other artifacts as fodder, which will provide artifact EXP depending on their rarity and level. You can obtain artifacts from enemies, chests, domains, bosses, events, quests, or by using resin to claim rewards from various domains and bosses. You can also enhance your artifacts to improve their stats and effects, by using specific enhancement materials that vary depending on the artifact's set and rarity. You can obtain these materials from enemies, domains, bosses, or by using resin to claim rewards from Domain of Mastery or Peak of Vindagnyr.
-
How to explore the world and find secrets, chests, puzzles, and quests
-
Genshin Impact is a game that encourages you to explore its vast and beautiful world and discover its many secrets, chests, puzzles, and quests. You can use various methods and tools to traverse the world, such as gliding, swimming, climbing, riding mounts, using teleport waypoints, or using gadgets like the Wind Catcher or the Kamera. You can also use your elemental sight to reveal hidden clues or objects in the environment.
-
As you explore the world, you will encounter many things that will reward you with items, resources, primogems , adventure rank, or character EXP. Some of these things are:
-
Secrets: These are hidden or obscure locations or objects that require you to use your elemental skills, interact with the environment, or solve puzzles to reveal them. For example, you can find secret caves, islands, shrines, domains, or treasure hoards that contain valuable rewards.
-
Chests: These are containers that hold various items and resources that you can collect by opening them. There are different types of chests, such as common, exquisite, precious, luxurious, or seelie chests, that vary in rarity and quality of rewards. Some chests are easy to find, while others are hidden or guarded by enemies or puzzles.
-
Puzzles: These are challenges that test your logic, observation, and elemental skills. They usually involve activating mechanisms, aligning symbols, matching colors, or following clues to unlock rewards or secrets. For example, you can find puzzles involving seelies, statues, torches, pillars, or pressure plates.
-
Quests: These are tasks that involve following a storyline, completing objectives, or helping characters. There are different types of quests, such as archon quests, story quests, world quests, commission quests, or event quests, that vary in difficulty and reward. Some quests are mandatory, while others are optional or hidden.
-
-
You should explore the world as much as you can and find as many secrets, chests, puzzles, and quests as possible. This will not only enrich your gameplay experience and immersion but also help you progress faster and easier in the game.
-
Conclusion
-
Genshin Impact is a free-to-play open-world action RPG that offers a lot of fun and challenge for both PC and mobile gamers. If you want to play the latest version of the game offline on your PC or online on your mobile device, you should download data genshin impact versi terbaru from the official website or app store. You should also learn how to use the elemental interactions and character switching, how to level up your characters , weapons, and artifacts, and how to explore the world and find secrets, chests, puzzles, and quests. This will help you enjoy the game even more and become a better player.
-
We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!
-
FAQs
-
What are the new features in genshin impact versi terbaru?
-
Some of the new features in genshin impact versi terbaru are:
-
-
A new region: Dragonspine, a snowy mountain range that has a unique climate system, new enemies, new resources, and new secrets.
-
A new event: The Chalk Prince and the Dragon, which involves helping a mysterious young man named Albedo to investigate the secrets of Dragonspine and fight against a powerful ancient dragon.
-
A new character: Albedo, a Geo sword user who is the chief alchemist and captain of the Knights of Favonius' Investigation Team.
-
A new weapon: Festering Desire, a 4-star sword that can be obtained and refined through the event.
-
A new feature: Subzero Climate, which affects your exploration in Dragonspine by causing your Sheer Cold gauge to build up; if the gauge fills completely, your active character will steadily lose HP. You can use various methods to prevent or reduce the Sheer Cold effect, such as using heat sources, eating food, or using certain characters or items.
-
A new feature: Frostbearing Tree, which is a sacred tree that can be leveled up by offering Crimson Agate, a special resource found in Dragonspine. You can obtain various rewards from the tree, such as primogems, mora, enhancement materials, and more.
-
A new feature: Gadgets, which are useful items that can be crafted or obtained from various sources. Some of the gadgets are: Portable Waypoint, which allows you to create a temporary teleport point; Warming Bottle, which creates a heat source that reduces Sheer Cold; Treasure-Seeking Seelie, which helps you find nearby chests; and Kamera, which allows you to take screenshots with various filters and frames.
-
-
How to get more primogems and wishes in genshin impact?
-
Primogems are the premium currency in Genshin Impact that can be used to make wishes, which are gacha-style draws that can give you characters, weapons, or other items. Wishes can also be made using other currencies, such as Acquaint Fate or Intertwined Fate, which can be obtained from various sources or exchanged for primogems. There are different types of wishes: standard wishes, such as Wanderlust Invocation, which have a permanent pool of characters and weapons; and limited-time wishes, such as Character Event Wishes or Weapon Event Wishes, which have a higher chance of giving you specific characters or weapons. There are many ways to get more primogems and wishes in Genshin Impact, such as:
-
Completing quests, such as archon quests, story quests, world quests, commission quests, or event quests.
-
Opening chests, such as common, exquisite, precious, luxurious, or seelie chests.
-
Exploring the world and finding secrets, such as anemoculi, geoculi, electroculi, crimson agate, or oculi of other elements.
-
Leveling up your adventure rank, which gives you rewards every time you reach a new rank.
-
Leveling up your statues of the seven, which give you rewards every time you offer them oculi of their corresponding element.
-
Leveling up your frostbearing tree, which gives you rewards every time you offer it crimson agate.
-
Participating in events, such as web events, in-game events, or community events.
-
Claiming daily login rewards, such as Seize the Day or HoYoLAB Community.
-
Claiming achievements, which are tasks that reward you for completing certain milestones or challenges in the game.
-
Purchasing primogems or wishes with real money, such as through the top-up option, the monthly card option, or the battle pass option.
-
-
How to play co-op mode with friends in genshin impact?
-
Genshin Impact is a game that can be played solo or with friends in co-op mode. Co-op mode allows you to team up with up to three other players online and explore the world, complete quests, fight enemies, or join domains and events together. Co-op mode can also give you more rewards and fun than playing solo. To play co-op mode with friends in Genshin Impact, you will need to follow these steps:
-
Reach adventure rank 16 or higher. This is the minimum requirement to unlock co-op mode in the game.
-
Add your friends to your friend list. You can do this by tapping on the friend icon at the top left corner of the screen and entering their UID (user ID) number. You can also scan their QR code or use the nearby feature to find them.
-
Invite your friends to join your world or request to join their world. You can do this by tapping on their avatar in your friend list and selecting the invite or request option. You can also use the co-op icon at the top right corner of the screen to create or join a random co-op session.
-
Wait for your friends to accept your invitation or request. Once they do, you will be able to see them in your world or join their world. You can also chat with them using the chat icon at the bottom left corner of the screen.
-
Select one character from your party to use in co-op mode. You can only use one character at a time in co-op mode, and you cannot switch between them. You can also change your character by using a statue of the seven or a teleport waypoint.
-
Enjoy playing co-op mode with your friends. You can explore the world, complete quests, fight enemies, or join domains and events together. You can also share your items, resources, or rewards with your friends by using the co-op option in the inventory menu.
-
-
How to change the language and voice-over in genshin impact?
-
Genshin Impact is a game that supports multiple languages and voice-overs, which you can change according to your preference. The game currently supports 13 text languages (English, Simplified Chinese, Traditional Chinese, Japanese, Korean, French, German, Spanish, Portuguese, Russian, Thai, Vietnamese, and Indonesian) and 4 voice-over languages (English, Chinese, Japanese, and Korean).
-
To change the language and voice-over in Genshin Impact, you will need to follow these steps:
-
-
Go to the main menu by tapping on the menu icon at the top left corner of the screen.
-
Go to the settings menu by tapping on the gear icon at the bottom right corner of the screen.
-
Go to the language menu by tapping on the globe icon at the top of the screen.
-
Select the text language and voice-over language that you want from the drop-down menus.
-
Confirm your selection by tapping on "Apply" at the bottom of the screen.
-
Restart the game to apply the changes.
-
-
How to contact customer service or report bugs in genshin impact?
-
If you encounter any problems or issues while playing Genshin Impact, such as bugs, glitches, errors, or crashes, you can contact customer service or report bugs in the game. Customer service can help you with account-related issues, such as login problems, password recovery, or account security. Bug reports can help the developers fix any errors or improve any aspects of the game.
-
To contact customer service or report bugs in Genshin Impact, you will need to follow these steps:
-
-
Go to the main menu by tapping on the menu icon at the top left corner of the screen.
-
Go to the feedback menu by tapping on the feedback icon at the bottom left corner of the screen.
-
Select either "Customer Service" or "Submit Feedback" depending on your issue.
-
For customer service, you can either chat with a live agent or send an email with your inquiry. You will need to provide your UID (user ID) number and other relevant information.
-
For bug reports, you can either use the quick feedback option or fill out a detailed feedback form. You will need to provide your UID (user ID) number and other relevant information. You can also attach screenshots or videos to illustrate your issue.
-
Wait for a response from customer service or a confirmation from bug reports. You can also check the status of your feedback by tapping on "My Feedback" in the feedback menu.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Marvel Puzzle Quest MOD APK and Unleash Your Superpowers.md b/spaces/fatiXbelha/sd/Download Marvel Puzzle Quest MOD APK and Unleash Your Superpowers.md
deleted file mode 100644
index 1f82e0770225a2a8a9cfd1d3eb31891b689f820d..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Marvel Puzzle Quest MOD APK and Unleash Your Superpowers.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Marvel Puzzle Quest Hero RPG Mod Apk: A Review
-
If you are a fan of Marvel comics and movies, you might have heard of Marvel Puzzle Quest, a popular mobile game that combines match-three puzzles with role-playing elements. In this game, you can create your own team of superheroes and villains, collect and upgrade their abilities, and fight against various enemies in an epic storyline. But what if you want to enjoy the game without spending real money or waiting for energy refills? That's where Marvel Puzzle Quest mod apk comes in. In this article, we will review what Marvel Puzzle Quest is, what the mod apk version offers, how to download and install it, and what are its features and benefits. We will also answer some frequently asked questions about the game and the mod apk file.
What is Marvel Puzzle Quest?
-
Marvel Puzzle Quest is a mobile game that was released in 2013 by D3 Go! and Demiurge Studios. It is based on the Marvel Comics universe and features characters from various franchises, such as Avengers, X-Men, Spider-Man, Guardians of the Galaxy, and more. The gameplay is similar to other match-three puzzle games, such as Candy Crush Saga or Bejeweled, but with a twist. You have to match colored gems on a board to generate power for your characters, who can then use their special abilities to attack or defend. You can also use items, such as health packs or boosts, to enhance your performance.
-
A Marvel-themed adventure with your favorite heroes and villains
-
Marvel Puzzle Quest has a rich and immersive storyline that follows the events of various Marvel comics and movies. You can choose your own team of heroes and villains from over 200 characters, each with their own skills, stats, and personalities. You can also customize your team by changing their costumes, upgrading their abilities, and equipping them with supports. You can play solo or co-op missions, join alliances, participate in events, compete in tournaments, and more. You can also unlock new characters, stories, and rewards as you progress through the game.
-
A free-to-play game with optional in-app purchases
-
Marvel Puzzle Quest is free to download and play on Android and iOS devices. However, like many other free-to-play games, it also has optional in-app purchases that can enhance your gaming experience. You can buy currency, such as coins or hero points, to unlock new characters, upgrade your abilities, or buy items. You can also buy VIP passes or bundles that offer extra benefits, such as daily rewards, bonus resources, or exclusive content. However, these purchases are not necessary to enjoy the game, as you can also earn currency and items by playing regularly.
-
What is the mod apk version of Marvel Puzzle Quest?
-
A modified version of the original game that offers unlimited money and resources
-
A way to enjoy the game without spending real money or waiting for energy refills
-
Marvel Puzzle Quest mod apk is a modified version of the original game that offers unlimited money and resources. This means that you can unlock and upgrade any character you want, buy any item you need, and play as much as you like without worrying about running out of energy or currency. You can also access all the content and features of the game without any restrictions or limitations. This way, you can enjoy the game to the fullest and have more fun and excitement.
-
A risk-free download that does not require rooting or jailbreaking your device
-
Another advantage of Marvel Puzzle Quest mod apk is that it is a risk-free download that does not require rooting or jailbreaking your device. Rooting or jailbreaking your device can expose it to security risks, void your warranty, or cause compatibility issues with other apps. However, with Marvel Puzzle Quest mod apk, you do not need to do any of that. You just need to download the mod apk file from a trusted source, install it on your device, and start playing. You do not need to modify your device settings or permissions, and you can uninstall the mod apk file anytime you want.
-
-
How to download and install Marvel Puzzle Quest mod apk?
-
A simple and easy process that takes only a few minutes
-
Downloading and installing Marvel Puzzle Quest mod apk is a simple and easy process that takes only a few minutes. You do not need any special skills or knowledge to do it. Here are the steps you need to follow:
-
-
Go to a reliable website that offers Marvel Puzzle Quest mod apk files, such as [ModApkStore] or [ApkPure].
-
Find the latest version of Marvel Puzzle Quest mod apk and click on the download button.
-
Wait for the download to finish and locate the mod apk file on your device.
-
Tap on the mod apk file and allow the installation from unknown sources if prompted.
-
Follow the instructions on the screen and wait for the installation to complete.
-
Launch the game and enjoy unlimited money and resources.
-
What are the features and benefits of Marvel Puzzle Quest mod apk?
-
A list of the main features and benefits of the mod apk version of the game
-
Marvel Puzzle Quest mod apk has many features and benefits that make it a better choice than the original version of the game. Here are some of them:
-
-
Unlimited money and resources: You can get unlimited coins, hero points, command points, ISO-8, and other resources that you can use to unlock and upgrade your characters, buy items, and play without limits.
-
All characters unlocked: You can access all the characters in the game, including the rare and legendary ones, without having to spend real money or wait for events. You can also choose any costume or support for your characters.
-
No ads: You can enjoy the game without any annoying ads or pop-ups that can interrupt your gameplay or waste your time.
-
No energy system: You can play as much as you want without having to wait for your energy to refill or buy energy packs. You can also skip the timers for missions and events.
-
High-quality graphics and sound: You can experience the game with high-quality graphics and sound that enhance the immersion and excitement. You can also adjust the settings according to your preferences and device performance.
-
-
A comparison with the original version of the game and other similar games
-
Marvel Puzzle Quest mod apk is a superior version of the game compared to the original version and other similar games. Here are some reasons why:
-
-
It offers more freedom and flexibility: You can play the game however you want, without any restrictions or limitations. You can customize your team, choose your missions, participate in events, and more. You can also experiment with different strategies and combinations without worrying about losing resources or progress.
-
It saves you time and money: You do not have to spend real money or wait for hours to enjoy the game. You can get everything you need for free and instantly. You can also avoid the hassle of watching ads or completing surveys to earn currency or items.
-
It is more fun and enjoyable: You can have more fun and enjoyment with the game, as you can access all the content and features of the game. You can also challenge yourself with harder levels and opponents, or relax with easier ones. You can also share your achievements and experiences with your friends or other players online.
-
-
A summary of the pros and cons of using the mod apk version of the game
-
Marvel Puzzle Quest mod apk has many advantages, but it also has some disadvantages that you should be aware of. Here is a summary of the pros and cons of using the mod apk version of the game:
| Pros | Cons |
| --- | --- |
| Unlimited money and resources | Possible compatibility issues with some devices |
| All characters unlocked | Possible security risks from unknown sources |
| No ads | Possible legal issues from violating terms of service |
| No energy system | Possible loss of data or progress if not backed up |
| High-quality graphics and sound | Possible boredom or loss of interest if too easy |
Conclusion and FAQs
-
A brief recap of the main points of the article and a call to action
-
In conclusion, Marvel Puzzle Quest is a great mobile game that combines match-three puzzles with RPG elements. It is based on the Marvel Comics universe and features over 200 characters from various franchises. It is free to download and play, but it also has optional in-app purchases that can enhance your gaming experience. However, if you want to enjoy the game without spending real money or waiting for energy refills, you can try Marvel Puzzle Quest mod apk. This is a modified version of the original game that offers unlimited money and resources, all characters unlocked, no ads, no energy system, and high-quality graphics and sound. It is a simple and easy process to download and install it on your device, but you should also be aware of the possible consequences of using mod apk files. If you are interested in trying Marvel Puzzle Quest mod apk, you can follow the steps we provided in this article and start playing today.
-
If you have any questions or feedback about Marvel Puzzle Quest mod apk, feel free to leave a comment below. We would love to hear from you.
-
A list of five unique FAQs with answers
-
-
Q: Is Marvel Puzzle Quest mod apk safe to use?
-
A: Marvel Puzzle Quest mod apk is generally safe to use, as long as you download it from a trusted source and scan it for viruses before installing it on your device. However, you should also be careful about giving permissions to unknown apps, as they might contain malware or spyware that can harm your device or steal your personal information. You should also be aware that using mod apk files might violate the terms of service of the original game and result in a ban or a lawsuit.
-
Q: How can I update Marvel Puzzle Quest mod apk?
-
A: Marvel Puzzle Quest mod apk is usually updated by the developers or the modders who create it. You can check the website where you downloaded it from for any updates or notifications. You can also enable the auto-update feature on your device settings, if available. However, you should also backup your data and progress before updating, as some updates might cause errors or glitches.
-
Q: Can I play Marvel Puzzle Quest mod apk online or offline?
-
A: Marvel Puzzle Quest mod apk can be played both online and offline. You can play online to access the multiplayer features, such as alliances, events, tournaments, and leaderboards. You can also play offline to enjoy the solo missions and stories. However, you might need an internet connection to download or update the game, or to sync your data and progress with the cloud.
-
Q: Can I play Marvel Puzzle Quest mod apk with my friends or other players?
-
A: Yes, you can play Marvel Puzzle Quest mod apk with your friends or other players. You can join or create alliances, chat with other players, send and receive gifts, and cooperate or compete in various modes and challenges. However, you should also be respectful and fair to other players, as some might not appreciate or approve of using mod apk files.
-
Q: What are some tips and tricks for playing Marvel Puzzle Quest mod apk?
-
A: Some tips and tricks for playing Marvel Puzzle Quest mod apk are:
-
-
Choose your team wisely: You can create your own team of heroes and villains from over 200 characters, each with their own strengths and weaknesses. You should choose your team based on their abilities, synergies, and roles. You should also balance your team between offense and defense, and between different colors and classes.
-
Match smartly: You can match colored gems on the board to generate power for your characters, who can then use their special abilities to attack or defend. You should match smartly by creating combos, cascades, or critical tiles. You should also match strategically by targeting the enemy's weak points, denying their power, or creating opportunities for your team.
-
Upgrade your characters: You can upgrade your characters by leveling them up, increasing their ranks, or enhancing their abilities. You can also equip them with supports or costumes that offer extra bonuses or effects. You should upgrade your characters regularly to improve their performance and unlock new features.
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy Amazing Graphics and Sound Effects in Car Simulator 2 APK.md b/spaces/fatiXbelha/sd/Enjoy Amazing Graphics and Sound Effects in Car Simulator 2 APK.md
deleted file mode 100644
index 27de588e3ec6ba41e81f1189c58c77ff8ae882df..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy Amazing Graphics and Sound Effects in Car Simulator 2 APK.md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
Car Simulator 2 Apkpure: A Review of One of the Top Car Racing Games
-
Do you love car racing games? Do you want to experience the thrill of driving realistic cars on open-world maps? Do you want to compete with other players online and show off your skills? If you answered yes to any of these questions, then you should try Car Simulator 2 Apkpure, one of the top car racing games for Android devices. In this article, we will review Car Simulator 2 Apkpure, its features, pros and cons, and some tips and tricks for playing it.
-
What is Car Simulator 2 Apkpure?
-
Car Simulator 2 Apkpure is a free car racing game developed by Oppana Games. It is available for download from the APKPure website, which is a platform that offers safe and fast downloads of Android apps and games. Car Simulator 2 Apkpure is the sequel to the popular Car Simulator game, which has over 10 million downloads on Google Play Store.
Features of Car Simulator 2 Apkpure
-
Car Simulator 2 Apkpure has many features that make it an exciting and realistic car racing game. Some of these features are:
-
-
A huge open-world map with different locations, such as city, countryside, airport, desert, and more.
-
More than 20 different cars to choose from, each with its own characteristics and performance.
-
A realistic driving physics system that simulates engine sounds, brakes, suspension, and damage.
-
A dynamic day-night cycle and weather effects that affect the gameplay.
-
A multiplayer mode that allows you to play with friends or other players online.
-
A career mode that lets you complete missions and earn money.
-
A garage where you can upgrade and customize your car and driver.
-
A gas station where you can refuel your car.
-
A police system that will chase you if you break the law.
-
-
How to download and install Car Simulator 2 Apkpure
-
To download and install Car Simulator 2 Apkpure, you need to follow these steps:
-
-
Go to the APKPure website and search for Car Simulator 2.
-
Click on the download button and wait for the APK file to be downloaded.
-
Once the download is complete, open the APK file and allow it to install on your device.
-
After the installation is done, you can launch the game and enjoy it.
-
-
Why should you play Car Simulator 2 Apkpure?
-
Car Simulator 2 Apkpure is a fun and addictive car racing game that will keep you entertained for hours. Here are some of the reasons why you should play it:
-
Pros of Car Simulator 2 Apkpure
-
-
It has amazing graphics and sound effects that create a realistic driving experience.
-
It has a variety of cars, locations, missions, and modes that offer a lot of gameplay options.
-
It has a multiplayer mode that lets you play with other people online and chat with them.
-
It has a simple and intuitive control system that is easy to use.
-
It is free to download and play, with no in-app purchases or ads.
-
-
Cons of Car Simulator 2 Apkpure
-
-
It requires a stable internet connection, especially for the multiplayer mode.
-
It has some bugs and glitches that can affect the gameplay.
-
It consumes a lot of battery and storage space on your device.
-
-
How to customize your car and driver in Car Simulator 2 Apkpure
-
To customize your driver, you need to go to the menu and select the driver option. You can then change various aspects of your driver's appearance, such as gender, skin tone, hair, clothes, shoes, hats, glasses, and more. You can also choose a name and a license plate for your driver.
Conclusion
-
Car Simulator 2 Apkpure is a great car racing game that offers a realistic and immersive driving experience. It has many features, such as a huge open-world map, a variety of cars, a realistic physics system, a multiplayer mode, a career mode, a garage, and a customization option. It is free to download and play, with no in-app purchases or ads. It is also easy to download and install from the APKPure website. However, it also has some drawbacks, such as requiring a stable internet connection, having some bugs and glitches, and consuming a lot of battery and storage space. Overall, Car Simulator 2 Apkpure is a fun and addictive game that will keep you entertained for hours.
-
FAQs
-
Here are some of the frequently asked questions about Car Simulator 2 Apkpure:
-
| Question | Answer |
| --- | --- |
| What are the minimum requirements to play Car Simulator 2 Apkpure? | You need an Android device running Android 4.4 or later with at least 1 GB of RAM. |
| How can I contact the developers of Car Simulator 2 Apkpure? | You can contact them by email at oppanagames@gmail.com or by visiting their website. |
| How can I report a bug or a problem with Car Simulator 2 Apkpure? | You can report it by leaving a comment on the APKPure website or by sending an email to the developers. |
| How can I get more information about Car Simulator 2 Apkpure? | You can get more information by visiting the APKPure website or by following the official Facebook page of the game. |
| Is Car Simulator 2 Apkpure safe to download and play? | Yes, Car Simulator 2 Apkpure is safe to download and play. APKPure is a trusted platform that verifies and scans all the apps and games before uploading them. However, you should always be careful when downloading apps from unknown sources and check the permissions they require. |
- References: https://apkpure.com/car-simulator-2/com.oppanagames.car.simulator · https://www.oppanagames.com/ · https://www.facebook.com/OppanaGames/
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chat3/core_functional.py b/spaces/fb700/chat3/core_functional.py
deleted file mode 100644
index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000
--- a/spaces/fb700/chat3/core_functional.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# The 'primary' color corresponds to primary_hue in theme.py
-# The 'secondary' color corresponds to neutral_hue in theme.py
-# The 'stop' color corresponds to color_er in theme.py
-# The default button color is secondary
-from toolbox import clear_line_break
-
-
-def get_core_functions():
- return {
- "英语学术润色": {
- # preamble (prepended to the user's text)
- "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
- r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
- r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
- # postscript (appended after the user's text)
- "Suffix": r"",
- "Color": r"secondary", # button color
- },
- "中文学术润色": {
- "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
- r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
- "Suffix": r"",
- },
- "查找语法错误": {
- "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
- r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
- r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
- r"put the original text the first column, " +
- r"put the corrected text in the second column and highlight the key words you fixed.""\n"
- r"Example:""\n"
- r"Paragraph: How is you? Do you knows what is it?""\n"
- r"| Original sentence | Corrected sentence |""\n"
- r"| :--- | :--- |""\n"
- r"| How **is** you? | How **are** you? |""\n"
- r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
- r"Below is a paragraph from an academic paper. "
- r"You need to report all grammar and spelling mistakes as the example before."
- + "\n\n",
- "Suffix": r"",
- "PreProcess": clear_line_break, # 预处理:清除换行符
- },
- "中译英": {
- "Prefix": r"Please translate following sentence to English:" + "\n\n",
- "Suffix": r"",
- },
- "学术中英互译": {
- "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
- r"I will provide you with some paragraphs in one language " +
- r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
- r"Do not repeat the original provided paragraphs after translation. " +
- r"You should use artificial intelligence tools, " +
- r"such as natural language processing, and rhetorical knowledge " +
- r"and experience about effective writing techniques to reply. " +
- r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
- "Suffix": "",
- "Color": "secondary",
- },
- "英译中": {
- "Prefix": r"翻译成地道的中文:" + "\n\n",
- "Suffix": r"",
- },
- "找图片": {
- "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
- r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
- "Suffix": r"",
- },
- "解释代码": {
- "Prefix": r"请解释以下代码:" + "\n```\n",
- "Suffix": "\n```\n",
- },
- }
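The dictionary above is plain data: each entry wraps the user's input between a "Prefix" and a "Suffix", optionally running a "PreProcess" callable first. A minimal sketch of how such an entry could be applied (the helper name apply_core_function is hypothetical; the real project wires these entries into its UI):

def apply_core_function(entry: dict, text: str) -> str:
    # Run the optional preprocessing step first (e.g. clear_line_break
    # for the grammar-checking entry), then wrap the text.
    preprocess = entry.get("PreProcess")
    if preprocess is not None:
        text = preprocess(text)
    return entry.get("Prefix", "") + text + entry.get("Suffix", "")

# Example with a self-contained entry mirroring the "英译中" item above:
entry = {"Prefix": "翻译成地道的中文:\n\n", "Suffix": ""}
print(apply_core_function(entry, "Hello, world."))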
diff --git "a/spaces/fb700/chat3/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/fb700/chat3/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py"
deleted file mode 100644
index da03686f77ec1409b1bf212bd0e711c86c5e35f8..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chat3/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py"
+++ /dev/null
@@ -1,176 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-fast_debug = False
-
-class PaperFileGroup():
- def __init__(self):
- self.file_paths = []
- self.file_contents = []
- self.sp_file_contents = []
- self.sp_file_index = []
- self.sp_file_tag = []
-
- # count_token
- import tiktoken
- from toolbox import get_conf
- enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- self.get_token_num = get_token_num
-
- def run_file_split(self, max_token_limit=1900):
- """
- Split long text into smaller segments
- """
- for index, file_content in enumerate(self.file_contents):
- if self.get_token_num(file_content) < max_token_limit:
- self.sp_file_contents.append(file_content)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index])
- else:
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
- for j, segment in enumerate(segments):
- self.sp_file_contents.append(segment)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
-
- print('Segmentation: done')
-
-def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
- import time, os, re
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-
-
- # <-------- Read the LaTeX files and strip all comments ---------->
- pfg = PaperFileGroup()
-
- for index, fp in enumerate(file_manifest):
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- # regular expression matching LaTeX comments
- comment_pattern = r'%.*'
- # find comments with the regex and replace them with an empty string
- clean_tex_content = re.sub(comment_pattern, '', file_content)
- # record the text with comments removed
- pfg.file_paths.append(fp)
- pfg.file_contents.append(clean_tex_content)
-
- # <-------- Split overly long LaTeX files ---------->
- pfg.run_file_split(max_token_limit=1024)
- n_split = len(pfg.sp_file_contents)
-
- # <-------- Extract the abstract (block below is disabled) ---------->
- # if language == 'en':
- # abs_extract_inputs = f"Please write an abstract for this paper"
-
- # # single thread: fetch the paper's meta information
- # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
- # inputs=abs_extract_inputs,
- # inputs_show_user=f"正在抽取摘要信息。",
- # llm_kwargs=llm_kwargs,
- # chatbot=chatbot, history=[],
- # sys_prompt="Your job is to collect information from materials。",
- # )
-
- # <-------- Start multi-threaded polishing ---------->
- if language == 'en':
- inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
- sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
- elif language == 'zh':
- inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
- sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
-
-
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=inputs_array,
- inputs_show_user_array=inputs_show_user_array,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[""] for _ in range(n_split)],
- sys_prompt_array=sys_prompt_array,
- # max_workers=5, # cap on parallel tasks: at most 5 run at once, the rest wait in queue
- scroller_max_len = 80
- )
-
- # <-------- Collect the results and exit ---------->
- create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
- res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
- history = gpt_response_collection
- chatbot.append((f"{fp}完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
-@CatchException
-def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- # basic info: feature description and contributor
- chatbot.append([
- "函数插件功能?",
- "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # try importing dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- history = [] # clear the history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en')
-
-
-
-
-
-
-@CatchException
-def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- # basic info: feature description and contributor
- chatbot.append([
- "函数插件功能?",
- "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # try importing dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- history = [] # clear the history to avoid input overflow
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh')
\ No newline at end of file
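One caveat in the plugin above: the comment regex %.* also strips escaped percent signs such as 50\%. A small standalone sketch (not from the repository) of a safer variant using a negative lookbehind:

import re

def strip_latex_comments(tex: str) -> str:
    # Drop % comments but keep escaped percent signs (\%), so that text
    # like "95\% accuracy" survives. A literal backslash immediately
    # followed by a comment (\\% ...) is a rare case this misses.
    return re.sub(r"(?<!\\)%.*", "", tex)

sample = r"Accuracy was 95\% overall. % TODO: cite the source"
print(strip_latex_comments(sample))  # -> Accuracy was 95\% overall.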
diff --git a/spaces/fffiloni/ControlNet-Video/share_btn.py b/spaces/fffiloni/ControlNet-Video/share_btn.py
deleted file mode 100644
index 1e961c7a8b71f44a40c305aaab3d2d5068f44cba..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/ControlNet-Video/share_btn.py
+++ /dev/null
@@ -1,86 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- async function getVideoBlobFile(videoEL){
- const res = await fetch(videoEL.src);
- const blob = await res.blob();
- const videoId = Date.now() % 200;
- const fileName = `vid-pix2pix-${videoId}.mp4`;
- const videoBlob = new File([blob], fileName, { type: 'video/mp4' });
- console.log(videoBlob);
- return videoBlob;
- }
-
- const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app');
- const captionTxt = gradioEl.querySelector('#prompt-in textarea').value;
- const controlTask = gradioEl.querySelector('#controltask-in select').value;
- const seedValue = gradioEl.querySelector('#seed-in input').value;
- const inputVidEl = gradioEl.querySelector('#input-vid video');
- const outputVideo = gradioEl.querySelector('#video-output video');
- const outputPrepVideo = gradioEl.querySelector('#prep-video-output video');
-
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputVideo){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
- const inputFile = await getVideoBlobFile(inputVidEl);
- const urlInputVid = await uploadFile(inputFile);
-
- const prepVideoOutFile = await getVideoBlobFile(outputPrepVideo);
- const dataOutputPrepVid = await uploadFile(prepVideoOutFile);
-
- const videoOutFile = await getVideoBlobFile(outputVideo);
- const dataOutputVid = await uploadFile(videoOutFile);
-
- const descriptionMd = `
-#### Settings
-Prompt: ${captionTxt}
-Control Task: ${controlTask} • Seed: ${seedValue}
-
-#### Video input:
-${urlInputVid}
-
-#### Preprocessor output:
-${dataOutputPrepVid}
-
-#### ControlNet result:
-${dataOutputVid}
-`;
- const params = new URLSearchParams({
- title: captionTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/ControlNet-Video/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
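For reference, a rough Python sketch of the upload step performed by share_js above, assuming the https://huggingface.co/uploads endpoint accepts the same raw POST from non-browser clients (an assumption; only the browser flow appears in the source):

import requests

UPLOAD_URL = "https://huggingface.co/uploads"  # endpoint used by share_js

def upload_file(path: str, mime_type: str = "video/mp4") -> str:
    # POST the raw bytes with the same headers as the fetch() call;
    # the response body is the public URL of the uploaded file.
    with open(path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            data=f,
            headers={"Content-Type": mime_type,
                     "X-Requested-With": "XMLHttpRequest"},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.text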
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/dom-events.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/dom-events.d.ts
deleted file mode 100644
index b9c1c3aa4f0d337eb151caf6ac77306ed739acb8..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/dom-events.d.ts
+++ /dev/null
@@ -1,126 +0,0 @@
-export {}; // Don't export anything!
-
-//// DOM-like Events
-// NB: The Event / EventTarget / EventListener implementations below were copied
-// from lib.dom.d.ts, then edited to reflect Node's documentation at
-// https://nodejs.org/api/events.html#class-eventtarget.
-// Please read that link to understand important implementation differences.
-
-// This conditional type will be the existing global Event in a browser, or
-// the copy below in a Node environment.
-type __Event = typeof globalThis extends { onmessage: any, Event: any }
-? {}
-: {
- /** This is not used in Node.js and is provided purely for completeness. */
- readonly bubbles: boolean;
- /** Alias for event.stopPropagation(). This is not used in Node.js and is provided purely for completeness. */
- cancelBubble: () => void;
- /** True if the event was created with the cancelable option */
- readonly cancelable: boolean;
- /** This is not used in Node.js and is provided purely for completeness. */
- readonly composed: boolean;
- /** Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness. */
- composedPath(): [EventTarget?]
- /** Alias for event.target. */
- readonly currentTarget: EventTarget | null;
- /** Is true if cancelable is true and event.preventDefault() has been called. */
- readonly defaultPrevented: boolean;
- /** This is not used in Node.js and is provided purely for completeness. */
- readonly eventPhase: 0 | 2;
- /** The `AbortSignal` "abort" event is emitted with `isTrusted` set to `true`. The value is `false` in all other cases. */
- readonly isTrusted: boolean;
- /** Sets the `defaultPrevented` property to `true` if `cancelable` is `true`. */
- preventDefault(): void;
- /** This is not used in Node.js and is provided purely for completeness. */
- returnValue: boolean;
- /** Alias for event.target. */
- readonly srcElement: EventTarget | null;
- /** Stops the invocation of event listeners after the current one completes. */
- stopImmediatePropagation(): void;
- /** This is not used in Node.js and is provided purely for completeness. */
- stopPropagation(): void;
- /** The `EventTarget` dispatching the event */
- readonly target: EventTarget | null;
- /** The millisecond timestamp when the Event object was created. */
- readonly timeStamp: number;
- /** Returns the type of event, e.g. "click", "hashchange", or "submit". */
- readonly type: string;
-};
-
-// See comment above explaining conditional type
-type __EventTarget = typeof globalThis extends { onmessage: any, EventTarget: any }
-? {}
-: {
- /**
- * Adds a new handler for the `type` event. Any given `listener` is added only once per `type` and per `capture` option value.
- *
- * If the `once` option is true, the `listener` is removed after the next time a `type` event is dispatched.
- *
- * The `capture` option is not used by Node.js in any functional way other than tracking registered event listeners per the `EventTarget` specification.
- * Specifically, the `capture` option is used as part of the key when registering a `listener`.
- * Any individual `listener` may be added once with `capture = false`, and once with `capture = true`.
- */
- addEventListener(
- type: string,
- listener: EventListener | EventListenerObject,
- options?: AddEventListenerOptions | boolean,
- ): void;
- /** Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise. */
- dispatchEvent(event: Event): boolean;
- /** Removes the event listener in target's event listener list with the same type, callback, and options. */
- removeEventListener(
- type: string,
- listener: EventListener | EventListenerObject,
- options?: EventListenerOptions | boolean,
- ): void;
-};
-
-interface EventInit {
- bubbles?: boolean;
- cancelable?: boolean;
- composed?: boolean;
-}
-
-interface EventListenerOptions {
- /** Not directly used by Node.js. Added for API completeness. Default: `false`. */
- capture?: boolean;
-}
-
-interface AddEventListenerOptions extends EventListenerOptions {
- /** When `true`, the listener is automatically removed when it is first invoked. Default: `false`. */
- once?: boolean;
- /** When `true`, serves as a hint that the listener will not call the `Event` object's `preventDefault()` method. Default: false. */
- passive?: boolean;
-}
-
-interface EventListener {
- (evt: Event): void;
-}
-
-interface EventListenerObject {
- handleEvent(object: Event): void;
-}
-
-import {} from 'events'; // Make this an ambient declaration
-declare global {
- /** An event which takes place in the DOM. */
- interface Event extends __Event {}
- var Event: typeof globalThis extends { onmessage: any, Event: infer T }
- ? T
- : {
- prototype: __Event;
- new (type: string, eventInitDict?: EventInit): __Event;
- };
-
- /**
- * EventTarget is a DOM interface implemented by objects that can
- * receive events and may have listeners for them.
- */
- interface EventTarget extends __EventTarget {}
- var EventTarget: typeof globalThis extends { onmessage: any, EventTarget: infer T }
- ? T
- : {
- prototype: __EventTarget;
- new (): __EventTarget;
- };
-}
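The typings above describe Node's EventTarget semantics: a listener is registered at most once per (type, listener, capture) combination, and a "once" listener is removed after its first invocation. A toy Python sketch of that bookkeeping, purely for illustration:

class MiniEventTarget:
    def __init__(self):
        # (event type, listener, capture) -> "once" flag; the tuple key
        # enforces one registration per type/listener/capture combination.
        self._listeners = {}

    def add_event_listener(self, type_, listener, once=False, capture=False):
        self._listeners.setdefault((type_, listener, capture), once)

    def remove_event_listener(self, type_, listener, capture=False):
        self._listeners.pop((type_, listener, capture), None)

    def dispatch_event(self, type_, event=None):
        for (t, listener, capture), once in list(self._listeners.items()):
            if t != type_:
                continue
            if once:  # "once" listeners are removed before the next dispatch
                self.remove_event_listener(t, listener, capture)
            listener(event)

target = MiniEventTarget()
target.add_event_listener("abort", print, once=True)
target.dispatch_event("abort", "fired")   # prints "fired"
target.dispatch_event("abort", "again")   # no listener left, prints nothing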
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/index.js
deleted file mode 100644
index 5b5e741279d4b800b0c408c5efbac8de6ece450b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/index.js
+++ /dev/null
@@ -1,149 +0,0 @@
-/*!
- * vary
- * Copyright(c) 2014-2017 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-'use strict'
-
-/**
- * Module exports.
- */
-
-module.exports = vary
-module.exports.append = append
-
-/**
- * RegExp to match field-name in RFC 7230 sec 3.2
- *
- * field-name = token
- * token = 1*tchar
- * tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*"
- * / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~"
- * / DIGIT / ALPHA
- * ; any VCHAR, except delimiters
- */
-
-var FIELD_NAME_REGEXP = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/
-
-/**
- * Append a field to a vary header.
- *
- * @param {String} header
- * @param {String|Array} field
- * @return {String}
- * @public
- */
-
-function append (header, field) {
- if (typeof header !== 'string') {
- throw new TypeError('header argument is required')
- }
-
- if (!field) {
- throw new TypeError('field argument is required')
- }
-
- // get fields array
- var fields = !Array.isArray(field)
- ? parse(String(field))
- : field
-
- // assert on invalid field names
- for (var j = 0; j < fields.length; j++) {
- if (!FIELD_NAME_REGEXP.test(fields[j])) {
- throw new TypeError('field argument contains an invalid header name')
- }
- }
-
- // existing, unspecified vary
- if (header === '*') {
- return header
- }
-
- // enumerate current values
- var val = header
- var vals = parse(header.toLowerCase())
-
- // unspecified vary
- if (fields.indexOf('*') !== -1 || vals.indexOf('*') !== -1) {
- return '*'
- }
-
- for (var i = 0; i < fields.length; i++) {
- var fld = fields[i].toLowerCase()
-
- // append value (case-preserving)
- if (vals.indexOf(fld) === -1) {
- vals.push(fld)
- val = val
- ? val + ', ' + fields[i]
- : fields[i]
- }
- }
-
- return val
-}
-
-/**
- * Parse a vary header into an array.
- *
- * @param {String} header
- * @return {Array}
- * @private
- */
-
-function parse (header) {
- var end = 0
- var list = []
- var start = 0
-
- // gather tokens
- for (var i = 0, len = header.length; i < len; i++) {
- switch (header.charCodeAt(i)) {
- case 0x20: /* */
- if (start === end) {
- start = end = i + 1
- }
- break
- case 0x2c: /* , */
- list.push(header.substring(start, end))
- start = end = i + 1
- break
- default:
- end = i + 1
- break
- }
- }
-
- // final token
- list.push(header.substring(start, end))
-
- return list
-}
-
-/**
- * Mark that a request is varied on a header field.
- *
- * @param {Object} res
- * @param {String|Array} field
- * @public
- */
-
-function vary (res, field) {
- if (!res || !res.getHeader || !res.setHeader) {
- // quack quack
- throw new TypeError('res argument is required')
- }
-
- // get existing header
- var val = res.getHeader('Vary') || ''
- var header = Array.isArray(val)
- ? val.join(', ')
- : String(val)
-
- // set new header
- if ((val = append(header, field))) {
- res.setHeader('Vary', val)
- }
-}
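For comparison, a compact Python sketch of the append() logic above: validate field names against the RFC 7230 token grammar, short-circuit on "*", and append only values not already present, matching case-insensitively while preserving the original case:

import re

TOKEN_RE = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")  # field-name, RFC 7230

def append_vary(header: str, fields) -> str:
    if isinstance(fields, str):
        fields = [f.strip() for f in fields.split(",")]
    for f in fields:
        if not TOKEN_RE.match(f):
            raise ValueError("field argument contains an invalid header name")
    if header == "*":
        return header  # an existing unspecified vary wins
    existing = [v.strip().lower() for v in header.split(",")] if header else []
    if "*" in fields or "*" in existing:
        return "*"
    out = header
    for f in fields:
        if f.lower() not in existing:
            existing.append(f.lower())
            out = out + ", " + f if out else f
    return out

print(append_vary("Accept", ["Accept-Encoding", "accept"]))
# -> "Accept, Accept-Encoding" (case preserved, duplicate skipped)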
diff --git a/spaces/firefighter/PdfSumGPT/utils/truncate.py b/spaces/firefighter/PdfSumGPT/utils/truncate.py
deleted file mode 100644
index d774b2e6045b620ca6441a5a6d51e64951d228e3..0000000000000000000000000000000000000000
--- a/spaces/firefighter/PdfSumGPT/utils/truncate.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import tiktoken
-
-encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
-
-
-def truncate_string(s, max_length=1024) -> str:
- e = encoding.encode(s)[:max_length]
- s = encoding.decode(e)
- return s
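A quick usage sketch for the helper above (requires `pip install tiktoken`); the point is that max_length counts tokens, not characters, so the clipped text always decodes back to at most that many tokens:

import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "The quick brown fox jumps over the lazy dog. " * 200
tokens = encoding.encode(text)
print(len(text), "chars,", len(tokens), "tokens")

clipped = encoding.decode(tokens[:1024])  # same slice truncate_string uses
print(len(encoding.encode(clipped)))      # <= 1024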
diff --git a/spaces/fiyen/YangyangChatGPT/run_macOS.command b/spaces/fiyen/YangyangChatGPT/run_macOS.command
deleted file mode 100644
index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000
--- a/spaces/fiyen/YangyangChatGPT/run_macOS.command
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Get the directory containing this script
-script_dir=$(dirname "$0")
-
-# Change the working directory to the script's directory
-cd "$script_dir"
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
- # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
- # Pull the latest changes
- git pull
-
- # Install the dependencies
- pip3 install -r requirements.txt
-
- # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
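One fragile spot in the script above: grep 'up to date' matches git's human-readable, locale-dependent status text. A locale-independent sketch of the same check, comparing commit hashes instead (illustrative only, not part of the repository):

import subprocess

def rev(ref: str) -> str:
    # Resolve a ref ("HEAD", or "@{u}" for the upstream branch) to a commit hash.
    out = subprocess.run(["git", "rev-parse", ref],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

subprocess.run(["git", "remote", "update"], check=True)
if rev("HEAD") != rev("@{u}"):
    print("updates available; pull and restart the server")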
diff --git a/spaces/flax-community/chef-transformer/utils/ext.py b/spaces/flax-community/chef-transformer/utils/ext.py
deleted file mode 100644
index acaaa1aca6e8b49015a3ff6e8d8dab299c4e0597..0000000000000000000000000000000000000000
--- a/spaces/flax-community/chef-transformer/utils/ext.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import re
-from utils.utils import replace_regex
-# from .utils import replace_regex
-
-DEFAULT_MAP_DICT = {
- " c ": " c. ",
- ", chopped": " (chopped)",
- ", crumbled": " (crumbled)",
- ", thawed": " (thawed)",
- ", melted": " (melted)",
-}
-
-
-def ingredient(text, map_dict):
- if len(map_dict) > 0:
- map_dict.update(**DEFAULT_MAP_DICT)
- else:
- map_dict = DEFAULT_MAP_DICT
-
- text = replace_regex(text, map_dict)
- text = re.sub(r"(\d)\s(\d\/\d)", r" \1+\2 ", text)
- text = " ".join([word.strip() for word in text.split() if word.strip()])
- return text
-
-
-def ingredients(text_list, item_list, without_mapping=False):
- map_dict = {
- item: f'{item}' for item in list(map(lambda x: x.lower().strip(), item_list))
- }
- text_list = list(map(lambda x: x.lower(), text_list))
-
- output = []
- for text in text_list:
- map_dict = map_dict if not without_mapping else {}
- text = ingredient(text, map_dict)
- output.append(text)
-
- return output
-
-
-def directions(text_list):
- text_list = list(map(lambda x: x.lower().capitalize(), text_list))
-
- return text_list
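To make the transformations above concrete, here is a standalone re-implementation of what ingredient() does to one raw line (the real code routes the replacements through replace_regex from utils.utils; plain str.replace is used here for brevity):

import re

DEFAULT_MAP = {" c ": " c. ", ", chopped": " (chopped)"}

def clean_ingredient(text: str) -> str:
    # Apply the default rewrites, then join split fractions: "1 1/2" -> "1+1/2".
    for old, new in DEFAULT_MAP.items():
        text = text.replace(old, new)
    text = re.sub(r"(\d)\s(\d\/\d)", r" \1+\2 ", text)
    return " ".join(w.strip() for w in text.split() if w.strip())

print(clean_ingredient("1 1/2 c sugar, chopped"))
# -> "1+1/2 c. sugar (chopped)"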
diff --git a/spaces/florim/MedGPT/autogpt/commands/execute_code.py b/spaces/florim/MedGPT/autogpt/commands/execute_code.py
deleted file mode 100644
index 11266f852727f2f8aedbc995b1e504a17acbfb77..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/autogpt/commands/execute_code.py
+++ /dev/null
@@ -1,158 +0,0 @@
-"""Execute code in a Docker container"""
-import os
-import subprocess
-
-import docker
-from docker.errors import ImageNotFound
-
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-
-def execute_python_file(file: str) -> str:
- """Execute a Python file in a Docker container and return the output
-
- Args:
- file (str): The name of the file to execute
-
- Returns:
- str: The output of the file
- """
-
- print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'")
-
- if not file.endswith(".py"):
- return "Error: Invalid file type. Only .py files are allowed."
-
- file_path = path_in_workspace(file)
-
- if not os.path.isfile(file_path):
- return f"Error: File '{file}' does not exist."
-
- if we_are_running_in_a_docker_container():
- result = subprocess.run(
- f"python {file_path}", capture_output=True, encoding="utf8", shell=True
- )
- if result.returncode == 0:
- return result.stdout
- else:
- return f"Error: {result.stderr}"
-
- try:
- client = docker.from_env()
-
- # You can replace this with the desired Python image/version
- # You can find available Python images on Docker Hub:
- # https://hub.docker.com/_/python
- image_name = "python:3-alpine"
- try:
- client.images.get(image_name)
- print(f"Image '{image_name}' found locally")
- except ImageNotFound:
- print(f"Image '{image_name}' not found locally, pulling from Docker Hub")
- # Use the low-level API to stream the pull response
- low_level_client = docker.APIClient()
- for line in low_level_client.pull(image_name, stream=True, decode=True):
- # Print the status and progress, if available
- status = line.get("status")
- progress = line.get("progress")
- if status and progress:
- print(f"{status}: {progress}")
- elif status:
- print(status)
-
- container = client.containers.run(
- image_name,
- f"python {file}",
- volumes={
- os.path.abspath(WORKSPACE_PATH): {
- "bind": "/workspace",
- "mode": "ro",
- }
- },
- working_dir="/workspace",
- stderr=True,
- stdout=True,
- detach=True,
- )
-
- container.wait()
- logs = container.logs().decode("utf-8")
- container.remove()
-
- # print(f"Execution complete. Output: {output}")
- # print(f"Logs: {logs}")
-
- return logs
-
- except docker.errors.DockerException as e:
- print(
- "Could not run the script in a container. If you haven't already, please install Docker https://docs.docker.com/get-docker/"
- )
- return f"Error: {str(e)}"
-
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def execute_shell(command_line: str) -> str:
- """Execute a shell command and return the output
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: The output of the command
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- result = subprocess.run(command_line, capture_output=True, shell=True)
- output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return output
-
-
-def execute_shell_popen(command_line) -> str:
- """Execute a shell command with Popen and returns an english description
- of the event and the process id
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: Description of the fact that the process started and its id
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- do_not_show_output = subprocess.DEVNULL
- process = subprocess.Popen(
- command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output
- )
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return f"Subprocess started with PID:'{str(process.pid)}'"
-
-
-def we_are_running_in_a_docker_container() -> bool:
- """Check if we are running in a Docker container
-
- Returns:
- bool: True if we are running in a Docker container, False otherwise
- """
- return os.path.exists("/.dockerenv")
diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/box2d.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/box2d.js
deleted file mode 100644
index b4a2f283f700fb7c91d828806a0dcfdef87ba43c..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/box2d.js
+++ /dev/null
@@ -1,11581 +0,0 @@
-/** https://github.com/kripken/box2d.js v2.3.1 */
-
-var performance = {
- now: function () {
- return +new Date()
- }
-}
-Function.prototype._extend = function (parent) {
- this.prototype.parent = parent;
- for (var x in parent.prototype) {
- if (!this.prototype[x]) {
- this.prototype[x] = parent.prototype[x]
- }
- }
-};
-Function.prototype._implement = function (parent) {
- return this._extend(parent)
-};
-var b2Profiler = (function () {
-
- function profileStruct(name, parent) {
- this.name = name;
- this.parent = parent;
- this.children = {};
- this.startTime = 0;
- this.elapsedTime = 0;
- this.totalTime = 0;
- this.running = false;
- this.childrenCount = 0
- }
- profileStruct.prototype = {
- start: function () {
- this.startTime = performance.now();
- this.running = true
- },
- stop: function (reset) {
- if (!this.running) {
- return
- }
- this.running = false;
- this.elapsedTime += performance.now() - this.startTime;
- if (reset) {
- this.start()
- }
- for (var x in this.children) {
- this.children[x].stop()
- }
- },
- reset: function (dontRun) {
- if (!dontRun) {
- this.running = true;
- this.totalTime += this.elapsedTime;
- this.start()
- }
- this.elapsedTime = 0;
- for (var x in this.children) {
- this.children[x].reset(true)
- }
- }
- };
- var profiles = [];
- var root = new profileStruct("root");
-
- function create(name, parent) {
- if (!profiles) {
- throw new Error("late profile creation not allowed")
- }
- var s = new profileStruct(name, parent || "root");
- profiles.push(s);
- return s
- }
-
- function destroy(profile) {
- profile.childrenCount--;
- delete profile.children[profile.name]
- }
-
- function recursiveParentCheck(node, profile) {
- if (node.name === profile.parent) {
- return node
- }
- for (var x in node.children) {
- var n;
- if (n = recursiveParentCheck(node.children[x], profile)) {
- return n
- }
- }
- return null
- }
-
- function init() {
- while (profiles.length) {
- var p = profiles.pop();
- if (!(p.parentNode = recursiveParentCheck(root, p))) {
- profiles.unshift(p)
- } else {
- p.parentNode.children[p.name] = p;
- p.parentNode.childrenCount++
- }
- }
- profiles = null
- }
-
- function resetAll() {
- root.reset(true)
- }
- return {
- create: create,
- destroy: destroy,
- init: init,
- reset: resetAll,
- profileRoot: root
- }
-}());
-"use strict";
-var b2_maxFloat = Number.MAX_VALUE;
-var b2_epsilon = 2.220446049250313e-16;
-var b2_pi = Math.PI;
-var b2_maxManifoldPoints = 2;
-var b2_maxPolygonVertices = 8;
-var b2_aabbExtension = 0.1;
-var b2_aabbMultiplier = 2;
-var b2_linearSlop = 0.005;
-var b2_angularSlop = (2 / 180 * b2_pi);
-var b2_polygonRadius = (2 * b2_linearSlop);
-var b2_maxSubSteps = 8;
-var b2_maxTOIContacts = 32;
-var b2_velocityThreshold = 1;
-var b2_maxLinearCorrection = 0.2;
-var b2_maxAngularCorrection = (8 / 180 * b2_pi);
-var b2_maxTranslation = 2;
-var b2_maxTranslationSquared = (b2_maxTranslation * b2_maxTranslation);
-var b2_maxRotation = (0.5 * b2_pi);
-var b2_maxRotationSquared = (b2_maxRotation * b2_maxRotation);
-var b2_baumgarte = 0.2;
-var b2_toiBaugarte = 0.75;
-var b2_timeToSleep = 0.5;
-var b2_linearSleepTolerance = 0.01;
-var b2_angularSleepTolerance = (2 / 180 * b2_pi);
-
-function b2Version(ma, mi, re) {
- this.major = ma;
- this.minor = mi;
- this.revision = re
-}
-b2Version.prototype = {
- toString: function () {
- return this.major + "." + this.minor + "." + this.revision
- }
-};
-var b2_version = new b2Version(2, 3, 1);
-"use strict";
-
-function b2IsValid(x) {
- return isFinite(x) && !isNaN(x)
-}
-var sqrtf = Math.sqrt;
-var atan2f = Math.atan2;
-var sinf = Math.sin;
-var cosf = Math.cos;
-var floorf = Math.floor;
-var ceilf = Math.ceil;
-var b2Sqrt = sqrtf;
-var b2Atan2 = atan2f;
-
-function b2InvSqrt(x) {
- return 1 / sqrtf(x)
-}
-
-function b2Vec2(x, y) {
- if (typeof (x) !== "undefined") {
- this.x = x;
- this.y = y
- } else {
- this.x = this.y = 0
- }
-}
-b2Vec2.prototype = {
- Clone: function () {
- return new b2Vec2(this.x, this.y)
- },
- SetZero: function () {
- this.x = 0;
- this.y = 0;
- return this
- },
- Set: function (x_, y_) {
- this.x = x_;
- this.y = y_;
- return this
- },
- Assign: function (l) {
- this.x = l.x;
- this.y = l.y;
- return this
- },
- Negate: function () {
- var v = new b2Vec2();
- v.Set(-this.x, -this.y);
- return v
- },
- get_i: function (i) {
- switch (i) {
- case 0:
- return this.x;
- case 1:
- return this.y
- }
- },
- set_i: function (i, v) {
- switch (i) {
- case 0:
- return this.x = v;
- case 1:
- return this.y = v
- }
- },
- Add: function (v) {
- this.x += v.x;
- this.y += v.y;
- return this
- },
- Subtract: function (v) {
- this.x -= v.x;
- this.y -= v.y;
- return this
- },
- Multiply: function (a) {
- this.x *= a;
- this.y *= a;
- return this
- },
- Length: function () {
- return b2Sqrt(this.x * this.x + this.y * this.y)
- },
- LengthSquared: function () {
- return this.x * this.x + this.y * this.y
- },
- Normalize: function () {
- var length = this.Length();
- if (length < b2_epsilon) {
- return 0
- }
- var invLength = 1 / length;
- this.x *= invLength;
- this.y *= invLength;
- return length
- },
- IsValid: function () {
- return b2IsValid(this.x) && b2IsValid(this.y)
- },
- Skew: function () {
- return new b2Vec2(-this.y, this.x)
- },
- _serialize: function (out) {
- var obj = out || [];
- obj[0] = this.x;
- obj[1] = this.y;
- return obj
- },
- _deserialize: function (data) {
- this.x = data[0];
- this.y = data[1]
- }
-};
-b2Vec2.Add = function (a, b) {
- return new b2Vec2(a.x + b.x, a.y + b.y)
-};
-b2Vec2.Subtract = function (a, b) {
- return new b2Vec2(a.x - b.x, a.y - b.y)
-};
-b2Vec2.Equals = function (a, b) {
- return a.x == b.x && a.y == b.y
-};
-b2Vec2.Multiply = function (s, a) {
- return new b2Vec2(s * a.x, s * a.y)
-};
-b2Vec2.Negate = function (a) {
- return new b2Vec2(-a.x, -a.y)
-};
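-// Editor's sketch (not part of the original library): minimal b2Vec2 usage,
-// assuming only the API defined above. Normalize() returns the old length and
-// scales the vector in place; the static helpers return fresh vectors.
-(function () {
-    var v = new b2Vec2(3, 4);
-    var len = v.Normalize();                       // len === 5, v is now (0.6, 0.8)
-    var sum = b2Vec2.Add(v, new b2Vec2(0.4, 0.2)); // (1, 1), inputs untouched
-    var flipped = b2Vec2.Negate(sum);              // (-1, -1)
-})();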
-
-function b2Vec3(x, y, z) {
- if (typeof (x) !== "undefined") {
- this.x = x;
- this.y = y;
- this.z = z
- }
-}
-b2Vec3.prototype = {
- Clone: function () {
- return new b2Vec3(this.x, this.y, this.z)
- },
- SetZero: function () {
- this.x = 0;
- this.y = 0;
- this.z = 0
- },
- Set: function (x_, y_, z_) {
- this.x = x_;
- this.y = y_;
- this.z = z_
- },
- Negate: function () {
- var v = new b2Vec3();
- v.Set(-this.x, -this.y, -this.z);
- return v
- },
- Add: function (v) {
- this.x += v.x;
- this.y += v.y;
- this.z += v.z
- },
- Subtract: function (v) {
- this.x -= v.x;
- this.y -= v.y;
- this.z -= v.z
- },
- Multiply: function (s) {
- this.x *= s;
- this.y *= s;
- this.z *= s
- },
- x: 0,
- y: 0,
- z: 0
-};
-b2Vec3.Multiply = function (s, a) {
- return new b2Vec3(s * a.x, s * a.y, s * a.z)
-};
-b2Vec3.Add = function (a, b) {
- return new b2Vec3(a.x + b.x, a.y + b.y, a.z + b.z)
-};
-b2Vec3.Subtract = function (a, b) {
- return new b2Vec3(a.x - b.x, a.y - b.y, a.z - b.z)
-};
-
-function b2Mat22(c1, c2) {
- this.ex = c1 ? c1.Clone() : new b2Vec2();
- this.ey = c2 ? c2.Clone() : new b2Vec2()
-}
-b2Mat22.prototype = {
- Set: function (c1, c2) {
- this.ex.Assign(c1);
- this.ey.Assign(c2)
- },
- Assign: function (mat) {
- this.ex.Assign(mat.ex);
- this.ey.Assign(mat.ey)
- },
- SetIdentity: function () {
- this.ex.x = 1;
- this.ey.x = 0;
- this.ex.y = 0;
- this.ey.y = 1
- },
- SetZero: function () {
- this.ex.x = 0;
- this.ey.x = 0;
- this.ex.y = 0;
- this.ey.y = 0
- },
- GetInverse: function () {
- var a = this.ex.x,
- b = this.ey.x,
- c = this.ex.y,
- d = this.ey.y;
- var B = new b2Mat22();
- var det = a * d - b * c;
- if (det != 0) {
- det = 1 / det
- }
- B.ex.x = det * d;
- B.ey.x = -det * b;
- B.ex.y = -det * c;
- B.ey.y = det * a;
- return B
- },
- Solve: function (b) {
- var a11 = this.ex.x,
- a12 = this.ey.x,
- a21 = this.ex.y,
- a22 = this.ey.y;
- var det = a11 * a22 - a12 * a21;
- if (det != 0) {
- det = 1 / det
- }
- var x = new b2Vec2();
- x.x = det * (a22 * b.x - a12 * b.y);
- x.y = det * (a11 * b.y - a21 * b.x);
- return x
- }
-};
-b2Mat22.Add = function (A, B) {
- return new b2Mat22(b2Vec2.Add(A.ex, B.ex), b2Vec2.Add(A.ey, B.ey))
-};
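-// Editor's sketch (not part of the original library): b2Mat22 stores its
-// columns in ex/ey, so Solve(b) solves A * x = b for that column layout.
-(function () {
-    var A = new b2Mat22(new b2Vec2(2, 0), new b2Vec2(0, 4)); // diag(2, 4)
-    var x = A.Solve(new b2Vec2(2, 8));                       // x === (1, 2)
-})();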
-
-function b2Mat33(c1, c2, c3) {
- this.ex = c1 ? c1.Clone() : new b2Vec3();
- this.ey = c2 ? c2.Clone() : new b2Vec3();
- this.ez = c3 ? c3.Clone() : new b2Vec3()
-}
-b2Mat33.prototype = {
- SetZero: function () {
- this.ex.SetZero();
- this.ey.SetZero();
- this.ez.SetZero()
- },
- Solve33: function (b) {
- var det = b2Dot_v3_v3(this.ex, b2Cross_v3_v3(this.ey, this.ez));
- if (det != 0) {
- det = 1 / det
- }
- var x = new b2Vec3();
- x.x = det * b2Dot_v3_v3(b, b2Cross_v3_v3(this.ey, this.ez));
- x.y = det * b2Dot_v3_v3(this.ex, b2Cross_v3_v3(b, this.ez));
- x.z = det * b2Dot_v3_v3(this.ex, b2Cross_v3_v3(this.ey, b));
- return x
- },
- Solve22: function (b) {
- var a11 = this.ex.x,
- a12 = this.ey.x,
- a21 = this.ex.y,
- a22 = this.ey.y;
- var det = a11 * a22 - a12 * a21;
- if (det != 0) {
- det = 1 / det
- }
- var x = new b2Vec2();
- x.x = det * (a22 * b.x - a12 * b.y);
- x.y = det * (a11 * b.y - a21 * b.x);
- return x
- },
- GetInverse22: function (M) {
- var a = this.ex.x,
- b = this.ey.x,
- c = this.ex.y,
- d = this.ey.y;
- var det = a * d - b * c;
- if (det != 0) {
- det = 1 / det
- }
- M.ex.x = det * d;
- M.ey.x = -det * b;
- M.ex.z = 0;
- M.ex.y = -det * c;
- M.ey.y = det * a;
- M.ey.z = 0;
- M.ez.x = 0;
- M.ez.y = 0;
- M.ez.z = 0
- },
- GetSymInverse33: function (M) {
- var det = b2Dot_v3_v3(this.ex, b2Cross_v3_v3(this.ey, this.ez));
- if (det != 0) {
- det = 1 / det
- }
- var a11 = this.ex.x,
- a12 = this.ey.x,
- a13 = this.ez.x;
- var a22 = this.ey.y,
- a23 = this.ez.y;
- var a33 = this.ez.z;
- M.ex.x = det * (a22 * a33 - a23 * a23);
- M.ex.y = det * (a13 * a23 - a12 * a33);
- M.ex.z = det * (a12 * a23 - a13 * a22);
- M.ey.x = M.ex.y;
- M.ey.y = det * (a11 * a33 - a13 * a13);
- M.ey.z = det * (a13 * a12 - a11 * a23);
- M.ez.x = M.ex.z;
- M.ez.y = M.ey.z;
- M.ez.z = det * (a11 * a22 - a12 * a12)
- }
-};
-
-function b2Rot(angle, c) {
- if (typeof (c) !== "undefined") {
- this.s = angle;
- this.c = c
- } else {
- if (typeof (angle) !== "undefined") {
- this.Set(angle)
- }
- }
-}
-b2Rot.prototype = {
- Clone: function () {
- return new b2Rot(this.s, this.c)
- },
- Assign: function (l) {
- this.s = l.s;
- this.c = l.c
- },
- Set: function (x) {
- this.s = sinf(x);
- this.c = cosf(x)
- },
- SetIdentity: function () {
- this.s = 0;
- this.c = 1
- },
- GetAngle: function () {
- return b2Atan2(this.s, this.c)
- },
- GetXAxis: function () {
- return new b2Vec2(this.c, this.s)
- },
- GetYAxis: function () {
- return new b2Vec2(-this.s, this.c)
- },
- s: 0,
- c: 1
-};
-
-function b2Transform(position, rotation) {
- this.p = new b2Vec2();
- this.q = new b2Rot();
- if (position) {
- this.p.Assign(position);
- this.q.Assign(rotation)
- }
-}
-b2Transform.prototype = {
- Clone: function () {
- var xf = new b2Transform(this.p, this.q);
- return xf
- },
- Assign: function (xf) {
- this.p.Assign(xf.p);
- this.q.Assign(xf.q)
- },
- SetIdentity: function () {
- this.p.SetZero();
- this.q.SetIdentity()
- },
- Set: function (position, angle) {
- this.p.Assign(position);
- this.q.Set(angle)
- }
-};
-
-function b2Sweep() {
- this.localCenter = new b2Vec2();
- this.c0 = new b2Vec2();
- this.c = new b2Vec2()
-}
-b2Sweep.prototype = {
- Assign: function (sweep) {
- this.localCenter.Assign(sweep.localCenter);
- this.c0.Assign(sweep.c0);
- this.c.Assign(sweep.c);
- this.a = sweep.a;
- this.a0 = sweep.a0;
- this.alpha0 = sweep.alpha0
- },
- Clone: function () {
- var sweep = new b2Sweep();
- sweep.localCenter.Assign(this.localCenter);
- sweep.c0.Assign(this.c0);
- sweep.c.Assign(this.c);
- sweep.a = this.a;
- sweep.a0 = this.a0;
- sweep.alpha0 = this.alpha0;
- return sweep
- },
- GetTransform: function (xf, beta) {
- xf.p.x = ((1 - beta) * this.c0.x) + (beta * this.c.x);
- xf.p.y = ((1 - beta) * this.c0.y) + (beta * this.c.y);
- var angle = (1 - beta) * this.a0 + beta * this.a;
- xf.q.Set(angle);
- xf.p.x -= xf.q.c * this.localCenter.x - xf.q.s * this.localCenter.y;
- xf.p.y -= xf.q.s * this.localCenter.x + xf.q.c * this.localCenter.y
- },
- Advance: function (alpha) {
- var beta = (alpha - this.alpha0) / (1 - this.alpha0);
- this.c0.Add(b2Vec2.Multiply(beta, b2Vec2.Subtract(this.c, this.c0)));
- this.a0 += beta * (this.a - this.a0);
- this.alpha0 = alpha
- },
- Normalize: function () {
- var twoPi = 2 * b2_pi;
- var d = twoPi * floorf(this.a0 / twoPi);
- this.a0 -= d;
- this.a -= d
- },
- a0: 0,
- a: 0,
- alpha0: 0
-};
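-// Editor's sketch (not part of the original library): GetTransform blends the
-// sweep's start/end center and angle, so beta = 0.5 yields the midpoint pose.
-(function () {
-    var sweep = new b2Sweep();
-    sweep.c.Set(2, 0);           // c0 stays at the origin
-    sweep.a = Math.PI / 2;       // a0 stays 0
-    var xf = new b2Transform();
-    sweep.GetTransform(xf, 0.5); // xf.p === (1, 0), xf.q is a 45 degree rotation
-})();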
-
-function b2Dot_v2_v2(a, b) {
- return a.x * b.x + a.y * b.y
-}
-
-function b2Cross_v2_v2(a, b) {
- return a.x * b.y - a.y * b.x
-}
-
-function b2Cross_v2_f(a, s) {
- return new b2Vec2(s * a.y, -s * a.x)
-}
-
-function b2Cross_f_v2(s, a) {
- return new b2Vec2(-s * a.y, s * a.x)
-}
-
-function b2Mul_m22_v2(A, v) {
- return new b2Vec2(A.ex.x * v.x + A.ey.x * v.y, A.ex.y * v.x + A.ey.y * v.y)
-}
-
-function b2MulT_m22_v2(A, v) {
- return new b2Vec2(b2Dot_v2_v2(v, A.ex), b2Dot_v2_v2(v, A.ey))
-}
-
-function b2Distance(a, b) {
- var c = b2Vec2.Subtract(a, b);
- return c.Length()
-}
-
-function b2DistanceSquared(a, b) {
- var c = b2Vec2.Subtract(a, b);
- return b2Dot_v2_v2(c, c)
-}
-
-function b2Dot_v3_v3(a, b) {
- return a.x * b.x + a.y * b.y + a.z * b.z
-}
-
-function b2Cross_v3_v3(a, b) {
- return new b2Vec3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x)
-}
-
-function b2Mul_m22_m22(A, B) {
- return new b2Mat22(b2Mul_m22_v2(A, B.ex), b2Mul_m22_v2(A, B.ey))
-}
-
-function b2MulT_m22_m22(A, B) {
- var c1 = new b2Vec2(b2Dot_v2_v2(A.ex, B.ex), b2Dot_v2_v2(A.ey, B.ex));
- var c2 = new b2Vec2(b2Dot_v2_v2(A.ex, B.ey), b2Dot_v2_v2(A.ey, B.ey));
- return new b2Mat22(c1, c2)
-}
-
-function b2Mul_m33_v3(A, v) {
- return b2Vec3.Add(b2Vec3.Add(b2Vec3.Multiply(v.x, A.ex), b2Vec3.Multiply(v.y, A.ey)), b2Vec3.Multiply(v.z, A.ez))
-}
-
-function b2Mul22_m33_v2(A, v) {
- return new b2Vec2(A.ex.x * v.x + A.ey.x * v.y, A.ex.y * v.x + A.ey.y * v.y)
-}
-
-function b2Mul_r_r(q, r) {
- var qr = new b2Rot();
- qr.s = q.s * r.c + q.c * r.s;
- qr.c = q.c * r.c - q.s * r.s;
- return qr
-}
-
-function b2MulT_r_r(q, r) {
- var qr = new b2Rot();
- qr.s = q.c * r.s - q.s * r.c;
- qr.c = q.c * r.c + q.s * r.s;
- return qr
-}
-
-function b2Mul_r_v2(q, v) {
- return new b2Vec2(q.c * v.x - q.s * v.y, q.s * v.x + q.c * v.y)
-}
-
-function b2MulT_r_v2(q, v) {
- return new b2Vec2(q.c * v.x + q.s * v.y, -q.s * v.x + q.c * v.y)
-}
-
-function b2Mul_t_v2(T, v) {
- return new b2Vec2((T.q.c * v.x - T.q.s * v.y) + T.p.x, (T.q.s * v.x + T.q.c * v.y) + T.p.y)
-}
-
-function b2MulT_t_v2(T, v) {
- var px = v.x - T.p.x;
- var py = v.y - T.p.y;
- var x = (T.q.c * px + T.q.s * py);
- var y = (-T.q.s * px + T.q.c * py);
- return new b2Vec2(x, y)
-}
-
-function b2Mul_t_t(A, B) {
- var C = new b2Transform();
- C.q = b2Mul_r_r(A.q, B.q);
- C.p = b2Vec2.Add(b2Mul_r_v2(A.q, B.p), A.p);
- return C
-}
-
-function b2MulT_t_t(A, B) {
- var C = new b2Transform();
- C.q = b2MulT_r_r(A.q, B.q);
- var tvx = B.p.x - A.p.x;
- var tvy = B.p.y - A.p.y;
- C.p.x = A.q.c * tvx + A.q.s * tvy;
- C.p.y = -A.q.s * tvx + A.q.c * tvy;
- return C
-}
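-// Editor's sketch (not part of the original library): b2Mul_t_v2 maps a local
-// point into world space and b2MulT_t_v2 inverts it, so the pair round-trips.
-(function () {
-    var xf = new b2Transform();
-    xf.Set(new b2Vec2(3, 4), Math.PI / 2);
-    var world = b2Mul_t_v2(xf, new b2Vec2(1, 0)); // ~(3, 5): rotate 90 degrees, then translate
-    var local = b2MulT_t_v2(xf, world);           // back to ~(1, 0)
-})();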
-var b2Abs = Math.abs;
-
-function b2Abs_v2(a) {
- return new b2Vec2(b2Abs(a.x), b2Abs(a.y))
-}
-
-function b2Abs_m22(A) {
- return new b2Mat22(b2Abs_v2(A.ex), b2Abs_v2(A.ey))
-}
-var b2Min = Math.min;
-
-function b2Min_v2(a, b) {
- return new b2Vec2(b2Min(a.x, b.x), b2Min(a.y, b.y))
-}
-var b2Max = Math.max;
-
-function b2Max_v2(a, b) {
- return new b2Vec2(b2Max(a.x, b.x), b2Max(a.y, b.y))
-}
-
-function b2Clamp(a, low, high) {
- return b2Max(low, b2Min(a, high))
-}
-
-function b2Clamp_v2(a, low, high) {
- return b2Max_v2(low, b2Min_v2(a, high))
-}
-
-function b2NextPowerOfTwo(x) {
- x |= (x >> 1);
- x |= (x >> 2);
- x |= (x >> 4);
- x |= (x >> 8);
- x |= (x >> 16);
- return x + 1
-}
-
-function b2IsPowerOfTwo(x) {
- var result = x > 0 && (x & (x - 1)) == 0;
- return result
-}
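-// Editor's sketch (not part of the original library): b2NextPowerOfTwo smears
-// the highest set bit into all lower bits and adds one, so for 32-bit inputs
-// it returns the next power of two *strictly above* x (16 maps to 32).
-(function () {
-    b2NextPowerOfTwo(17); // 32
-    b2NextPowerOfTwo(16); // 32, not 16
-    b2IsPowerOfTwo(16);   // true
-})();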
-var RAND_LIMIT = 32767;
-
-function b2RandomFloat(lo, hi) {
- var r = Math.random();
- if (typeof (lo) !== "undefined") {
- r = (hi - lo) * r + lo
- } else {
- r = 2 * r - 1
- }
- return r
-}
-"use strict";
-
-function b2Color(r, g, b) {
- this.r = r || 0;
- this.g = g || 0;
- this.b = b || 0
-}
-b2Color.prototype = {
- Set: function (r, g, b) {
- this.r = r;
- this.g = g;
- this.b = b
- }
-};
-
-function b2Draw() {}
-b2Draw.prototype = {
- SetFlags: function (flags) {
- this.m_drawFlags = flags
- },
- GetFlags: function () {
- return this.m_drawFlags
- },
- AppendFlags: function (flags) {
- this.m_drawFlags |= flags
- },
- ClearFlags: function (flags) {
- this.m_drawFlags &= ~flags
- },
- ToggleFlags: function (flags) {
- this.m_drawFlags ^= flags
- },
- DrawPolygon: function (vertices, vertexCount, color) {},
- DrawSolidPolygon: function (vertices, vertexCount, color) {},
- DrawCircle: function (center, radius, color) {},
- DrawSolidCircle: function (center, radius, axis, color) {},
- DrawSegment: function (p1, p2, color) {},
- DrawTransform: function (xf) {},
- m_drawFlags: 0
-};
-b2Draw.e_shapeBit = 1;
-b2Draw.e_jointBit = 2;
-b2Draw.e_aabbBit = 4;
-b2Draw.e_centerOfMassBit = 8;
-b2Draw.e_contactPoints = 16;
-b2Draw.e_contactNormals = 32;
-b2Draw.e_contactImpulses = 64;
-b2Draw.e_frictionImpulses = 128;
-b2Draw.e_statistics = 256;
-b2Draw.e_profile = 512;
-b2Draw.e_pairBit = 1024;
-
-"use strict";
-function b2Timer() {
- this.Reset()
-}
-b2Timer.prototype = {
- Reset: function () {
- this.m_start = performance.now()
- },
- GetMilliseconds: function () {
- return performance.now() - this.m_start
- }
-};
-"use strict";
-
-function b2MassData() {
- this.mass = 0;
- this.center = new b2Vec2();
- this.I = 0
-}
-
-function b2Shape() {
- this.m_type = 0;
- this.m_radius = 0
-}
-b2Shape.prototype = {
- Clone: function () {},
- GetType: function () {
- return this.m_type
- },
- GetChildCount: function () {},
- TestPoint: function (xf, p) {},
- RayCast: function (output, input, transform, childIndex) {},
- ComputeAABB: function (aabb, xf, childIndex) {},
- ComputeMass: function (massData, density) {},
- _serialize: function (out) {
- var obj = out || {};
- obj.m_type = this.m_type;
- obj.m_radius = this.m_radius;
- return obj
- },
- _deserialize: function (data) {
- this.m_radius = data.m_radius
- }
-};
-b2Shape.e_circle = 0;
-b2Shape.e_edge = 1;
-b2Shape.e_polygon = 2;
-b2Shape.e_chain = 3;
-b2Shape.e_typeCount = 4;
-"use strict";
-
-function b2CircleShape() {
- this.parent.call(this);
- this.m_type = b2Shape.e_circle;
- this.m_radius = 0;
- this.m_p = new b2Vec2();
- Object.seal(this)
-}
-b2CircleShape.prototype = {
- Clone: function () {
- var shape = new b2CircleShape();
- shape.m_radius = this.m_radius;
- shape.m_p = this.m_p.Clone();
- return shape
- },
- GetChildCount: function () {
- return 1
- },
- TestPoint: function (transform, p) {
- var center = b2Vec2.Add(transform.p, b2Mul_r_v2(transform.q, this.m_p));
- var d = b2Vec2.Subtract(p, center);
- return b2Dot_v2_v2(d, d) <= this.m_radius * this.m_radius
- },
- RayCast: function (output, input, transform, childIndex) {
- var position = b2Vec2.Add(transform.p, b2Mul_r_v2(transform.q, this.m_p));
- var s = b2Vec2.Subtract(input.p1, position);
- var b = b2Dot_v2_v2(s, s) - this.m_radius * this.m_radius;
- var r = b2Vec2.Subtract(input.p2, input.p1);
- var c = b2Dot_v2_v2(s, r);
- var rr = b2Dot_v2_v2(r, r);
- var sigma = c * c - rr * b;
- if (sigma < 0 || rr < b2_epsilon) {
- return false
- }
- var a = -(c + b2Sqrt(sigma));
- if (0 <= a && a <= input.maxFraction * rr) {
- a /= rr;
- output.fraction = a;
- output.normal = b2Vec2.Add(s, b2Vec2.Multiply(a, r));
- output.normal.Normalize();
- return true
- }
- return false
- },
- ComputeAABB: function (aabb, transform, childIndex) {
- var px = transform.p.x + (transform.q.c * this.m_p.x - transform.q.s * this.m_p.y);
- var py = transform.p.y + (transform.q.s * this.m_p.x + transform.q.c * this.m_p.y);
- aabb.lowerBound.x = px - this.m_radius;
- aabb.lowerBound.y = py - this.m_radius;
- aabb.upperBound.x = px + this.m_radius;
- aabb.upperBound.y = py + this.m_radius
- },
- ComputeMass: function (massData, density) {
- massData.mass = density * b2_pi * this.m_radius * this.m_radius;
- massData.center = this.m_p;
- massData.I = massData.mass * (0.5 * this.m_radius * this.m_radius + b2Dot_v2_v2(this.m_p, this.m_p))
- },
- GetSupport: function (d) {
- return 0
- },
- GetSupportVertex: function (d) {
- return this.m_p
- },
- GetVertexCount: function () {
- return 1
- },
- GetVertex: function (index) {
- return this.m_p
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.m_p = this.m_p._serialize();
- return obj
- },
- _deserialize: function (data) {
- this.parent.prototype._deserialize.call(this, data);
- this.m_p._deserialize(data.m_p)
- }
-};
-b2CircleShape._extend(b2Shape);
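-// Editor's sketch (not part of the original library): raycasting a unit circle
-// at the origin. The input/output values here are plain objects carrying the
-// fields RayCast reads and writes (p1, p2, maxFraction / fraction, normal).
-(function () {
-    var circle = new b2CircleShape();
-    circle.m_radius = 1;
-    var xf = new b2Transform();
-    xf.SetIdentity();
-    var input = { p1: new b2Vec2(-3, 0), p2: new b2Vec2(3, 0), maxFraction: 1 };
-    var output = {};
-    circle.RayCast(output, input, xf, 0); // true; output.fraction === 1/3, normal (-1, 0)
-})();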
-"use strict";
-
-function b2EdgeShape() {
- this.parent.call(this);
- this.m_type = b2Shape.e_edge;
- this.m_radius = b2_polygonRadius;
- this.m_vertex0 = new b2Vec2();
- this.m_vertex1 = new b2Vec2();
- this.m_vertex2 = new b2Vec2();
- this.m_vertex3 = new b2Vec2();
- this.m_hasVertex0 = false;
- this.m_hasVertex3 = false;
- Object.seal(this)
-}
-b2EdgeShape.prototype = {
- Set: function (v1, v2) {
- this.m_vertex1.Assign(v1);
- this.m_vertex2.Assign(v2);
- this.m_hasVertex0 = false;
- this.m_hasVertex3 = false
- },
- Clone: function () {
- var shape = new b2EdgeShape();
- shape.m_vertex0 = this.m_vertex0.Clone();
- shape.m_vertex1 = this.m_vertex1.Clone();
- shape.m_vertex2 = this.m_vertex2.Clone();
- shape.m_vertex3 = this.m_vertex3.Clone();
- shape.m_hasVertex0 = this.m_hasVertex0;
- shape.m_hasVertex3 = this.m_hasVertex3;
- return shape
- },
- GetChildCount: function () {
- return 1
- },
- TestPoint: function (transform, p) {
- return false
- },
- RayCast: function (output, input, xf, childIndex) {
- var p1 = b2MulT_r_v2(xf.q, b2Vec2.Subtract(input.p1, xf.p));
- var p2 = b2MulT_r_v2(xf.q, b2Vec2.Subtract(input.p2, xf.p));
- var d = b2Vec2.Subtract(p2, p1);
- var v1 = this.m_vertex1;
- var v2 = this.m_vertex2;
- var e = b2Vec2.Subtract(v2, v1);
- var normal = new b2Vec2(e.y, -e.x);
- normal.Normalize();
- var numerator = b2Dot_v2_v2(normal, b2Vec2.Subtract(v1, p1));
- var denominator = b2Dot_v2_v2(normal, d);
- if (denominator == 0) {
- return false
- }
- var t = numerator / denominator;
- if (t < 0 || input.maxFraction < t) {
- return false
- }
- var q = b2Vec2.Add(p1, b2Vec2.Multiply(t, d));
- var r = b2Vec2.Subtract(v2, v1);
- var rr = b2Dot_v2_v2(r, r);
- if (rr == 0) {
- return false
- }
- var s = b2Dot_v2_v2(b2Vec2.Subtract(q, v1), r) / rr;
- if (s < 0 || 1 < s) {
- return false
- }
- output.fraction = t;
- if (numerator > 0) {
- output.normal = b2Mul_r_v2(xf.q, normal).Negate()
- } else {
- output.normal = b2Mul_r_v2(xf.q, normal)
- }
- return true
- },
- ComputeAABB: function (aabb, xf, childIndex) {
- var v1x = (xf.q.c * this.m_vertex1.x - xf.q.s * this.m_vertex1.y) + xf.p.x;
- var v1y = (xf.q.s * this.m_vertex1.x + xf.q.c * this.m_vertex1.y) + xf.p.y;
- var v2x = (xf.q.c * this.m_vertex2.x - xf.q.s * this.m_vertex2.y) + xf.p.x;
- var v2y = (xf.q.s * this.m_vertex2.x + xf.q.c * this.m_vertex2.y) + xf.p.y;
- var lowerx = b2Min(v1x, v2x);
- var lowery = b2Min(v1y, v2y);
- var upperx = b2Max(v1x, v2x);
- var uppery = b2Max(v1y, v2y);
- aabb.lowerBound.x = lowerx - this.m_radius;
- aabb.lowerBound.y = lowery - this.m_radius;
- aabb.upperBound.x = upperx + this.m_radius;
- aabb.upperBound.y = uppery + this.m_radius
- },
- ComputeMass: function (massData, density) {
- massData.mass = 0;
- massData.center = b2Vec2.Multiply(0.5, b2Vec2.Add(this.m_vertex1, this.m_vertex2));
- massData.I = 0
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.m_vertex1 = this.m_vertex1._serialize();
- obj.m_vertex2 = this.m_vertex2._serialize();
- obj.m_hasVertex0 = this.m_hasVertex0;
- if (this.m_hasVertex0) {
- obj.m_vertex0 = this.m_vertex0._serialize()
- }
- obj.m_hasVertex3 = this.m_hasVertex3;
- if (this.m_hasVertex3) {
- obj.m_vertex3 = this.m_vertex3._serialize()
- }
- return obj
- },
- _deserialize: function (data) {
- this.parent.prototype._deserialize.call(this, data);
- this.m_vertex1._deserialize(data.m_vertex1);
- this.m_vertex2._deserialize(data.m_vertex2);
- this.m_hasVertex0 = data.m_hasVertex0;
- if (this.m_hasVertex0) {
- this.m_vertex0._deserialize(data.m_vertex0)
- }
- this.m_hasVertex3 = data.m_hasVertex3;
- if (this.m_hasVertex3) {
- this.m_vertex3._deserialize(data.m_vertex3)
- }
- }
-};
-b2EdgeShape._extend(b2Shape);
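-// Editor's sketch (not part of the original library): edges are two-sided
-// segments, and ComputeAABB fattens the segment's bounds by m_radius
-// (b2_polygonRadius, i.e. 0.01) on every side.
-(function () {
-    var edge = new b2EdgeShape();
-    edge.Set(new b2Vec2(0, 0), new b2Vec2(2, 0));
-    var xf = new b2Transform();
-    xf.SetIdentity();
-    var aabb = { lowerBound: new b2Vec2(), upperBound: new b2Vec2() };
-    edge.ComputeAABB(aabb, xf, 0); // bounds (-0.01, -0.01) .. (2.01, 0.01)
-})();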
-"use strict";
-
-function b2ChainShape() {
- this.parent.call(this);
- this.m_type = b2Shape.e_chain;
- this.m_radius = b2_polygonRadius;
- this.m_vertices = null;
- this.m_count = 0;
- this.m_prevVertex = new b2Vec2();
- this.m_nextVertex = new b2Vec2();
- this.m_hasPrevVertex = false;
- this.m_hasNextVertex = false;
- Object.seal(this)
-}
-b2ChainShape.prototype = {
- CreateLoop: function (vertices, count) {
- for (var i = 1; i < count; ++i) {
- var v1 = vertices[i - 1];
- var v2 = vertices[i]
- }
- this.m_count = count + 1;
- this.m_vertices = new Array(this.m_count);
- for (var i = 0; i < count; ++i) {
- this.m_vertices[i] = vertices[i].Clone()
- }
- this.m_vertices[count] = this.m_vertices[0].Clone();
- this.m_prevVertex.Assign(this.m_vertices[this.m_count - 2]);
- this.m_nextVertex.Assign(this.m_vertices[1]);
- this.m_hasPrevVertex = true;
- this.m_hasNextVertex = true
- },
- CreateChain: function (vertices, count) {
- for (var i = 1; i < count; ++i) {
- var v1 = vertices[i - 1];
- var v2 = vertices[i]
- }
- this.m_count = count;
- this.m_vertices = new Array(count);
- for (var i = 0; i < count; ++i) {
- this.m_vertices[i] = vertices[i].Clone()
- }
- this.m_hasPrevVertex = false;
- this.m_hasNextVertex = false;
- this.m_prevVertex.SetZero();
- this.m_nextVertex.SetZero()
- },
- SetPrevVertex: function (prevVertex) {
- this.m_prevVertex.Assign(prevVertex);
- this.m_hasPrevVertex = true
- },
- SetNextVertex: function (nextVertex) {
- this.m_nextVertex.Assign(nextVertex);
- this.m_hasNextVertex = true
- },
- Clone: function () {
- var shape = new b2ChainShape();
- shape.m_count = this.m_count;
- shape.m_vertices = new Array(this.m_count);
- for (var i = 0; i < this.m_count; ++i) {
- shape.m_vertices[i] = this.m_vertices[i].Clone()
- }
- shape.m_prevVertex = this.m_prevVertex.Clone();
- shape.m_nextVertex = this.m_nextVertex.Clone();
- shape.m_hasPrevVertex = this.m_hasPrevVertex;
- shape.m_hasNextVertex = this.m_hasNextVertex;
- return shape
- },
- GetChildCount: function () {
- return this.m_count - 1
- },
- GetChildEdge: function (edge, index) {
- edge.m_type = b2Shape.e_edge;
- edge.m_radius = this.m_radius;
- edge.m_vertex1 = this.m_vertices[index + 0];
- edge.m_vertex2 = this.m_vertices[index + 1];
- if (index > 0) {
- edge.m_vertex0 = this.m_vertices[index - 1];
- edge.m_hasVertex0 = true
- } else {
- edge.m_vertex0 = this.m_prevVertex;
- edge.m_hasVertex0 = this.m_hasPrevVertex
- }
- if (index < this.m_count - 2) {
- edge.m_vertex3 = this.m_vertices[index + 2];
- edge.m_hasVertex3 = true
- } else {
- edge.m_vertex3 = this.m_nextVertex;
- edge.m_hasVertex3 = this.m_hasNextVertex
- }
- },
- TestPoint: function (transform, p) {
- return false
- },
- RayCast: function (output, input, xf, childIndex) {
- var edgeShape = new b2EdgeShape();
- var i1 = childIndex;
- var i2 = childIndex + 1;
- if (i2 == this.m_count) {
- i2 = 0
- }
- edgeShape.m_vertex1 = this.m_vertices[i1].Clone();
- edgeShape.m_vertex2 = this.m_vertices[i2].Clone();
- return edgeShape.RayCast(output, input, xf, 0)
- },
- ComputeAABB: function (aabb, xf, childIndex) {
- var i1 = childIndex;
- var i2 = childIndex + 1;
- if (i2 == this.m_count) {
- i2 = 0
- }
- var v1x = (xf.q.c * this.m_vertices[i1].x - xf.q.s * this.m_vertices[i1].y) + xf.p.x;
- var v1y = (xf.q.s * this.m_vertices[i1].x + xf.q.c * this.m_vertices[i1].y) + xf.p.y;
- var v2x = (xf.q.c * this.m_vertices[i2].x - xf.q.s * this.m_vertices[i2].y) + xf.p.x;
- var v2y = (xf.q.s * this.m_vertices[i2].x + xf.q.c * this.m_vertices[i2].y) + xf.p.y;
- aabb.lowerBound.x = b2Min(v1x, v2x);
- aabb.lowerBound.y = b2Min(v1y, v2y);
- aabb.upperBound.x = b2Max(v1x, v2x);
- aabb.upperBound.y = b2Max(v1y, v2y)
- },
- ComputeMass: function (massData, density) {
- massData.mass = 0;
- massData.center.SetZero();
- massData.I = 0
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.m_count = this.m_count;
- obj.m_vertices = [];
- for (var i = 0; i < this.m_count; ++i) {
- obj.m_vertices.push(this.m_vertices[i]._serialize())
- }
- obj.m_hasPrevVertex = this.m_hasPrevVertex;
- if (this.m_hasPrevVertex) {
- obj.m_prevVertex = this.m_prevVertex._serialize()
- }
- obj.m_hasNextVertex = this.m_hasNextVertex;
- if (this.m_hasNextVertex) {
- obj.m_nextVertex = this.m_nextVertex._serialize()
- }
- return obj
- },
- _deserialize: function (data) {
- this.parent.prototype._deserialize.call(this, data);
- this.m_count = data.m_count;
- this.m_vertices = [];
- for (var i = 0; i < this.m_count; ++i) {
- this.m_vertices[i] = new b2Vec2();
- this.m_vertices[i]._deserialize(data.m_vertices[i])
- }
- this.m_hasPrevVertex = data.m_hasPrevVertex;
- if (this.m_hasPrevVertex) {
- this.m_prevVertex._deserialize(data.m_prevVertex)
- }
- this.m_hasNextVertex = data.m_hasNextVertex;
- if (this.m_hasNextVertex) {
- this.m_nextVertex._deserialize(data.m_nextVertex)
- }
- }
-};
-b2ChainShape._extend(b2Shape);
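-// Editor's sketch (not part of the original library): CreateLoop duplicates
-// the first vertex at the end, so a 4-vertex square stores 5 vertices and
-// reports 4 edge children.
-(function () {
-    var square = [new b2Vec2(0, 0), new b2Vec2(1, 0), new b2Vec2(1, 1), new b2Vec2(0, 1)];
-    var chain = new b2ChainShape();
-    chain.CreateLoop(square, 4);
-    chain.GetChildCount(); // 4
-})();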
-"use strict";
-
-function b2PolygonShape() {
- this.parent.call(this);
- this.m_type = b2Shape.e_polygon;
- this.m_radius = b2_polygonRadius;
- this.m_count = 0;
- this.m_centroid = new b2Vec2();
- this.m_vertices = new Array(b2_maxPolygonVertices);
- this.m_normals = new Array(b2_maxPolygonVertices);
- Object.seal(this)
-}
-b2PolygonShape.prototype = {
- Clone: function () {
- var shape = new b2PolygonShape();
- shape.m_count = this.m_count;
- shape.m_centroid = this.m_centroid.Clone();
- for (var i = 0; i < this.m_count; ++i) {
- shape.m_vertices[i] = this.m_vertices[i].Clone();
- shape.m_normals[i] = this.m_normals[i].Clone()
- }
- return shape
- },
- GetChildCount: function () {
- return 1
- },
- Set: function (vertices, count) {
- if (count < 3) {
- this.SetAsBox(1, 1);
- return
- }
- var n = b2Min(count, b2_maxPolygonVertices);
- var ps = new Array(b2_maxPolygonVertices);
- var tempCount = 0;
- for (var i = 0; i < n; ++i) {
- var v = vertices[i];
- var unique = true;
- for (var j = 0; j < tempCount; ++j) {
-                // Compare squared distance against a squared tolerance.
-                if (b2DistanceSquared(v, ps[j]) < (0.5 * b2_linearSlop) * (0.5 * b2_linearSlop)) {
- unique = false;
- break
- }
- }
- if (unique) {
- ps[tempCount++] = v.Clone()
- }
- }
- n = tempCount;
- if (n < 3) {
- this.SetAsBox(1, 1);
- return
- }
- var i0 = 0;
- var x0 = ps[0].x;
- for (i = 1; i < n; ++i) {
- var x = ps[i].x;
- if (x > x0 || (x == x0 && ps[i].y < ps[i0].y)) {
- i0 = i;
- x0 = x
- }
- }
- var hull = new Array(b2_maxPolygonVertices);
- var m = 0;
- var ih = i0;
- for (;;) {
- hull[m] = ih;
- var ie = 0;
- for (j = 1; j < n; ++j) {
- if (ie == ih) {
- ie = j;
- continue
- }
- var r = b2Vec2.Subtract(ps[ie], ps[hull[m]]);
- var v = b2Vec2.Subtract(ps[j], ps[hull[m]]);
- var c = b2Cross_v2_v2(r, v);
- if (c < 0) {
- ie = j
- }
- if (c == 0 && v.LengthSquared() > r.LengthSquared()) {
- ie = j
- }
-            }
-            ++m;
- ih = ie;
- if (ie == i0) {
- break
- }
- }
- this.m_count = m;
- for (i = 0; i < m; ++i) {
- this.m_vertices[i] = ps[hull[i]].Clone()
- }
- for (i = 0; i < m; ++i) {
- var i1 = i;
- var i2 = i + 1 < m ? i + 1 : 0;
- var edge = b2Vec2.Subtract(this.m_vertices[i2], this.m_vertices[i1]);
- this.m_normals[i] = b2Cross_v2_f(edge, 1).Clone();
- this.m_normals[i].Normalize()
- }
- this.m_centroid = b2PolygonShape.ComputeCentroid(this.m_vertices, m)
- },
- SetAsBox: function (hx, hy, center, angle) {
- this.m_count = 4;
- this.m_vertices[0] = new b2Vec2(-hx, -hy);
- this.m_vertices[1] = new b2Vec2(hx, -hy);
- this.m_vertices[2] = new b2Vec2(hx, hy);
- this.m_vertices[3] = new b2Vec2(-hx, hy);
- this.m_normals[0] = new b2Vec2(0, -1);
- this.m_normals[1] = new b2Vec2(1, 0);
- this.m_normals[2] = new b2Vec2(0, 1);
- this.m_normals[3] = new b2Vec2(-1, 0);
- if (!center) {
- return
- }
- this.m_centroid.Assign(center);
- var xf = new b2Transform();
- xf.p = center;
- xf.q.Set(angle);
- for (var i = 0; i < this.m_count; ++i) {
- this.m_vertices[i].Assign(b2Mul_t_v2(xf, this.m_vertices[i]));
- this.m_normals[i].Assign(b2Mul_r_v2(xf.q, this.m_normals[i]))
- }
- },
- TestPoint: function (xf, p) {
- var pLocal = b2MulT_r_v2(xf.q, b2Vec2.Subtract(p, xf.p));
- for (var i = 0; i < this.m_count; ++i) {
- var dot = b2Dot_v2_v2(this.m_normals[i], b2Vec2.Subtract(pLocal, this.m_vertices[i]));
- if (dot > 0) {
- return false
- }
- }
- return true
- },
- RayCast: function (output, input, xf, childIndex) {
- var p1 = b2MulT_r_v2(xf.q, b2Vec2.Subtract(input.p1, xf.p));
- var p2 = b2MulT_r_v2(xf.q, b2Vec2.Subtract(input.p2, xf.p));
- var d = b2Vec2.Subtract(p2, p1);
- var lower = 0,
- upper = input.maxFraction;
- var index = -1;
- for (var i = 0; i < this.m_count; ++i) {
- var numerator = b2Dot_v2_v2(this.m_normals[i], b2Vec2.Subtract(this.m_vertices[i], p1));
- var denominator = b2Dot_v2_v2(this.m_normals[i], d);
- if (denominator == 0) {
- if (numerator < 0) {
- return false
- }
- } else {
- if (denominator < 0 && numerator < lower * denominator) {
- lower = numerator / denominator;
- index = i
- } else {
- if (denominator > 0 && numerator < upper * denominator) {
- upper = numerator / denominator
- }
- }
- }
- if (upper < lower) {
- return false
- }
- }
- if (index >= 0) {
- output.fraction = lower;
- output.normal = b2Mul_r_v2(xf.q, this.m_normals[index]);
- return true
- }
- return false
- },
- ComputeAABB: function (aabb, xf, childIndex) {
- var lowerx = (xf.q.c * this.m_vertices[0].x - xf.q.s * this.m_vertices[0].y) + xf.p.x;
- var lowery = (xf.q.s * this.m_vertices[0].x + xf.q.c * this.m_vertices[0].y) + xf.p.y;
- var upperx = lowerx;
- var uppery = lowery;
- for (var i = 1; i < this.m_count; ++i) {
- var vx = (xf.q.c * this.m_vertices[i].x - xf.q.s * this.m_vertices[i].y) + xf.p.x;
- var vy = (xf.q.s * this.m_vertices[i].x + xf.q.c * this.m_vertices[i].y) + xf.p.y;
- lowerx = b2Min(lowerx, vx);
- lowery = b2Min(lowery, vy);
- upperx = b2Max(upperx, vx);
- uppery = b2Max(uppery, vy)
- }
- aabb.lowerBound.x = lowerx - this.m_radius;
- aabb.lowerBound.y = lowery - this.m_radius;
- aabb.upperBound.x = upperx + this.m_radius;
- aabb.upperBound.y = uppery + this.m_radius
- },
- ComputeMass: function (massData, density) {
- var center = new b2Vec2(0, 0);
- var area = 0;
- var I = 0;
- var s = new b2Vec2(0, 0);
- for (var i = 0; i < this.m_count; ++i) {
- s.Add(this.m_vertices[i])
- }
- s.Multiply(1 / this.m_count);
- var k_inv3 = 1 / 3;
- for (var i = 0; i < this.m_count; ++i) {
- var e1 = b2Vec2.Subtract(this.m_vertices[i], s);
- var e2 = i + 1 < this.m_count ? b2Vec2.Subtract(this.m_vertices[i + 1], s) : b2Vec2.Subtract(this.m_vertices[0], s);
- var D = b2Cross_v2_v2(e1, e2);
- var triangleArea = 0.5 * D;
- area += triangleArea;
- center.Add(b2Vec2.Multiply(triangleArea * k_inv3, b2Vec2.Add(e1, e2)));
- var ex1 = e1.x,
- ey1 = e1.y;
- var ex2 = e2.x,
- ey2 = e2.y;
- var intx2 = ex1 * ex1 + ex2 * ex1 + ex2 * ex2;
- var inty2 = ey1 * ey1 + ey2 * ey1 + ey2 * ey2;
- I += (0.25 * k_inv3 * D) * (intx2 + inty2)
- }
- massData.mass = density * area;
- center.Multiply(1 / area);
- massData.center = b2Vec2.Add(center, s);
- massData.I = density * I;
- massData.I += massData.mass * (b2Dot_v2_v2(massData.center, massData.center) - b2Dot_v2_v2(center, center))
- },
- GetVertexCount: function () {
- return this.m_count
- },
- GetVertex: function (index) {
- return this.m_vertices[index]
- },
- Validate: function () {
- for (var i = 0; i < this.m_count; ++i) {
- var i1 = i;
- var i2 = i < this.m_count - 1 ? i1 + 1 : 0;
- var p = this.m_vertices[i1];
- var e = b2Vec2.Subtract(this.m_vertices[i2], p);
- for (var j = 0; j < this.m_count; ++j) {
- if (j == i1 || j == i2) {
- continue
- }
- var v = b2Vec2.Subtract(this.m_vertices[j], p);
- var c = b2Cross_v2_v2(e, v);
- if (c < 0) {
- return false
- }
- }
- }
- return true
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.m_count = this.m_count;
- obj.m_centroid = this.m_centroid._serialize();
- obj.m_vertices = [];
- obj.m_normals = [];
- for (var i = 0; i < this.m_count; ++i) {
- obj.m_vertices.push(this.m_vertices[i]._serialize());
- obj.m_normals.push(this.m_normals[i]._serialize())
- }
- return obj
- },
- _deserialize: function (data) {
- this.parent.prototype._deserialize.call(this, data);
- this.m_count = data.m_count;
- this.m_centroid._deserialize(data.m_centroid);
- this.m_vertices = [];
- this.m_normals = [];
- for (var i = 0; i < this.m_count; ++i) {
- this.m_vertices[i] = new b2Vec2();
- this.m_vertices[i]._deserialize(data.m_vertices[i]);
- this.m_normals[i] = new b2Vec2();
- this.m_normals[i]._deserialize(data.m_normals[i])
- }
- }
-};
-b2PolygonShape.ComputeCentroid = function (vs, count) {
- var c = new b2Vec2();
- var area = 0;
- var pRef = new b2Vec2(0, 0);
- var inv3 = 1 / 3;
- for (var i = 0; i < count; ++i) {
- var p1 = pRef;
- var p2 = vs[i];
- var p3 = i + 1 < count ? vs[i + 1] : vs[0];
- var e1 = b2Vec2.Subtract(p2, p1);
- var e2 = b2Vec2.Subtract(p3, p1);
- var D = b2Cross_v2_v2(e1, e2);
- var triangleArea = 0.5 * D;
- area += triangleArea;
- c.Add(b2Vec2.Multiply(triangleArea, b2Vec2.Multiply(inv3, b2Vec2.Add(b2Vec2.Add(p1, p2), p3))))
- }
- c.Multiply(1 / area);
- return c
-};
-b2PolygonShape._extend(b2Shape);
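-// Editor's sketch (not part of the original library): for a unit box of
-// density 1, ComputeMass should recover the analytic values m = 1,
-// center = (0, 0), and I = m * (w^2 + h^2) / 12 = 1/6 about the origin.
-(function () {
-    var box = new b2PolygonShape();
-    box.SetAsBox(0.5, 0.5); // unit box centered on the origin
-    var md = new b2MassData();
-    box.ComputeMass(md, 1); // md.mass === 1, md.center === (0, 0), md.I ~ 1/6
-})();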
-"use strict";
-
-function b2Pair() {
- this.proxyIdA = 0;
- this.proxyIdB = 0
-}
-
-// Comparator for Array.prototype.sort: orders pairs by proxyIdA, then proxyIdB.
-function b2PairLessThan(pair1, pair2) {
- if (pair1.proxyIdA == pair2.proxyIdA) {
- return pair1.proxyIdB - pair2.proxyIdB
- }
- return pair1.proxyIdA - pair2.proxyIdA
-}
-
-function b2BroadPhase() {
- this.m_tree = new b2DynamicTree();
- this.m_queryProxyId = 0;
- this.m_proxyCount = 0;
- this.m_pairCount = 0;
- this.m_pairBuffer = [];
- this.m_moveCount = 0;
- this.m_moveBuffer = []
-}
-b2BroadPhase.prototype = {
- CreateProxy: function (aabb, userData) {
- var proxyId = this.m_tree.CreateProxy(aabb, userData);
- ++this.m_proxyCount;
- this.BufferMove(proxyId);
- return proxyId
- },
- DestroyProxy: function (proxyId) {
- this.UnBufferMove(proxyId);
- --this.m_proxyCount;
- this.m_tree.DestroyProxy(proxyId)
- },
- MoveProxy: function (proxyId, aabb, displacement) {
- var buffer = this.m_tree.MoveProxy(proxyId, aabb, displacement);
- if (buffer) {
- this.BufferMove(proxyId)
- }
- },
- TouchProxy: function (proxyId) {
- this.BufferMove(proxyId)
- },
- GetFatAABB: function (proxyId) {
- return this.m_tree.GetFatAABB(proxyId)
- },
- GetUserData: function (proxyId) {
- return this.m_tree.GetUserData(proxyId)
- },
- TestOverlap: function (proxyIdA, proxyIdB) {
- var aabbA = this.m_tree.GetFatAABB(proxyIdA);
- var aabbB = this.m_tree.GetFatAABB(proxyIdB);
- return b2TestOverlap(aabbA, aabbB)
- },
- GetProxyCount: function () {
- return this.m_proxyCount
- },
- UpdatePairs: function (callback) {
- this.m_pairCount = 0;
- this.m_pairBuffer.length = 0;
- for (var i = 0; i < this.m_moveCount; ++i) {
- this.m_queryProxyId = this.m_moveBuffer[i];
- if (this.m_queryProxyId == b2BroadPhase.e_nullProxy) {
- continue
- }
- var fatAABB = this.m_tree.GetFatAABB(this.m_queryProxyId);
- this.m_tree.Query(this, fatAABB)
- }
- this.m_moveCount = 0;
- this.m_pairBuffer.sort(b2PairLessThan);
- var i = 0;
- while (i < this.m_pairCount) {
- var primaryPair = this.m_pairBuffer[i];
- var userDataA = this.m_tree.GetUserData(primaryPair.proxyIdA);
- var userDataB = this.m_tree.GetUserData(primaryPair.proxyIdB);
- callback.AddPair(userDataA, userDataB);
- ++i;
- while (i < this.m_pairCount) {
- var pair = this.m_pairBuffer[i];
- if (pair.proxyIdA != primaryPair.proxyIdA || pair.proxyIdB != primaryPair.proxyIdB) {
- break
-                }
-                ++i
- }
- }
- },
- Query: function (callback, aabb) {
- this.m_tree.Query(callback, aabb)
- },
- RayCast: function (callback, input) {
- this.m_tree.RayCast(callback, input)
- },
- GetTreeHeight: function () {
- return this.m_tree.GetHeight()
- },
- GetTreeBalance: function () {
- return this.m_tree.GetMaxBalance()
- },
- GetTreeQuality: function () {
- return this.m_tree.GetAreaRatio()
- },
- ShiftOrigin: function (newOrigin) {
- this.m_tree.ShiftOrigin(newOrigin)
- },
- BufferMove: function (proxyId) {
- this.m_moveBuffer[this.m_moveCount] = proxyId;
- ++this.m_moveCount
- },
- UnBufferMove: function (proxyId) {
- for (var i = 0; i < this.m_moveCount; ++i) {
- if (this.m_moveBuffer[i] == proxyId) {
- this.m_moveBuffer[i] = b2BroadPhase.e_nullProxy
- }
- }
- },
- QueryCallback: function (proxyId) {
- if (proxyId == this.m_queryProxyId) {
- return true
- }
- this.m_pairBuffer[this.m_pairCount] = new b2Pair();
- this.m_pairBuffer[this.m_pairCount].proxyIdA = b2Min(proxyId, this.m_queryProxyId);
- this.m_pairBuffer[this.m_pairCount].proxyIdB = b2Max(proxyId, this.m_queryProxyId);
- ++this.m_pairCount;
- return true
- }
-};
-b2BroadPhase.e_nullProxy = -1;
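-// Editor's sketch (not part of the original library): UpdatePairs only needs
-// a callback object exposing AddPair; it receives the userData values that
-// were handed to CreateProxy for each newly overlapping proxy pair.
-(function () {
-    var pairCallback = {
-        AddPair: function (userDataA, userDataB) {
-            // e.g. create or wake a contact between the two fixtures
-        }
-    };
-    // broadPhase.UpdatePairs(pairCallback); // with a populated b2BroadPhase
-})();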
-"use strict";
-
-function b2DistanceProxy() {
- this.m_vertices = null;
- this.m_count = 0;
- this.m_radius = 0
-}
-b2DistanceProxy.prototype = {
- Assign: function (l) {
- this.m_vertices = l.m_vertices;
- this.m_count = l.m_count;
- this.m_radius = l.m_radius
- },
- Set: function (shape, index) {
- switch (shape.GetType()) {
- case b2Shape.e_circle:
- var circle = shape;
- this.m_vertices = [circle.m_p];
- this.m_count = 1;
- this.m_radius = circle.m_radius;
- break;
- case b2Shape.e_polygon:
- var polygon = shape;
- this.m_vertices = polygon.m_vertices;
- this.m_count = polygon.m_count;
- this.m_radius = polygon.m_radius;
- break;
- case b2Shape.e_chain:
- var chain = shape;
- this.m_vertices = [chain.m_vertices[index]];
- if (index + 1 < chain.m_count) {
- this.m_vertices[1] = chain.m_vertices[index + 1]
- } else {
- this.m_vertices[1] = chain.m_vertices[0]
- }
- this.m_count = 2;
- this.m_radius = chain.m_radius;
- break;
- case b2Shape.e_edge:
- var edge = shape;
- this.m_vertices = [edge.m_vertex1, edge.m_vertex2];
- this.m_count = 2;
- this.m_radius = edge.m_radius;
- break
- }
- },
- GetSupport: function (dx, dy) {
- var bestIndex = 0;
- var bestValue = this.m_vertices[0].x * dx + this.m_vertices[0].y * dy;
- for (var i = 1; i < this.m_count; ++i) {
- var value = this.m_vertices[i].x * dx + this.m_vertices[i].y * dy;
- if (value > bestValue) {
- bestIndex = i;
- bestValue = value
- }
- }
- return bestIndex
- },
- GetSupportVertex: function (dx, dy) {
- return this.m_vertices[this.GetSupport(dx, dy)]
- },
- GetVertexCount: function () {
- return this.m_count
- },
- GetVertex: function (index) {
- return this.m_vertices[index]
- }
-};
-
-function b2SimplexCache() {
- this.metric = 0;
- this.count = 0;
- this.indexA = [0, 0, 0];
- this.indexB = [0, 0, 0]
-}
-
-function b2DistanceInput() {
- this.proxyA = new b2DistanceProxy();
- this.proxyB = new b2DistanceProxy();
- this.transformA = new b2Transform();
- this.transformB = new b2Transform();
- this.useRadii = false
-}
-
-function b2DistanceOutput() {
- this.pointA = new b2Vec2();
- this.pointB = new b2Vec2();
- this.distance = 0;
- this.iterations = 0
-}
-
-function b2SimplexVertex() {
- this.wA = new b2Vec2();
- this.wB = new b2Vec2();
- this.w = new b2Vec2();
- this.a = 0;
- this.indexA = 0;
- this.indexB = 0
-}
-b2SimplexVertex.prototype = {
- Assign: function (l) {
- this.wA.x = l.wA.x;
- this.wA.y = l.wA.y;
- this.wB.x = l.wB.x;
- this.wB.y = l.wB.y;
- this.w.x = l.w.x;
- this.w.y = l.w.y;
- this.a = l.a;
- this.indexA = l.indexA;
- this.indexB = l.indexB
- }
-};
-
-function b2Simplex() {
- this.m_v = [new b2SimplexVertex(), new b2SimplexVertex(), new b2SimplexVertex()];
- this.m_count = 0
-}
-b2Simplex.prototype = {
- ReadCache: function (cache, proxyA, transformA, proxyB, transformB) {
- this.m_count = cache.count;
- var vertices = this.m_v;
- for (var i = 0; i < this.m_count; ++i) {
- var v = vertices[i];
- v.indexA = cache.indexA[i];
- v.indexB = cache.indexB[i];
- var wALocal = proxyA.GetVertex(v.indexA);
- var wBLocal = proxyB.GetVertex(v.indexB);
- v.wA.x = (transformA.q.c * wALocal.x - transformA.q.s * wALocal.y) + transformA.p.x;
- v.wA.y = (transformA.q.s * wALocal.x + transformA.q.c * wALocal.y) + transformA.p.y;
- v.wB.x = (transformB.q.c * wBLocal.x - transformB.q.s * wBLocal.y) + transformB.p.x;
- v.wB.y = (transformB.q.s * wBLocal.x + transformB.q.c * wBLocal.y) + transformB.p.y;
- v.w.x = v.wB.x - v.wA.x;
- v.w.y = v.wB.y - v.wA.y;
- v.a = 0
- }
- if (this.m_count > 1) {
- var metric1 = cache.metric;
- var metric2 = this.GetMetric();
- if (metric2 < 0.5 * metric1 || 2 * metric1 < metric2 || metric2 < b2_epsilon) {
- this.m_count = 0
- }
- }
- if (this.m_count == 0) {
- var v = vertices[0];
- v.indexA = 0;
- v.indexB = 0;
- var wALocal = proxyA.GetVertex(0);
- var wBLocal = proxyB.GetVertex(0);
- v.wA.x = (transformA.q.c * wALocal.x - transformA.q.s * wALocal.y) + transformA.p.x;
- v.wA.y = (transformA.q.s * wALocal.x + transformA.q.c * wALocal.y) + transformA.p.y;
- v.wB.x = (transformB.q.c * wBLocal.x - transformB.q.s * wBLocal.y) + transformB.p.x;
- v.wB.y = (transformB.q.s * wBLocal.x + transformB.q.c * wBLocal.y) + transformB.p.y;
- v.w.x = v.wB.x - v.wA.x;
- v.w.y = v.wB.y - v.wA.y;
- v.a = 1;
- this.m_count = 1
- }
- },
- WriteCache: function (cache) {
- cache.metric = this.GetMetric();
- cache.count = this.m_count;
- var vertices = this.m_v;
- for (var i = 0; i < this.m_count; ++i) {
- cache.indexA[i] = vertices[i].indexA;
- cache.indexB[i] = vertices[i].indexB
- }
- },
- GetSearchDirection: function (p) {
- switch (this.m_count) {
- case 1:
- p.x = -this.m_v[0].w.x;
- p.y = -this.m_v[0].w.y;
- break;
- case 2:
- var e12x = this.m_v[1].w.x - this.m_v[0].w.x;
- var e12y = this.m_v[1].w.y - this.m_v[0].w.y;
- var sgn = e12x * -this.m_v[0].w.y - e12y * -this.m_v[0].w.x;
- if (sgn > 0) {
- p.x = -1 * e12y;
- p.y = 1 * e12x
- } else {
- p.x = 1 * e12y;
- p.y = -1 * e12x
- }
- break
- }
- },
- GetClosestPoint: function (p) {
- switch (this.m_count) {
- case 1:
- p.x = this.m_v[0].w.x;
- p.y = this.m_v[0].w.y;
- break;
- case 2:
- p.x = (this.m_v[0].a * this.m_v[0].w.x) + (this.m_v[1].a * this.m_v[1].w.x);
- p.y = (this.m_v[0].a * this.m_v[0].w.y) + (this.m_v[1].a * this.m_v[1].w.y);
- break;
- case 3:
- p.x = p.y = 0;
- break
- }
- },
- GetWitnessPoints: function (pA, pB) {
- switch (this.m_count) {
- case 1:
- pA.x = this.m_v[0].wA.x;
- pA.y = this.m_v[0].wA.y;
- pB.x = this.m_v[0].wB.x;
- pB.y = this.m_v[0].wB.y;
- break;
- case 2:
- pA.x = (this.m_v[0].a * this.m_v[0].wA.x) + (this.m_v[1].a * this.m_v[1].wA.x);
- pA.y = (this.m_v[0].a * this.m_v[0].wA.y) + (this.m_v[1].a * this.m_v[1].wA.y);
- pB.x = (this.m_v[0].a * this.m_v[0].wB.x) + (this.m_v[1].a * this.m_v[1].wB.x);
- pB.y = (this.m_v[0].a * this.m_v[0].wB.y) + (this.m_v[1].a * this.m_v[1].wB.y);
- break;
- case 3:
-            // Weighted (barycentric) combination of the three support points.
-            pA.x = this.m_v[0].a * this.m_v[0].wA.x + this.m_v[1].a * this.m_v[1].wA.x + this.m_v[2].a * this.m_v[2].wA.x;
-            pA.y = this.m_v[0].a * this.m_v[0].wA.y + this.m_v[1].a * this.m_v[1].wA.y + this.m_v[2].a * this.m_v[2].wA.y;
- pB.x = pA.x;
- pB.y = pA.y;
- break
- }
- },
- GetMetric: function () {
- switch (this.m_count) {
- case 1:
- return 0;
- case 2:
- return b2Distance(this.m_v[0].w, this.m_v[1].w);
- case 3:
- return (this.m_v[1].w.x - this.m_v[0].w.x) * (this.m_v[2].w.y - this.m_v[0].w.y) - (this.m_v[1].w.y - this.m_v[0].w.y) * (this.m_v[2].w.x - this.m_v[0].w.x)
- }
- },
- Solve2: function () {
- var w1 = this.m_v[0].w;
- var w2 = this.m_v[1].w;
- var e12x = w2.x - w1.x;
- var e12y = w2.y - w1.y;
- var d12_2 = -(w1.x * e12x + w1.y * e12y);
- if (d12_2 <= 0) {
- this.m_v[0].a = 1;
- this.m_count = 1;
- return
- }
- var d12_1 = w2.x * e12x + w2.y * e12y;
- if (d12_1 <= 0) {
- this.m_v[1].a = 1;
- this.m_count = 1;
- this.m_v[0].Assign(this.m_v[1]);
- return
- }
- var inv_d12 = 1 / (d12_1 + d12_2);
- this.m_v[0].a = d12_1 * inv_d12;
- this.m_v[1].a = d12_2 * inv_d12;
- this.m_count = 2
- },
- Solve3: function () {
- var w1 = this.m_v[0].w;
- var w2 = this.m_v[1].w;
- var w3 = this.m_v[2].w;
- var e12x = w2.x - w1.x;
- var e12y = w2.y - w1.y;
- var w1e12 = w1.x * e12x + w1.y * e12y;
- var w2e12 = w2.x * e12x + w2.y * e12y;
- var d12_1 = w2e12;
- var d12_2 = -w1e12;
- var e13x = w3.x - w1.x;
- var e13y = w3.y - w1.y;
- var w1e13 = w1.x * e13x + w1.y * e13y;
- var w3e13 = w3.x * e13x + w3.y * e13y;
- var d13_1 = w3e13;
- var d13_2 = -w1e13;
- var e23x = w3.x - w2.x;
- var e23y = w3.y - w2.y;
- var w2e23 = w2.x * e23x + w2.y * e23y;
- var w3e23 = w3.x * e23x + w3.y * e23y;
- var d23_1 = w3e23;
- var d23_2 = -w2e23;
- var n123 = e12x * e13y - e12y * e13x;
- var d123_1 = n123 * (w2.x * w3.y - w2.y * w3.x);
- var d123_2 = n123 * (w3.x * w1.y - w3.y * w1.x);
- var d123_3 = n123 * (w1.x * w2.y - w1.y * w2.x);
- if (d12_2 <= 0 && d13_2 <= 0) {
- this.m_v[0].a = 1;
- this.m_count = 1;
- return
- }
- if (d12_1 > 0 && d12_2 > 0 && d123_3 <= 0) {
- var inv_d12 = 1 / (d12_1 + d12_2);
- this.m_v[0].a = d12_1 * inv_d12;
- this.m_v[1].a = d12_2 * inv_d12;
- this.m_count = 2;
- return
- }
- if (d13_1 > 0 && d13_2 > 0 && d123_2 <= 0) {
- var inv_d13 = 1 / (d13_1 + d13_2);
- this.m_v[0].a = d13_1 * inv_d13;
- this.m_v[2].a = d13_2 * inv_d13;
- this.m_count = 2;
- this.m_v[1].Assign(this.m_v[2]);
- return
- }
- if (d12_1 <= 0 && d23_2 <= 0) {
- this.m_v[1].a = 1;
- this.m_count = 1;
- this.m_v[0].Assign(this.m_v[1]);
- return
- }
- if (d13_1 <= 0 && d23_1 <= 0) {
- this.m_v[2].a = 1;
- this.m_count = 1;
- this.m_v[0].Assign(this.m_v[2]);
- return
- }
- if (d23_1 > 0 && d23_2 > 0 && d123_1 <= 0) {
- var inv_d23 = 1 / (d23_1 + d23_2);
- this.m_v[1].a = d23_1 * inv_d23;
- this.m_v[2].a = d23_2 * inv_d23;
- this.m_count = 2;
- this.m_v[0].Assign(this.m_v[2]);
- return
- }
- var inv_d123 = 1 / (d123_1 + d123_2 + d123_3);
- this.m_v[0].a = d123_1 * inv_d123;
- this.m_v[1].a = d123_2 * inv_d123;
- this.m_v[2].a = d123_3 * inv_d123;
- this.m_count = 3
- }
-};
-var _b2Distance_simplex = new b2Simplex();
-var _b2Distance_normal = new b2Vec2();
-var _b2Distance_p = new b2Vec2();
-
-function b2DistanceFunc(output, cache, input) {
- ++b2DistanceFunc.b2_gjkCalls;
- var proxyA = input.proxyA;
- var proxyB = input.proxyB;
- var transformA = input.transformA;
- var transformB = input.transformB;
- _b2Distance_simplex.ReadCache(cache, proxyA, transformA, proxyB, transformB);
- var vertices = _b2Distance_simplex.m_v;
- var k_maxIters = 20;
- var saveA = [0, 0, 0],
- saveB = [0, 0, 0];
- var saveCount = 0;
- var distanceSqr1 = b2_maxFloat;
- var distanceSqr2 = distanceSqr1;
- var iter = 0;
- while (iter < k_maxIters) {
- saveCount = _b2Distance_simplex.m_count;
- for (var i = 0; i < saveCount; ++i) {
- saveA[i] = vertices[i].indexA;
- saveB[i] = vertices[i].indexB
- }
- switch (_b2Distance_simplex.m_count) {
- case 1:
- break;
- case 2:
- _b2Distance_simplex.Solve2();
- break;
- case 3:
- _b2Distance_simplex.Solve3();
- break
- }
- if (_b2Distance_simplex.m_count == 3) {
- break
- }
- _b2Distance_simplex.GetClosestPoint(_b2Distance_p);
- distanceSqr2 = _b2Distance_p.LengthSquared();
- if (distanceSqr2 >= distanceSqr1) {}
- distanceSqr1 = distanceSqr2;
- _b2Distance_simplex.GetSearchDirection(_b2Distance_p);
- if (_b2Distance_p.LengthSquared() < b2_epsilon * b2_epsilon) {
- break
- }
- var vertex = vertices[_b2Distance_simplex.m_count];
- vertex.indexA = proxyA.GetSupport(transformA.q.c * -_b2Distance_p.x + transformA.q.s * -_b2Distance_p.y, -transformA.q.s * -_b2Distance_p.x + transformA.q.c * -_b2Distance_p.y);
- var pva = proxyA.GetVertex(vertex.indexA);
- vertex.wA.x = (transformA.q.c * pva.x - transformA.q.s * pva.y) + transformA.p.x;
- vertex.wA.y = (transformA.q.s * pva.x + transformA.q.c * pva.y) + transformA.p.y;
- vertex.indexB = proxyB.GetSupport(transformB.q.c * _b2Distance_p.x + transformB.q.s * _b2Distance_p.y, -transformB.q.s * _b2Distance_p.x + transformB.q.c * _b2Distance_p.y);
- var pvb = proxyB.GetVertex(vertex.indexB);
- vertex.wB.x = (transformB.q.c * pvb.x - transformB.q.s * pvb.y) + transformB.p.x;
- vertex.wB.y = (transformB.q.s * pvb.x + transformB.q.c * pvb.y) + transformB.p.y;
- vertex.w.x = vertex.wB.x - vertex.wA.x;
- vertex.w.y = vertex.wB.y - vertex.wA.y;
- ++iter;
- ++b2DistanceFunc.b2_gjkIters;
- var duplicate = false;
- for (var i = 0; i < saveCount; ++i) {
- if (vertex.indexA == saveA[i] && vertex.indexB == saveB[i]) {
- duplicate = true;
- break
- }
- }
- if (duplicate) {
- break
-        }
-        ++_b2Distance_simplex.m_count
- }
- b2DistanceFunc.b2_gjkMaxIters = b2Max(b2DistanceFunc.b2_gjkMaxIters, iter);
- _b2Distance_simplex.GetWitnessPoints(output.pointA, output.pointB);
- output.distance = b2Distance(output.pointA, output.pointB);
- output.iterations = iter;
- _b2Distance_simplex.WriteCache(cache);
- if (input.useRadii) {
- var rA = proxyA.m_radius;
- var rB = proxyB.m_radius;
- if (output.distance > rA + rB && output.distance > b2_epsilon) {
- output.distance -= rA + rB;
- _b2Distance_normal.x = output.pointB.x - output.pointA.x;
- _b2Distance_normal.y = output.pointB.y - output.pointA.y;
- _b2Distance_normal.Normalize();
- output.pointA.x += (rA * _b2Distance_normal.x);
- output.pointA.y += (rA * _b2Distance_normal.y);
- output.pointB.x -= (rB * _b2Distance_normal.x);
- output.pointB.y -= (rB * _b2Distance_normal.y)
- } else {
- var px = (0.5 * (output.pointA.x + output.pointB.x));
- var py = (0.5 * (output.pointA.y + output.pointB.y));
- output.pointA.x = px;
- output.pointA.y = py;
- output.pointB.x = px;
- output.pointB.y = py;
- output.distance = 0
- }
- }
-}
-b2DistanceFunc.b2_gjkCalls = 0;
-b2DistanceFunc.b2_gjkIters = 0;
-b2DistanceFunc.b2_gjkMaxIters = 0;
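-// Editor's sketch (not part of the original library): querying the GJK
-// distance between two unit circles whose centers are 5 apart. With
-// useRadii set, both radii are subtracted from the center distance.
-(function () {
-    var a = new b2CircleShape();
-    a.m_radius = 1;
-    var b = new b2CircleShape();
-    b.m_radius = 1;
-    var input = new b2DistanceInput();
-    input.proxyA.Set(a, 0);
-    input.proxyB.Set(b, 0);
-    input.transformA.SetIdentity();
-    input.transformB.SetIdentity();
-    input.transformB.p.Set(5, 0);
-    input.useRadii = true;
-    var cache = new b2SimplexCache();
-    var output = new b2DistanceOutput();
-    b2DistanceFunc(output, cache, input);
-    // output.distance === 3; output.pointA === (1, 0), output.pointB === (4, 0)
-})();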
-"use strict";
-var b2_nullFeature = 255;
-
-function b2ContactID() {}
-b2ContactID.prototype = {
- indexA: 0,
- indexB: 0,
- typeA: 0,
- typeB: 0,
- Reset: function () {
- this.indexA = this.indexB = this.typeA = this.typeB = 0
- },
- Get: function () {
- return this.indexA | (this.indexB << 8) | (this.typeA << 16) | (this.typeB << 24)
- },
- Assign: function (k) {
- this.indexA = k.indexA;
- this.indexB = k.indexB;
- this.typeA = k.typeA;
- this.typeB = k.typeB
- }
-};
-b2ContactID.e_vertex = 0;
-b2ContactID.e_face = 1;
-
-function b2ManifoldPoint() {
- this.localPoint = new b2Vec2();
- this.normalImpulse = 0;
- this.tangentImpulse = 0;
- this.id = new b2ContactID()
-}
-b2ManifoldPoint.prototype = {
- Clone: function () {
- var point = new b2ManifoldPoint();
- point.localPoint.x = this.localPoint.x;
- point.localPoint.y = this.localPoint.y;
- point.normalImpulse = this.normalImpulse;
- point.tangentImpulse = this.tangentImpulse;
- point.id.Assign(this.id);
- return point
- }
-};
-
-function b2Manifold() {
- this.points = new Array(b2_maxManifoldPoints);
- this.localNormal = new b2Vec2();
- this.localPoint = new b2Vec2();
- this.type = 0;
- this.pointCount = 0
-}
-b2Manifold.prototype = {
- Clone: function () {
- var manifold = new b2Manifold();
- manifold.pointCount = this.pointCount;
- manifold.type = this.type;
- manifold.localPoint.x = this.localPoint.x;
- manifold.localPoint.y = this.localPoint.y;
- manifold.localNormal.x = this.localNormal.x;
- manifold.localNormal.y = this.localNormal.y;
- for (var i = 0; i < this.pointCount; ++i) {
- manifold.points[i] = this.points[i].Clone()
- }
- return manifold
- },
- Assign: function (manifold) {
- this.pointCount = manifold.pointCount;
- this.type = manifold.type;
- this.localPoint.x = manifold.localPoint.x;
- this.localPoint.y = manifold.localPoint.y;
- this.localNormal.x = manifold.localNormal.x;
- this.localNormal.y = manifold.localNormal.y;
- for (var i = 0; i < this.pointCount; ++i) {
- this.points[i] = manifold.points[i].Clone()
- }
- }
-};
-b2Manifold.e_circles = 0;
-b2Manifold.e_faceA = 1;
-b2Manifold.e_faceB = 2;
-b2Manifold.b2_nullState = 0;
-b2Manifold.b2_addState = 1;
-b2Manifold.b2_persistState = 2;
-b2Manifold.b2_removeState = 3;
-
-function b2WorldManifold() {
- this.normal = new b2Vec2();
- this.points = new Array(b2_maxManifoldPoints);
- this.separations = new Array(b2_maxManifoldPoints)
-}
-b2WorldManifold.prototype = {
- Initialize: function (manifold, xfA, radiusA, xfB, radiusB) {
- if (manifold.pointCount == 0) {
- return
- }
- switch (manifold.type) {
- case b2Manifold.e_circles:
- this.normal.x = 1;
- this.normal.y = 0;
- var pointAx = (xfA.q.c * manifold.localPoint.x - xfA.q.s * manifold.localPoint.y) + xfA.p.x;
- var pointAy = (xfA.q.s * manifold.localPoint.x + xfA.q.c * manifold.localPoint.y) + xfA.p.y;
- var pointBx = (xfB.q.c * manifold.points[0].localPoint.x - xfB.q.s * manifold.points[0].localPoint.y) + xfB.p.x;
- var pointBy = (xfB.q.s * manifold.points[0].localPoint.x + xfB.q.c * manifold.points[0].localPoint.y) + xfB.p.y;
- var cx = pointAx - pointBx;
- var cy = pointAy - pointBy;
- if ((cx * cx + cy * cy) > b2_epsilon * b2_epsilon) {
- this.normal.x = pointBx - pointAx;
- this.normal.y = pointBy - pointAy;
- this.normal.Normalize()
- }
- var cAx = pointAx + (radiusA * this.normal.x);
- var cAy = pointAy + (radiusA * this.normal.y);
- var cBx = pointBx - (radiusB * this.normal.x);
- var cBy = pointBy - (radiusB * this.normal.y);
- this.points[0] = new b2Vec2(0.5 * (cAx + cBx), 0.5 * (cAy + cBy));
- this.separations[0] = (cBx - cAx) * this.normal.x + (cBy - cAy) * this.normal.y;
- break;
- case b2Manifold.e_faceA:
- this.normal.x = xfA.q.c * manifold.localNormal.x - xfA.q.s * manifold.localNormal.y;
- this.normal.y = xfA.q.s * manifold.localNormal.x + xfA.q.c * manifold.localNormal.y;
- var planePointx = (xfA.q.c * manifold.localPoint.x - xfA.q.s * manifold.localPoint.y) + xfA.p.x;
- var planePointy = (xfA.q.s * manifold.localPoint.x + xfA.q.c * manifold.localPoint.y) + xfA.p.y;
- for (var i = 0; i < manifold.pointCount; ++i) {
- var clipPointx = (xfB.q.c * manifold.points[i].localPoint.x - xfB.q.s * manifold.points[i].localPoint.y) + xfB.p.x;
- var clipPointy = (xfB.q.s * manifold.points[i].localPoint.x + xfB.q.c * manifold.points[i].localPoint.y) + xfB.p.y;
- var d = (clipPointx - planePointx) * this.normal.x + (clipPointy - planePointy) * this.normal.y;
- var cAx = clipPointx + ((radiusA - d) * this.normal.x);
- var cAy = clipPointy + ((radiusA - d) * this.normal.y);
- var cBx = (clipPointx - (radiusB * this.normal.x));
- var cBy = (clipPointy - (radiusB * this.normal.y));
- this.points[i] = new b2Vec2(0.5 * (cAx + cBx), 0.5 * (cAy + cBy));
- this.separations[i] = (cBx - cAx) * this.normal.x + (cBy - cAy) * this.normal.y
- }
- break;
- case b2Manifold.e_faceB:
- this.normal.x = xfB.q.c * manifold.localNormal.x - xfB.q.s * manifold.localNormal.y;
- this.normal.y = xfB.q.s * manifold.localNormal.x + xfB.q.c * manifold.localNormal.y;
- var planePointx = (xfB.q.c * manifold.localPoint.x - xfB.q.s * manifold.localPoint.y) + xfB.p.x;
- var planePointy = (xfB.q.s * manifold.localPoint.x + xfB.q.c * manifold.localPoint.y) + xfB.p.y;
- for (var i = 0; i < manifold.pointCount; ++i) {
- var clipPointx = (xfA.q.c * manifold.points[i].localPoint.x - xfA.q.s * manifold.points[i].localPoint.y) + xfA.p.x;
- var clipPointy = (xfA.q.s * manifold.points[i].localPoint.x + xfA.q.c * manifold.points[i].localPoint.y) + xfA.p.y;
- var d = (clipPointx - planePointx) * this.normal.x + (clipPointy - planePointy) * this.normal.y;
- var cBx = clipPointx + ((radiusB - d) * this.normal.x);
- var cBy = clipPointy + ((radiusB - d) * this.normal.y);
- var cAx = (clipPointx - (radiusA * this.normal.x));
- var cAy = (clipPointy - (radiusA * this.normal.y));
- this.points[i] = new b2Vec2(0.5 * (cAx + cBx), 0.5 * (cAy + cBy));
- this.separations[i] = (cAx - cBx) * this.normal.x + (cAy - cBy) * this.normal.y
- }
- this.normal.x = -this.normal.x;
- this.normal.y = -this.normal.y;
- break
- }
- }
-};
-
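-// Classifies the points of two successive manifolds by matching contact ids:
-// ids only in manifold1 become b2_removeState, ids only in manifold2 become
-// b2_addState, and shared ids become b2_persistState.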
-function b2GetPointStates(state1, state2, manifold1, manifold2) {
- for (var i = 0; i < b2_maxManifoldPoints; ++i) {
- state1[i] = b2Manifold.b2_nullState;
- state2[i] = b2Manifold.b2_nullState
- }
- for (var i = 0; i < manifold1.pointCount; ++i) {
- var id = manifold1.points[i].id;
- state1[i] = b2Manifold.b2_removeState;
- for (var j = 0; j < manifold2.pointCount; ++j) {
- if (manifold2.points[j].id.Get() == id.Get()) {
- state1[i] = b2Manifold.b2_persistState;
- break
- }
- }
- }
- for (var i = 0; i < manifold2.pointCount; ++i) {
- var id = manifold2.points[i].id;
- state2[i] = b2Manifold.b2_addState;
- for (var j = 0; j < manifold1.pointCount; ++j) {
- if (manifold1.points[j].id.Get() == id.Get()) {
- state2[i] = b2Manifold.b2_persistState;
- break
- }
- }
- }
-}
-
-function b2ClipVertex() {
- this.v = new b2Vec2();
- this.id = new b2ContactID()
-}
-
-function b2RayCastInput() {
- this.p1 = new b2Vec2(), this.p2 = new b2Vec2();
- this.maxFraction = 0
-}
-
-function b2RayCastOutput() {
- this.normal = new b2Vec2();
- this.fraction = 0
-}
-
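-// An axis-aligned bounding box. RayCast implements the standard slab test
-// against the box's x and y intervals.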
-function b2AABB() {
- this.lowerBound = new b2Vec2();
- this.upperBound = new b2Vec2()
-}
-b2AABB.prototype = {
- Assign: function (other) {
- this.lowerBound.x = other.lowerBound.x;
- this.lowerBound.y = other.lowerBound.y;
- this.upperBound.x = other.upperBound.x;
- this.upperBound.y = other.upperBound.y
- },
- Clone: function () {
- var clone = new b2AABB();
- clone.lowerBound.x = this.lowerBound.x;
- clone.lowerBound.y = this.lowerBound.y;
- clone.upperBound.x = this.upperBound.x;
- clone.upperBound.y = this.upperBound.y;
- return clone
- },
- IsValid: function () {
- return (this.upperBound.x - this.lowerBound.x) >= 0 && (this.upperBound.y - this.lowerBound.y) >= 0 && this.lowerBound.IsValid() && this.upperBound.IsValid()
- },
- GetCenter: function () {
- return new b2Vec2(0.5 * (this.lowerBound.x + this.upperBound.x), 0.5 * (this.lowerBound.y + this.upperBound.y))
- },
- GetExtents: function () {
- return new b2Vec2(0.5 * (this.upperBound.x - this.lowerBound.x), 0.5 * (this.upperBound.y - this.lowerBound.y))
- },
- GetPerimeter: function () {
- return 2 * ((this.upperBound.x - this.lowerBound.x) + (this.upperBound.y - this.lowerBound.y))
- },
- Combine: function (aabb1, aabb2) {
- if (aabb2) {
- this.lowerBound.x = b2Min(aabb1.lowerBound.x, aabb2.lowerBound.x);
- this.lowerBound.y = b2Min(aabb1.lowerBound.y, aabb2.lowerBound.y);
- this.upperBound.x = b2Max(aabb1.upperBound.x, aabb2.upperBound.x);
- this.upperBound.y = b2Max(aabb1.upperBound.y, aabb2.upperBound.y)
- } else {
- this.lowerBound.x = b2Min(this.lowerBound.x, aabb1.lowerBound.x);
- this.lowerBound.y = b2Min(this.lowerBound.y, aabb1.lowerBound.y);
- this.upperBound.x = b2Max(this.upperBound.x, aabb1.upperBound.x);
- this.upperBound.y = b2Max(this.upperBound.y, aabb1.upperBound.y)
- }
- },
- Contains: function (aabb) {
- return this.lowerBound.x <= aabb.lowerBound.x && this.lowerBound.y <= aabb.lowerBound.y && aabb.upperBound.x <= this.upperBound.x && aabb.upperBound.y <= this.upperBound.y
- },
- RayCast: function (output, input) {
- var tmin = -b2_maxFloat;
- var tmax = b2_maxFloat;
- var p = input.p1;
- var d = b2Vec2.Subtract(input.p2, input.p1);
- var absD = b2Abs_v2(d);
- var normal = new b2Vec2();
- for (var i = 0; i < 2; ++i) {
- if (absD.get_i(i) < b2_epsilon) {
- if (p.get_i(i) < this.lowerBound.get_i(i) || this.upperBound.get_i(i) < p.get_i(i)) {
- return false
- }
- } else {
- var inv_d = 1 / d.get_i(i);
- var t1 = (this.lowerBound.get_i(i) - p.get_i(i)) * inv_d;
- var t2 = (this.upperBound.get_i(i) - p.get_i(i)) * inv_d;
- var s = -1;
- if (t1 > t2) {
- var temp = t2;
- t2 = t1;
- t1 = temp;
- s = 1
- }
- if (t1 > tmin) {
- normal.x = normal.y = 0;
- normal.set_i(i, s);
- tmin = t1
- }
- tmax = b2Min(tmax, t2);
- if (tmin > tmax) {
- return false
- }
- }
- }
- if (tmin < 0 || input.maxFraction < tmin) {
- return false
- }
- output.fraction = tmin;
- output.normal.x = normal.x;
- output.normal.y = normal.y;
- return true
- }
-};
-
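-// Circle vs. circle: compare the squared center distance with the summed
-// radii and emit a single e_circles manifold point.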
-function b2CollideCircles(manifold, circleA, xfA, circleB, xfB) {
- manifold.pointCount = 0;
- var pA = b2Mul_t_v2(xfA, circleA.m_p);
- var pB = b2Mul_t_v2(xfB, circleB.m_p);
- var dx = pB.x - pA.x;
- var dy = pB.y - pA.y;
- var distSqr = dx * dx + dy * dy;
- var rA = circleA.m_radius,
- rB = circleB.m_radius;
- var radius = rA + rB;
- if (distSqr > radius * radius) {
- return
- }
- manifold.type = b2Manifold.e_circles;
- manifold.localPoint.x = circleA.m_p.x;
- manifold.localPoint.y = circleA.m_p.y;
- manifold.localNormal.x = manifold.localNormal.y = 0;
- manifold.pointCount = 1;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- manifold.points[0].id.Reset()
-}
-
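-// Polygon vs. circle: find the polygon face of maximum separation from the
-// circle center, then handle the three Voronoi regions of that face
-// (vertex v1, vertex v2, or the face interior).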
-function b2CollidePolygonAndCircle(manifold, polygonA, xfA, circleB, xfB) {
- manifold.pointCount = 0;
- var c = b2Mul_t_v2(xfB, circleB.m_p);
- var cLocal = b2MulT_t_v2(xfA, c);
- var normalIndex = 0;
- var separation = -b2_maxFloat;
- var radius = polygonA.m_radius + circleB.m_radius;
- var vertexCount = polygonA.m_count;
- var vertices = polygonA.m_vertices;
- var normals = polygonA.m_normals;
- for (var i = 0; i < vertexCount; ++i) {
- var s = normals[i].x * (cLocal.x - vertices[i].x) + normals[i].y * (cLocal.y - vertices[i].y);
- if (s > radius) {
- return
- }
- if (s > separation) {
- separation = s;
- normalIndex = i
- }
- }
- var vertIndex1 = normalIndex;
- var vertIndex2 = vertIndex1 + 1 < vertexCount ? vertIndex1 + 1 : 0;
- var v1 = vertices[vertIndex1];
- var v2 = vertices[vertIndex2];
- if (separation < b2_epsilon) {
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_faceA;
- manifold.localNormal.x = normals[normalIndex].x;
- manifold.localNormal.y = normals[normalIndex].y;
- manifold.localPoint.x = 0.5 * (v1.x + v2.x);
- manifold.localPoint.y = 0.5 * (v1.y + v2.y);
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- manifold.points[0].id.Reset();
- return
- }
- var u1 = (cLocal.x - v1.x) * (v2.x - v1.x) + (cLocal.y - v1.y) * (v2.y - v1.y);
- var u2 = (cLocal.x - v2.x) * (v1.x - v2.x) + (cLocal.y - v2.y) * (v1.y - v2.y);
- if (u1 <= 0) {
- if (b2DistanceSquared(cLocal, v1) > radius * radius) {
- return
- }
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_faceA;
- manifold.localNormal.x = cLocal.x - v1.x;
- manifold.localNormal.y = cLocal.y - v1.y;
- manifold.localNormal.Normalize();
- manifold.localPoint.x = v1.x;
- manifold.localPoint.y = v1.y;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- manifold.points[0].id.Reset()
- } else {
- if (u2 <= 0) {
- if (b2DistanceSquared(cLocal, v2) > radius * radius) {
- return
- }
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_faceA;
- manifold.localNormal.x = cLocal.x - v2.x;
- manifold.localNormal.y = cLocal.y - v2.y;
- manifold.localNormal.Normalize();
- manifold.localPoint.x = v2.x;
- manifold.localPoint.y = v2.y;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- manifold.points[0].id.Reset()
- } else {
- var faceCenterx = 0.5 * (v1.x + v2.x);
- var faceCentery = 0.5 * (v1.y + v2.y);
- var separation = (cLocal.x - faceCenterx) * normals[vertIndex1].x + (cLocal.y - faceCentery) * normals[vertIndex1].y;
- if (separation > radius) {
- return
- }
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_faceA;
- manifold.localNormal.x = normals[vertIndex1].x;
- manifold.localNormal.y = normals[vertIndex1].y;
- manifold.localPoint.x = faceCenterx;
- manifold.localPoint.y = faceCentery;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- manifold.points[0].id.Reset()
- }
- }
-}
-
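-// SAT helper: for each face normal of poly1, take the minimum projection of
-// poly2's vertices onto it; return the largest such separation and write the
-// best face index into edgeIndex[0].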
-function b2FindMaxSeparation(edgeIndex, poly1, xf1, poly2, xf2) {
- var count1 = poly1.m_count;
- var count2 = poly2.m_count;
- var n1s = poly1.m_normals;
- var v1s = poly1.m_vertices;
- var v2s = poly2.m_vertices;
- var xf = b2MulT_t_t(xf2, xf1);
- var bestIndex = 0;
- var maxSeparation = -b2_maxFloat;
- for (var i = 0; i < count1; ++i) {
- var nx = xf.q.c * n1s[i].x - xf.q.s * n1s[i].y;
- var ny = xf.q.s * n1s[i].x + xf.q.c * n1s[i].y;
- var v1x = (xf.q.c * v1s[i].x - xf.q.s * v1s[i].y) + xf.p.x;
- var v1y = (xf.q.s * v1s[i].x + xf.q.c * v1s[i].y) + xf.p.y;
- var si = b2_maxFloat;
- for (var j = 0; j < count2; ++j) {
- var sij = nx * (v2s[j].x - v1x) + ny * (v2s[j].y - v1y);
- if (sij < si) {
- si = sij
- }
- }
- if (si > maxSeparation) {
- maxSeparation = si;
- bestIndex = i
- }
- }
- edgeIndex[0] = bestIndex;
- return maxSeparation
-}
-
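-// Finds the edge of poly2 whose normal is most anti-parallel to the given
-// reference face normal of poly1 and writes its two clip vertices, tagged
-// with contact ids, into c.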
-function b2FindIncidentEdge(c, poly1, xf1, edge1, poly2, xf2) {
- var normals1 = poly1.m_normals;
- var count2 = poly2.m_count;
- var vertices2 = poly2.m_vertices;
- var normals2 = poly2.m_normals;
- var t1x = xf1.q.c * normals1[edge1].x - xf1.q.s * normals1[edge1].y;
- var t1y = xf1.q.s * normals1[edge1].x + xf1.q.c * normals1[edge1].y;
- var normal1x = xf2.q.c * t1x + xf2.q.s * t1y;
- var normal1y = -xf2.q.s * t1x + xf2.q.c * t1y;
- var index = 0;
- var minDot = b2_maxFloat;
- for (var i = 0; i < count2; ++i) {
- var dot = normal1x * normals2[i].x + normal1y * normals2[i].y;
- if (dot < minDot) {
- minDot = dot;
- index = i
- }
- }
- var i1 = index;
- var i2 = i1 + 1 < count2 ? i1 + 1 : 0;
- c[0].v.x = (xf2.q.c * vertices2[i1].x - xf2.q.s * vertices2[i1].y) + xf2.p.x;
- c[0].v.y = (xf2.q.s * vertices2[i1].x + xf2.q.c * vertices2[i1].y) + xf2.p.y;
- c[0].id.indexA = edge1;
- c[0].id.indexB = i1;
- c[0].id.typeA = b2ContactID.e_face;
- c[0].id.typeB = b2ContactID.e_vertex;
- c[1].v.x = (xf2.q.c * vertices2[i2].x - xf2.q.s * vertices2[i2].y) + xf2.p.x;
- c[1].v.y = (xf2.q.s * vertices2[i2].x + xf2.q.c * vertices2[i2].y) + xf2.p.y;
- c[1].id.indexA = edge1;
- c[1].id.indexB = i2;
- c[1].id.typeA = b2ContactID.e_face;
- c[1].id.typeB = b2ContactID.e_vertex
-}
-
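-// Polygon vs. polygon via SAT: choose the reference face (preferring faceA
-// unless faceB's separation is larger by 0.1 * b2_linearSlop, for coherence),
-// find the incident edge, clip it against the reference face's side planes,
-// and keep the clipped points that lie within totalRadius of the face.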
-function b2CollidePolygons(manifold, polyA, xfA, polyB, xfB) {
- manifold.pointCount = 0;
- var totalRadius = polyA.m_radius + polyB.m_radius;
- var edgeA = [0];
- var separationA = b2FindMaxSeparation(edgeA, polyA, xfA, polyB, xfB);
- if (separationA > totalRadius) {
- return
- }
- var edgeB = [0];
- var separationB = b2FindMaxSeparation(edgeB, polyB, xfB, polyA, xfA);
- if (separationB > totalRadius) {
- return
- }
- var poly1;
- var poly2;
- var xf1, xf2;
- var edge1 = 0;
- var flip = 0;
- var k_tol = 0.1 * b2_linearSlop;
- if (separationB > separationA + k_tol) {
- poly1 = polyB;
- poly2 = polyA;
- xf1 = xfB;
- xf2 = xfA;
- edge1 = edgeB[0];
- manifold.type = b2Manifold.e_faceB;
- flip = 1
- } else {
- poly1 = polyA;
- poly2 = polyB;
- xf1 = xfA;
- xf2 = xfB;
- edge1 = edgeA[0];
- manifold.type = b2Manifold.e_faceA;
- flip = 0
- }
- b2FindIncidentEdge(b2CollidePolygons._local_incidentEdges, poly1, xf1, edge1, poly2, xf2);
- var count1 = poly1.m_count;
- var vertices1 = poly1.m_vertices;
- var iv1 = edge1;
- var iv2 = edge1 + 1 < count1 ? edge1 + 1 : 0;
- var v11 = vertices1[iv1];
- var v12 = vertices1[iv2];
- b2CollidePolygons._localTangent.x = v12.x - v11.x;
- b2CollidePolygons._localTangent.y = v12.y - v11.y;
- b2CollidePolygons._localTangent.Normalize();
- var localNormalx = 1 * b2CollidePolygons._localTangent.y;
- var localNormaly = -1 * b2CollidePolygons._localTangent.x;
- var planePointx = 0.5 * (v11.x + v12.x);
- var planePointy = 0.5 * (v11.y + v12.y);
- var tangentx = xf1.q.c * b2CollidePolygons._localTangent.x - xf1.q.s * b2CollidePolygons._localTangent.y;
- var tangenty = xf1.q.s * b2CollidePolygons._localTangent.x + xf1.q.c * b2CollidePolygons._localTangent.y;
- var normalx = 1 * tangenty;
- var normaly = -1 * tangentx;
- v11 = b2Mul_t_v2(xf1, v11);
- v12 = b2Mul_t_v2(xf1, v12);
- var frontOffset = normalx * v11.x + normaly * v11.y;
- var sideOffset1 = -(tangentx * v11.x + tangenty * v11.y) + totalRadius;
- var sideOffset2 = (tangentx * v12.x + tangenty * v12.y) + totalRadius;
- var clipPoints1 = new Array(2);
- var clipPoints2 = new Array(2);
- var np;
- np = b2ClipSegmentToLine(clipPoints1, b2CollidePolygons._local_incidentEdges, -tangentx, -tangenty, sideOffset1, iv1);
- if (np < 2) {
- return
- }
- np = b2ClipSegmentToLine(clipPoints2, clipPoints1, tangentx, tangenty, sideOffset2, iv2);
- if (np < 2) {
- return
- }
- manifold.localNormal.x = localNormalx;
- manifold.localNormal.y = localNormaly;
- manifold.localPoint.x = planePointx;
- manifold.localPoint.y = planePointy;
- var pointCount = 0;
- for (var i = 0; i < b2_maxManifoldPoints; ++i) {
- var separation = (normalx * clipPoints2[i].v.x + normaly * clipPoints2[i].v.y) - frontOffset;
- if (separation <= totalRadius) {
- var cp = manifold.points[pointCount] = new b2ManifoldPoint();
- cp.localPoint.Assign(b2MulT_t_v2(xf2, clipPoints2[i].v));
- cp.id.Assign(clipPoints2[i].id);
- if (flip) {
- var cf = new b2ContactID();
- cf.Assign(cp.id);
- cp.id.indexA = cf.indexB;
- cp.id.indexB = cf.indexA;
- cp.id.typeA = cf.typeB;
- cp.id.typeB = cf.typeA
- }
- ++pointCount
- }
- }
- manifold.pointCount = pointCount
-}
-b2CollidePolygons._localTangent = new b2Vec2();
-b2CollidePolygons._local_incidentEdges = [new b2ClipVertex(), new b2ClipVertex()];
-
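-// Edge vs. circle: compute the barycentric position of the circle center on
-// segment AB and handle the regions behind A, behind B (respecting the ghost
-// vertices m_vertex0/m_vertex3 of chain shapes) and the face region.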
-function b2CollideEdgeAndCircle(manifold, edgeA, xfA, circleB, xfB) {
- manifold.pointCount = 0;
- var Q = b2MulT_t_v2(xfA, b2Mul_t_v2(xfB, circleB.m_p));
- var A = edgeA.m_vertex1,
- B = edgeA.m_vertex2;
- var ex = B.x - A.x;
- var ey = B.y - A.y;
- var u = ex * (B.x - Q.x) + ey * (B.y - Q.y);
- var v = ex * (Q.x - A.x) + ey * (Q.y - A.y);
- var radius = edgeA.m_radius + circleB.m_radius;
- var cf = new b2ContactID();
- cf.indexB = 0;
- cf.typeB = b2ContactID.e_vertex;
- if (v <= 0) {
- var P = A;
- var dx = Q.x - P.x;
- var dy = Q.y - P.y;
- var dd = dx * dx + dy * dy;
- if (dd > radius * radius) {
- return
- }
- if (edgeA.m_hasVertex0) {
- var A1 = edgeA.m_vertex0;
- var B1 = A;
- var e1x = B1.x - A1.x;
- var e1y = B1.y - A1.y;
- var u1 = e1x * (B1.x - Q.x) + e1y * (B1.y - Q.y);
- if (u1 > 0) {
- return
- }
- }
- cf.indexA = 0;
- cf.typeA = b2ContactID.e_vertex;
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_circles;
- manifold.localNormal.x = manifold.localNormal.y = 0;
- manifold.localPoint.x = P.x;
- manifold.localPoint.y = P.y;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].id.Assign(cf);
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- return
- }
- if (u <= 0) {
- var P = B;
- var dx = Q.x - P.x;
- var dy = Q.y - P.y;
- var dd = dx * dx + dy * dy;
- if (dd > radius * radius) {
- return
- }
- if (edgeA.m_hasVertex3) {
- var B2 = edgeA.m_vertex3;
- var A2 = B;
- var e2x = B2.x - A2.x;
- var e2y = B2.y - A2.y;
- var v2 = e2x * (Q.x - A2.x) + e2y * (Q.y - A2.y);
- if (v2 > 0) {
- return
- }
- }
- cf.indexA = 1;
- cf.typeA = b2ContactID.e_vertex;
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_circles;
- manifold.localNormal.x = manifold.localNormal.y = 0;
- manifold.localPoint.x = P.x;
- manifold.localPoint.y = P.y;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].id.Assign(cf);
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y;
- return
- }
- var den = ex * ex + ey * ey;
- var Px = (1 / den) * ((u * A.x) + (v * B.x));
- var Py = (1 / den) * ((u * A.y) + (v * B.y));
- var dx = Q.x - Px;
- var dy = Q.y - Py;
- var dd = dx * dx + dy * dy;
- if (dd > radius * radius) {
- return
- }
- var nx = -ey;
- var ny = ex;
- if (nx * (Q.x - A.x) + ny * (Q.y - A.y) < 0) {
- nx = -nx;
- ny = -ny
- }
- cf.indexA = 0;
- cf.typeA = b2ContactID.e_face;
- manifold.pointCount = 1;
- manifold.type = b2Manifold.e_faceA;
- manifold.localNormal.x = nx;
- manifold.localNormal.y = ny;
- manifold.localNormal.Normalize();
- manifold.localPoint.x = A.x;
- manifold.localPoint.y = A.y;
- manifold.points[0] = new b2ManifoldPoint();
- manifold.points[0].id.Assign(cf);
- manifold.points[0].localPoint.x = circleB.m_p.x;
- manifold.points[0].localPoint.y = circleB.m_p.y
-}
-
-function b2EPAxis() {
- this.type = 0;
- this.index = 0;
- this.separation = 0
-}
-b2EPAxis.e_unknown = 0;
-b2EPAxis.e_edgeA = 1;
-b2EPAxis.e_edgeB = 2;
-
-function b2TempPolygon() {
- this.vertices = new Array(b2_maxPolygonVertices);
- this.normals = new Array(b2_maxPolygonVertices);
- this.count = 0
-}
-
-function b2ReferenceFace() {
- this.i1 = 0, this.i2 = 0;
- this.v1 = new b2Vec2(), this.v2 = new b2Vec2();
- this.normal = new b2Vec2();
- this.sideNormal1 = new b2Vec2();
- this.sideOffset1 = 0;
- this.sideNormal2 = new b2Vec2();
- this.sideOffset2 = 0
-}
-
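-// Edge vs. polygon collider. The adjacent ghost vertices (m_vertex0/m_vertex3)
-// define lower/upper normal limits so contact normals stay within the arc
-// allowed by neighbouring edges, making edge collision one-sided; the long
-// branch in Collide() selects m_normal and the limits for each adjacency case.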
-function b2EPCollider() {
- this.m_polygonB = new b2TempPolygon();
- this.m_xf = new b2Transform();
- this.m_centroidB = new b2Vec2();
- this.m_v0 = new b2Vec2(), this.m_v1 = new b2Vec2(), this.m_v2 = new b2Vec2(), this.m_v3 = new b2Vec2();
- this.m_normal0 = new b2Vec2(), this.m_normal1 = new b2Vec2(), this.m_normal2 = new b2Vec2();
- this.m_normal = new b2Vec2();
- this.m_type1 = 0, this.m_type2 = 0;
- this.m_lowerLimit = new b2Vec2(), this.m_upperLimit = new b2Vec2();
- this.m_radius = 0;
- this.m_front = false
-}
-b2EPCollider._temp_edge = new b2Vec2();
-b2EPCollider._temp_edge0 = new b2Vec2();
-b2EPCollider._temp_edge2 = new b2Vec2();
-b2EPCollider.prototype = {
- Collide: function (manifold, edgeA, xfA, polygonB, xfB) {
- this.m_xf.Assign(b2MulT_t_t(xfA, xfB));
- this.m_centroidB.x = (this.m_xf.q.c * polygonB.m_centroid.x - this.m_xf.q.s * polygonB.m_centroid.y) + this.m_xf.p.x;
- this.m_centroidB.y = (this.m_xf.q.s * polygonB.m_centroid.x + this.m_xf.q.c * polygonB.m_centroid.y) + this.m_xf.p.y;
- this.m_v0.x = edgeA.m_vertex0.x;
- this.m_v0.y = edgeA.m_vertex0.y;
- this.m_v1.x = edgeA.m_vertex1.x;
- this.m_v1.y = edgeA.m_vertex1.y;
- this.m_v2.x = edgeA.m_vertex2.x;
- this.m_v2.y = edgeA.m_vertex2.y;
- this.m_v3.x = edgeA.m_vertex3.x;
- this.m_v3.y = edgeA.m_vertex3.y;
- var hasVertex0 = edgeA.m_hasVertex0;
- var hasVertex3 = edgeA.m_hasVertex3;
- b2EPCollider._temp_edge.x = this.m_v2.x - this.m_v1.x;
- b2EPCollider._temp_edge.y = this.m_v2.y - this.m_v1.y;
- b2EPCollider._temp_edge.Normalize();
- this.m_normal1.x = b2EPCollider._temp_edge.y;
- this.m_normal1.y = -b2EPCollider._temp_edge.x;
- var offset1 = this.m_normal1.x * (this.m_centroidB.x - this.m_v1.x) + this.m_normal1.y * (this.m_centroidB.y - this.m_v1.y);
- var offset0 = 0,
- offset2 = 0;
- var convex1 = false,
- convex2 = false;
- if (hasVertex0) {
- b2EPCollider._temp_edge0.x = this.m_v1.x - this.m_v0.x;
- b2EPCollider._temp_edge0.y = this.m_v1.y - this.m_v0.y;
- b2EPCollider._temp_edge0.Normalize();
- this.m_normal0.x = b2EPCollider._temp_edge0.y;
- this.m_normal0.y = -b2EPCollider._temp_edge0.x;
- convex1 = (b2EPCollider._temp_edge0.x * b2EPCollider._temp_edge.y - b2EPCollider._temp_edge0.y * b2EPCollider._temp_edge.x) >= 0;
- offset0 = this.m_normal0.x * (this.m_centroidB.x - this.m_v0.x) + this.m_normal0.y * (this.m_centroidB.y - this.m_v0.y)
- }
- if (hasVertex3) {
- b2EPCollider._temp_edge2.x = this.m_v3.x - this.m_v2.x;
- b2EPCollider._temp_edge2.y = this.m_v3.y - this.m_v2.y;
- b2EPCollider._temp_edge2.Normalize();
- this.m_normal2.x = b2EPCollider._temp_edge2.y;
- this.m_normal2.y = -b2EPCollider._temp_edge2.x;
- convex2 = (b2EPCollider._temp_edge.x * b2EPCollider._temp_edge2.y - b2EPCollider._temp_edge.y * b2EPCollider._temp_edge2.x) > 0;
- offset2 = this.m_normal2.x * (this.m_centroidB.x - this.m_v2.x) + this.m_normal2.y * (this.m_centroidB.y - this.m_v2.y)
- }
- if (hasVertex0 && hasVertex3) {
- if (convex1 && convex2) {
- this.m_front = offset0 >= 0 || offset1 >= 0 || offset2 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal0.x;
- this.m_lowerLimit.y = this.m_normal0.y;
- this.m_upperLimit.x = this.m_normal2.x;
- this.m_upperLimit.y = this.m_normal2.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- }
- } else {
- if (convex1) {
- this.m_front = offset0 >= 0 || (offset1 >= 0 && offset2 >= 0);
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal0.x;
- this.m_lowerLimit.y = this.m_normal0.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal2.x;
- this.m_lowerLimit.y = -this.m_normal2.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- }
- } else {
- if (convex2) {
- this.m_front = offset2 >= 0 || (offset0 >= 0 && offset1 >= 0);
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal2.x;
- this.m_upperLimit.y = this.m_normal2.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal0.x;
- this.m_upperLimit.y = -this.m_normal0.y
- }
- } else {
- this.m_front = offset0 >= 0 && offset1 >= 0 && offset2 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal2.x;
- this.m_lowerLimit.y = -this.m_normal2.y;
- this.m_upperLimit.x = -this.m_normal0.x;
- this.m_upperLimit.y = -this.m_normal0.y
- }
- }
- }
- }
- } else {
- if (hasVertex0) {
- if (convex1) {
- this.m_front = offset0 >= 0 || offset1 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal0.x;
- this.m_lowerLimit.y = this.m_normal0.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- }
- } else {
- this.m_front = offset0 >= 0 && offset1 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal0.x;
- this.m_upperLimit.y = -this.m_normal0.y
- }
- }
- } else {
- if (hasVertex3) {
- if (convex2) {
- this.m_front = offset1 >= 0 || offset2 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal2.x;
- this.m_upperLimit.y = this.m_normal2.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- }
- } else {
- this.m_front = offset1 >= 0 && offset2 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal2.x;
- this.m_lowerLimit.y = -this.m_normal2.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- }
- }
- } else {
- this.m_front = offset1 >= 0;
- if (this.m_front) {
- this.m_normal.x = this.m_normal1.x;
- this.m_normal.y = this.m_normal1.y;
- this.m_lowerLimit.x = -this.m_normal1.x;
- this.m_lowerLimit.y = -this.m_normal1.y;
- this.m_upperLimit.x = -this.m_normal1.x;
- this.m_upperLimit.y = -this.m_normal1.y
- } else {
- this.m_normal.x = -this.m_normal1.x;
- this.m_normal.y = -this.m_normal1.y;
- this.m_lowerLimit.x = this.m_normal1.x;
- this.m_lowerLimit.y = this.m_normal1.y;
- this.m_upperLimit.x = this.m_normal1.x;
- this.m_upperLimit.y = this.m_normal1.y
- }
- }
- }
- }
- this.m_polygonB.count = polygonB.m_count;
- for (var i = 0; i < polygonB.m_count; ++i) {
- this.m_polygonB.vertices[i] = b2Mul_t_v2(this.m_xf, polygonB.m_vertices[i]);
- this.m_polygonB.normals[i] = b2Mul_r_v2(this.m_xf.q, polygonB.m_normals[i])
- }
- this.m_radius = 2 * b2_polygonRadius;
- manifold.pointCount = 0;
- var edgeAxis = this.ComputeEdgeSeparation();
- if (edgeAxis.type == b2EPAxis.e_unknown) {
- return
- }
- if (edgeAxis.separation > this.m_radius) {
- return
- }
- var polygonAxis = this.ComputePolygonSeparation();
- if (polygonAxis.type != b2EPAxis.e_unknown && polygonAxis.separation > this.m_radius) {
- return
- }
- var k_relativeTol = 0.98;
- var k_absoluteTol = 0.001;
- var primaryAxis = new b2EPAxis();
- if (polygonAxis.type == b2EPAxis.e_unknown) {
- primaryAxis = edgeAxis
- } else {
- if (polygonAxis.separation > k_relativeTol * edgeAxis.separation + k_absoluteTol) {
- primaryAxis = polygonAxis
- } else {
- primaryAxis = edgeAxis
- }
- }
- var ie = new Array(2);
- var rf = new b2ReferenceFace();
- if (primaryAxis.type == b2EPAxis.e_edgeA) {
- manifold.type = b2Manifold.e_faceA;
- var bestIndex = 0;
- var bestValue = this.m_normal.x * this.m_polygonB.normals[0].x + this.m_normal.y * this.m_polygonB.normals[0].y;
- for (var i = 1; i < this.m_polygonB.count; ++i) {
- var value = this.m_normal.x * this.m_polygonB.normals[i].x + this.m_normal.y * this.m_polygonB.normals[i].y;
- if (value < bestValue) {
- bestValue = value;
- bestIndex = i
- }
- }
- var i1 = bestIndex;
- var i2 = i1 + 1 < this.m_polygonB.count ? i1 + 1 : 0;
- ie[0] = new b2ClipVertex();
- ie[0].v.x = this.m_polygonB.vertices[i1].x;
- ie[0].v.y = this.m_polygonB.vertices[i1].y;
- ie[0].id.indexA = 0;
- ie[0].id.indexB = i1;
- ie[0].id.typeA = b2ContactID.e_face;
- ie[0].id.typeB = b2ContactID.e_vertex;
- ie[1] = new b2ClipVertex();
- ie[1].v.x = this.m_polygonB.vertices[i2].x;
- ie[1].v.y = this.m_polygonB.vertices[i2].y;
- ie[1].id.indexA = 0;
- ie[1].id.indexB = i2;
- ie[1].id.typeA = b2ContactID.e_face;
- ie[1].id.typeB = b2ContactID.e_vertex;
- if (this.m_front) {
- rf.i1 = 0;
- rf.i2 = 1;
- rf.v1.x = this.m_v1.x;
- rf.v1.y = this.m_v1.y;
- rf.v2.x = this.m_v2.x;
- rf.v2.y = this.m_v2.y;
- rf.normal.x = this.m_normal1.x;
- rf.normal.y = this.m_normal1.y
- } else {
- rf.i1 = 1;
- rf.i2 = 0;
- rf.v1.x = this.m_v2.x;
- rf.v1.y = this.m_v2.y;
- rf.v2.x = this.m_v1.x;
- rf.v2.y = this.m_v1.y;
- rf.normal.x = -this.m_normal1.x;
- rf.normal.y = -this.m_normal1.y
- }
- } else {
- manifold.type = b2Manifold.e_faceB;
- ie[0] = new b2ClipVertex();
- ie[0].v.x = this.m_v1.x;
- ie[0].v.y = this.m_v1.y;
- ie[0].id.indexA = 0;
- ie[0].id.indexB = primaryAxis.index;
- ie[0].id.typeA = b2ContactID.e_vertex;
- ie[0].id.typeB = b2ContactID.e_face;
- ie[1] = new b2ClipVertex();
- ie[1].v.x = this.m_v2.x;
- ie[1].v.y = this.m_v2.y;
- ie[1].id.indexA = 0;
- ie[1].id.indexB = primaryAxis.index;
- ie[1].id.typeA = b2ContactID.e_vertex;
- ie[1].id.typeB = b2ContactID.e_face;
- rf.i1 = primaryAxis.index;
- rf.i2 = rf.i1 + 1 < this.m_polygonB.count ? rf.i1 + 1 : 0;
- rf.v1.x = this.m_polygonB.vertices[rf.i1].x;
- rf.v1.y = this.m_polygonB.vertices[rf.i1].y;
- rf.v2.x = this.m_polygonB.vertices[rf.i2].x;
- rf.v2.y = this.m_polygonB.vertices[rf.i2].y;
- rf.normal.x = this.m_polygonB.normals[rf.i1].x;
- rf.normal.y = this.m_polygonB.normals[rf.i1].y
- }
- rf.sideNormal1.x = rf.normal.y;
- rf.sideNormal1.y = -rf.normal.x;
- rf.sideNormal2.x = -rf.sideNormal1.x;
- rf.sideNormal2.y = -rf.sideNormal1.y;
- rf.sideOffset1 = rf.sideNormal1.x * rf.v1.x + rf.sideNormal1.y * rf.v1.y;
- rf.sideOffset2 = rf.sideNormal2.x * rf.v2.x + rf.sideNormal2.y * rf.v2.y;
- var clipPoints1 = new Array(2);
- var clipPoints2 = new Array(2);
- var np;
- np = b2ClipSegmentToLine(clipPoints1, ie, rf.sideNormal1.x, rf.sideNormal1.y, rf.sideOffset1, rf.i1);
- if (np < b2_maxManifoldPoints) {
- return
- }
- np = b2ClipSegmentToLine(clipPoints2, clipPoints1, rf.sideNormal2.x, rf.sideNormal2.y, rf.sideOffset2, rf.i2);
- if (np < b2_maxManifoldPoints) {
- return
- }
- if (primaryAxis.type == b2EPAxis.e_edgeA) {
- manifold.localNormal.x = rf.normal.x;
- manifold.localNormal.y = rf.normal.y;
- manifold.localPoint.x = rf.v1.x;
- manifold.localPoint.y = rf.v1.y
- } else {
- manifold.localNormal.x = polygonB.m_normals[rf.i1].x;
- manifold.localNormal.y = polygonB.m_normals[rf.i1].y;
- manifold.localPoint.x = polygonB.m_vertices[rf.i1].x;
- manifold.localPoint.y = polygonB.m_vertices[rf.i1].y
- }
- var pointCount = 0;
- for (var i = 0; i < b2_maxManifoldPoints; ++i) {
- var separation = rf.normal.x * (clipPoints2[i].v.x - rf.v1.x) + rf.normal.y * (clipPoints2[i].v.y - rf.v1.y);
- if (separation <= this.m_radius) {
- var cp = manifold.points[pointCount] = new b2ManifoldPoint();
- if (primaryAxis.type == b2EPAxis.e_edgeA) {
- cp.localPoint.Assign(b2MulT_t_v2(this.m_xf, clipPoints2[i].v));
- cp.id.Assign(clipPoints2[i].id)
- } else {
- cp.localPoint.x = clipPoints2[i].v.x;
- cp.localPoint.y = clipPoints2[i].v.y;
- cp.id.typeA = clipPoints2[i].id.typeB;
- cp.id.typeB = clipPoints2[i].id.typeA;
- cp.id.indexA = clipPoints2[i].id.indexB;
- cp.id.indexB = clipPoints2[i].id.indexA
- }
- ++pointCount
- }
- }
- manifold.pointCount = pointCount
- },
- ComputeEdgeSeparation: function () {
- var axis = new b2EPAxis();
- axis.type = b2EPAxis.e_edgeA;
- axis.index = this.m_front ? 0 : 1;
- axis.separation = Number.MAX_VALUE;
- for (var i = 0; i < this.m_polygonB.count; ++i) {
- var s = this.m_normal.x * (this.m_polygonB.vertices[i].x - this.m_v1.x) + this.m_normal.y * (this.m_polygonB.vertices[i].y - this.m_v1.y);
- if (s < axis.separation) {
- axis.separation = s
- }
- }
- return axis
- },
- ComputePolygonSeparation: function () {
- var axis = new b2EPAxis();
- axis.type = b2EPAxis.e_unknown;
- axis.index = -1;
- axis.separation = -Number.MAX_VALUE;
- var perpx = -this.m_normal.y;
- var perpy = this.m_normal.x;
- for (var i = 0; i < this.m_polygonB.count; ++i) {
- var nx = -this.m_polygonB.normals[i].x;
- var ny = -this.m_polygonB.normals[i].y;
- var s1 = nx * (this.m_polygonB.vertices[i].x - this.m_v1.x) + ny * (this.m_polygonB.vertices[i].y - this.m_v1.y);
- var s2 = nx * (this.m_polygonB.vertices[i].x - this.m_v2.x) + ny * (this.m_polygonB.vertices[i].y - this.m_v2.y);
- var s = b2Min(s1, s2);
- if (s > this.m_radius) {
- axis.type = b2EPAxis.e_edgeB;
- axis.index = i;
- axis.separation = s;
- return axis
- }
- if (nx * perpx + ny * perpy >= 0) {
- if ((nx - this.m_upperLimit.x) * this.m_normal.x + (ny - this.m_upperLimit.y) * this.m_normal.y < -b2_angularSlop) {
- continue
- }
- } else {
- if ((nx - this.m_lowerLimit.x) * this.m_normal.x + (ny - this.m_lowerLimit.y) * this.m_normal.y < -b2_angularSlop) {
- continue
- }
- }
- if (s > axis.separation) {
- axis.type = b2EPAxis.e_edgeB;
- axis.index = i;
- axis.separation = s
- }
- }
- return axis
- }
-};
-b2EPCollider.e_isolated = 0;
-b2EPCollider.e_concave = 1;
-b2EPCollider.e_convex = 2;
-
-function b2CollideEdgeAndPolygon(manifold, edgeA, xfA, polygonB, xfB) {
- b2CollideEdgeAndPolygon.collider.Collide(manifold, edgeA, xfA, polygonB, xfB)
-}
-b2CollideEdgeAndPolygon.collider = new b2EPCollider();
-
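-// Sutherland-Hodgman clipping of a two-vertex segment against the half-plane
-// dot(normal, v) - offset <= 0; when the segment straddles the plane an
-// interpolated vertex with a fresh contact id is appended.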
-function b2ClipSegmentToLine(vOut, vIn, normalx, normaly, offset, vertexIndexA) {
- var numOut = 0;
- var distance0 = (normalx * vIn[0].v.x + normaly * vIn[0].v.y) - offset;
- var distance1 = (normalx * vIn[1].v.x + normaly * vIn[1].v.y) - offset;
- if (distance0 <= 0) {
- vOut[numOut++] = vIn[0]
- }
- if (distance1 <= 0) {
- vOut[numOut++] = vIn[1]
- }
- if (distance0 * distance1 < 0) {
- var interp = distance0 / (distance0 - distance1);
- vOut[numOut] = new b2ClipVertex();
- vOut[numOut].v.x = vIn[0].v.x + (interp * (vIn[1].v.x - vIn[0].v.x));
- vOut[numOut].v.y = vIn[0].v.y + (interp * (vIn[1].v.y - vIn[0].v.y));
- vOut[numOut].id.indexA = vertexIndexA;
- vOut[numOut].id.indexB = vIn[0].id.indexB;
- vOut[numOut].id.typeA = b2ContactID.e_vertex;
- vOut[numOut].id.typeB = b2ContactID.e_face;
- ++numOut
- }
- return numOut
-}
-
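-// Boolean overlap test built on the GJK distance routine (b2DistanceFunc)
-// with radii enabled: the shapes overlap when the returned distance is
-// below 10 * b2_epsilon.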
-function b2TestShapeOverlap(shapeA, indexA, shapeB, indexB, xfA, xfB) {
- b2TestShapeOverlap.input.proxyA.Set(shapeA, indexA);
- b2TestShapeOverlap.input.proxyB.Set(shapeB, indexB);
- b2TestShapeOverlap.input.transformA = xfA;
- b2TestShapeOverlap.input.transformB = xfB;
- b2TestShapeOverlap.input.useRadii = true;
- b2TestShapeOverlap.cache.count = 0;
- b2DistanceFunc(b2TestShapeOverlap.output, b2TestShapeOverlap.cache, b2TestShapeOverlap.input);
- return b2TestShapeOverlap.output.distance < 10 * b2_epsilon
-}
-b2TestShapeOverlap.input = new b2DistanceInput();
-b2TestShapeOverlap.cache = new b2SimplexCache();
-b2TestShapeOverlap.output = new b2DistanceOutput();
-
-function b2TestOverlap(a, b) {
- return !((b.lowerBound.x - a.upperBound.x) > 0 || (b.lowerBound.y - a.upperBound.y) > 0 || (a.lowerBound.x - b.upperBound.x) > 0 || (a.lowerBound.y - b.upperBound.y) > 0)
-}
-"use strict";
-var b2_nullNode = -1;
-
-function b2TreeNode() {
- this.aabb = new b2AABB();
- this.userData = null;
- this.parent = 0;
- this.child1 = this.child2 = this.height = 0
-}
-b2TreeNode.prototype = {
- IsLeaf: function () {
- return this.child1 == b2_nullNode
- }
-};
-
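-// A dynamic AABB tree broad-phase. Leaves store client AABBs fattened by
-// b2_aabbExtension (and predictively displaced in MoveProxy); nodes come from
-// a growable pool chained through 'parent' as a free list, and Balance()
-// keeps the height low with AVL-style rotations.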
-function b2DynamicTree() {
- this.m_root = b2_nullNode;
- this.m_nodeCapacity = 16;
- this.m_nodeCount = 0;
- this.m_nodes = new Array(this.m_nodeCapacity);
- for (var i = 0; i < this.m_nodeCapacity - 1; ++i) {
- this.m_nodes[i] = new b2TreeNode();
- this.m_nodes[i].parent = i + 1;
- this.m_nodes[i].height = -1
- }
- this.m_nodes[this.m_nodeCapacity - 1] = new b2TreeNode();
- this.m_nodes[this.m_nodeCapacity - 1].parent = b2_nullNode;
- this.m_nodes[this.m_nodeCapacity - 1].height = -1;
- this.m_freeList = 0;
- this.m_path = 0;
- this.m_insertionCount = 0
-}
-b2DynamicTree.aabbExtensionFattener = new b2Vec2(b2_aabbExtension, b2_aabbExtension);
-b2DynamicTree.prototype = {
- CreateProxy: function (aabb, userData) {
- var proxyId = this.AllocateNode();
- this.m_nodes[proxyId].aabb.lowerBound.Assign(b2Vec2.Subtract(aabb.lowerBound, b2DynamicTree.aabbExtensionFattener));
- this.m_nodes[proxyId].aabb.upperBound.Assign(b2Vec2.Add(aabb.upperBound, b2DynamicTree.aabbExtensionFattener));
- this.m_nodes[proxyId].userData = userData;
- this.m_nodes[proxyId].height = 0;
- this.InsertLeaf(proxyId);
- return proxyId
- },
- DestroyProxy: function (proxyId) {
- this.RemoveLeaf(proxyId);
- this.FreeNode(proxyId)
- },
- MoveProxy: function (proxyId, aabb, displacement) {
- if (this.m_nodes[proxyId].aabb.Contains(aabb)) {
- return false
- }
- this.RemoveLeaf(proxyId);
- this.m_nodes[proxyId].aabb.Assign(aabb);
- this.m_nodes[proxyId].aabb.lowerBound.Subtract(b2DynamicTree.aabbExtensionFattener);
- this.m_nodes[proxyId].aabb.upperBound.Add(b2DynamicTree.aabbExtensionFattener);
- var d = b2Vec2.Multiply(b2_aabbMultiplier, displacement);
- if (d.x < 0) {
- this.m_nodes[proxyId].aabb.lowerBound.x += d.x
- } else {
- this.m_nodes[proxyId].aabb.upperBound.x += d.x
- }
- if (d.y < 0) {
- this.m_nodes[proxyId].aabb.lowerBound.y += d.y
- } else {
- this.m_nodes[proxyId].aabb.upperBound.y += d.y
- }
- this.InsertLeaf(proxyId);
- return true
- },
- GetUserData: function (proxyId) {
- return this.m_nodes[proxyId].userData
- },
- GetFatAABB: function (proxyId) {
- return this.m_nodes[proxyId].aabb
- },
- Query: function (callback, aabb) {
- var stack = [];
- stack.push(this.m_root);
- while (stack.length > 0) {
- var nodeId = stack.pop();
- if (nodeId == b2_nullNode) {
- continue
- }
- var node = this.m_nodes[nodeId];
- if (b2TestOverlap(node.aabb, aabb)) {
- if (node.IsLeaf()) {
- var proceed = callback.QueryCallback(nodeId);
- if (proceed == false) {
- return
- }
- } else {
- stack.push(node.child1);
- stack.push(node.child2)
- }
- }
- }
- },
- RayCast: function (callback, input) {
- var p1 = input.p1;
- var p2 = input.p2;
- var r = b2Vec2.Subtract(p2, p1);
- r.Normalize();
- var v = b2Cross_f_v2(1, r);
- var abs_v = b2Abs_v2(v);
- var maxFraction = input.maxFraction;
- var segmentAABB = new b2AABB();
- var t = b2Vec2.Add(p1, b2Vec2.Multiply(maxFraction, b2Vec2.Subtract(p2, p1)));
- segmentAABB.lowerBound.Assign(b2Min_v2(p1, t));
- segmentAABB.upperBound.Assign(b2Max_v2(p1, t));
- var stack = [];
- stack.push(this.m_root);
- while (stack.length > 0) {
- var nodeId = stack.pop();
- if (nodeId == b2_nullNode) {
- continue
- }
- var node = this.m_nodes[nodeId];
- if (b2TestOverlap(node.aabb, segmentAABB) == false) {
- continue
- }
- var c = node.aabb.GetCenter();
- var h = node.aabb.GetExtents();
- var separation = b2Abs(b2Dot_v2_v2(v, b2Vec2.Subtract(p1, c))) - b2Dot_v2_v2(abs_v, h);
- if (separation > 0) {
- continue
- }
- if (node.IsLeaf()) {
- var subInput = new b2RayCastInput();
- subInput.p1.Assign(input.p1);
- subInput.p2.Assign(input.p2);
- subInput.maxFraction = maxFraction;
- var value = callback.RayCastCallback(subInput, nodeId);
- if (value == 0) {
- return
- }
- if (value > 0) {
- maxFraction = value;
- var t = b2Vec2.Add(p1, b2Vec2.Multiply(maxFraction, b2Vec2.Subtract(p2, p1)));
- segmentAABB.lowerBound.Assign(b2Min_v2(p1, t));
- segmentAABB.upperBound.Assign(b2Max_v2(p1, t))
- }
- } else {
- stack.push(node.child1);
- stack.push(node.child2)
- }
- }
- },
- Validate: function () {
- this.ValidateStructure(this.m_root);
- this.ValidateMetrics(this.m_root);
- var freeCount = 0;
- var freeIndex = this.m_freeList;
- while (freeIndex != b2_nullNode) {
- freeIndex = this.m_nodes[freeIndex].parent;
- ++freeCount
- }
- },
- GetHeight: function () {
- if (this.m_root == b2_nullNode) {
- return 0
- }
- return this.m_nodes[this.m_root].height
- },
- GetMaxBalance: function () {
- var maxBalance = 0;
- for (var i = 0; i < this.m_nodeCapacity; ++i) {
- var node = this.m_nodes[i];
- if (node.height <= 1) {
- continue
- }
- var child1 = node.child1;
- var child2 = node.child2;
- var balance = b2Abs(this.m_nodes[child2].height - this.m_nodes[child1].height);
- maxBalance = b2Max(maxBalance, balance)
- }
- return maxBalance
- },
- GetAreaRatio: function () {
- if (this.m_root == b2_nullNode) {
- return 0
- }
- var root = this.m_nodes[this.m_root];
- var rootArea = root.aabb.GetPerimeter();
- var totalArea = 0;
- for (var i = 0; i < this.m_nodeCapacity; ++i) {
- var node = this.m_nodes[i];
- if (node.height < 0) {
- continue
- }
- totalArea += node.aabb.GetPerimeter()
- }
- return totalArea / rootArea
- },
- RebuildBottomUp: function () {
- var nodes = new Array(this.m_nodeCount);
- var count = 0;
- for (var i = 0; i < this.m_nodeCapacity; ++i) {
- if (this.m_nodes[i].height < 0) {
- continue
- }
- if (this.m_nodes[i].IsLeaf()) {
- this.m_nodes[i].parent = b2_nullNode;
- nodes[count] = i;
- ++count
- } else {
- this.FreeNode(i)
- }
- }
- while (count > 1) {
- var minCost = b2_maxFloat;
- var iMin = -1,
- jMin = -1;
- for (i = 0; i < count; ++i) {
- var aabbi = this.m_nodes[nodes[i]].aabb;
- for (var j = i + 1; j < count; ++j) {
- var aabbj = this.m_nodes[nodes[j]].aabb;
- var b = new b2AABB();
- b.Combine(aabbi, aabbj);
- var cost = b.GetPerimeter();
- if (cost < minCost) {
- iMin = i;
- jMin = j;
- minCost = cost
- }
- }
- }
- var index1 = nodes[iMin];
- var index2 = nodes[jMin];
- var child1 = this.m_nodes[index1];
- var child2 = this.m_nodes[index2];
- var parentIndex = this.AllocateNode();
- var parent = this.m_nodes[parentIndex];
- parent.child1 = index1;
- parent.child2 = index2;
- parent.height = 1 + b2Max(child1.height, child2.height);
- parent.aabb.Combine(child1.aabb, child2.aabb);
- parent.parent = b2_nullNode;
- child1.parent = parentIndex;
- child2.parent = parentIndex;
- nodes[jMin] = nodes[count - 1];
- nodes[iMin] = parentIndex;
- --count
- }
- this.m_root = nodes[0];
- this.Validate()
- },
- ShiftOrigin: function (newOrigin) {
- for (var i = 0; i < this.m_nodeCapacity; ++i) {
- this.m_nodes[i].aabb.lowerBound.Subtract(newOrigin);
- this.m_nodes[i].aabb.upperBound.Subtract(newOrigin)
- }
- },
- AllocateNode: function () {
- if (this.m_freeList == b2_nullNode) {
- var oldNodes = this.m_nodes;
- this.m_nodeCapacity *= 2;
- this.m_nodes = oldNodes.concat(new Array(this.m_nodeCapacity - this.m_nodeCount));
- for (var i = this.m_nodeCount; i < this.m_nodeCapacity - 1; ++i) {
- this.m_nodes[i] = new b2TreeNode();
- this.m_nodes[i].parent = i + 1;
- this.m_nodes[i].height = -1
- }
- this.m_nodes[this.m_nodeCapacity - 1] = new b2TreeNode();
- this.m_nodes[this.m_nodeCapacity - 1].parent = b2_nullNode;
- this.m_nodes[this.m_nodeCapacity - 1].height = -1;
- this.m_freeList = this.m_nodeCount
- }
- var nodeId = this.m_freeList;
- this.m_freeList = this.m_nodes[nodeId].parent;
- this.m_nodes[nodeId].parent = b2_nullNode;
- this.m_nodes[nodeId].child1 = b2_nullNode;
- this.m_nodes[nodeId].child2 = b2_nullNode;
- this.m_nodes[nodeId].height = 0;
- this.m_nodes[nodeId].userData = null;
- ++this.m_nodeCount;
- return nodeId
- },
- FreeNode: function (nodeId) {
- this.m_nodes[nodeId].parent = this.m_freeList;
- this.m_nodes[nodeId].height = -1;
- this.m_freeList = nodeId;
- --this.m_nodeCount
- },
- InsertLeaf: function (leaf) {
- ++this.m_insertionCount;
- if (this.m_root == b2_nullNode) {
- this.m_root = leaf;
- this.m_nodes[this.m_root].parent = b2_nullNode;
- return
- }
- var leafAABB = this.m_nodes[leaf].aabb;
- var index = this.m_root;
- while (this.m_nodes[index].IsLeaf() == false) {
- var child1 = this.m_nodes[index].child1;
- var child2 = this.m_nodes[index].child2;
- var area = this.m_nodes[index].aabb.GetPerimeter();
- var combinedAABB = new b2AABB();
- combinedAABB.Combine(this.m_nodes[index].aabb, leafAABB);
- var combinedArea = combinedAABB.GetPerimeter();
- var cost = 2 * combinedArea;
- var inheritanceCost = 2 * (combinedArea - area);
- var cost1;
- var aabb;
- if (this.m_nodes[child1].IsLeaf()) {
- aabb = new b2AABB();
- aabb.Combine(leafAABB, this.m_nodes[child1].aabb);
- cost1 = aabb.GetPerimeter() + inheritanceCost
- } else {
- aabb = new b2AABB();
- aabb.Combine(leafAABB, this.m_nodes[child1].aabb);
- var oldArea = this.m_nodes[child1].aabb.GetPerimeter();
- var newArea = aabb.GetPerimeter();
- cost1 = (newArea - oldArea) + inheritanceCost
- }
- var cost2;
- if (this.m_nodes[child2].IsLeaf()) {
- aabb = new b2AABB();
- aabb.Combine(leafAABB, this.m_nodes[child2].aabb);
- cost2 = aabb.GetPerimeter() + inheritanceCost
- } else {
- aabb = new b2AABB();
- aabb.Combine(leafAABB, this.m_nodes[child2].aabb);
- var oldArea = this.m_nodes[child2].aabb.GetPerimeter();
- var newArea = aabb.GetPerimeter();
- cost2 = newArea - oldArea + inheritanceCost
- }
- if (cost < cost1 && cost < cost2) {
- break
- }
- if (cost1 < cost2) {
- index = child1
- } else {
- index = child2
- }
- }
- var sibling = index;
- var oldParent = this.m_nodes[sibling].parent;
- var newParent = this.AllocateNode();
- this.m_nodes[newParent].parent = oldParent;
- this.m_nodes[newParent].userData = null;
- this.m_nodes[newParent].aabb.Combine(leafAABB, this.m_nodes[sibling].aabb);
- this.m_nodes[newParent].height = this.m_nodes[sibling].height + 1;
- if (oldParent != b2_nullNode) {
- if (this.m_nodes[oldParent].child1 == sibling) {
- this.m_nodes[oldParent].child1 = newParent
- } else {
- this.m_nodes[oldParent].child2 = newParent
- }
- this.m_nodes[newParent].child1 = sibling;
- this.m_nodes[newParent].child2 = leaf;
- this.m_nodes[sibling].parent = newParent;
- this.m_nodes[leaf].parent = newParent
- } else {
- this.m_nodes[newParent].child1 = sibling;
- this.m_nodes[newParent].child2 = leaf;
- this.m_nodes[sibling].parent = newParent;
- this.m_nodes[leaf].parent = newParent;
- this.m_root = newParent
- }
- index = this.m_nodes[leaf].parent;
- while (index != b2_nullNode) {
- index = this.Balance(index);
- var child1 = this.m_nodes[index].child1;
- var child2 = this.m_nodes[index].child2;
- this.m_nodes[index].height = 1 + b2Max(this.m_nodes[child1].height, this.m_nodes[child2].height);
- this.m_nodes[index].aabb.Combine(this.m_nodes[child1].aabb, this.m_nodes[child2].aabb);
- index = this.m_nodes[index].parent
- }
- },
- RemoveLeaf: function (leaf) {
- if (leaf == this.m_root) {
- this.m_root = b2_nullNode;
- return
- }
- var parent = this.m_nodes[leaf].parent;
- var grandParent = this.m_nodes[parent].parent;
- var sibling;
- if (this.m_nodes[parent].child1 == leaf) {
- sibling = this.m_nodes[parent].child2
- } else {
- sibling = this.m_nodes[parent].child1
- }
- if (grandParent != b2_nullNode) {
- if (this.m_nodes[grandParent].child1 == parent) {
- this.m_nodes[grandParent].child1 = sibling
- } else {
- this.m_nodes[grandParent].child2 = sibling
- }
- this.m_nodes[sibling].parent = grandParent;
- this.FreeNode(parent);
- var index = grandParent;
- while (index != b2_nullNode) {
- index = this.Balance(index);
- var child1 = this.m_nodes[index].child1;
- var child2 = this.m_nodes[index].child2;
- this.m_nodes[index].aabb.Combine(this.m_nodes[child1].aabb, this.m_nodes[child2].aabb);
- this.m_nodes[index].height = 1 + b2Max(this.m_nodes[child1].height, this.m_nodes[child2].height);
- index = this.m_nodes[index].parent
- }
- } else {
- this.m_root = sibling;
- this.m_nodes[sibling].parent = b2_nullNode;
- this.FreeNode(parent)
- }
- },
- Balance: function (iA) {
- var A = this.m_nodes[iA];
- if (A.IsLeaf() || A.height < 2) {
- return iA
- }
- var iB = A.child1;
- var iC = A.child2;
- var B = this.m_nodes[iB];
- var C = this.m_nodes[iC];
- var balance = C.height - B.height;
- if (balance > 1) {
- var iF = C.child1;
- var iG = C.child2;
- var F = this.m_nodes[iF];
- var G = this.m_nodes[iG];
- C.child1 = iA;
- C.parent = A.parent;
- A.parent = iC;
- if (C.parent != b2_nullNode) {
- if (this.m_nodes[C.parent].child1 == iA) {
- this.m_nodes[C.parent].child1 = iC
- } else {
- this.m_nodes[C.parent].child2 = iC
- }
- } else {
- this.m_root = iC
- }
- if (F.height > G.height) {
- C.child2 = iF;
- A.child2 = iG;
- G.parent = iA;
- A.aabb.Combine(B.aabb, G.aabb);
- C.aabb.Combine(A.aabb, F.aabb);
- A.height = 1 + b2Max(B.height, G.height);
- C.height = 1 + b2Max(A.height, F.height)
- } else {
- C.child2 = iG;
- A.child2 = iF;
- F.parent = iA;
- A.aabb.Combine(B.aabb, F.aabb);
- C.aabb.Combine(A.aabb, G.aabb);
- A.height = 1 + b2Max(B.height, F.height);
- C.height = 1 + b2Max(A.height, G.height)
- }
- return iC
- }
- if (balance < -1) {
- var iD = B.child1;
- var iE = B.child2;
- var D = this.m_nodes[iD];
- var E = this.m_nodes[iE];
- B.child1 = iA;
- B.parent = A.parent;
- A.parent = iB;
- if (B.parent != b2_nullNode) {
- if (this.m_nodes[B.parent].child1 == iA) {
- this.m_nodes[B.parent].child1 = iB
- } else {
- this.m_nodes[B.parent].child2 = iB
- }
- } else {
- this.m_root = iB
- }
- if (D.height > E.height) {
- B.child2 = iD;
- A.child1 = iE;
- E.parent = iA;
- A.aabb.Combine(C.aabb, E.aabb);
- B.aabb.Combine(A.aabb, D.aabb);
- A.height = 1 + b2Max(C.height, E.height);
- B.height = 1 + b2Max(A.height, D.height)
- } else {
- B.child2 = iE;
- A.child1 = iD;
- D.parent = iA;
- A.aabb.Combine(C.aabb, D.aabb);
- B.aabb.Combine(A.aabb, E.aabb);
- A.height = 1 + b2Max(C.height, D.height);
- B.height = 1 + b2Max(A.height, E.height)
- }
- return iB
- }
- return iA
- },
- ComputeHeight: function (nodeId) {
- if (typeof (nodeId) === "undefined") {
- nodeId = this.m_root
- }
- var node = this.m_nodes[nodeId];
- if (node.IsLeaf()) {
- return 0
- }
- var height1 = this.ComputeHeight(node.child1);
- var height2 = this.ComputeHeight(node.child2);
- return 1 + b2Max(height1, height2)
- },
- ValidateStructure: function (index) {
- if (index == b2_nullNode) {
- return
- }
- var node = this.m_nodes[index];
- var child1 = node.child1;
- var child2 = node.child2;
- if (node.IsLeaf()) {
- return
- }
- this.ValidateStructure(child1);
- this.ValidateStructure(child2)
- },
- ValidateMetrics: function (index) {
- if (index == b2_nullNode) {
- return
- }
- var node = this.m_nodes[index];
- var child1 = node.child1;
- var child2 = node.child2;
- if (node.IsLeaf()) {
- return
- }
- var height1 = this.m_nodes[child1].height;
- var height2 = this.m_nodes[child2].height;
- var height;
- height = 1 + b2Max(height1, height2);
- var aabb = new b2AABB();
- aabb.Combine(this.m_nodes[child1].aabb, this.m_nodes[child2].aabb);
- this.ValidateMetrics(child1);
- this.ValidateMetrics(child2)
- }
-};
-"use strict";
-
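-// Input for the time-of-impact query: two distance proxies plus their sweeps;
-// the solver searches [0, tMax] for the first time the shapes touch.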
-function b2TOIInput() {
- this.proxyA = new b2DistanceProxy();
- this.proxyB = new b2DistanceProxy();
- this.sweepA = new b2Sweep();
- this.sweepB = new b2Sweep();
- this.tMax = 0
-}
-
-function b2TOIOutput() {
- this.state = 0;
- this.t = 0
-}
-b2TOIOutput.e_unknown = 0;
-b2TOIOutput.e_failed = 1;
-b2TOIOutput.e_overlapped = 2;
-b2TOIOutput.e_touching = 3;
-b2TOIOutput.e_separated = 4;
-
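-// Separation function used by the TOI solver's conservative advancement.
-// Initialize() picks a point-point, face-A or face-B axis from the GJK cache,
-// and FindMinSeparation() measures the separation along it at a given time.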
-function b2SeparationFunction() {
- this.m_proxyA = null;
- this.m_proxyB = null;
- this.m_sweepA = null;
- this.m_sweepB = null;
- this.m_type = 0;
- this.m_localPoint = new b2Vec2();
- this.m_axis = new b2Vec2()
-}
-var _local_xfA = new b2Transform();
-var _local_xfB = new b2Transform();
-b2SeparationFunction.prototype = {
- Initialize: function (cache, proxyA, sweepA, proxyB, sweepB, t1) {
- this.m_proxyA = proxyA;
- this.m_proxyB = proxyB;
- var count = cache.count;
- this.m_sweepA = sweepA;
- this.m_sweepB = sweepB;
- this.m_sweepA.GetTransform(_local_xfA, t1);
- this.m_sweepB.GetTransform(_local_xfB, t1);
- if (count == 1) {
- this.m_type = b2SeparationFunction.e_points;
- var localPointA = this.m_proxyA.GetVertex(cache.indexA[0]);
- var localPointB = this.m_proxyB.GetVertex(cache.indexB[0]);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- this.m_axis.x = pointBx - pointAx;
- this.m_axis.y = pointBy - pointAy;
- var s = this.m_axis.Normalize();
- return s
- } else {
- if (cache.indexA[0] == cache.indexA[1]) {
- this.m_type = b2SeparationFunction.e_faceB;
- var localPointB1 = proxyB.GetVertex(cache.indexB[0]);
- var localPointB2 = proxyB.GetVertex(cache.indexB[1]);
- this.m_axis.x = 1 * (localPointB2.y - localPointB1.y);
- this.m_axis.y = -1 * (localPointB2.x - localPointB1.x);
- this.m_axis.Normalize();
- var normalx = _local_xfB.q.c * this.m_axis.x - _local_xfB.q.s * this.m_axis.y;
- var normaly = _local_xfB.q.s * this.m_axis.x + _local_xfB.q.c * this.m_axis.y;
- this.m_localPoint.x = 0.5 * (localPointB1.x + localPointB2.x);
- this.m_localPoint.y = 0.5 * (localPointB1.y + localPointB2.y);
- var pointBx = (_local_xfB.q.c * this.m_localPoint.x - _local_xfB.q.s * this.m_localPoint.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * this.m_localPoint.x + _local_xfB.q.c * this.m_localPoint.y) + _local_xfB.p.y;
- var localPointA = proxyA.GetVertex(cache.indexA[0]);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- var s = (pointAx - pointBx) * normalx + (pointAy - pointBy) * normaly;
- if (s < 0) {
- this.m_axis.x = -this.m_axis.x;
- this.m_axis.y = -this.m_axis.y;
- s = -s
- }
- return s
- } else {
- this.m_type = b2SeparationFunction.e_faceA;
- var localPointA1 = this.m_proxyA.GetVertex(cache.indexA[0]);
- var localPointA2 = this.m_proxyA.GetVertex(cache.indexA[1]);
- this.m_axis.x = 1 * (localPointA2.y - localPointA1.y);
- this.m_axis.y = -1 * (localPointA2.x - localPointA1.x);
- this.m_axis.Normalize();
- var normalx = _local_xfA.q.c * this.m_axis.x - _local_xfA.q.s * this.m_axis.y;
- var normaly = _local_xfA.q.s * this.m_axis.x + _local_xfA.q.c * this.m_axis.y;
- this.m_localPoint.x = 0.5 * (localPointA1.x + localPointA2.x);
- this.m_localPoint.y = 0.5 * (localPointA1.y + localPointA2.y);
- var pointAx = (_local_xfA.q.c * this.m_localPoint.x - _local_xfA.q.s * this.m_localPoint.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * this.m_localPoint.x + _local_xfA.q.c * this.m_localPoint.y) + _local_xfA.p.y;
- var localPointB = this.m_proxyB.GetVertex(cache.indexB[0]);
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- var s = (pointBx - pointAx) * normalx + (pointBy - pointAy) * normaly;
- if (s < 0) {
- this.m_axis.x = -this.m_axis.x;
- this.m_axis.y = -this.m_axis.y;
- s = -s
- }
- return s
- }
- }
- },
- FindMinSeparation: function (indices, t) {
- this.m_sweepA.GetTransform(_local_xfA, t);
- this.m_sweepB.GetTransform(_local_xfB, t);
- switch (this.m_type) {
- case b2SeparationFunction.e_points:
- var axisAx = _local_xfA.q.c * this.m_axis.x + _local_xfA.q.s * this.m_axis.y;
- var axisAy = -_local_xfA.q.s * this.m_axis.x + _local_xfA.q.c * this.m_axis.y;
- var axisBx = _local_xfB.q.c * -this.m_axis.x + _local_xfB.q.s * -this.m_axis.y;
- var axisBy = -_local_xfB.q.s * -this.m_axis.x + _local_xfB.q.c * -this.m_axis.y;
- indices[0] = this.m_proxyA.GetSupport(axisAx, axisAy);
- indices[1] = this.m_proxyB.GetSupport(axisBx, axisBy);
- var localPointA = this.m_proxyA.GetVertex(indices[0]);
- var localPointB = this.m_proxyB.GetVertex(indices[1]);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- return (pointBx - pointAx) * this.m_axis.x + (pointBy - pointAy) * this.m_axis.y;
- case b2SeparationFunction.e_faceA:
- var normalx = _local_xfA.q.c * this.m_axis.x - _local_xfA.q.s * this.m_axis.y;
- var normaly = _local_xfA.q.s * this.m_axis.x + _local_xfA.q.c * this.m_axis.y;
- var pointAx = (_local_xfA.q.c * this.m_localPoint.x - _local_xfA.q.s * this.m_localPoint.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * this.m_localPoint.x + _local_xfA.q.c * this.m_localPoint.y) + _local_xfA.p.y;
- var axisBx = _local_xfB.q.c * -normalx + _local_xfB.q.s * -normaly;
- var axisBy = -_local_xfB.q.s * -normalx + _local_xfB.q.c * -normaly;
- indices[0] = -1;
- indices[1] = this.m_proxyB.GetSupport(axisBx, axisBy);
- var localPointB = this.m_proxyB.GetVertex(indices[1]);
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- return (pointBx - pointAx) * normalx + (pointBy - pointAy) * normaly;
- case b2SeparationFunction.e_faceB:
- var normalx = _local_xfB.q.c * this.m_axis.x - _local_xfB.q.s * this.m_axis.y;
- var normaly = _local_xfB.q.s * this.m_axis.x + _local_xfB.q.c * this.m_axis.y;
- var pointBx = (_local_xfB.q.c * this.m_localPoint.x - _local_xfB.q.s * this.m_localPoint.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * this.m_localPoint.x + _local_xfB.q.c * this.m_localPoint.y) + _local_xfB.p.y;
- var axisAx = _local_xfA.q.c * -normalx + _local_xfA.q.s * -normaly;
- var axisAy = -_local_xfA.q.s * -normalx + _local_xfA.q.c * -normaly;
- indices[1] = -1;
- indices[0] = this.m_proxyA.GetSupport(axisAx, axisAy);
- var localPointA = this.m_proxyA.GetVertex(indices[0]);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- return (pointAx - pointBx) * normalx + (pointAy - pointBy) * normaly
- }
- },
- Evaluate: function (indexA, indexB, t) {
- this.m_sweepA.GetTransform(_local_xfA, t);
- this.m_sweepB.GetTransform(_local_xfB, t);
- switch (this.m_type) {
- case b2SeparationFunction.e_points:
- var localPointA = this.m_proxyA.GetVertex(indexA);
- var localPointB = this.m_proxyB.GetVertex(indexB);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- var separation = (pointBx - pointAx) * this.m_axis.x + (pointBy - pointAy) * this.m_axis.y;
- return separation;
- case b2SeparationFunction.e_faceA:
- var normalx = _local_xfA.q.c * this.m_axis.x - _local_xfA.q.s * this.m_axis.y;
- var normaly = _local_xfA.q.s * this.m_axis.x + _local_xfA.q.c * this.m_axis.y;
- var pointAx = (_local_xfA.q.c * this.m_localPoint.x - _local_xfA.q.s * this.m_localPoint.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * this.m_localPoint.x + _local_xfA.q.c * this.m_localPoint.y) + _local_xfA.p.y;
- var localPointB = this.m_proxyB.GetVertex(indexB);
- var pointBx = (_local_xfB.q.c * localPointB.x - _local_xfB.q.s * localPointB.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * localPointB.x + _local_xfB.q.c * localPointB.y) + _local_xfB.p.y;
- var separation = (pointBx - pointAx) * normalx + (pointBy - pointAy) * normaly;
- return separation;
- case b2SeparationFunction.e_faceB:
- var normalx = _local_xfB.q.c * this.m_axis.x - _local_xfB.q.s * this.m_axis.y;
- var normaly = _local_xfB.q.s * this.m_axis.x + _local_xfB.q.c * this.m_axis.y;
- var pointBx = (_local_xfB.q.c * this.m_localPoint.x - _local_xfB.q.s * this.m_localPoint.y) + _local_xfB.p.x;
- var pointBy = (_local_xfB.q.s * this.m_localPoint.x + _local_xfB.q.c * this.m_localPoint.y) + _local_xfB.p.y;
- var localPointA = this.m_proxyA.GetVertex(indexA);
- var pointAx = (_local_xfA.q.c * localPointA.x - _local_xfA.q.s * localPointA.y) + _local_xfA.p.x;
- var pointAy = (_local_xfA.q.s * localPointA.x + _local_xfA.q.c * localPointA.y) + _local_xfA.p.y;
- var separation = (pointAx - pointBx) * normalx + (pointAy - pointBy) * normaly;
- return separation
- }
- }
-};
-b2SeparationFunction.e_points = 0;
-b2SeparationFunction.e_faceA = 1;
-b2SeparationFunction.e_faceB = 2;
-var profile_toi = b2Profiler.create("toi", "solveTOI");
-
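-// Compute the earliest time in [0, input.tMax] at which the two swept shapes reach
-// the target separation (conservative advancement, as in Box2D's b2TimeOfImpact).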
-function b2TimeOfImpact(output, input) {
- profile_toi.start();
- ++b2TimeOfImpact.b2_toiCalls;
- output.state = b2TOIOutput.e_unknown;
- output.t = input.tMax;
- var proxyA = input.proxyA;
- var proxyB = input.proxyB;
- b2TimeOfImpact._temp_sweepA.Assign(input.sweepA);
- b2TimeOfImpact._temp_sweepB.Assign(input.sweepB);
- b2TimeOfImpact._temp_sweepA.Normalize();
- b2TimeOfImpact._temp_sweepB.Normalize();
- var tMax = input.tMax;
- var totalRadius = proxyA.m_radius + proxyB.m_radius;
- var target = b2Max(b2_linearSlop, totalRadius - 3 * b2_linearSlop);
- var tolerance = 0.25 * b2_linearSlop;
- var t1 = 0;
- var k_maxIterations = 20;
- var iter = 0;
- var cache = new b2SimplexCache();
- cache.count = 0;
- var distanceInput = new b2DistanceInput();
- distanceInput.proxyA.Assign(input.proxyA);
- distanceInput.proxyB.Assign(input.proxyB);
- distanceInput.useRadii = false;
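- // Outer loop: find the closest features at t1 with GJK, then advance t1 until the shapes touch, separate, or the iteration budget is exhausted.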
- for (;;) {
- b2TimeOfImpact._temp_sweepA.GetTransform(distanceInput.transformA, t1);
- b2TimeOfImpact._temp_sweepB.GetTransform(distanceInput.transformB, t1);
- var distanceOutput = new b2DistanceOutput();
- b2DistanceFunc(distanceOutput, cache, distanceInput);
- if (distanceOutput.distance <= 0) {
- output.state = b2TOIOutput.e_overlapped;
- output.t = 0;
- break
- }
- if (distanceOutput.distance < target + tolerance) {
- output.state = b2TOIOutput.e_touching;
- output.t = t1;
- break
- }
- var fcn = new b2SeparationFunction();
- fcn.Initialize(cache, proxyA, b2TimeOfImpact._temp_sweepA, proxyB, b2TimeOfImpact._temp_sweepB, t1);
- var done = false;
- var t2 = tMax;
- var pushBackIter = 0;
- for (;;) {
- var indices = [];
- var s2 = fcn.FindMinSeparation(indices, t2);
- if (s2 > target + tolerance) {
- output.state = b2TOIOutput.e_separated;
- output.t = tMax;
- done = true;
- break
- }
- if (s2 > target - tolerance) {
- t1 = t2;
- break
- }
- var s1 = fcn.Evaluate(indices[0], indices[1], t1);
- if (s1 < target - tolerance) {
- output.state = b2TOIOutput.e_failed;
- output.t = t1;
- done = true;
- break
- }
- if (s1 <= target + tolerance) {
- output.state = b2TOIOutput.e_touching;
- output.t = t1;
- done = true;
- break
- }
- var rootIterCount = 0;
- var a1 = t1,
- a2 = t2;
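- // Solve separation(t) = target on [a1, a2], alternating false-position and bisection steps so convergence is guaranteed.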
- for (;;) {
- var t;
- if (rootIterCount & 1) {
- t = a1 + (target - s1) * (a2 - a1) / (s2 - s1)
- } else {
- t = 0.5 * (a1 + a2)
- }
- ++rootIterCount;
- ++b2TimeOfImpact.b2_toiRootIters;
- var s = fcn.Evaluate(indices[0], indices[1], t);
- if (b2Abs(s - target) < tolerance) {
- t2 = t;
- break
- }
- if (s > target) {
- a1 = t;
- s1 = s
- } else {
- a2 = t;
- s2 = s
- }
- if (rootIterCount == 50) {
- break
- }
- }
- b2TimeOfImpact.b2_toiMaxRootIters = b2Max(b2TimeOfImpact.b2_toiMaxRootIters, rootIterCount);
- ++pushBackIter;
- if (pushBackIter == b2_maxPolygonVertices) {
- break
- }
- }
- ++iter;
- ++b2TimeOfImpact.b2_toiIters;
- if (done) {
- break
- }
- if (iter == k_maxIterations) {
- output.state = b2TOIOutput.e_failed;
- output.t = t1;
- break
- }
- }
- b2TimeOfImpact.b2_toiMaxIters = b2Max(b2TimeOfImpact.b2_toiMaxIters, iter);
- profile_toi.stop();
- b2TimeOfImpact.b2_toiMaxTime = b2Max(b2TimeOfImpact.b2_toiMaxTime, profile_toi.elapsedTime);
- b2TimeOfImpact.b2_toiTime += profile_toi.elapsedTime
-}
-b2TimeOfImpact._temp_sweepA = new b2Sweep();
-b2TimeOfImpact._temp_sweepB = new b2Sweep();
-b2TimeOfImpact.b2_toiTime = 0;
-b2TimeOfImpact.b2_toiMaxTime = 0;
-b2TimeOfImpact.b2_toiCalls = 0;
-b2TimeOfImpact.b2_toiIters = 0;
-b2TimeOfImpact.b2_toiMaxIters = 0;
-b2TimeOfImpact.b2_toiRootIters = 0;
-b2TimeOfImpact.b2_toiMaxRootIters = 0;
-"use strict";
-
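-// Construction parameters for a body; the defaults describe a static body at the origin.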
-function b2BodyDef() {
- this.type = b2Body.b2_staticBody;
- this.position = new b2Vec2(0, 0);
- this.angle = 0;
- this.linearVelocity = new b2Vec2(0, 0);
- this.angularVelocity = 0;
- this.linearDamping = 0;
- this.angularDamping = 0;
- this.allowSleep = true;
- this.awake = true;
- this.fixedRotation = false;
- this.bullet = false;
- this.active = true;
- this.userData = null;
- this.gravityScale = 1;
- Object.seal(this)
-}
-b2BodyDef.prototype = {
- _deserialize: function (data) {
- this.type = data.type;
- this.position._deserialize(data.position);
- this.angle = data.angle;
- this.linearVelocity._deserialize(data.linearVelocity);
- this.angularVelocity = data.angularVelocity;
- this.linearDamping = data.linearDamping;
- this.angularDamping = data.angularDamping;
- this.allowSleep = data.allowSleep;
- this.awake = data.awake;
- this.fixedRotation = data.fixedRotation;
- this.bullet = data.bullet;
- this.active = data.active;
- this.gravityScale = data.gravityScale
- }
-};
-
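-// A rigid body. Boolean state lives in m_flags; mass properties are derived from the attached fixtures (see ResetMassData).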
-function b2Body(bd, world) {
- this.m_islandIndex = 0;
- this.m_flags = 0;
- if (bd.bullet) {
- this.m_flags |= b2Body.e_bulletFlag
- }
- if (bd.fixedRotation) {
- this.m_flags |= b2Body.e_fixedRotationFlag
- }
- if (bd.allowSleep) {
- this.m_flags |= b2Body.e_autoSleepFlag
- }
- if (bd.awake) {
- this.m_flags |= b2Body.e_awakeFlag
- }
- if (bd.active) {
- this.m_flags |= b2Body.e_activeFlag
- }
- this.m_world = world;
- this.m_xf = new b2Transform();
- this.m_xf.p.Assign(bd.position);
- this.m_xf.q.Set(bd.angle);
- this.m_sweep = new b2Sweep();
- this.m_sweep.localCenter.SetZero();
- this.m_sweep.c0.Assign(this.m_xf.p);
- this.m_sweep.c.Assign(this.m_xf.p);
- this.m_sweep.a0 = bd.angle;
- this.m_sweep.a = bd.angle;
- this.m_sweep.alpha0 = 0;
- this.m_jointList = null;
- this.m_contactList = null;
- this.m_prev = null;
- this.m_next = null;
- this.m_linearVelocity = bd.linearVelocity.Clone();
- this.m_angularVelocity = bd.angularVelocity;
- this.m_linearDamping = bd.linearDamping;
- this.m_angularDamping = bd.angularDamping;
- this.m_gravityScale = bd.gravityScale;
- this.m_force = new b2Vec2();
- this.m_torque = 0;
- this.m_sleepTime = 0;
- this.m_type = bd.type;
- if (this.m_type == b2Body.b2_dynamicBody) {
- this.m_mass = 1;
- this.m_invMass = 1
- } else {
- this.m_mass = 0;
- this.m_invMass = 0
- }
- this.m_I = 0;
- this.m_invI = 0;
- this.m_userData = bd.userData;
- this.m_fixtureList = null;
- this.m_fixtureCount = 0;
- this.m_destroyed = false
-}
-b2Body.b2_staticBody = 0;
-b2Body.b2_kinematicBody = 1;
-b2Body.b2_dynamicBody = 2;
-b2Body.e_islandFlag = 1;
-b2Body.e_awakeFlag = 2;
-b2Body.e_autoSleepFlag = 4;
-b2Body.e_bulletFlag = 8;
-b2Body.e_fixedRotationFlag = 16;
-b2Body.e_activeFlag = 32;
-b2Body.e_toiFlag = 64;
-b2Body.m_local_oldCenter = new b2Vec2();
-b2Body.m_local_xf1 = new b2Transform();
-b2Body.prototype = {
- CreateFixture: function (def, density) {
- if (typeof (density) !== "undefined") {
- var ndef = new b2FixtureDef();
- ndef.shape = def;
- ndef.density = density;
- return this.CreateFixture(ndef)
- }
- if (this.m_world.IsLocked() == true) {
- return null
- }
- var fixture = new b2Fixture();
- fixture.Create(this, def);
- if (this.m_flags & b2Body.e_activeFlag) {
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- fixture.CreateProxies(broadPhase, this.m_xf)
- }
- fixture.m_next = this.m_fixtureList;
- this.m_fixtureList = fixture;
- ++this.m_fixtureCount;
- fixture.m_body = this;
- if (fixture.m_density > 0) {
- this.ResetMassData()
- }
- this.m_world.m_flags |= b2World.e_newFixture;
- return fixture
- },
- DestroyFixture: function (fixture) {
- if (this.m_world.IsLocked() == true) {
- return
- }
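- // Unlink the fixture from the body's singly linked fixture list, whether it is the head or an interior node.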
- var node = this.m_fixtureList;
- var prev = null;
- var found = false;
- while (node != null) {
- if (node == fixture) {
- if (prev) {
- prev.m_next = fixture.m_next
- } else {
- this.m_fixtureList = fixture.m_next
- }
- found = true;
- break
- }
- prev = node;
- node = node.m_next
- }
- var edge = this.m_contactList;
- while (edge) {
- var c = edge.contact;
- edge = edge.next;
- var fixtureA = c.GetFixtureA();
- var fixtureB = c.GetFixtureB();
- if (fixture == fixtureA || fixture == fixtureB) {
- this.m_world.m_contactManager.Destroy(c)
- }
- }
- if (this.m_flags & b2Body.e_activeFlag) {
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- fixture.DestroyProxies(broadPhase)
- }
- fixture.Destroy();
- fixture.m_body = null;
- fixture.m_next = null;
- --this.m_fixtureCount;
- this.ResetMassData()
- },
- SetTransform: function (position, angle) {
- if (this.m_world.IsLocked() == true) {
- return
- }
- this.m_xf.q.Set(angle);
- this.m_xf.p.Assign(position);
- this.m_sweep.c.Assign(b2Mul_t_v2(this.m_xf, this.m_sweep.localCenter));
- this.m_sweep.a = angle;
- this.m_sweep.c0.Assign(this.m_sweep.c);
- this.m_sweep.a0 = angle;
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- f.Synchronize(broadPhase, this.m_xf, this.m_xf)
- }
- },
- GetTransform: function () {
- return this.m_xf
- },
- GetPosition: function () {
- return this.m_xf.p
- },
- GetAngle: function () {
- return this.m_sweep.a
- },
- GetWorldCenter: function () {
- return this.m_sweep.c
- },
- GetLocalCenter: function () {
- return this.m_sweep.localCenter
- },
- SetLinearVelocity: function (v) {
- if (this.m_type == b2Body.b2_staticBody) {
- return
- }
- if (b2Dot_v2_v2(v, v) > 0) {
- this.SetAwake(true)
- }
- this.m_linearVelocity = v
- },
- GetLinearVelocity: function () {
- return this.m_linearVelocity
- },
- SetAngularVelocity: function (w) {
- if (this.m_type == b2Body.b2_staticBody) {
- return
- }
- if (w * w > 0) {
- this.SetAwake(true)
- }
- this.m_angularVelocity = w
- },
- GetAngularVelocity: function () {
- return this.m_angularVelocity
- },
- ApplyForce: function (force, point, wake) {
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- if (wake && (this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.SetAwake(true)
- }
- if (this.m_flags & b2Body.e_awakeFlag) {
- this.m_force.Add(force);
- this.m_torque += b2Cross_v2_v2(b2Vec2.Subtract(point, this.m_sweep.c), force)
- }
- },
- ApplyForceToCenter: function (force, wake) {
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- if (wake && (this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.SetAwake(true)
- }
- if (this.m_flags & b2Body.e_awakeFlag) {
- this.m_force.Add(force)
- }
- },
- ApplyTorque: function (torque, wake) {
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- if (wake && (this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.SetAwake(true)
- }
- if (this.m_flags & b2Body.e_awakeFlag) {
- this.m_torque += torque
- }
- },
- ApplyLinearImpulse: function (impulse, point, wake) {
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- if (wake && (this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.SetAwake(true)
- }
- if (this.m_flags & b2Body.e_awakeFlag) {
- this.m_linearVelocity.Add(b2Vec2.Multiply(this.m_invMass, impulse));
- this.m_angularVelocity += this.m_invI * b2Cross_v2_v2(b2Vec2.Subtract(point, this.m_sweep.c), impulse)
- }
- },
- ApplyAngularImpulse: function (impulse, wake) {
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- if (wake && (this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.SetAwake(true)
- }
- if (this.m_flags & b2Body.e_awakeFlag) {
- this.m_angularVelocity += this.m_invI * impulse
- }
- },
- GetMass: function () {
- return this.m_mass
- },
- GetInertia: function () {
- return this.m_I + this.m_mass * b2Dot_v2_v2(this.m_sweep.localCenter, this.m_sweep.localCenter)
- },
- GetMassData: function (data) {
- data.mass = this.m_mass;
- data.I = this.m_I + this.m_mass * b2Dot_v2_v2(this.m_sweep.localCenter, this.m_sweep.localCenter);
- data.center = this.m_sweep.localCenter
- },
- SetMassData: function (massData) {
- if (this.m_world.IsLocked() == true) {
- return
- }
- if (this.m_type != b2Body.b2_dynamicBody) {
- return
- }
- this.m_invMass = 0;
- this.m_I = 0;
- this.m_invI = 0;
- this.m_mass = massData.mass;
- if (this.m_mass <= 0) {
- this.m_mass = 1
- }
- this.m_invMass = 1 / this.m_mass;
- if (massData.I > 0 && (this.m_flags & b2Body.e_fixedRotationFlag) == 0) {
- this.m_I = massData.I - this.m_mass * b2Dot_v2_v2(massData.center, massData.center);
- this.m_invI = 1 / this.m_I
- }
- b2Body.m_local_oldCenter.Assign(this.m_sweep.c);
- this.m_sweep.localCenter.Assign(massData.center);
- this.m_sweep.c0.Assign(b2Mul_t_v2(this.m_xf, this.m_sweep.localCenter));
- this.m_sweep.c.Assign(this.m_sweep.c0);
- this.m_linearVelocity.Add(b2Cross_f_v2(this.m_angularVelocity, b2Vec2.Subtract(this.m_sweep.c, b2Body.m_local_oldCenter)))
- },
- ResetMassData: function () {
- this.m_mass = 0;
- this.m_invMass = 0;
- this.m_I = 0;
- this.m_invI = 0;
- this.m_sweep.localCenter.SetZero();
- if (this.m_type == b2Body.b2_staticBody || this.m_type == b2Body.b2_kinematicBody) {
- this.m_sweep.c0.Assign(this.m_xf.p);
- this.m_sweep.c.Assign(this.m_xf.p);
- this.m_sweep.a0 = this.m_sweep.a;
- return
- }
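- // Accumulate mass, first moment, and inertia about the body origin from every fixture with non-zero density.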
- var localCenter = new b2Vec2(0, 0);
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- if (f.m_density == 0) {
- continue
- }
- var massData = new b2MassData();
- f.GetMassData(massData);
- this.m_mass += massData.mass;
- localCenter.Add(b2Vec2.Multiply(massData.mass, massData.center));
- this.m_I += massData.I
- }
- if (this.m_mass > 0) {
- this.m_invMass = 1 / this.m_mass;
- localCenter.Multiply(this.m_invMass)
- } else {
- this.m_mass = 1;
- this.m_invMass = 1
- }
- if (this.m_I > 0 && (this.m_flags & b2Body.e_fixedRotationFlag) == 0) {
- this.m_I -= this.m_mass * b2Dot_v2_v2(localCenter, localCenter);
- this.m_invI = 1 / this.m_I
- } else {
- this.m_I = 0;
- this.m_invI = 0
- }
- b2Body.m_local_oldCenter.Assign(this.m_sweep.c);
- this.m_sweep.localCenter.Assign(localCenter);
- this.m_sweep.c0.Assign(b2Mul_t_v2(this.m_xf, this.m_sweep.localCenter));
- this.m_sweep.c.Assign(this.m_sweep.c0);
- this.m_linearVelocity.Add(b2Cross_f_v2(this.m_angularVelocity, b2Vec2.Subtract(this.m_sweep.c, b2Body.m_local_oldCenter)))
- },
- GetWorldPoint: function (localPoint) {
- return b2Mul_t_v2(this.m_xf, localPoint)
- },
- GetWorldVector: function (localVector) {
- return b2Mul_r_v2(this.m_xf.q, localVector)
- },
- GetLocalPoint: function (worldPoint) {
- return b2MulT_t_v2(this.m_xf, worldPoint)
- },
- GetLocalVector: function (worldVector) {
- return b2MulT_r_v2(this.m_xf.q, worldVector)
- },
- GetLinearVelocityFromWorldPoint: function (worldPoint) {
- return b2Vec2.Add(this.m_linearVelocity, b2Cross_f_v2(this.m_angularVelocity, b2Vec2.Subtract(worldPoint, this.m_sweep.c)))
- },
- GetLinearVelocityFromLocalPoint: function (localPoint) {
- return this.GetLinearVelocityFromWorldPoint(this.GetWorldPoint(localPoint))
- },
- GetLinearDamping: function () {
- return this.m_linearDamping
- },
- SetLinearDamping: function (linearDamping) {
- this.m_linearDamping = linearDamping
- },
- GetAngularDamping: function () {
- return this.m_angularDamping
- },
- SetAngularDamping: function (angularDamping) {
- this.m_angularDamping = angularDamping
- },
- GetGravityScale: function () {
- return this.m_gravityScale
- },
- SetGravityScale: function (scale) {
- this.m_gravityScale = scale
- },
- SetType: function (type) {
- if (this.m_world.IsLocked() == true) {
- return
- }
- if (this.m_type == type) {
- return
- }
- this.m_type = type;
- this.ResetMassData();
- if (this.m_type == b2Body.b2_staticBody) {
- this.m_linearVelocity.SetZero();
- this.m_angularVelocity = 0;
- this.m_sweep.a0 = this.m_sweep.a;
- this.m_sweep.c0.Assign(this.m_sweep.c);
- this.SynchronizeFixtures()
- }
- this.SetAwake(true);
- this.m_force.SetZero();
- this.m_torque = 0;
- var ce = this.m_contactList;
- while (ce) {
- var ce0 = ce;
- ce = ce.next;
- this.m_world.m_contactManager.Destroy(ce0.contact)
- }
- this.m_contactList = null;
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- var proxyCount = f.m_proxyCount;
- for (var i = 0; i < proxyCount; ++i) {
- broadPhase.TouchProxy(f.m_proxies[i].proxyId)
- }
- }
- },
- GetType: function () {
- return this.m_type
- },
- SetBullet: function (flag) {
- if (flag) {
- this.m_flags |= b2Body.e_bulletFlag
- } else {
- this.m_flags &= ~b2Body.e_bulletFlag
- }
- },
- IsBullet: function () {
- return (this.m_flags & b2Body.e_bulletFlag) == b2Body.e_bulletFlag
- },
- SetSleepingAllowed: function (flag) {
- if (flag) {
- this.m_flags |= b2Body.e_autoSleepFlag
- } else {
- this.m_flags &= ~b2Body.e_autoSleepFlag;
- this.SetAwake(true)
- }
- },
- IsSleepingAllowed: function () {
- return (this.m_flags & b2Body.e_autoSleepFlag) == b2Body.e_autoSleepFlag
- },
- SetAwake: function (flag) {
- if (flag) {
- if ((this.m_flags & b2Body.e_awakeFlag) == 0) {
- this.m_flags |= b2Body.e_awakeFlag;
- this.m_sleepTime = 0
- }
- } else {
- this.m_flags &= ~b2Body.e_awakeFlag;
- this.m_sleepTime = 0;
- this.m_linearVelocity.SetZero();
- this.m_angularVelocity = 0;
- this.m_force.SetZero();
- this.m_torque = 0
- }
- },
- IsAwake: function () {
- return (this.m_flags & b2Body.e_awakeFlag) == b2Body.e_awakeFlag
- },
- SetActive: function (flag) {
- if (flag == this.IsActive()) {
- return
- }
- if (flag) {
- this.m_flags |= b2Body.e_activeFlag;
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- f.CreateProxies(broadPhase, this.m_xf)
- }
- } else {
- this.m_flags &= ~b2Body.e_activeFlag;
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- f.DestroyProxies(broadPhase)
- }
- var ce = this.m_contactList;
- while (ce) {
- var ce0 = ce;
- ce = ce.next;
- this.m_world.m_contactManager.Destroy(ce0.contact)
- }
- this.m_contactList = null
- }
- },
- IsActive: function () {
- return (this.m_flags & b2Body.e_activeFlag) == b2Body.e_activeFlag
- },
- SetFixedRotation: function (flag) {
- var status = (this.m_flags & b2Body.e_fixedRotationFlag) == b2Body.e_fixedRotationFlag;
- if (status == flag) {
- return
- }
- if (flag) {
- this.m_flags |= b2Body.e_fixedRotationFlag
- } else {
- this.m_flags &= ~b2Body.e_fixedRotationFlag
- }
- this.m_angularVelocity = 0;
- this.ResetMassData()
- },
- IsFixedRotation: function () {
- return (this.m_flags & b2Body.e_fixedRotationFlag) == b2Body.e_fixedRotationFlag
- },
- GetFixtureList: function () {
- return this.m_fixtureList
- },
- GetJointList: function () {
- return this.m_jointList
- },
- GetContactList: function () {
- return this.m_contactList
- },
- GetNext: function () {
- return this.m_next
- },
- GetUserData: function () {
- return this.m_userData
- },
- SetUserData: function (data) {
- this.m_userData = data
- },
- GetWorld: function () {
- return this.m_world
- },
- SynchronizeFixtures: function () {
- b2Body.m_local_xf1.q.Set(this.m_sweep.a0);
- b2Body.m_local_xf1.p.Assign(b2Vec2.Subtract(this.m_sweep.c0, b2Mul_r_v2(b2Body.m_local_xf1.q, this.m_sweep.localCenter)));
- var broadPhase = this.m_world.m_contactManager.m_broadPhase;
- for (var f = this.m_fixtureList; f; f = f.m_next) {
- f.Synchronize(broadPhase, b2Body.m_local_xf1, this.m_xf)
- }
- },
- SynchronizeTransform: function () {
- this.m_xf.q.Set(this.m_sweep.a);
- this.m_xf.p.Assign(b2Vec2.Subtract(this.m_sweep.c, b2Mul_r_v2(this.m_xf.q, this.m_sweep.localCenter)))
- },
- ShouldCollide: function (other) {
- if (this.m_type != b2Body.b2_dynamicBody && other.m_type != b2Body.b2_dynamicBody) {
- return false
- }
- for (var jn = this.m_jointList; jn; jn = jn.next) {
- if (jn.other == other) {
- if (jn.joint.m_collideConnected == false) {
- return false
- }
- }
- }
- return true
- },
- Advance: function (alpha) {
- this.m_sweep.Advance(alpha);
- this.m_sweep.c.Assign(this.m_sweep.c0);
- this.m_sweep.a = this.m_sweep.a0;
- this.m_xf.q.Set(this.m_sweep.a);
- this.m_xf.p.Assign(b2Vec2.Subtract(this.m_sweep.c, b2Mul_r_v2(this.m_xf.q, this.m_sweep.localCenter)))
- },
- _serialize: function (out) {
- var obj = out || {};
- obj.fixtures = null;
- obj.type = this.m_type;
- obj.position = this.GetPosition()._serialize();
- obj.angle = this.GetAngle();
- obj.linearVelocity = this.GetLinearVelocity()._serialize();
- obj.angularVelocity = this.GetAngularVelocity();
- obj.linearDamping = this.GetLinearDamping();
- obj.angularDamping = this.GetAngularDamping();
- obj.allowSleep = this.IsSleepingAllowed();
- obj.awake = this.IsAwake();
- obj.fixedRotation = this.IsFixedRotation();
- obj.bullet = this.IsBullet();
- obj.active = this.IsActive();
- obj.gravityScale = this.GetGravityScale();
- return obj
- }
-};
-"use strict";
-
-function b2Filter() {
- this.categoryBits = 1;
- this.maskBits = 65535;
- this.groupIndex = 0
-}
-b2Filter.prototype = {
- Clone: function () {
- var filter = new b2Filter();
- filter.categoryBits = this.categoryBits;
- filter.maskBits = this.maskBits;
- filter.groupIndex = this.groupIndex;
- return filter
- },
- Assign: function (filter) {
- this.categoryBits = filter.categoryBits;
- this.maskBits = filter.maskBits;
- this.groupIndex = filter.groupIndex
- },
- _serialize: function (out) {
- var obj = out || {};
- obj.categoryBits = this.categoryBits;
- obj.maskBits = this.maskBits;
- obj.groupIndex = this.groupIndex;
- return obj
- },
- _deserialize: function (data) {
- this.categoryBits = data.categoryBits;
- this.maskBits = data.maskBits;
- this.groupIndex = data.groupIndex
- }
-};
-
-function b2FixtureDef() {
- this.shape = null;
- this.userData = null;
- this.friction = 0.2;
- this.restitution = 0;
- this.density = 0;
- this.isSensor = false;
- this.filter = new b2Filter();
- Object.seal(this)
-}
-b2FixtureDef.prototype = {
- _deserialize: function (data) {
- this.friction = data.friction;
- this.restitution = data.restitution;
- this.density = data.density;
- this.isSensor = data.isSensor;
- this.filter._deserialize(data.filter)
- }
-};
-
-function b2FixtureProxy() {
- this.aabb = new b2AABB();
- this.fixture = null;
- this.childIndex = 0;
- this.proxyId = 0
-}
-
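-// A fixture attaches a shape to a body and carries material and filter data; each shape child gets its own broad-phase proxy.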
-function b2Fixture() {
- this.m_userData = null;
- this.m_body = null;
- this.m_next = null;
- this.m_proxies = null;
- this.m_proxyCount = 0;
- this.m_shape = null;
- this.m_density = 0;
- this.m_filter = new b2Filter();
- this.m_isSensor = false;
- this.m_friction = 0;
- this.m_restitution = 0
-}
-b2Fixture.prototype = {
- GetType: function () {
- return this.m_shape.GetType()
- },
- GetShape: function () {
- return this.m_shape
- },
- SetSensor: function (sensor) {
- if (sensor != this.m_isSensor) {
- this.m_body.SetAwake(true);
- this.m_isSensor = sensor
- }
- },
- IsSensor: function () {
- return this.m_isSensor
- },
- SetFilterData: function (filter) {
- this.m_filter = filter;
- this.Refilter()
- },
- GetFilterData: function () {
- return this.m_filter
- },
- Refilter: function () {
- if (this.m_body == null) {
- return
- }
- var edge = this.m_body.GetContactList();
- while (edge) {
- var contact = edge.contact;
- var fixtureA = contact.GetFixtureA();
- var fixtureB = contact.GetFixtureB();
- if (fixtureA == this || fixtureB == this) {
- contact.FlagForFiltering()
- }
- edge = edge.next
- }
- var world = this.m_body.GetWorld();
- if (world == null) {
- return
- }
- var broadPhase = world.m_contactManager.m_broadPhase;
- for (var i = 0; i < this.m_proxyCount; ++i) {
- broadPhase.TouchProxy(this.m_proxies[i].proxyId)
- }
- },
- GetBody: function () {
- return this.m_body
- },
- GetNext: function () {
- return this.m_next
- },
- GetUserData: function () {
- return this.m_userData
- },
- SetUserData: function (data) {
- this.m_userData = data
- },
- TestPoint: function (p) {
- return this.m_shape.TestPoint(this.m_body.GetTransform(), p)
- },
- RayCast: function (output, input, childIndex) {
- return this.m_shape.RayCast(output, input, this.m_body.GetTransform(), childIndex)
- },
- GetMassData: function (massData) {
- this.m_shape.ComputeMass(massData, this.m_density)
- },
- SetDensity: function (density) {
- this.m_density = density
- },
- GetDensity: function () {
- return this.m_density
- },
- GetFriction: function () {
- return this.m_friction
- },
- SetFriction: function (friction) {
- this.m_friction = friction
- },
- GetRestitution: function () {
- return this.m_restitution
- },
- SetRestitution: function (restitution) {
- this.m_restitution = restitution
- },
- GetAABB: function (childIndex) {
- return this.m_proxies[childIndex].aabb
- },
- Create: function (body, def) {
- this.m_userData = def.userData;
- this.m_friction = def.friction;
- this.m_restitution = def.restitution;
- this.m_body = body;
- this.m_next = null;
- this.m_filter.Assign(def.filter);
- this.m_isSensor = def.isSensor;
- this.m_shape = def.shape.Clone();
- var childCount = this.m_shape.GetChildCount();
- this.m_proxies = new Array(childCount);
- for (var i = 0; i < childCount; ++i) {
- this.m_proxies[i] = new b2FixtureProxy();
- this.m_proxies[i].fixture = null;
- this.m_proxies[i].proxyId = b2BroadPhase.e_nullProxy
- }
- this.m_proxyCount = 0;
- this.m_density = def.density
- },
- Destroy: function () {
- this.m_proxies = null;
- this.m_shape = null
- },
- CreateProxies: function (broadPhase, xf) {
- this.m_proxyCount = this.m_shape.GetChildCount();
- for (var i = 0; i < this.m_proxyCount; ++i) {
- var proxy = this.m_proxies[i];
- this.m_shape.ComputeAABB(proxy.aabb, xf, i);
- proxy.proxyId = broadPhase.CreateProxy(proxy.aabb, proxy);
- proxy.fixture = this;
- proxy.childIndex = i
- }
- },
- DestroyProxies: function (broadPhase) {
- for (var i = 0; i < this.m_proxyCount; ++i) {
- var proxy = this.m_proxies[i];
- broadPhase.DestroyProxy(proxy.proxyId);
- proxy.proxyId = b2BroadPhase.e_nullProxy
- }
- this.m_proxyCount = 0
- },
- Synchronize: function (broadPhase, transform1, transform2) {
- if (this.m_proxyCount == 0) {
- return
- }
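- // Combine the AABBs at the old and new transforms so the proxy covers the whole swept motion.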
- for (var i = 0; i < this.m_proxyCount; ++i) {
- var proxy = this.m_proxies[i];
- var aabb1 = new b2AABB(),
- aabb2 = new b2AABB();
- this.m_shape.ComputeAABB(aabb1, transform1, proxy.childIndex);
- this.m_shape.ComputeAABB(aabb2, transform2, proxy.childIndex);
- proxy.aabb.Combine(aabb1, aabb2);
- var displacement = b2Vec2.Subtract(transform2.p, transform1.p);
- broadPhase.MoveProxy(proxy.proxyId, proxy.aabb, displacement)
- }
- },
- _serialize: function (out) {
- var obj = out || {};
- obj.shape = null;
- obj.friction = this.m_friction;
- obj.restitution = this.m_restitution;
- obj.density = this.m_density;
- obj.isSensor = this.m_isSensor;
- obj.filter = this.m_filter._serialize();
- return obj
- }
-};
-"use strict";
-
-function b2DestructionListener() {}
-b2DestructionListener.prototype = {
- SayGoodbyeJoint: function (joint) {},
- SayGoodbyeFixture: function (fixture) {}
-};
-
-function b2ContactFilter() {}
-b2ContactFilter.prototype = {
- ShouldCollide: function (fixtureA, fixtureB) {
- var filterA = fixtureA.GetFilterData();
- var filterB = fixtureB.GetFilterData();
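- // Fixtures sharing a non-zero group index always collide when it is positive and never collide when it is negative.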
- if (filterA.groupIndex == filterB.groupIndex && filterA.groupIndex != 0) {
- return filterA.groupIndex > 0
- }
- var collide = (filterA.maskBits & filterB.categoryBits) != 0 && (filterA.categoryBits & filterB.maskBits) != 0;
- return collide
- }
-};
-
-function b2ContactImpulse() {
- this.normalImpulses = new Array(b2_maxManifoldPoints);
- this.tangentImpulses = new Array(b2_maxManifoldPoints);
- this.count = 0
-}
-
-function b2ContactListener() {}
-b2ContactListener.prototype = {
- BeginContact: function (contact) {},
- EndContact: function (contact) {},
- PreSolve: function (contact, oldManifold) {},
- PostSolve: function (contact, impulse) {}
-};
-
-function b2QueryCallback() {}
-b2QueryCallback.prototype = {
- ReportFixture: function (fixture) {}
-};
-
-function b2RayCastCallback() {}
-b2RayCastCallback.prototype = {
- ReportFixture: function (fixture, point, normal, fraction) {}
-};
-"use strict";
-
-function b2TimeStep() {
- this.dt = 0;
- this.inv_dt = 0;
- this.dtRatio = 0;
- this.velocityIterations = 0;
- this.positionIterations = 0;
- this.warmStarting = false
-}
-
-function b2Position() {
- this.c = new b2Vec2();
- this.a = 0
-}
-
-function b2Velocity() {
- this.v = new b2Vec2();
- this.w = 0
-}
-
-function b2SolverData() {
- this.step = new b2TimeStep();
- this.positions = null;
- this.velocities = null
-}
-"use strict";
-var profile_world_step = b2Profiler.create("step");
-var profile_world_collide = b2Profiler.create("collide", "step");
-var profile_world_solve = b2Profiler.create("solve", "step");
-var profile_world_solveTOI = b2Profiler.create("solveTOI", "step");
-var profile_world_broadphase = b2Profiler.create("broadphase", "step");
-
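-// The world owns all bodies, joints, and contacts and advances the simulation in Step.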
-function b2World(gravity) {
- this.m_contactManager = new b2ContactManager();
- this.m_destructionListener = null;
- this.g_debugDraw = null;
- this.m_bodyList = null;
- this.m_jointList = null;
- this.m_bodyCount = 0;
- this.m_jointCount = 0;
- this.m_warmStarting = true;
- this.m_continuousPhysics = true;
- this.m_subStepping = false;
- this.m_stepComplete = true;
- this.m_allowSleep = true;
- this.m_gravity = gravity;
- this.m_flags = b2World.e_clearForces;
- this.m_inv_dt0 = 0;
- this.p_step = new b2TimeStep();
- this.p_island = new b2Island()
-}
-
-function b2WorldQueryWrapper() {
- this.broadPhase = null;
- this.callback = null
-}
-b2WorldQueryWrapper.prototype = {
- QueryCallback: function (proxyId) {
- var proxy = this.broadPhase.GetUserData(proxyId);
- return this.callback.ReportFixture(proxy.fixture)
- }
-};
-
-function b2WorldRayCastWrapper() {
- this.broadPhase = null;
- this.callback = null
-}
-b2WorldRayCastWrapper.prototype = {
- RayCastCallback: function (input, proxyId) {
- var userData = this.broadPhase.GetUserData(proxyId);
- var proxy = userData;
- var fixture = proxy.fixture;
- var index = proxy.childIndex;
- var output = new b2RayCastOutput();
- var hit = fixture.RayCast(output, input, index);
- if (hit) {
- var fraction = output.fraction;
- var point = b2Vec2.Add(b2Vec2.Multiply((1 - fraction), input.p1), b2Vec2.Multiply(fraction, input.p2));
- return this.callback.ReportFixture(fixture, point, output.normal, fraction)
- }
- return input.maxFraction
- }
-};
-b2World.m_local_sweep_backupA = new b2Sweep();
-b2World.m_local_sweep_backupB = new b2Sweep();
-b2World.m_local_sweep_backupC = new b2Sweep();
-b2World.prototype = {
- Destroy: function () {
- var b = this.m_bodyList;
- while (b) {
- var bNext = b.m_next;
- var f = b.m_fixtureList;
- while (f) {
- var fNext = f.m_next;
- f.m_proxyCount = 0;
- f.Destroy();
- f = fNext
- }
- b = bNext
- }
- },
- SetDestructionListener: function (listener) {
- this.m_destructionListener = listener
- },
- SetContactFilter: function (filter) {
- this.m_contactManager.m_contactFilter = filter
- },
- SetContactListener: function (listener) {
- this.m_contactManager.m_contactListener = listener
- },
- SetDebugDraw: function (debugDraw) {
- this.g_debugDraw = debugDraw
- },
- CreateBody: function (def) {
- if (this.IsLocked()) {
- return null
- }
- var b = new b2Body(def, this);
- b.m_prev = null;
- b.m_next = this.m_bodyList;
- if (this.m_bodyList) {
- this.m_bodyList.m_prev = b
- }
- this.m_bodyList = b;
- ++this.m_bodyCount;
- return b
- },
- DestroyBody: function (b) {
- if (this.IsLocked()) {
- return
- }
- var je = b.m_jointList;
- while (je) {
- var je0 = je;
- je = je.next;
- if (this.m_destructionListener) {
- this.m_destructionListener.SayGoodbyeJoint(je0.joint)
- }
- this.DestroyJoint(je0.joint);
- b.m_jointList = je
- }
- b.m_jointList = null;
- var ce = b.m_contactList;
- while (ce) {
- var ce0 = ce;
- ce = ce.next;
- this.m_contactManager.Destroy(ce0.contact)
- }
- b.m_contactList = null;
- var f = b.m_fixtureList;
- while (f) {
- var f0 = f;
- f = f.m_next;
- if (this.m_destructionListener) {
- this.m_destructionListener.SayGoodbyeFixture(f0)
- }
- f0.DestroyProxies(this.m_contactManager.m_broadPhase);
- f0.Destroy();
- b.m_fixtureList = f;
- b.m_fixtureCount -= 1
- }
- b.m_fixtureList = null;
- b.m_fixtureCount = 0;
- if (b.m_prev) {
- b.m_prev.m_next = b.m_next
- }
- if (b.m_next) {
- b.m_next.m_prev = b.m_prev
- }
- if (b == this.m_bodyList) {
- this.m_bodyList = b.m_next
- }
- b.m_destroyed = true;
- --this.m_bodyCount
- },
- CreateJoint: function (def) {
- if (this.IsLocked()) {
- return null
- }
- var j = b2Joint.Create(def);
- j.m_prev = null;
- j.m_next = this.m_jointList;
- if (this.m_jointList) {
- this.m_jointList.m_prev = j
- }
- this.m_jointList = j;
- ++this.m_jointCount;
- j.m_edgeA.joint = j;
- j.m_edgeA.other = j.m_bodyB;
- j.m_edgeA.prev = null;
- j.m_edgeA.next = j.m_bodyA.m_jointList;
- if (j.m_bodyA.m_jointList) {
- j.m_bodyA.m_jointList.prev = j.m_edgeA
- }
- j.m_bodyA.m_jointList = j.m_edgeA;
- j.m_edgeB.joint = j;
- j.m_edgeB.other = j.m_bodyA;
- j.m_edgeB.prev = null;
- j.m_edgeB.next = j.m_bodyB.m_jointList;
- if (j.m_bodyB.m_jointList) {
- j.m_bodyB.m_jointList.prev = j.m_edgeB
- }
- j.m_bodyB.m_jointList = j.m_edgeB;
- var bodyA = def.bodyA;
- var bodyB = def.bodyB;
- if (def.collideConnected == false) {
- var edge = bodyB.GetContactList();
- while (edge) {
- if (edge.other == bodyA) {
- edge.contact.FlagForFiltering()
- }
- edge = edge.next
- }
- }
- return j
- },
- DestroyJoint: function (j) {
- if (this.IsLocked()) {
- return
- }
- var collideConnected = j.m_collideConnected;
- if (j.m_prev) {
- j.m_prev.m_next = j.m_next
- }
- if (j.m_next) {
- j.m_next.m_prev = j.m_prev
- }
- if (j == this.m_jointList) {
- this.m_jointList = j.m_next
- }
- var bodyA = j.m_bodyA;
- var bodyB = j.m_bodyB;
- bodyA.SetAwake(true);
- bodyB.SetAwake(true);
- if (j.m_edgeA.prev) {
- j.m_edgeA.prev.next = j.m_edgeA.next
- }
- if (j.m_edgeA.next) {
- j.m_edgeA.next.prev = j.m_edgeA.prev
- }
- if (j.m_edgeA == bodyA.m_jointList) {
- bodyA.m_jointList = j.m_edgeA.next
- }
- j.m_edgeA.prev = null;
- j.m_edgeA.next = null;
- if (j.m_edgeB.prev) {
- j.m_edgeB.prev.next = j.m_edgeB.next
- }
- if (j.m_edgeB.next) {
- j.m_edgeB.next.prev = j.m_edgeB.prev
- }
- if (j.m_edgeB == bodyB.m_jointList) {
- bodyB.m_jointList = j.m_edgeB.next
- }
- j.m_edgeB.prev = null;
- j.m_edgeB.next = null;
- b2Joint.Destroy(j);
- --this.m_jointCount;
- if (collideConnected == false) {
- var edge = bodyB.GetContactList();
- while (edge) {
- if (edge.other == bodyA) {
- edge.contact.FlagForFiltering()
- }
- edge = edge.next
- }
- }
- },
- Step: function (dt, velocityIterations, positionIterations) {
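- // One simulation step: refresh the broad phase for new fixtures, run narrow-phase collision, solve islands, then resolve continuous (TOI) collisions.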
- profile_world_step.start();
- if (this.m_flags & b2World.e_newFixture) {
- this.m_contactManager.FindNewContacts();
- this.m_flags &= ~b2World.e_newFixture
- }
- this.m_flags |= b2World.e_locked;
- this.p_step.dt = dt;
- this.p_step.velocityIterations = velocityIterations;
- this.p_step.positionIterations = positionIterations;
- if (dt > 0) {
- this.p_step.inv_dt = 1 / dt
- } else {
- this.p_step.inv_dt = 0
- }
- this.p_step.dtRatio = this.m_inv_dt0 * dt;
- this.p_step.warmStarting = this.m_warmStarting;
- profile_world_collide.start();
- this.m_contactManager.Collide();
- profile_world_collide.stop();
- if (this.m_stepComplete && this.p_step.dt > 0) {
- profile_world_solve.start();
- this.Solve(this.p_step);
- profile_world_solve.stop()
- }
- if (this.m_continuousPhysics && this.p_step.dt > 0) {
- profile_world_solveTOI.start();
- this.SolveTOI(this.p_step);
- profile_world_solveTOI.stop()
- }
- if (this.p_step.dt > 0) {
- this.m_inv_dt0 = this.p_step.inv_dt
- }
- if (this.m_flags & b2World.e_clearForces) {
- this.ClearForces()
- }
- this.m_flags &= ~b2World.e_locked;
- profile_world_step.stop()
- },
- ClearForces: function () {
- for (var body = this.m_bodyList; body; body = body.GetNext()) {
- body.m_force.x = body.m_force.y = 0;
- body.m_torque = 0
- }
- },
- DrawDebugData: function () {
- if (this.g_debugDraw == null) {
- return
- }
- var flags = this.g_debugDraw.GetFlags();
- if (flags & b2Draw.e_shapeBit) {
- for (var b = this.m_bodyList; b; b = b.GetNext()) {
- var xf = b.GetTransform();
- for (var f = b.GetFixtureList(); f; f = f.GetNext()) {
- if (b.IsActive() == false) {
- this.DrawShape(f, xf, new b2Color(0.5, 0.5, 0.3))
- } else {
- if (b.GetType() == b2Body.b2_staticBody) {
- this.DrawShape(f, xf, new b2Color(0.5, 0.9, 0.5))
- } else {
- if (b.GetType() == b2Body.b2_kinematicBody) {
- this.DrawShape(f, xf, new b2Color(0.5, 0.5, 0.9))
- } else {
- if (b.IsAwake() == false) {
- this.DrawShape(f, xf, new b2Color(0.6, 0.6, 0.6))
- } else {
- this.DrawShape(f, xf, new b2Color(0.9, 0.7, 0.7))
- }
- }
- }
- }
- }
- }
- }
- if (flags & b2Draw.e_jointBit) {
- for (var j = this.m_jointList; j; j = j.GetNext()) {
- this.DrawJoint(j)
- }
- }
- if (flags & b2Draw.e_pairBit) {
- var color = new b2Color(0.3, 0.9, 0.9);
- for (var c = this.m_contactManager.m_contactList; c; c = c.GetNext()) {
- var fixtureA = c.GetFixtureA();
- var fixtureB = c.GetFixtureB();
- var cA = fixtureA.GetAABB(c.GetChildIndexA()).GetCenter();
- var cB = fixtureB.GetAABB(c.GetChildIndexB()).GetCenter();
- this.g_debugDraw.DrawSegment(cA, cB, color)
- }
- }
- if (flags & b2Draw.e_aabbBit) {
- var color = new b2Color(0.9, 0.3, 0.9);
- var bp = this.m_contactManager.m_broadPhase;
- for (var b = this.m_bodyList; b; b = b.GetNext()) {
- if (b.IsActive() == false) {
- continue
- }
- for (var f = b.GetFixtureList(); f; f = f.GetNext()) {
- for (var i = 0; i < f.m_proxyCount; ++i) {
- var proxy = f.m_proxies[i];
- var aabb = bp.GetFatAABB(proxy.proxyId);
- var vs = [];
- vs[0] = new b2Vec2(aabb.lowerBound.x, aabb.lowerBound.y);
- vs[1] = new b2Vec2(aabb.upperBound.x, aabb.lowerBound.y);
- vs[2] = new b2Vec2(aabb.upperBound.x, aabb.upperBound.y);
- vs[3] = new b2Vec2(aabb.lowerBound.x, aabb.upperBound.y);
- this.g_debugDraw.DrawPolygon(vs, 4, color)
- }
- }
- }
- }
- if (flags & b2Draw.e_centerOfMassBit) {
- for (var b = this.m_bodyList; b; b = b.GetNext()) {
- var xf = b.GetTransform().Clone();
- xf.p = b.GetWorldCenter();
- this.g_debugDraw.DrawTransform(xf)
- }
- }
- },
- QueryAABB: function (callback, aabb) {
- var wrapper = new b2WorldQueryWrapper();
- wrapper.broadPhase = this.m_contactManager.m_broadPhase;
- wrapper.callback = callback;
- this.m_contactManager.m_broadPhase.Query(wrapper, aabb)
- },
- RayCast: function (callback, point1, point2) {
- var wrapper = new b2WorldRayCastWrapper();
- wrapper.broadPhase = this.m_contactManager.m_broadPhase;
- wrapper.callback = callback;
- var input = new b2RayCastInput();
- input.maxFraction = 1;
- input.p1 = point1;
- input.p2 = point2;
- this.m_contactManager.m_broadPhase.RayCast(wrapper, input)
- },
- GetBodyList: function () {
- return this.m_bodyList
- },
- GetJointList: function () {
- return this.m_jointList
- },
- GetContactList: function () {
- return this.m_contactManager.m_contactList
- },
- SetAllowSleeping: function (flag) {
- if (flag == this.m_allowSleep) {
- return
- }
- this.m_allowSleep = flag;
- if (this.m_allowSleep == false) {
- for (var b = this.m_bodyList; b; b = b.m_next) {
- b.SetAwake(true)
- }
- }
- },
- GetAllowSleeping: function () {
- return this.m_allowSleep
- },
- SetWarmStarting: function (flag) {
- this.m_warmStarting = flag
- },
- GetWarmStarting: function () {
- return this.m_warmStarting
- },
- SetContinuousPhysics: function (flag) {
- this.m_continuousPhysics = flag
- },
- GetContinuousPhysics: function () {
- return this.m_continuousPhysics
- },
- SetSubStepping: function (flag) {
- this.m_subStepping = flag
- },
- GetSubStepping: function () {
- return this.m_subStepping
- },
- GetProxyCount: function () {
- return this.m_contactManager.m_broadPhase.GetProxyCount()
- },
- GetBodyCount: function () {
- return this.m_bodyCount
- },
- GetJointCount: function () {
- return this.m_jointCount
- },
- GetContactCount: function () {
- return this.m_contactManager.m_contactCount
- },
- GetTreeHeight: function () {
- return this.m_contactManager.m_broadPhase.GetTreeHeight()
- },
- GetTreeBalance: function () {
- return this.m_contactManager.m_broadPhase.GetTreeBalance()
- },
- GetTreeQuality: function () {
- return this.m_contactManager.m_broadPhase.GetTreeQuality()
- },
- SetGravity: function (gravity) {
- this.m_gravity = gravity
- },
- GetGravity: function () {
- return this.m_gravity
- },
- IsLocked: function () {
- return (this.m_flags & b2World.e_locked) == b2World.e_locked
- },
- SetAutoClearForces: function (flag) {
- if (flag) {
- this.m_flags |= b2World.e_clearForces
- } else {
- this.m_flags &= ~b2World.e_clearForces
- }
- },
- GetAutoClearForces: function () {
- return (this.m_flags & b2World.e_clearForces) == b2World.e_clearForces
- },
- ShiftOrigin: function (newOrigin) {
- if ((this.m_flags & b2World.e_locked) == b2World.e_locked) {
- return
- }
- for (var b = this.m_bodyList; b; b = b.m_next) {
- b.m_xf.p.Subtract(newOrigin);
- b.m_sweep.c0.Subtract(newOrigin);
- b.m_sweep.c.Subtract(newOrigin)
- }
- for (var j = this.m_jointList; j; j = j.m_next) {
- j.ShiftOrigin(newOrigin)
- }
- this.m_contactManager.m_broadPhase.ShiftOrigin(newOrigin)
- },
- GetContactManager: function () {
- return this.m_contactManager
- },
- Solve: function (step) {
- this.p_island.Initialize(this.m_bodyCount, this.m_contactManager.m_contactCount, this.m_jointCount, this.m_contactManager.m_contactListener);
- for (var b = this.m_bodyList; b; b = b.m_next) {
- b.m_flags &= ~b2Body.e_islandFlag
- }
- for (var c = this.m_contactManager.m_contactList; c; c = c.m_next) {
- c.m_flags &= ~b2Contact.e_islandFlag
- }
- for (var j = this.m_jointList; j; j = j.m_next) {
- j.m_islandFlag = false
- }
- var stackSize = this.m_bodyCount;
- var stack = new Array(stackSize);
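- // Depth-first flood fill: build islands of awake dynamic bodies connected by touching contacts and joints, then solve each island independently.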
- for (var seed = this.m_bodyList; seed; seed = seed.m_next) {
- if (seed.m_flags & b2Body.e_islandFlag) {
- continue
- }
- if (seed.IsAwake() == false || seed.IsActive() == false) {
- continue
- }
- if (seed.GetType() == b2Body.b2_staticBody) {
- continue
- }
- this.p_island.Clear();
- var stackCount = 0;
- stack[stackCount++] = seed;
- seed.m_flags |= b2Body.e_islandFlag;
- while (stackCount > 0) {
- var b = stack[--stackCount];
- this.p_island.AddBody(b);
- b.SetAwake(true);
- if (b.GetType() == b2Body.b2_staticBody) {
- continue
- }
- for (var ce = b.m_contactList; ce; ce = ce.next) {
- var contact = ce.contact;
- if (contact.m_flags & b2Contact.e_islandFlag) {
- continue
- }
- if (contact.IsEnabled() == false || contact.IsTouching() == false) {
- continue
- }
- var sensorA = contact.m_fixtureA.m_isSensor;
- var sensorB = contact.m_fixtureB.m_isSensor;
- if (sensorA || sensorB) {
- continue
- }
- this.p_island.AddContact(contact);
- contact.m_flags |= b2Contact.e_islandFlag;
- var other = ce.other;
- if (other.m_flags & b2Body.e_islandFlag) {
- continue
- }
- stack[stackCount++] = other;
- other.m_flags |= b2Body.e_islandFlag
- }
- for (var je = b.m_jointList; je; je = je.next) {
- if (je.joint.m_islandFlag == true) {
- continue
- }
- var other = je.other;
- if (other.IsActive() == false) {
- continue
- }
- this.p_island.AddJoint(je.joint);
- je.joint.m_islandFlag = true;
- if (other.m_flags & b2Body.e_islandFlag) {
- continue
- }
- stack[stackCount++] = other;
- other.m_flags |= b2Body.e_islandFlag
- }
- }
- this.p_island.Solve(step, this.m_gravity, this.m_allowSleep);
- for (var i = 0; i < this.p_island.m_bodyCount; ++i) {
- var b = this.p_island.m_bodies[i];
- if (b.GetType() == b2Body.b2_staticBody) {
- b.m_flags &= ~b2Body.e_islandFlag
- }
- }
- }
- profile_world_broadphase.start();
- for (var b = this.m_bodyList; b; b = b.GetNext()) {
- if ((b.m_flags & b2Body.e_islandFlag) == 0) {
- continue
- }
- if (b.GetType() == b2Body.b2_staticBody) {
- continue
- }
- b.SynchronizeFixtures()
- }
- this.m_contactManager.FindNewContacts();
- profile_world_broadphase.stop()
- },
- SolveTOI: function (step) {
- this.p_island.Initialize(2 * b2_maxTOIContacts, b2_maxTOIContacts, 0, this.m_contactManager.m_contactListener);
- if (this.m_stepComplete) {
- for (var b = this.m_bodyList; b; b = b.m_next) {
- b.m_flags &= ~b2Body.e_islandFlag;
- b.m_sweep.alpha0 = 0
- }
- for (var c = this.m_contactManager.m_contactList; c; c = c.m_next) {
- c.m_flags &= ~(b2Contact.e_toiFlag | b2Contact.e_islandFlag);
- c.m_toiCount = 0;
- c.m_toi = 1
- }
- }
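- // Repeatedly find the contact with the minimum time of impact, advance its two bodies to that time, and solve a small sub-step island.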
- for (;;) {
- var minContact = null;
- var minAlpha = 1;
- for (var c = this.m_contactManager.m_contactList; c; c = c.m_next) {
- if (c.IsEnabled() == false) {
- continue
- }
- if (c.m_toiCount > b2_maxSubSteps) {
- continue
- }
- var alpha = 1;
- if (c.m_flags & b2Contact.e_toiFlag) {
- alpha = c.m_toi
- } else {
- var fA = c.GetFixtureA();
- var fB = c.GetFixtureB();
- if (fA.IsSensor() || fB.IsSensor()) {
- continue
- }
- var bA = fA.GetBody();
- var bB = fB.GetBody();
- var typeA = bA.m_type;
- var typeB = bB.m_type;
- var activeA = bA.IsAwake() && typeA != b2Body.b2_staticBody;
- var activeB = bB.IsAwake() && typeB != b2Body.b2_staticBody;
- if (activeA == false && activeB == false) {
- continue
- }
- var collideA = bA.IsBullet() || typeA != b2Body.b2_dynamicBody;
- var collideB = bB.IsBullet() || typeB != b2Body.b2_dynamicBody;
- if (collideA == false && collideB == false) {
- continue
- }
- var alpha0 = bA.m_sweep.alpha0;
- if (bA.m_sweep.alpha0 < bB.m_sweep.alpha0) {
- alpha0 = bB.m_sweep.alpha0;
- bA.m_sweep.Advance(alpha0)
- } else {
- if (bB.m_sweep.alpha0 < bA.m_sweep.alpha0) {
- alpha0 = bA.m_sweep.alpha0;
- bB.m_sweep.Advance(alpha0)
- }
- }
- var indexA = c.GetChildIndexA();
- var indexB = c.GetChildIndexB();
- var input = new b2TOIInput();
- input.proxyA.Set(fA.GetShape(), indexA);
- input.proxyB.Set(fB.GetShape(), indexB);
- input.sweepA.Assign(bA.m_sweep);
- input.sweepB.Assign(bB.m_sweep);
- input.tMax = 1;
- var output = new b2TOIOutput();
- b2TimeOfImpact(output, input);
- var beta = output.t;
- if (output.state == b2TOIOutput.e_touching) {
- alpha = b2Min(alpha0 + (1 - alpha0) * beta, 1)
- } else {
- alpha = 1
- }
- c.m_toi = alpha;
- c.m_flags |= b2Contact.e_toiFlag
- }
- if (alpha < minAlpha) {
- minContact = c;
- minAlpha = alpha
- }
- }
- if (minContact == null || 1 - 10 * b2_epsilon < minAlpha) {
- this.m_stepComplete = true;
- break
- }
- var fA = minContact.GetFixtureA();
- var fB = minContact.GetFixtureB();
- var bA = fA.GetBody();
- var bB = fB.GetBody();
- b2World.m_local_sweep_backupA.Assign(bA.m_sweep);
- b2World.m_local_sweep_backupB.Assign(bB.m_sweep);
- bA.Advance(minAlpha);
- bB.Advance(minAlpha);
- minContact.Update(this.m_contactManager.m_contactListener);
- minContact.m_flags &= ~b2Contact.e_toiFlag;
- ++minContact.m_toiCount;
- if (minContact.IsEnabled() == false || minContact.IsTouching() == false) {
- minContact.SetEnabled(false);
- bA.m_sweep.Assign(b2World.m_local_sweep_backupA);
- bB.m_sweep.Assign(b2World.m_local_sweep_backupB);
- bA.SynchronizeTransform();
- bB.SynchronizeTransform();
- continue
- }
- bA.SetAwake(true);
- bB.SetAwake(true);
- this.p_island.Clear();
- this.p_island.AddBody(bA);
- this.p_island.AddBody(bB);
- this.p_island.AddContact(minContact);
- bA.m_flags |= b2Body.e_islandFlag;
- bB.m_flags |= b2Body.e_islandFlag;
- minContact.m_flags |= b2Contact.e_islandFlag;
- var bodies = [bA, bB];
- for (var i = 0; i < 2; ++i) {
- var body = bodies[i];
- if (body.m_type == b2Body.b2_dynamicBody) {
- for (var ce = body.m_contactList; ce; ce = ce.next) {
- if (this.p_island.m_bodyCount == this.p_island.m_bodyCapacity) {
- break
- }
- if (this.p_island.m_contactCount == this.p_island.m_contactCapacity) {
- break
- }
- var contact = ce.contact;
- if (contact.m_flags & b2Contact.e_islandFlag) {
- continue
- }
- var other = ce.other;
- if (other.m_type == b2Body.b2_dynamicBody && body.IsBullet() == false && other.IsBullet() == false) {
- continue
- }
- var sensorA = contact.m_fixtureA.m_isSensor;
- var sensorB = contact.m_fixtureB.m_isSensor;
- if (sensorA || sensorB) {
- continue
- }
- b2World.m_local_sweep_backupC.Assign(other.m_sweep);
- if ((other.m_flags & b2Body.e_islandFlag) == 0) {
- other.Advance(minAlpha)
- }
- contact.Update(this.m_contactManager.m_contactListener);
- if (contact.IsEnabled() == false) {
- other.m_sweep.Assign(b2World.m_local_sweep_backupC);
- other.SynchronizeTransform();
- continue
- }
- if (contact.IsTouching() == false) {
- other.m_sweep.Assign(b2World.m_local_sweep_backupC);
- other.SynchronizeTransform();
- continue
- }
- contact.m_flags |= b2Contact.e_islandFlag;
- this.p_island.AddContact(contact);
- if (other.m_flags & b2Body.e_islandFlag) {
- continue
- }
- other.m_flags |= b2Body.e_islandFlag;
- if (other.m_type != b2Body.b2_staticBody) {
- other.SetAwake(true)
- }
- this.p_island.AddBody(other)
- }
- }
- }
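- // Solve the rest of the time step for this island: fixed 20 position iterations and no warm starting.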
- var subStep = new b2TimeStep();
- subStep.dt = (1 - minAlpha) * step.dt;
- subStep.inv_dt = 1 / subStep.dt;
- subStep.dtRatio = 1;
- subStep.positionIterations = 20;
- subStep.velocityIterations = step.velocityIterations;
- subStep.warmStarting = false;
- this.p_island.SolveTOI(subStep, bA.m_islandIndex, bB.m_islandIndex);
- for (var i = 0; i < this.p_island.m_bodyCount; ++i) {
- var body = this.p_island.m_bodies[i];
- body.m_flags &= ~b2Body.e_islandFlag;
- if (body.m_type != b2Body.b2_dynamicBody) {
- continue
- }
- body.SynchronizeFixtures();
- for (var ce = body.m_contactList; ce; ce = ce.next) {
- ce.contact.m_flags &= ~(b2Contact.e_toiFlag | b2Contact.e_islandFlag)
- }
- }
- this.m_contactManager.FindNewContacts();
- if (this.m_subStepping) {
- this.m_stepComplete = false;
- break
- }
- }
- },
- DrawJoint: function (joint) {
- var bodyA = joint.GetBodyA();
- var bodyB = joint.GetBodyB();
- var xf1 = bodyA.GetTransform();
- var xf2 = bodyB.GetTransform();
- var x1 = xf1.p;
- var x2 = xf2.p;
- var p1 = joint.GetAnchorA();
- var p2 = joint.GetAnchorB();
- var color = new b2Color(0.5, 0.8, 0.8);
- switch (joint.GetType()) {
- case b2Joint.e_distanceJoint:
- this.g_debugDraw.DrawSegment(p1, p2, color);
- break;
- case b2Joint.e_pulleyJoint:
- var pulley = joint;
- var s1 = pulley.GetGroundAnchorA();
- var s2 = pulley.GetGroundAnchorB();
- this.g_debugDraw.DrawSegment(s1, p1, color);
- this.g_debugDraw.DrawSegment(s2, p2, color);
- this.g_debugDraw.DrawSegment(s1, s2, color);
- break;
- case b2Joint.e_mouseJoint:
- break;
- case b2Joint.e_motorJoint:
- this.g_debugDraw.DrawPoint(joint.GetLinearOffset(), 5, color);
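- // Fall through: the default case below also draws the anchor segments for the motor joint.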
- default:
- this.g_debugDraw.DrawSegment(x1, p1, color);
- this.g_debugDraw.DrawSegment(p1, p2, color);
- this.g_debugDraw.DrawSegment(x2, p2, color)
- }
- },
- DrawShape: function (fixture, xf, color) {
- switch (fixture.GetType()) {
- case b2Shape.e_circle:
- var circle = fixture.GetShape();
- var center = b2Mul_t_v2(xf, circle.m_p);
- var radius = circle.m_radius;
- var axis = b2Mul_r_v2(xf.q, new b2Vec2(1, 0));
- this.g_debugDraw.DrawSolidCircle(center, radius, axis, color);
- break;
- case b2Shape.e_edge:
- var edge = fixture.GetShape();
- var v1 = b2Mul_t_v2(xf, edge.m_vertex1);
- var v2 = b2Mul_t_v2(xf, edge.m_vertex2);
- this.g_debugDraw.DrawSegment(v1, v2, color);
- break;
- case b2Shape.e_chain:
- var chain = fixture.GetShape();
- var count = chain.m_count;
- var vertices = chain.m_vertices;
- var v1 = b2Mul_t_v2(xf, vertices[0]);
- for (var i = 1; i < count; ++i) {
- var v2 = b2Mul_t_v2(xf, vertices[i]);
- this.g_debugDraw.DrawSegment(v1, v2, color);
- v1 = v2
- }
- break;
- case b2Shape.e_polygon:
- var poly = fixture.GetShape();
- var vertexCount = poly.m_count;
- var vertices = new Array(b2_maxPolygonVertices);
- for (var i = 0; i < vertexCount; ++i) {
- vertices[i] = b2Mul_t_v2(xf, poly.m_vertices[i])
- }
- this.g_debugDraw.DrawSolidPolygon(vertices, vertexCount, color);
- break;
- default:
- break
- }
- }
-};
-b2World.e_newFixture = 1;
-b2World.e_locked = 2;
-b2World.e_clearForces = 4;
-"use strict";
-
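-// Friction between two fixtures is mixed with a geometric mean, so a
-// low-friction surface dominates the pair; restitution takes the larger of
-// the two values, so a bouncy surface stays bouncy against an inelastic one.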
-function b2MixFriction(friction1, friction2) {
- return b2Sqrt(friction1 * friction2)
-}
-
-function b2MixRestitution(restitution1, restitution2) {
- return restitution1 > restitution2 ? restitution1 : restitution2
-}
-
-function b2ContactRegister() {
- this.fcn = null;
- this.primary = false
-}
-
-function b2ContactEdge() {
- this.other = null;
- this.contact = null;
- this.prev = null;
- this.next = null
-}
-
-function b2Contact() {
- this.m_nodeA = new b2ContactEdge();
- this.m_nodeB = new b2ContactEdge();
- this.m_manifold = new b2Manifold()
-}
-b2Contact.m_local_tempManifold = new b2Manifold();
-b2Contact.prototype = {
- Create: function (fA, indexA, fB, indexB) {
- this.m_toi = 0;
- this.m_flags = b2Contact.e_enabledFlag;
- this.m_fixtureA = fA || null;
- this.m_fixtureB = fB || null;
- this.m_indexA = indexA || 0;
- this.m_indexB = indexB || 0;
- this.m_manifold.pointCount = 0;
- this.m_prev = null;
- this.m_next = null;
- this.m_nodeA.contact = null;
- this.m_nodeA.prev = null;
- this.m_nodeA.next = null;
- this.m_nodeA.other = null;
- this.m_nodeB.contact = null;
- this.m_nodeB.prev = null;
- this.m_nodeB.next = null;
- this.m_nodeB.other = null;
- this.m_toiCount = 0;
- if (fA) {
- this.m_friction = b2MixFriction(this.m_fixtureA.m_friction, this.m_fixtureB.m_friction);
- this.m_restitution = b2MixRestitution(this.m_fixtureA.m_restitution, this.m_fixtureB.m_restitution)
- } else {
- this.m_friction = 0;
- this.m_restitution = 0
- }
- this.m_tangentSpeed = 0
- },
- GetManifold: function () {
- return this.m_manifold
- },
- GetWorldManifold: function (worldManifold) {
- var bodyA = this.m_fixtureA.GetBody();
- var bodyB = this.m_fixtureB.GetBody();
- var shapeA = this.m_fixtureA.GetShape();
- var shapeB = this.m_fixtureB.GetShape();
- worldManifold.Initialize(this.m_manifold, bodyA.GetTransform(), shapeA.m_radius, bodyB.GetTransform(), shapeB.m_radius)
- },
- IsTouching: function () {
- return (this.m_flags & b2Contact.e_touchingFlag) == b2Contact.e_touchingFlag
- },
- SetEnabled: function (flag) {
- if (flag) {
- this.m_flags |= b2Contact.e_enabledFlag
- } else {
- this.m_flags &= ~b2Contact.e_enabledFlag
- }
- },
- IsEnabled: function () {
- return (this.m_flags & b2Contact.e_enabledFlag) == b2Contact.e_enabledFlag
- },
- GetNext: function () {
- return this.m_next
- },
- GetFixtureA: function () {
- return this.m_fixtureA
- },
- GetChildIndexA: function () {
- return this.m_indexA
- },
- GetFixtureB: function () {
- return this.m_fixtureB
- },
- GetChildIndexB: function () {
- return this.m_indexB
- },
- SetFriction: function (friction) {
- this.m_friction = friction
- },
- GetFriction: function () {
- return this.m_friction
- },
- ResetFriction: function () {
- this.m_friction = b2MixFriction(this.m_fixtureA.m_friction, this.m_fixtureB.m_friction)
- },
- SetRestitution: function (restitution) {
- this.m_restitution = restitution
- },
- GetRestitution: function () {
- return this.m_restitution
- },
- ResetRestitution: function () {
- this.m_restitution = b2MixRestitution(this.m_fixtureA.m_restitution, this.m_fixtureB.m_restitution)
- },
- SetTangentSpeed: function (speed) {
- this.m_tangentSpeed = speed
- },
- GetTangentSpeed: function () {
- return this.m_tangentSpeed
- },
-    Evaluate: function (manifold, xfA, xfB) {}, // abstract: overridden per shape pair below
- FlagForFiltering: function () {
- this.m_flags |= b2Contact.e_filterFlag
- },
- m_oldManifold: null,
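-    // Narrow-phase update: re-evaluates the manifold, carries accumulated
-    // impulses over from matching points of the old manifold (warm starting),
-    // and fires BeginContact/EndContact/PreSolve on the listener.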
- Update: function (listener) {
- b2Contact.m_local_tempManifold.Assign(this.m_manifold);
- this.m_flags |= b2Contact.e_enabledFlag;
- var touching = false;
- var wasTouching = (this.m_flags & b2Contact.e_touchingFlag) == b2Contact.e_touchingFlag;
- var sensorA = this.m_fixtureA.IsSensor();
- var sensorB = this.m_fixtureB.IsSensor();
- var sensor = sensorA || sensorB;
- var bodyA = this.m_fixtureA.GetBody();
- var bodyB = this.m_fixtureB.GetBody();
- var xfA = bodyA.GetTransform();
- var xfB = bodyB.GetTransform();
- if (sensor) {
- var shapeA = this.m_fixtureA.GetShape();
- var shapeB = this.m_fixtureB.GetShape();
- touching = b2TestShapeOverlap(shapeA, this.m_indexA, shapeB, this.m_indexB, xfA, xfB);
- this.m_manifold.pointCount = 0
- } else {
- this.Evaluate(this.m_manifold, xfA, xfB);
- touching = this.m_manifold.pointCount > 0;
- for (var i = 0; i < this.m_manifold.pointCount; ++i) {
- var mp2 = this.m_manifold.points[i];
- mp2.normalImpulse = 0;
- mp2.tangentImpulse = 0;
- var id2 = mp2.id;
- for (var j = 0; j < b2Contact.m_local_tempManifold.pointCount; ++j) {
- var mp1 = b2Contact.m_local_tempManifold.points[j];
- if (mp1.id.Get() == id2.Get()) {
- mp2.normalImpulse = mp1.normalImpulse;
- mp2.tangentImpulse = mp1.tangentImpulse;
- break
- }
- }
- }
- if (touching != wasTouching) {
- bodyA.SetAwake(true);
- bodyB.SetAwake(true)
- }
- }
- if (touching) {
- this.m_flags |= b2Contact.e_touchingFlag
- } else {
- this.m_flags &= ~b2Contact.e_touchingFlag
- }
- if (wasTouching == false && touching == true && listener) {
- listener.BeginContact(this)
- }
- if (wasTouching == true && touching == false && listener) {
- listener.EndContact(this)
- }
- if (sensor == false && touching && listener) {
- listener.PreSolve(this, b2Contact.m_local_tempManifold)
- }
- }
-};
-b2Contact.e_islandFlag = 1;
-b2Contact.e_touchingFlag = 2;
-b2Contact.e_enabledFlag = 4;
-b2Contact.e_filterFlag = 8;
-b2Contact.e_bulletHitFlag = 16;
-b2Contact.e_toiFlag = 32;
-
-function b2CircleContact() {
- this.parent.call(this)
-}
-b2CircleContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- b2CollideCircles(manifold, this.m_fixtureA.GetShape(), xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, unused1, fixtureB, unused2) {
- this.parent.prototype.Create.call(this, fixtureA, 0, fixtureB, 0)
- }
-};
-b2CircleContact._extend(b2Contact);
-var _local_temp_edgeShape = new b2EdgeShape();
-
-function b2ChainAndCircleContact() {
- this.parent.call(this)
-}
-b2ChainAndCircleContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- var chain = this.m_fixtureA.GetShape();
- chain.GetChildEdge(_local_temp_edgeShape, this.m_indexA);
- b2CollideEdgeAndCircle(manifold, _local_temp_edgeShape, xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, indexA, fixtureB, indexB)
- }
-};
-b2ChainAndCircleContact._extend(b2Contact);
-
-function b2ChainAndPolygonContact() {
- this.parent.call(this)
-}
-b2ChainAndPolygonContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- var chain = this.m_fixtureA.GetShape();
- chain.GetChildEdge(_local_temp_edgeShape, this.m_indexA);
- b2CollideEdgeAndPolygon(manifold, _local_temp_edgeShape, xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, indexA, fixtureB, indexB)
- }
-};
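-// Note: the static Create helpers below appear to be dead code; b2Contact.Create
-// obtains instances from the per-type pool via RetrieveGarbage and then calls
-// the instance-level Create with the proper fixture and child-index arguments.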
-b2ChainAndPolygonContact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- return new b2ChainAndPolygonContact(fixtureA, indexA, fixtureB, indexB)
-};
-b2ChainAndPolygonContact._extend(b2Contact);
-
-function b2EdgeAndCircleContact() {
- this.parent.call(this)
-}
-b2EdgeAndCircleContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- b2CollideEdgeAndCircle(manifold, this.m_fixtureA.GetShape(), xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, 0, fixtureB, 0)
- }
-};
-b2EdgeAndCircleContact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- return new b2EdgeAndCircleContact(fixtureA, fixtureB)
-};
-b2EdgeAndCircleContact._extend(b2Contact);
-
-function b2EdgeAndPolygonContact() {
- this.parent.call(this)
-}
-b2EdgeAndPolygonContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- b2CollideEdgeAndPolygon(manifold, this.m_fixtureA.GetShape(), xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, 0, fixtureB, 0)
- }
-};
-b2EdgeAndPolygonContact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- return new b2EdgeAndPolygonContact(fixtureA, fixtureB)
-};
-b2EdgeAndPolygonContact._extend(b2Contact);
-
-function b2PolygonAndCircleContact() {
- this.parent.call(this)
-}
-b2PolygonAndCircleContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- b2CollidePolygonAndCircle(manifold, this.m_fixtureA.GetShape(), xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, 0, fixtureB, 0)
- }
-};
-b2PolygonAndCircleContact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- return new b2PolygonAndCircleContact(fixtureA, fixtureB)
-};
-b2PolygonAndCircleContact._extend(b2Contact);
-
-function b2PolygonContact() {
- this.parent.call(this)
-}
-b2PolygonContact.prototype = {
- Evaluate: function (manifold, xfA, xfB) {
- b2CollidePolygons(manifold, this.m_fixtureA.GetShape(), xfA, this.m_fixtureB.GetShape(), xfB)
- },
- Create: function (fixtureA, indexA, fixtureB, indexB) {
- this.parent.prototype.Create.call(this, fixtureA, 0, fixtureB, 0)
- }
-};
-b2PolygonContact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- return new b2PolygonContact(fixtureA, fixtureB)
-};
-b2PolygonContact._extend(b2Contact);
-b2Contact.AddType = function (fcn, type1, type2) {
- if (!b2Contact.s_registers[type1]) {
- b2Contact.s_registers[type1] = []
- }
- b2Contact.s_registers[type1][type2] = new b2ContactRegister();
- b2Contact.s_registers[type1][type2].fcn = fcn;
- b2Contact.s_registers[type1][type2].primary = true;
- if (type1 != type2) {
- if (!b2Contact.s_registers[type2]) {
- b2Contact.s_registers[type2] = []
- }
- b2Contact.s_registers[type2][type1] = new b2ContactRegister();
- b2Contact.s_registers[type2][type1].fcn = fcn;
- b2Contact.s_registers[type2][type1].primary = false
- }
- fcn.garbage = [];
- fcn.alloc = 2
-};
-b2Contact.InitializeRegisters = function () {
- b2Contact.AddType(b2CircleContact, b2Shape.e_circle, b2Shape.e_circle);
- b2Contact.AddType(b2PolygonAndCircleContact, b2Shape.e_polygon, b2Shape.e_circle);
- b2Contact.AddType(b2PolygonContact, b2Shape.e_polygon, b2Shape.e_polygon);
- b2Contact.AddType(b2EdgeAndCircleContact, b2Shape.e_edge, b2Shape.e_circle);
- b2Contact.AddType(b2EdgeAndPolygonContact, b2Shape.e_edge, b2Shape.e_polygon);
- b2Contact.AddType(b2ChainAndCircleContact, b2Shape.e_chain, b2Shape.e_circle);
- b2Contact.AddType(b2ChainAndPolygonContact, b2Shape.e_chain, b2Shape.e_polygon)
-};
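-// Contact objects are pooled per contact type to avoid per-step allocation:
-// b2Contact.Destroy pushes them onto fcn.garbage, and RetrieveGarbage recycles
-// them, refilling and growing the pool whenever it runs dry.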
-b2Contact.RetrieveGarbage = function (fcn) {
-    var contact = fcn.garbage.pop();
-    if (contact) {
-        return contact
-    }
- for (var i = 0; i < fcn.alloc - 1; ++i) {
- fcn.garbage.push(new fcn())
- }
- fcn.alloc += 32;
- return new fcn()
-};
-b2Contact.Create = function (fixtureA, indexA, fixtureB, indexB) {
- if (b2Contact.s_initialized == false) {
- b2Contact.InitializeRegisters();
- b2Contact.s_initialized = true
- }
- var type1 = fixtureA.GetType();
- var type2 = fixtureB.GetType();
- var fcn = b2Contact.s_registers[type1][type2].fcn;
- if (fcn) {
- var contact = b2Contact.RetrieveGarbage(fcn);
- if (b2Contact.s_registers[type1][type2].primary) {
- contact.Create(fixtureA, indexA, fixtureB, indexB)
- } else {
- contact.Create(fixtureB, indexB, fixtureA, indexA)
- }
- return contact
- }
- return null
-};
-b2Contact.Destroy = function (contact) {
- var fixtureA = contact.m_fixtureA;
- var fixtureB = contact.m_fixtureB;
- if (contact.m_manifold.pointCount > 0 && fixtureA.IsSensor() == false && fixtureB.IsSensor() == false) {
- fixtureA.GetBody().SetAwake(true);
- fixtureB.GetBody().SetAwake(true)
- }
- var typeA = fixtureA.GetType();
- var typeB = fixtureB.GetType();
- b2Contact.s_registers[typeA][typeB].fcn.garbage.push(contact)
-};
-b2Contact.s_registers = [];
-b2Contact.s_initialized = false;
-"use strict";
-var b2_defaultFilter = new b2ContactFilter();
-var b2_defaultListener = new b2ContactListener();
-
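-// The contact manager owns the broad phase and the global contact list.
-// AddPair is the broad-phase callback, invoked when two fixture proxies begin
-// to overlap; it creates a contact unless one already exists or filtering
-// rejects the pair.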
-function b2ContactManager() {
- this.m_broadPhase = new b2BroadPhase();
- this.m_contactList = null;
- this.m_contactCount = 0;
- this.m_contactFilter = b2_defaultFilter;
- this.m_contactListener = b2_defaultListener
-}
-b2ContactManager.prototype = {
- AddPair: function (proxyUserDataA, proxyUserDataB) {
- var proxyA = proxyUserDataA;
- var proxyB = proxyUserDataB;
- var fixtureA = proxyA.fixture;
- var fixtureB = proxyB.fixture;
- var indexA = proxyA.childIndex;
- var indexB = proxyB.childIndex;
- var bodyA = fixtureA.GetBody();
- var bodyB = fixtureB.GetBody();
- if (bodyA == bodyB) {
- return
- }
- var edge = bodyB.GetContactList();
- while (edge) {
- if (edge.other == bodyA) {
- var fA = edge.contact.GetFixtureA();
- var fB = edge.contact.GetFixtureB();
- var iA = edge.contact.GetChildIndexA();
- var iB = edge.contact.GetChildIndexB();
- if (fA == fixtureA && fB == fixtureB && iA == indexA && iB == indexB) {
- return
- }
- if (fA == fixtureB && fB == fixtureA && iA == indexB && iB == indexA) {
- return
- }
- }
- edge = edge.next
- }
- if (bodyB.ShouldCollide(bodyA) == false) {
- return
- }
- if (this.m_contactFilter && this.m_contactFilter.ShouldCollide(fixtureA, fixtureB) == false) {
- return
- }
- var c = b2Contact.Create(fixtureA, indexA, fixtureB, indexB);
- if (c == null) {
- return
- }
- fixtureA = c.GetFixtureA();
- fixtureB = c.GetFixtureB();
- indexA = c.GetChildIndexA();
- indexB = c.GetChildIndexB();
- bodyA = fixtureA.GetBody();
- bodyB = fixtureB.GetBody();
- c.m_prev = null;
- c.m_next = this.m_contactList;
- if (this.m_contactList != null) {
- this.m_contactList.m_prev = c
- }
- this.m_contactList = c;
- c.m_nodeA.contact = c;
- c.m_nodeA.other = bodyB;
- c.m_nodeA.prev = null;
- c.m_nodeA.next = bodyA.m_contactList;
- if (bodyA.m_contactList != null) {
- bodyA.m_contactList.prev = c.m_nodeA
- }
- bodyA.m_contactList = c.m_nodeA;
- c.m_nodeB.contact = c;
- c.m_nodeB.other = bodyA;
- c.m_nodeB.prev = null;
- c.m_nodeB.next = bodyB.m_contactList;
- if (bodyB.m_contactList != null) {
- bodyB.m_contactList.prev = c.m_nodeB
- }
- bodyB.m_contactList = c.m_nodeB;
- if (fixtureA.IsSensor() == false && fixtureB.IsSensor() == false) {
- bodyA.SetAwake(true);
- bodyB.SetAwake(true)
-        }
-        ++this.m_contactCount
- },
- FindNewContacts: function () {
- this.m_broadPhase.UpdatePairs(this)
- },
- Destroy: function (c) {
- var fixtureA = c.GetFixtureA();
- var fixtureB = c.GetFixtureB();
- var bodyA = fixtureA.GetBody();
- var bodyB = fixtureB.GetBody();
- if (this.m_contactListener && c.IsTouching()) {
- this.m_contactListener.EndContact(c)
- }
- if (c.m_prev) {
- c.m_prev.m_next = c.m_next
- }
- if (c.m_next) {
- c.m_next.m_prev = c.m_prev
- }
- if (c == this.m_contactList) {
- this.m_contactList = c.m_next
- }
- if (c.m_nodeA.prev) {
- c.m_nodeA.prev.next = c.m_nodeA.next
- }
- if (c.m_nodeA.next) {
- c.m_nodeA.next.prev = c.m_nodeA.prev
- }
- if (c.m_nodeA == bodyA.m_contactList) {
- bodyA.m_contactList = c.m_nodeA.next
- }
- if (c.m_nodeB.prev) {
- c.m_nodeB.prev.next = c.m_nodeB.next
- }
- if (c.m_nodeB.next) {
- c.m_nodeB.next.prev = c.m_nodeB.prev
- }
- if (c.m_nodeB == bodyB.m_contactList) {
- bodyB.m_contactList = c.m_nodeB.next
- }
- b2Contact.Destroy(c);
- --this.m_contactCount
- },
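-    // Mid phase: walks the persistent contact list, destroys contacts whose
-    // proxies no longer overlap in the broad phase (or that fail re-filtering),
-    // and runs the narrow-phase Update on the rest.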
- Collide: function () {
- var c = this.m_contactList;
- while (c) {
- var fixtureA = c.GetFixtureA();
- var fixtureB = c.GetFixtureB();
- var indexA = c.GetChildIndexA();
- var indexB = c.GetChildIndexB();
- var bodyA = fixtureA.GetBody();
- var bodyB = fixtureB.GetBody();
- if (c.m_flags & b2Contact.e_filterFlag) {
- if (bodyB.ShouldCollide(bodyA) == false) {
- var cNuke = c;
- c = cNuke.GetNext();
- this.Destroy(cNuke);
- continue
- }
- if (this.m_contactFilter && this.m_contactFilter.ShouldCollide(fixtureA, fixtureB) == false) {
- var cNuke = c;
- c = cNuke.GetNext();
- this.Destroy(cNuke);
- continue
- }
- c.m_flags &= ~b2Contact.e_filterFlag
- }
- var activeA = bodyA.IsAwake() && bodyA.m_type != b2Body.b2_staticBody;
- var activeB = bodyB.IsAwake() && bodyB.m_type != b2Body.b2_staticBody;
- if (activeA == false && activeB == false) {
- c = c.GetNext();
- continue
- }
- var proxyIdA = fixtureA.m_proxies[indexA].proxyId;
- var proxyIdB = fixtureB.m_proxies[indexB].proxyId;
- var overlap = this.m_broadPhase.TestOverlap(proxyIdA, proxyIdB);
- if (overlap == false) {
- var cNuke = c;
- c = cNuke.GetNext();
- this.Destroy(cNuke);
- continue
- }
- c.Update(this.m_contactListener);
- c = c.GetNext()
- }
- }
-};
-
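-// Per-contact-point solver state: anchor arms rA/rB relative to the body
-// centers, accumulated normal and tangent impulses, effective masses, and the
-// restitution velocity bias.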
-function b2VelocityConstraintPoint() {
- this.rA = new b2Vec2();
- this.rB = new b2Vec2();
- this.normalImpulse = 0;
- this.tangentImpulse = 0;
- this.normalMass = 0;
- this.tangentMass = 0;
- this.velocityBias = 0
-}
-
-function b2ContactPositionConstraint() {
- this.localPoints = new Array(b2_maxManifoldPoints);
- this.localNormal = new b2Vec2();
- this.localPoint = new b2Vec2();
- this.indexA = 0;
- this.indexB = 0;
-    this.invMassA = 0;
-    this.invMassB = 0;
-    this.localCenterA = new b2Vec2();
-    this.localCenterB = new b2Vec2();
-    this.invIA = 0;
-    this.invIB = 0;
-    this.type = 0;
-    this.radiusA = 0;
-    this.radiusB = 0;
- this.pointCount = 0
-}
-
-function b2ContactVelocityConstraint() {
- this.points = new Array(b2_maxManifoldPoints);
- for (var i = 0; i < this.points.length; ++i) {
- this.points[i] = new b2VelocityConstraintPoint()
- }
- this.normal = new b2Vec2();
- this.normalMass = new b2Mat22();
- this.K = new b2Mat22();
- this.indexA = 0;
- this.indexB = 0;
-    this.invMassA = 0;
-    this.invMassB = 0;
-    this.invIA = 0;
-    this.invIB = 0;
- this.friction = 0;
- this.restitution = 0;
- this.tangentSpeed = 0;
- this.pointCount = 0;
- this.contactIndex = 0
-}
-
-function b2PositionSolverManifold() {
- this.normal = new b2Vec2();
- this.point = new b2Vec2();
- this.separation = 0
-}
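-// Computes the world-space normal, contact point and separation for a single
-// manifold point, specialized per manifold type (circles, faceA, faceB).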
-b2PositionSolverManifold.prototype = {
- Initialize: function (pc, xfA, xfB, index) {
- switch (pc.type) {
- case b2Manifold.e_circles:
- var pointAx = (xfA.q.c * pc.localPoint.x - xfA.q.s * pc.localPoint.y) + xfA.p.x;
- var pointAy = (xfA.q.s * pc.localPoint.x + xfA.q.c * pc.localPoint.y) + xfA.p.y;
- var pointBx = (xfB.q.c * pc.localPoints[0].x - xfB.q.s * pc.localPoints[0].y) + xfB.p.x;
- var pointBy = (xfB.q.s * pc.localPoints[0].x + xfB.q.c * pc.localPoints[0].y) + xfB.p.y;
- this.point.x = 0.5 * (pointAx + pointBx);
- this.point.y = 0.5 * (pointAy + pointBy);
- this.normal.x = pointBx - pointAx;
- this.normal.y = pointBy - pointAy;
- var tempnx = this.normal.x;
- var tempny = this.normal.y;
- this.normal.Normalize();
- this.separation = (tempnx * this.normal.x + tempny * this.normal.y) - pc.radiusA - pc.radiusB;
- break;
- case b2Manifold.e_faceA:
- this.normal.x = xfA.q.c * pc.localNormal.x - xfA.q.s * pc.localNormal.y;
- this.normal.y = xfA.q.s * pc.localNormal.x + xfA.q.c * pc.localNormal.y;
- var planePointx = (xfA.q.c * pc.localPoint.x - xfA.q.s * pc.localPoint.y) + xfA.p.x;
- var planePointy = (xfA.q.s * pc.localPoint.x + xfA.q.c * pc.localPoint.y) + xfA.p.y;
- var clipPointx = (xfB.q.c * pc.localPoints[index].x - xfB.q.s * pc.localPoints[index].y) + xfB.p.x;
- var clipPointy = (xfB.q.s * pc.localPoints[index].x + xfB.q.c * pc.localPoints[index].y) + xfB.p.y;
- this.separation = ((clipPointx - planePointx) * this.normal.x + (clipPointy - planePointy) * this.normal.y) - pc.radiusA - pc.radiusB;
- this.point.x = clipPointx;
- this.point.y = clipPointy;
- break;
- case b2Manifold.e_faceB:
- this.normal.x = xfB.q.c * pc.localNormal.x - xfB.q.s * pc.localNormal.y;
- this.normal.y = xfB.q.s * pc.localNormal.x + xfB.q.c * pc.localNormal.y;
- var planePointx = (xfB.q.c * pc.localPoint.x - xfB.q.s * pc.localPoint.y) + xfB.p.x;
- var planePointy = (xfB.q.s * pc.localPoint.x + xfB.q.c * pc.localPoint.y) + xfB.p.y;
- var clipPointx = (xfA.q.c * pc.localPoints[index].x - xfA.q.s * pc.localPoints[index].y) + xfA.p.x;
- var clipPointy = (xfA.q.s * pc.localPoints[index].x + xfA.q.c * pc.localPoints[index].y) + xfA.p.y;
- this.separation = ((clipPointx - planePointx) * this.normal.x + (clipPointy - planePointy) * this.normal.y) - pc.radiusA - pc.radiusB;
- this.point.x = clipPointx;
- this.point.y = clipPointy;
- this.normal.x = -this.normal.x;
- this.normal.y = -this.normal.y;
- break
- }
- }
-};
-
-function b2ContactSolverDef() {
- this.step = new b2TimeStep();
- this.contacts = null;
- this.count = 0;
- this.positions = null;
- this.velocities = null
-}
-
-function b2ContactSolver() {
- this.m_positionConstraints = [];
- this.m_velocityConstraints = []
-}
-b2ContactSolver.cs_xfA = new b2Transform();
-b2ContactSolver.cs_xfB = new b2Transform();
-b2ContactSolver.temp_solver_manifold = new b2PositionSolverManifold();
-b2ContactSolver.prototype = {
- Init: function (def) {
- this.m_step = def.step;
- this.m_count = def.count;
- this.m_positionConstraints.length = this.m_count;
- this.m_velocityConstraints.length = this.m_count;
- this.m_positions = def.positions;
- this.m_velocities = def.velocities;
- this.m_contacts = def.contacts;
- for (var i = 0; i < this.m_count; ++i) {
- var contact = this.m_contacts[i];
- var fixtureA = contact.m_fixtureA;
- var fixtureB = contact.m_fixtureB;
- var shapeA = fixtureA.GetShape();
- var shapeB = fixtureB.GetShape();
- var radiusA = shapeA.m_radius;
- var radiusB = shapeB.m_radius;
- var bodyA = fixtureA.GetBody();
- var bodyB = fixtureB.GetBody();
- var manifold = contact.GetManifold();
- var pointCount = manifold.pointCount;
- var vc = this.m_velocityConstraints[i] || new b2ContactVelocityConstraint();
- vc.friction = contact.m_friction;
- vc.restitution = contact.m_restitution;
- vc.tangentSpeed = contact.m_tangentSpeed;
- vc.indexA = bodyA.m_islandIndex;
- vc.indexB = bodyB.m_islandIndex;
- vc.invMassA = bodyA.m_invMass;
- vc.invMassB = bodyB.m_invMass;
- vc.invIA = bodyA.m_invI;
- vc.invIB = bodyB.m_invI;
- vc.contactIndex = i;
- vc.pointCount = pointCount;
- vc.K.SetZero();
- vc.normalMass.SetZero();
- this.m_velocityConstraints[i] = vc;
- var pc = this.m_positionConstraints[i] || new b2ContactPositionConstraint();
- pc.indexA = bodyA.m_islandIndex;
- pc.indexB = bodyB.m_islandIndex;
- pc.invMassA = bodyA.m_invMass;
- pc.invMassB = bodyB.m_invMass;
- pc.localCenterA.x = bodyA.m_sweep.localCenter.x;
- pc.localCenterA.y = bodyA.m_sweep.localCenter.y;
- pc.localCenterB.x = bodyB.m_sweep.localCenter.x;
- pc.localCenterB.y = bodyB.m_sweep.localCenter.y;
- pc.invIA = bodyA.m_invI;
- pc.invIB = bodyB.m_invI;
- pc.localNormal.x = manifold.localNormal.x;
- pc.localNormal.y = manifold.localNormal.y;
- pc.localPoint.x = manifold.localPoint.x;
- pc.localPoint.y = manifold.localPoint.y;
- pc.pointCount = pointCount;
- pc.radiusA = radiusA;
- pc.radiusB = radiusB;
- pc.type = manifold.type;
- this.m_positionConstraints[i] = pc;
- for (var j = 0; j < pointCount; ++j) {
- var cp = manifold.points[j];
- var vcp = vc.points[j];
- if (this.m_step.warmStarting) {
- vcp.normalImpulse = this.m_step.dtRatio * cp.normalImpulse;
- vcp.tangentImpulse = this.m_step.dtRatio * cp.tangentImpulse
- } else {
- vcp.normalImpulse = 0;
- vcp.tangentImpulse = 0
- }
- vcp.rA.SetZero();
- vcp.rB.SetZero();
- vcp.normalMass = 0;
- vcp.tangentMass = 0;
- vcp.velocityBias = 0;
- pc.localPoints[j] = cp.localPoint
- }
- }
- },
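-    // Precomputes, per contact point, the effective inverse masses along the
-    // contact normal and tangent and the restitution velocity bias; for
-    // two-point manifolds it also builds the 2x2 matrix K for the block solver,
-    // falling back to a single point if K is too ill-conditioned to invert.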
- InitializeVelocityConstraints: function () {
- for (var i = 0; i < this.m_count; ++i) {
- var vc = this.m_velocityConstraints[i];
- var pc = this.m_positionConstraints[i];
- var radiusA = pc.radiusA;
- var radiusB = pc.radiusB;
- var manifold = this.m_contacts[vc.contactIndex].GetManifold();
- var indexA = vc.indexA;
- var indexB = vc.indexB;
- var mA = vc.invMassA;
- var mB = vc.invMassB;
- var iA = vc.invIA;
- var iB = vc.invIB;
- var localCenterA = pc.localCenterA;
- var localCenterB = pc.localCenterB;
- var cA = this.m_positions[indexA].c;
- var aA = this.m_positions[indexA].a;
- var vA = this.m_velocities[indexA].v;
- var wA = this.m_velocities[indexA].w;
- var cB = this.m_positions[indexB].c;
- var aB = this.m_positions[indexB].a;
- var vB = this.m_velocities[indexB].v;
- var wB = this.m_velocities[indexB].w;
- b2ContactSolver.cs_xfA.q.Set(aA);
- b2ContactSolver.cs_xfB.q.Set(aB);
- b2ContactSolver.cs_xfA.p.x = cA.x - (b2ContactSolver.cs_xfA.q.c * localCenterA.x - b2ContactSolver.cs_xfA.q.s * localCenterA.y);
- b2ContactSolver.cs_xfA.p.y = cA.y - (b2ContactSolver.cs_xfA.q.s * localCenterA.x + b2ContactSolver.cs_xfA.q.c * localCenterA.y);
- b2ContactSolver.cs_xfB.p.x = cB.x - (b2ContactSolver.cs_xfB.q.c * localCenterB.x - b2ContactSolver.cs_xfB.q.s * localCenterB.y);
- b2ContactSolver.cs_xfB.p.y = cB.y - (b2ContactSolver.cs_xfB.q.s * localCenterB.x + b2ContactSolver.cs_xfB.q.c * localCenterB.y);
- var worldManifold = new b2WorldManifold();
- worldManifold.Initialize(manifold, b2ContactSolver.cs_xfA, radiusA, b2ContactSolver.cs_xfB, radiusB);
- vc.normal.x = worldManifold.normal.x;
- vc.normal.y = worldManifold.normal.y;
- var pointCount = vc.pointCount;
- for (var j = 0; j < pointCount; ++j) {
- var vcp = vc.points[j];
- vcp.rA.x = worldManifold.points[j].x - cA.x;
- vcp.rA.y = worldManifold.points[j].y - cA.y;
- vcp.rB.x = worldManifold.points[j].x - cB.x;
- vcp.rB.y = worldManifold.points[j].y - cB.y;
- var rnA = vcp.rA.x * vc.normal.y - vcp.rA.y * vc.normal.x;
- var rnB = vcp.rB.x * vc.normal.y - vcp.rB.y * vc.normal.x;
- var kNormal = mA + mB + iA * rnA * rnA + iB * rnB * rnB;
- vcp.normalMass = kNormal > 0 ? 1 / kNormal : 0;
- var tangentx = 1 * vc.normal.y;
- var tangenty = -1 * vc.normal.x;
- var rtA = vcp.rA.x * tangenty - vcp.rA.y * tangentx;
- var rtB = vcp.rB.x * tangenty - vcp.rB.y * tangentx;
- var kTangent = mA + mB + iA * rtA * rtA + iB * rtB * rtB;
- vcp.tangentMass = kTangent > 0 ? 1 / kTangent : 0;
- vcp.velocityBias = 0;
- var vRel = vc.normal.x * (((vB.x + (-wB * vcp.rB.y)) - vA.x) - (-wA * vcp.rA.y)) + vc.normal.y * (((vB.y + (wB * vcp.rB.x)) - vA.y) - (wA * vcp.rA.x));
- if (vRel < -b2_velocityThreshold) {
- vcp.velocityBias = -vc.restitution * vRel
- }
- }
- if (vc.pointCount == 2) {
- var vcp1 = vc.points[0];
- var vcp2 = vc.points[1];
- var rn1A = vcp1.rA.x * vc.normal.y - vcp1.rA.y * vc.normal.x;
- var rn1B = vcp1.rB.x * vc.normal.y - vcp1.rB.y * vc.normal.x;
- var rn2A = vcp2.rA.x * vc.normal.y - vcp2.rA.y * vc.normal.x;
- var rn2B = vcp2.rB.x * vc.normal.y - vcp2.rB.y * vc.normal.x;
- var k11 = mA + mB + iA * rn1A * rn1A + iB * rn1B * rn1B;
- var k22 = mA + mB + iA * rn2A * rn2A + iB * rn2B * rn2B;
- var k12 = mA + mB + iA * rn1A * rn2A + iB * rn1B * rn2B;
- var k_maxConditionNumber = 1000;
- if (k11 * k11 < k_maxConditionNumber * (k11 * k22 - k12 * k12)) {
- vc.K.ex.x = k11;
- vc.K.ex.y = k12;
- vc.K.ey.x = k12;
- vc.K.ey.y = k22;
- vc.normalMass.Assign(vc.K.GetInverse())
- } else {
- vc.pointCount = 1
- }
- }
- }
- },
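-    // Warm starting: re-applies the impulses accumulated in the previous step so
-    // the iterative solver starts near the converged solution. The v vectors are
-    // mutated in place through the stored references; only the scalar w values
-    // need to be written back.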
- WarmStart: function () {
- for (var i = 0; i < this.m_count; ++i) {
- var vc = this.m_velocityConstraints[i];
- var indexA = vc.indexA;
- var indexB = vc.indexB;
- var mA = vc.invMassA;
- var iA = vc.invIA;
- var mB = vc.invMassB;
- var iB = vc.invIB;
- var pointCount = vc.pointCount;
- var vA = this.m_velocities[indexA].v;
- var wA = this.m_velocities[indexA].w;
- var vB = this.m_velocities[indexB].v;
- var wB = this.m_velocities[indexB].w;
- var normal = vc.normal;
- var tangentx = 1 * normal.y;
- var tangenty = -1 * normal.x;
- for (var j = 0; j < pointCount; ++j) {
- var vcp = vc.points[j];
- var Px = (vcp.normalImpulse * normal.x) + (vcp.tangentImpulse * tangentx);
- var Py = (vcp.normalImpulse * normal.y) + (vcp.tangentImpulse * tangenty);
- wA -= iA * (vcp.rA.x * Py - vcp.rA.y * Px);
- vA.x -= mA * Px;
- vA.y -= mA * Py;
- wB += iB * (vcp.rB.x * Py - vcp.rB.y * Px);
- vB.x += mB * Px;
- vB.y += mB * Py
- }
- this.m_velocities[indexA].w = wA;
- this.m_velocities[indexB].w = wB
- }
- },
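-    // Sequential impulses: friction is solved first, clamped to the friction
-    // cone (|tangent impulse| <= friction * normal impulse), then the normal
-    // impulses are solved with accumulation and clamping to stay non-negative.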
- SolveVelocityConstraints: function () {
- for (var i = 0; i < this.m_count; ++i) {
- var vc = this.m_velocityConstraints[i];
- var indexA = vc.indexA;
- var indexB = vc.indexB;
- var mA = vc.invMassA;
- var iA = vc.invIA;
- var mB = vc.invMassB;
- var iB = vc.invIB;
- var pointCount = vc.pointCount;
- var vA = this.m_velocities[indexA].v;
- var wA = this.m_velocities[indexA].w;
- var vB = this.m_velocities[indexB].v;
- var wB = this.m_velocities[indexB].w;
- var normal = vc.normal;
- var tangentx = 1 * normal.y;
- var tangenty = -1 * normal.x;
- var friction = vc.friction;
- for (var j = 0; j < pointCount; ++j) {
- var vcp = vc.points[j];
- var dvx = vB.x + (-wB * vcp.rB.y) - vA.x - (-wA * vcp.rA.y);
- var dvy = vB.y + (wB * vcp.rB.x) - vA.y - (wA * vcp.rA.x);
- var vt = (dvx * tangentx + dvy * tangenty) - vc.tangentSpeed;
- var lambda = vcp.tangentMass * (-vt);
- var maxFriction = friction * vcp.normalImpulse;
- var newImpulse = b2Clamp(vcp.tangentImpulse + lambda, -maxFriction, maxFriction);
- lambda = newImpulse - vcp.tangentImpulse;
- vcp.tangentImpulse = newImpulse;
- var Px = lambda * tangentx;
- var Py = lambda * tangenty;
- vA.x -= mA * Px;
- vA.y -= mA * Py;
- wA -= iA * (vcp.rA.x * Py - vcp.rA.y * Px);
- vB.x += mB * Px;
- vB.y += mB * Py;
- wB += iB * (vcp.rB.x * Py - vcp.rB.y * Px)
- }
- if (vc.pointCount == 1) {
- vcp = vc.points[0];
- dvx = vB.x + (-wB * vcp.rB.y) - vA.x - (-wA * vcp.rA.y);
- dvy = vB.y + (wB * vcp.rB.x) - vA.y - (wA * vcp.rA.x);
- var vn = dvx * normal.x + dvy * normal.y;
- var lambda = -vcp.normalMass * (vn - vcp.velocityBias);
- var newImpulse = b2Max(vcp.normalImpulse + lambda, 0);
- lambda = newImpulse - vcp.normalImpulse;
- vcp.normalImpulse = newImpulse;
- Px = lambda * normal.x;
- Py = lambda * normal.y;
- vA.x -= mA * Px;
- vA.y -= mA * Py;
- wA -= iA * (vcp.rA.x * Py - vcp.rA.y * Px);
- vB.x += mB * Px;
- vB.y += mB * Py;
- wB += iB * (vcp.rB.x * Py - vcp.rB.y * Px)
- } else {
- var cp1 = vc.points[0];
- var cp2 = vc.points[1];
- var ax = cp1.normalImpulse;
- var ay = cp2.normalImpulse;
- var dv1x = vB.x + (-wB * cp1.rB.y) - vA.x - (-wA * cp1.rA.y);
- var dv1y = vB.y + (wB * cp1.rB.x) - vA.y - (wA * cp1.rA.x);
- var dv2x = vB.x + (-wB * cp2.rB.y) - vA.x - (-wA * cp2.rA.y);
- var dv2y = vB.y + (wB * cp2.rB.x) - vA.y - (wA * cp2.rA.x);
- var vn1 = dv1x * normal.x + dv1y * normal.y;
- var vn2 = dv2x * normal.x + dv2y * normal.y;
- var bx = vn1 - cp1.velocityBias;
- var by = vn2 - cp2.velocityBias;
- bx -= vc.K.ex.x * ax + vc.K.ey.x * ay;
- by -= vc.K.ex.y * ax + vc.K.ey.y * ay;
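-                // Block solver for two-point manifolds: solve the LCP
-                // vn = K * x + b with x >= 0, vn >= 0 and x . vn = 0 by
-                // enumerating the four cases (both points clamped, either one,
-                // neither) and taking the first feasible solution.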
- for (;;) {
- var xx = -(vc.normalMass.ex.x * bx + vc.normalMass.ey.x * by);
- var xy = -(vc.normalMass.ex.y * bx + vc.normalMass.ey.y * by);
- if (xx >= 0 && xy >= 0) {
- var dx = xx - ax;
- var dy = xy - ay;
- var P1x = dx * normal.x;
- var P1y = dx * normal.y;
- var P2x = dy * normal.x;
- var P2y = dy * normal.y;
- vA.x -= mA * (P1x + P2x);
- vA.y -= mA * (P1y + P2y);
- wA -= iA * ((cp1.rA.x * P1y - cp1.rA.y * P1x) + (cp2.rA.x * P2y - cp2.rA.y * P2x));
- vB.x += mB * (P1x + P2x);
- vB.y += mB * (P1y + P2y);
- wB += iB * ((cp1.rB.x * P1y - cp1.rB.y * P1x) + (cp2.rB.x * P2y - cp2.rB.y * P2x));
- cp1.normalImpulse = xx;
- cp2.normalImpulse = xy;
- break
- }
- xx = -cp1.normalMass * bx;
- xy = 0;
- vn1 = 0;
- vn2 = vc.K.ex.y * xx + by;
- if (xx >= 0 && vn2 >= 0) {
- dx = xx - ax;
- dy = xy - ay;
- P1x = dx * normal.x;
- P1y = dx * normal.y;
- P2x = dy * normal.x;
- P2y = dy * normal.y;
- vA.x -= mA * (P1x + P2x);
- vA.y -= mA * (P1y + P2y);
- wA -= iA * ((cp1.rA.x * P1y - cp1.rA.y * P1x) + (cp2.rA.x * P2y - cp2.rA.y * P2x));
- vB.x += mB * (P1x + P2x);
- vB.y += mB * (P1y + P2y);
- wB += iB * ((cp1.rB.x * P1y - cp1.rB.y * P1x) + (cp2.rB.x * P2y - cp2.rB.y * P2x));
- cp1.normalImpulse = xx;
- cp2.normalImpulse = xy;
- break
- }
- xx = 0;
- xy = -cp2.normalMass * by;
- vn1 = vc.K.ey.x * xy + bx;
- vn2 = 0;
- if (xy >= 0 && vn1 >= 0) {
- dx = xx - ax;
- dy = xy - ay;
- P1x = dx * normal.x;
- P1y = dx * normal.y;
- P2x = dy * normal.x;
- P2y = dy * normal.y;
- vA.x -= mA * (P1x + P2x);
- vA.y -= mA * (P1y + P2y);
- wA -= iA * ((cp1.rA.x * P1y - cp1.rA.y * P1x) + (cp2.rA.x * P2y - cp2.rA.y * P2x));
- vB.x += mB * (P1x + P2x);
- vB.y += mB * (P1y + P2y);
- wB += iB * ((cp1.rB.x * P1y - cp1.rB.y * P1x) + (cp2.rB.x * P2y - cp2.rB.y * P2x));
- cp1.normalImpulse = xx;
- cp2.normalImpulse = xy;
- break
- }
- xx = 0;
- xy = 0;
- vn1 = bx;
- vn2 = by;
- if (vn1 >= 0 && vn2 >= 0) {
- dx = xx - ax;
- dy = xy - ay;
- P1x = dx * normal.x;
- P1y = dx * normal.y;
- P2x = dy * normal.x;
- P2y = dy * normal.y;
- vA.x -= mA * (P1x + P2x);
- vA.y -= mA * (P1y + P2y);
- wA -= iA * ((cp1.rA.x * P1y - cp1.rA.y * P1x) + (cp2.rA.x * P2y - cp2.rA.y * P2x));
- vB.x += mB * (P1x + P2x);
- vB.y += mB * (P1y + P2y);
- wB += iB * ((cp1.rB.x * P1y - cp1.rB.y * P1x) + (cp2.rB.x * P2y - cp2.rB.y * P2x));
- cp1.normalImpulse = xx;
- cp2.normalImpulse = xy;
- break
- }
- break
- }
- }
- this.m_velocities[indexA].w = wA;
- this.m_velocities[indexB].w = wB
- }
- },
- StoreImpulses: function () {
- for (var i = 0; i < this.m_count; ++i) {
- var vc = this.m_velocityConstraints[i];
- var manifold = this.m_contacts[vc.contactIndex].GetManifold();
- for (var j = 0; j < vc.pointCount; ++j) {
- manifold.points[j].normalImpulse = vc.points[j].normalImpulse;
- manifold.points[j].tangentImpulse = vc.points[j].tangentImpulse
- }
- }
- },
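-    // Position correction (nonlinear Gauss-Seidel): pushes bodies apart along
-    // the contact normal by a fraction (b2_baumgarte) of the penetration beyond
-    // b2_linearSlop, clamped to b2_maxLinearCorrection per iteration.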
- SolvePositionConstraints: function () {
- var minSeparation = 0;
- for (var i = 0; i < this.m_count; ++i) {
- var pc = this.m_positionConstraints[i];
- var indexA = pc.indexA;
- var indexB = pc.indexB;
- var localCenterA = pc.localCenterA;
- var mA = pc.invMassA;
- var iA = pc.invIA;
- var localCenterB = pc.localCenterB;
- var mB = pc.invMassB;
- var iB = pc.invIB;
- var pointCount = pc.pointCount;
- var cA = this.m_positions[indexA].c;
- var aA = this.m_positions[indexA].a;
- var cB = this.m_positions[indexB].c;
- var aB = this.m_positions[indexB].a;
- for (var j = 0; j < pointCount; ++j) {
- b2ContactSolver.cs_xfA.q.Set(aA);
- b2ContactSolver.cs_xfB.q.Set(aB);
- b2ContactSolver.cs_xfA.p.x = cA.x - (b2ContactSolver.cs_xfA.q.c * localCenterA.x - b2ContactSolver.cs_xfA.q.s * localCenterA.y);
- b2ContactSolver.cs_xfA.p.y = cA.y - (b2ContactSolver.cs_xfA.q.s * localCenterA.x + b2ContactSolver.cs_xfA.q.c * localCenterA.y);
- b2ContactSolver.cs_xfB.p.x = cB.x - (b2ContactSolver.cs_xfB.q.c * localCenterB.x - b2ContactSolver.cs_xfB.q.s * localCenterB.y);
- b2ContactSolver.cs_xfB.p.y = cB.y - (b2ContactSolver.cs_xfB.q.s * localCenterB.x + b2ContactSolver.cs_xfB.q.c * localCenterB.y);
- b2ContactSolver.temp_solver_manifold.Initialize(pc, b2ContactSolver.cs_xfA, b2ContactSolver.cs_xfB, j);
- var normal = b2ContactSolver.temp_solver_manifold.normal;
- var point = b2ContactSolver.temp_solver_manifold.point;
- var separation = b2ContactSolver.temp_solver_manifold.separation;
- var rAx = point.x - cA.x;
- var rAy = point.y - cA.y;
- var rBx = point.x - cB.x;
- var rBy = point.y - cB.y;
- minSeparation = b2Min(minSeparation, separation);
- var C = b2Clamp(b2_baumgarte * (separation + b2_linearSlop), -b2_maxLinearCorrection, 0);
- var rnA = rAx * normal.y - rAy * normal.x;
- var rnB = rBx * normal.y - rBy * normal.x;
- var K = mA + mB + iA * rnA * rnA + iB * rnB * rnB;
- var impulse = K > 0 ? -C / K : 0;
- var Px = impulse * normal.x;
- var Py = impulse * normal.y;
- cA.x -= mA * Px;
- cA.y -= mA * Py;
- aA -= iA * (rAx * Py - rAy * Px);
- cB.x += mB * Px;
- cB.y += mB * Py;
- aB += iB * (rBx * Py - rBy * Px)
- }
- this.m_positions[indexA].a = aA;
- this.m_positions[indexB].a = aB
- }
- return minSeparation >= -3 * b2_linearSlop
- },
- SolveTOIPositionConstraints: function (toiIndexA, toiIndexB) {
- var minSeparation = 0;
- for (var i = 0; i < this.m_count; ++i) {
- var pc = this.m_positionConstraints[i];
- var indexA = pc.indexA;
- var indexB = pc.indexB;
- var localCenterA = pc.localCenterA;
- var localCenterB = pc.localCenterB;
- var pointCount = pc.pointCount;
- var mA = 0;
- var iA = 0;
- if (indexA == toiIndexA || indexA == toiIndexB) {
- mA = pc.invMassA;
- iA = pc.invIA
- }
- var mB = 0;
- var iB = 0;
- if (indexB == toiIndexA || indexB == toiIndexB) {
- mB = pc.invMassB;
- iB = pc.invIB
- }
- var cA = this.m_positions[indexA].c;
- var aA = this.m_positions[indexA].a;
- var cB = this.m_positions[indexB].c;
- var aB = this.m_positions[indexB].a;
- for (var j = 0; j < pointCount; ++j) {
- b2ContactSolver.cs_xfA.q.Set(aA);
- b2ContactSolver.cs_xfB.q.Set(aB);
- b2ContactSolver.cs_xfA.p.Assign(b2Vec2.Subtract(cA, b2Mul_r_v2(b2ContactSolver.cs_xfA.q, localCenterA)));
- b2ContactSolver.cs_xfB.p.Assign(b2Vec2.Subtract(cB, b2Mul_r_v2(b2ContactSolver.cs_xfB.q, localCenterB)));
- b2ContactSolver.temp_solver_manifold.Initialize(pc, b2ContactSolver.cs_xfA, b2ContactSolver.cs_xfB, j);
- var normal = b2ContactSolver.temp_solver_manifold.normal;
- var point = b2ContactSolver.temp_solver_manifold.point;
- var separation = b2ContactSolver.temp_solver_manifold.separation;
- var rA = b2Vec2.Subtract(point, cA);
- var rB = b2Vec2.Subtract(point, cB);
- minSeparation = b2Min(minSeparation, separation);
- var C = b2Clamp(b2_toiBaugarte * (separation + b2_linearSlop), -b2_maxLinearCorrection, 0);
- var rnA = b2Cross_v2_v2(rA, normal);
- var rnB = b2Cross_v2_v2(rB, normal);
- var K = mA + mB + iA * rnA * rnA + iB * rnB * rnB;
- var impulse = K > 0 ? -C / K : 0;
- var P = b2Vec2.Multiply(impulse, normal);
- cA.Subtract(b2Vec2.Multiply(mA, P));
- aA -= iA * b2Cross_v2_v2(rA, P);
- cB.Add(b2Vec2.Multiply(mB, P));
- aB += iB * b2Cross_v2_v2(rB, P)
- }
- this.m_positions[indexA].a = aA;
- this.m_positions[indexB].a = aB
- }
- return minSeparation >= -1.5 * b2_linearSlop
- }
-};
-"use strict";
-
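-// An island is a connected group of bodies, contacts and joints that is solved
-// as one unit; its arrays are reused across steps to limit garbage collection.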
-function b2Island() {
- this.m_bodies = [];
- this.m_contacts = [];
- this.m_joints = [];
- this.m_velocities = [];
- this.m_positions = []
-}
-var profile_solve_init = b2Profiler.create("solve initialization", "solve");
-var profile_solve_init_warmStarting = b2Profiler.create("warm starting", "solve initialization");
-var profile_solve_velocity = b2Profiler.create("solve velocities", "solve");
-var profile_solve_position = b2Profiler.create("solve positions", "solve");
-b2Island._solverData = new b2SolverData();
-b2Island._solverDef = new b2ContactSolverDef();
-b2Island._solver = new b2ContactSolver();
-b2Island.prototype = {
- Clear: function () {
- this.m_bodyCount = 0;
- this.m_contactCount = 0;
- this.m_jointCount = 0
- },
- Initialize: function (bodyCapacity, contactCapacity, jointCapacity, listener) {
- this.m_listener = listener;
- this.m_bodyCapacity = bodyCapacity;
- this.m_contactCapacity = contactCapacity;
- this.m_jointCapacity = jointCapacity;
- this.m_bodyCount = 0;
- this.m_contactCount = 0;
- this.m_jointCount = 0;
- this.m_bodies.length = bodyCapacity;
- this.m_contacts.length = contactCapacity;
- this.m_joints.length = jointCapacity;
- this.m_velocities.length = bodyCapacity;
- this.m_positions.length = bodyCapacity
- },
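-    // Island solver: integrates velocities with semi-implicit Euler (gravity,
-    // forces, and damping via v *= 1/(1+h*c), a stable approximation of
-    // exponential decay), runs the velocity and position solvers, integrates
-    // positions with translation/rotation clamping, then updates sleep state.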
- Solve: function (step, gravity, allowSleep) {
- profile_solve_init.start();
- var h = step.dt;
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var b = this.m_bodies[i];
- this.m_positions[i].c.Assign(b.m_sweep.c);
- var a = b.m_sweep.a;
- this.m_velocities[i].v.Assign(b.m_linearVelocity);
- var w = b.m_angularVelocity;
- b.m_sweep.c0.Assign(b.m_sweep.c);
- b.m_sweep.a0 = b.m_sweep.a;
- if (b.m_type == b2Body.b2_dynamicBody) {
- this.m_velocities[i].v.x += h * ((b.m_gravityScale * gravity.x) + (b.m_invMass * b.m_force.x));
- this.m_velocities[i].v.y += h * ((b.m_gravityScale * gravity.y) + (b.m_invMass * b.m_force.y));
- w += h * b.m_invI * b.m_torque;
- this.m_velocities[i].v.x *= 1 / (1 + h * b.m_linearDamping);
- this.m_velocities[i].v.y *= 1 / (1 + h * b.m_linearDamping);
- w *= 1 / (1 + h * b.m_angularDamping)
- }
- this.m_positions[i].a = a;
- this.m_velocities[i].w = w
- }
- b2Island._solverData.step = step;
- b2Island._solverData.positions = this.m_positions;
- b2Island._solverData.velocities = this.m_velocities;
- b2Island._solverDef.step = step;
- b2Island._solverDef.contacts = this.m_contacts;
- b2Island._solverDef.count = this.m_contactCount;
- b2Island._solverDef.positions = this.m_positions;
- b2Island._solverDef.velocities = this.m_velocities;
-        b2Island._solverDef.allocator = this.m_allocator; // leftover from the C++ stack allocator; unused in this port
- b2Island._solver.Init(b2Island._solverDef);
- b2Island._solver.InitializeVelocityConstraints();
- if (step.warmStarting) {
- profile_solve_init_warmStarting.start();
- b2Island._solver.WarmStart();
- profile_solve_init_warmStarting.stop()
- }
- for (var i = 0; i < this.m_jointCount; ++i) {
- this.m_joints[i].InitVelocityConstraints(b2Island._solverData)
- }
- profile_solve_init.stop();
- profile_solve_velocity.start();
- for (var i = 0; i < step.velocityIterations; ++i) {
- for (var j = 0; j < this.m_jointCount; ++j) {
- this.m_joints[j].SolveVelocityConstraints(b2Island._solverData)
- }
- b2Island._solver.SolveVelocityConstraints()
- }
- b2Island._solver.StoreImpulses();
- profile_solve_velocity.stop();
- profile_solve_position.start();
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var c = this.m_positions[i].c;
- var a = this.m_positions[i].a;
- var v = this.m_velocities[i].v;
- var w = this.m_velocities[i].w;
- var translationx = h * v.x;
- var translationy = h * v.y;
- var translationl = translationx * translationx + translationy * translationy;
- if (translationl > b2_maxTranslationSquared) {
- var ratio = b2_maxTranslation / b2Sqrt(translationl);
- v.x *= ratio;
- v.y *= ratio
- }
- var rotation = h * w;
- if (rotation * rotation > b2_maxRotationSquared) {
- var ratio = b2_maxRotation / b2Abs(rotation);
- w *= ratio
- }
- c.x += h * v.x;
- c.y += h * v.y;
- a += h * w;
- this.m_positions[i].a = a;
- this.m_velocities[i].w = w
- }
- var positionSolved = false;
- for (var i = 0; i < step.positionIterations; ++i) {
- var contactsOkay = b2Island._solver.SolvePositionConstraints();
- var jointsOkay = true;
- for (var j = 0; j < this.m_jointCount; ++j) {
- var jointOkay = this.m_joints[j].SolvePositionConstraints(b2Island._solverData);
- jointsOkay = jointsOkay && jointOkay
- }
- if (contactsOkay && jointsOkay) {
- positionSolved = true;
- break
- }
- }
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var body = this.m_bodies[i];
- body.m_sweep.c.Assign(this.m_positions[i].c);
- body.m_sweep.a = this.m_positions[i].a;
- body.m_linearVelocity.Assign(this.m_velocities[i].v);
- body.m_angularVelocity = this.m_velocities[i].w;
- body.SynchronizeTransform()
- }
- profile_solve_position.stop();
- this.Report(b2Island._solver.m_velocityConstraints);
- if (allowSleep) {
- var minSleepTime = b2_maxFloat;
- var linTolSqr = b2_linearSleepTolerance * b2_linearSleepTolerance;
- var angTolSqr = b2_angularSleepTolerance * b2_angularSleepTolerance;
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var b = this.m_bodies[i];
- if (b.GetType() == b2Body.b2_staticBody) {
- continue
- }
- if ((b.m_flags & b2Body.e_autoSleepFlag) == 0 || b.m_angularVelocity * b.m_angularVelocity > angTolSqr || b2Dot_v2_v2(b.m_linearVelocity, b.m_linearVelocity) > linTolSqr) {
- b.m_sleepTime = 0;
- minSleepTime = 0
- } else {
- b.m_sleepTime += h;
- minSleepTime = b2Min(minSleepTime, b.m_sleepTime)
- }
- }
- if (minSleepTime >= b2_timeToSleep && positionSolved) {
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var b = this.m_bodies[i];
- b.SetAwake(false)
- }
- }
- }
- },
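-    // TOI sub-solver: position constraints are relaxed first so the two TOI
-    // bodies reach a non-penetrating configuration (only they receive mass
-    // here), then velocities are solved without warm starting for the sub-step.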
- SolveTOI: function (subStep, toiIndexA, toiIndexB) {
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var b = this.m_bodies[i];
- this.m_positions[i].c.Assign(b.m_sweep.c);
- this.m_positions[i].a = b.m_sweep.a;
- this.m_velocities[i].v.Assign(b.m_linearVelocity);
- this.m_velocities[i].w = b.m_angularVelocity
- }
- b2Island._solverDef.contacts = this.m_contacts;
- b2Island._solverDef.count = this.m_contactCount;
- b2Island._solverDef.step = subStep;
- b2Island._solverDef.positions = this.m_positions;
- b2Island._solverDef.velocities = this.m_velocities;
- b2Island._solver.Init(b2Island._solverDef);
- for (var i = 0; i < subStep.positionIterations; ++i) {
- var contactsOkay = b2Island._solver.SolveTOIPositionConstraints(toiIndexA, toiIndexB);
- if (contactsOkay) {
- break
- }
- }
- this.m_bodies[toiIndexA].m_sweep.c0.Assign(this.m_positions[toiIndexA].c);
- this.m_bodies[toiIndexA].m_sweep.a0 = this.m_positions[toiIndexA].a;
- this.m_bodies[toiIndexB].m_sweep.c0.Assign(this.m_positions[toiIndexB].c);
- this.m_bodies[toiIndexB].m_sweep.a0 = this.m_positions[toiIndexB].a;
- b2Island._solver.InitializeVelocityConstraints();
- for (var i = 0; i < subStep.velocityIterations; ++i) {
- b2Island._solver.SolveVelocityConstraints()
- }
- var h = subStep.dt;
- for (var i = 0; i < this.m_bodyCount; ++i) {
- var c = this.m_positions[i].c;
- var a = this.m_positions[i].a;
- var v = this.m_velocities[i].v;
- var w = this.m_velocities[i].w;
- var translation = b2Vec2.Multiply(h, v);
- if (b2Dot_v2_v2(translation, translation) > b2_maxTranslationSquared) {
- var ratio = b2_maxTranslation / translation.Length();
- v.Multiply(ratio)
- }
- var rotation = h * w;
- if (rotation * rotation > b2_maxRotationSquared) {
- var ratio = b2_maxRotation / b2Abs(rotation);
- w *= ratio
- }
- c.Add(b2Vec2.Multiply(h, v));
- a += h * w;
- this.m_positions[i].a = a;
- this.m_velocities[i].w = w;
- var body = this.m_bodies[i];
- body.m_sweep.c.Assign(c);
- body.m_sweep.a = a;
- body.m_linearVelocity.Assign(v);
- body.m_angularVelocity = w;
- body.SynchronizeTransform()
- }
- this.Report(b2Island._solver.m_velocityConstraints)
- },
- AddBody: function (body) {
- body.m_islandIndex = this.m_bodyCount;
- this.m_bodies[this.m_bodyCount] = body;
- if (!this.m_positions[this.m_bodyCount]) {
- this.m_positions[this.m_bodyCount] = new b2Position();
- this.m_velocities[this.m_bodyCount] = new b2Velocity()
-        }
-        ++this.m_bodyCount
- },
- AddContact: function (contact) {
- this.m_contacts[this.m_contactCount++] = contact
- },
- AddJoint: function (joint) {
- this.m_joints[this.m_jointCount++] = joint
- },
- Report: function (constraints) {
- if (this.m_listener == null) {
- return
- }
- for (var i = 0; i < this.m_contactCount; ++i) {
- var c = this.m_contacts[i];
- var vc = constraints[i];
- var impulse = new b2ContactImpulse();
- impulse.count = vc.pointCount;
- for (var j = 0; j < vc.pointCount; ++j) {
- impulse.normalImpulses[j] = vc.points[j].normalImpulse;
- impulse.tangentImpulses[j] = vc.points[j].tangentImpulse
- }
- this.m_listener.PostSolve(c, impulse)
- }
- }
-};
-
-function b2Jacobian() {
- this.linear = new b2Vec2();
- this.angularA = 0;
- this.angularB = 0
-}
-
-function b2JointEdge() {
- this.other = null;
- this.joint = null;
- this.prev = null;
- this.next = null
-}
-
-function b2JointDef() {
- this.type = b2Joint.e_unknownJoint;
- this.userData = null;
- this.bodyA = null;
- this.bodyB = null;
- this.collideConnected = false
-}
-b2JointDef.prototype = {
- _deserialize: function (data, bodies, joints) {
- this.bodyA = bodies[data.bodyA];
- this.bodyB = bodies[data.bodyB];
- this.collideConnected = data.collideConnected
- }
-};
-
-function b2Joint(def) {
- this.m_type = def.type;
- this.m_prev = null;
- this.m_next = null;
- this.m_bodyA = def.bodyA;
- this.m_bodyB = def.bodyB;
- this.m_index = 0;
- this.m_collideConnected = def.collideConnected;
- this.m_islandFlag = false;
- this.m_userData = def.userData;
- this.m_edgeA = new b2JointEdge();
- this.m_edgeA.joint = null;
- this.m_edgeA.other = null;
- this.m_edgeA.prev = null;
- this.m_edgeA.next = null;
- this.m_edgeB = new b2JointEdge();
- this.m_edgeB.joint = null;
- this.m_edgeB.other = null;
- this.m_edgeB.prev = null;
- this.m_edgeB.next = null
-}
-b2Joint.prototype = {
- GetType: function () {
- return this.m_type
- },
- GetBodyA: function () {
- return this.m_bodyA
- },
- GetBodyB: function () {
- return this.m_bodyB
- },
-    GetAnchorA: function () {}, // abstract: implemented by concrete joint types
-    GetAnchorB: function () {},
-    GetReactionForce: function (inv_dt) {},
-    GetReactionTorque: function (inv_dt) {},
- GetNext: function () {
- return this.m_next
- },
- GetUserData: function () {
- return this.m_userData
- },
- SetUserData: function (data) {
- this.m_userData = data
- },
- IsActive: function () {
- return this.m_bodyA.IsActive() && this.m_bodyB.IsActive()
- },
- GetCollideConnected: function () {
- return this.m_collideConnected
- },
- ShiftOrigin: function (newOrigin) {},
- InitVelocityConstraints: function (data) {},
- SolveVelocityConstraints: function (data) {},
- SolvePositionConstraints: function (data) {},
- _serialize: function (out) {
- var obj = out || {};
- obj.bodyA = null;
- obj.bodyB = null;
- obj.type = this.m_type;
- obj.collideConnected = this.m_collideConnected;
- return obj
- }
-};
-b2Joint.e_inactiveLimit = 0;
-b2Joint.e_atLowerLimit = 1;
-b2Joint.e_atUpperLimit = 2;
-b2Joint.e_equalLimits = 3;
-b2Joint.e_unknownJoint = 0;
-b2Joint.e_revoluteJoint = 1;
-b2Joint.e_prismaticJoint = 2;
-b2Joint.e_distanceJoint = 3;
-b2Joint.e_pulleyJoint = 4;
-b2Joint.e_mouseJoint = 5;
-b2Joint.e_gearJoint = 6;
-b2Joint.e_wheelJoint = 7;
-b2Joint.e_weldJoint = 8;
-b2Joint.e_frictionJoint = 9;
-b2Joint.e_ropeJoint = 10;
-b2Joint.e_motorJoint = 11;
-b2Joint.Create = function (def) {
- var joint = null;
- switch (def.type) {
- case b2Joint.e_distanceJoint:
- joint = new b2DistanceJoint(def);
- break;
- case b2Joint.e_mouseJoint:
- joint = new b2MouseJoint(def);
- break;
- case b2Joint.e_prismaticJoint:
- joint = new b2PrismaticJoint(def);
- break;
- case b2Joint.e_revoluteJoint:
- joint = new b2RevoluteJoint(def);
- break;
- case b2Joint.e_pulleyJoint:
- joint = new b2PulleyJoint(def);
- break;
- case b2Joint.e_gearJoint:
- joint = new b2GearJoint(def);
- break;
- case b2Joint.e_wheelJoint:
- joint = new b2WheelJoint(def);
- break;
- case b2Joint.e_weldJoint:
- joint = new b2WeldJoint(def);
- break;
- case b2Joint.e_frictionJoint:
- joint = new b2FrictionJoint(def);
- break;
- case b2Joint.e_ropeJoint:
- joint = new b2RopeJoint(def);
- break;
- case b2Joint.e_motorJoint:
- joint = new b2MotorJoint(def);
- break
- }
- return joint
-};
-b2Joint.Destroy = function (joint) {};
-
-function b2RevoluteJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_revoluteJoint;
- this.localAnchorA = new b2Vec2();
- this.localAnchorB = new b2Vec2();
- this.referenceAngle = 0;
- this.lowerAngle = 0;
- this.upperAngle = 0;
- this.maxMotorTorque = 0;
- this.motorSpeed = 0;
- this.enableLimit = false;
- this.enableMotor = false;
- Object.seal(this)
-}
-b2RevoluteJointDef.prototype = {
- Initialize: function (bA, bB, anchor) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.localAnchorA = this.bodyA.GetLocalPoint(anchor);
- this.localAnchorB = this.bodyB.GetLocalPoint(anchor);
- this.referenceAngle = this.bodyB.GetAngle() - this.bodyA.GetAngle()
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.referenceAngle = data.referenceAngle;
- this.lowerAngle = data.lowerAngle;
- this.upperAngle = data.upperAngle;
- this.maxMotorTorque = data.maxMotorTorque;
- this.motorSpeed = data.motorSpeed;
- this.enableLimit = data.enableLimit;
- this.enableMotor = data.enableMotor
- }
-};
-b2RevoluteJointDef._extend(b2JointDef);
-
-function b2RevoluteJoint(def) {
- this.parent.call(this, def);
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_referenceAngle = def.referenceAngle;
- this.m_impulse = new b2Vec3();
- this.m_motorImpulse = 0;
- this.m_lowerAngle = def.lowerAngle;
- this.m_upperAngle = def.upperAngle;
- this.m_maxMotorTorque = def.maxMotorTorque;
- this.m_motorSpeed = def.motorSpeed;
- this.m_enableLimit = def.enableLimit;
- this.m_enableMotor = def.enableMotor;
- this.m_limitState = b2Joint.e_inactiveLimit;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_mass = new b2Mat33();
- this.m_motorMass = 0
-}
-b2RevoluteJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- GetReferenceAngle: function () {
- return this.m_referenceAngle
- },
- GetJointAngle: function () {
- var bA = this.m_bodyA;
- var bB = this.m_bodyB;
- return bB.m_sweep.a - bA.m_sweep.a - this.m_referenceAngle
- },
- GetJointSpeed: function () {
- var bA = this.m_bodyA;
- var bB = this.m_bodyB;
- return bB.m_angularVelocity - bA.m_angularVelocity
- },
- IsLimitEnabled: function () {
- return this.m_enableLimit
- },
- EnableLimit: function (flag) {
- if (flag != this.m_enableLimit) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_enableLimit = flag;
- this.m_impulse.z = 0
- }
- },
- GetLowerLimit: function () {
- return this.m_lowerAngle
- },
- GetUpperLimit: function () {
- return this.m_upperAngle
- },
- SetLimits: function (lower, upper) {
- if (lower != this.m_lowerAngle || upper != this.m_upperAngle) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_impulse.z = 0;
- this.m_lowerAngle = lower;
- this.m_upperAngle = upper
- }
- },
- IsMotorEnabled: function () {
- return this.m_enableMotor
- },
- EnableMotor: function (flag) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_enableMotor = flag
- },
- SetMotorSpeed: function (speed) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_motorSpeed = speed
- },
- GetMotorSpeed: function () {
- return this.m_motorSpeed
- },
- SetMaxMotorTorque: function (torque) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_maxMotorTorque = torque
- },
- GetMaxMotorTorque: function () {
- return this.m_maxMotorTorque
- },
- GetReactionForce: function (inv_dt) {
- var P = new b2Vec2(this.m_impulse.x, this.m_impulse.y);
- return b2Vec2.Multiply(inv_dt, P)
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_impulse.z
- },
- GetMotorTorque: function (inv_dt) {
- return inv_dt * this.m_motorImpulse
- },
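-    // Revolute joint setup: builds the 3x3 effective mass matrix (two
-    // point-to-point rows plus one angular row used by the limit), computes the
-    // motor mass, resolves the current limit state, and warm-starts the stored
-    // impulses.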
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA = this.m_bodyA.m_sweep.localCenter;
- this.m_localCenterB = this.m_bodyB.m_sweep.localCenter;
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- this.m_rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var fixedRotation = (iA + iB == 0);
- this.m_mass.ex.x = mA + mB + this.m_rA.y * this.m_rA.y * iA + this.m_rB.y * this.m_rB.y * iB;
- this.m_mass.ey.x = -this.m_rA.y * this.m_rA.x * iA - this.m_rB.y * this.m_rB.x * iB;
- this.m_mass.ez.x = -this.m_rA.y * iA - this.m_rB.y * iB;
- this.m_mass.ex.y = this.m_mass.ey.x;
- this.m_mass.ey.y = mA + mB + this.m_rA.x * this.m_rA.x * iA + this.m_rB.x * this.m_rB.x * iB;
- this.m_mass.ez.y = this.m_rA.x * iA + this.m_rB.x * iB;
- this.m_mass.ex.z = this.m_mass.ez.x;
- this.m_mass.ey.z = this.m_mass.ez.y;
- this.m_mass.ez.z = iA + iB;
- this.m_motorMass = iA + iB;
- if (this.m_motorMass > 0) {
- this.m_motorMass = 1 / this.m_motorMass
- }
- if (this.m_enableMotor == false || fixedRotation) {
- this.m_motorImpulse = 0
- }
- if (this.m_enableLimit && fixedRotation == false) {
- var jointAngle = aB - aA - this.m_referenceAngle;
- if (b2Abs(this.m_upperAngle - this.m_lowerAngle) < 2 * b2_angularSlop) {
- this.m_limitState = b2Joint.e_equalLimits
- } else {
- if (jointAngle <= this.m_lowerAngle) {
- if (this.m_limitState != b2Joint.e_atLowerLimit) {
- this.m_impulse.z = 0
- }
- this.m_limitState = b2Joint.e_atLowerLimit
- } else {
- if (jointAngle >= this.m_upperAngle) {
- if (this.m_limitState != b2Joint.e_atUpperLimit) {
- this.m_impulse.z = 0
- }
- this.m_limitState = b2Joint.e_atUpperLimit
- } else {
- this.m_limitState = b2Joint.e_inactiveLimit;
- this.m_impulse.z = 0
- }
- }
- }
- } else {
- this.m_limitState = b2Joint.e_inactiveLimit
- }
- if (data.step.warmStarting) {
- this.m_impulse.Multiply(data.step.dtRatio);
- this.m_motorImpulse *= data.step.dtRatio;
- var P = new b2Vec2(this.m_impulse.x, this.m_impulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + this.m_motorImpulse + this.m_impulse.z);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + this.m_motorImpulse + this.m_impulse.z)
- } else {
- this.m_impulse.SetZero();
- this.m_motorImpulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var fixedRotation = (iA + iB == 0);
- if (this.m_enableMotor && this.m_limitState != b2Joint.e_equalLimits && fixedRotation == false) {
- var Cdot = wB - wA - this.m_motorSpeed;
- var impulse = -this.m_motorMass * Cdot;
- var oldImpulse = this.m_motorImpulse;
- var maxImpulse = data.step.dt * this.m_maxMotorTorque;
- this.m_motorImpulse = b2Clamp(this.m_motorImpulse + impulse, -maxImpulse, maxImpulse);
- impulse = this.m_motorImpulse - oldImpulse;
- wA -= iA * impulse;
- wB += iB * impulse
- }
- if (this.m_enableLimit && this.m_limitState != b2Joint.e_inactiveLimit && fixedRotation == false) {
- var Cdot1 = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA));
- var Cdot2 = wB - wA;
- var Cdot = new b2Vec3(Cdot1.x, Cdot1.y, Cdot2);
- var impulse = this.m_mass.Solve33(Cdot).Negate();
- if (this.m_limitState == b2Joint.e_equalLimits) {
- this.m_impulse.Add(impulse)
- } else {
- if (this.m_limitState == b2Joint.e_atLowerLimit) {
- var newImpulse = this.m_impulse.z + impulse.z;
- if (newImpulse < 0) {
- var rhs = b2Vec2.Add(Cdot1.Negate(), b2Vec2.Multiply(this.m_impulse.z, new b2Vec2(this.m_mass.ez.x, this.m_mass.ez.y)));
- var reduced = this.m_mass.Solve22(rhs);
- impulse.x = reduced.x;
- impulse.y = reduced.y;
- impulse.z = -this.m_impulse.z;
- this.m_impulse.x += reduced.x;
- this.m_impulse.y += reduced.y;
- this.m_impulse.z = 0
- } else {
- this.m_impulse.Add(impulse)
- }
- } else {
- if (this.m_limitState == b2Joint.e_atUpperLimit) {
- var newImpulse = this.m_impulse.z + impulse.z;
- if (newImpulse > 0) {
- var rhs = b2Vec2.Add(Cdot1.Negate(), b2Vec2.Multiply(this.m_impulse.z, new b2Vec2(this.m_mass.ez.x, this.m_mass.ez.y)));
- var reduced = this.m_mass.Solve22(rhs);
- impulse.x = reduced.x;
- impulse.y = reduced.y;
- impulse.z = -this.m_impulse.z;
- this.m_impulse.x += reduced.x;
- this.m_impulse.y += reduced.y;
- this.m_impulse.z = 0
- } else {
- this.m_impulse.Add(impulse)
- }
- }
- }
- }
- var P = new b2Vec2(impulse.x, impulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + impulse.z);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + impulse.z)
- } else {
- var Cdot = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA));
- var impulse = this.m_mass.Solve22(Cdot.Negate());
- this.m_impulse.x += impulse.x;
- this.m_impulse.y += impulse.y;
- vA.Subtract(b2Vec2.Multiply(mA, impulse));
- wA -= iA * b2Cross_v2_v2(this.m_rA, impulse);
- vB.Add(b2Vec2.Multiply(mB, impulse));
- wB += iB * b2Cross_v2_v2(this.m_rB, impulse)
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var angularError = 0;
- var positionError = 0;
- var fixedRotation = (this.m_invIA + this.m_invIB == 0);
- if (this.m_enableLimit && this.m_limitState != b2Joint.e_inactiveLimit && fixedRotation == false) {
- var angle = aB - aA - this.m_referenceAngle;
- var limitImpulse = 0;
- if (this.m_limitState == b2Joint.e_equalLimits) {
- var C = b2Clamp(angle - this.m_lowerAngle, -b2_maxAngularCorrection, b2_maxAngularCorrection);
- limitImpulse = -this.m_motorMass * C;
- angularError = b2Abs(C)
- } else {
- if (this.m_limitState == b2Joint.e_atLowerLimit) {
- var C = angle - this.m_lowerAngle;
- angularError = -C;
- C = b2Clamp(C + b2_angularSlop, -b2_maxAngularCorrection, 0);
- limitImpulse = -this.m_motorMass * C
- } else {
- if (this.m_limitState == b2Joint.e_atUpperLimit) {
- var C = angle - this.m_upperAngle;
- angularError = C;
- C = b2Clamp(C - b2_angularSlop, 0, b2_maxAngularCorrection);
- limitImpulse = -this.m_motorMass * C
- }
- }
- }
- aA -= this.m_invIA * limitImpulse;
- aB += this.m_invIB * limitImpulse
- }
- qA.Set(aA);
- qB.Set(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var C = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- positionError = C.Length();
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var K = new b2Mat22();
- K.ex.x = mA + mB + iA * rA.y * rA.y + iB * rB.y * rB.y;
- K.ex.y = -iA * rA.x * rA.y - iB * rB.x * rB.y;
- K.ey.x = K.ex.y;
- K.ey.y = mA + mB + iA * rA.x * rA.x + iB * rB.x * rB.x;
- var impulse = K.Solve(C).Negate();
- cA.Subtract(b2Vec2.Multiply(mA, impulse));
- aA -= iA * b2Cross_v2_v2(rA, impulse);
- cB.Add(b2Vec2.Multiply(mB, impulse));
- aB += iB * b2Cross_v2_v2(rB, impulse);
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return positionError <= b2_linearSlop && angularError <= b2_angularSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.referenceAngle = this.m_referenceAngle;
- obj.lowerAngle = this.m_lowerAngle;
- obj.upperAngle = this.m_upperAngle;
- obj.maxMotorTorque = this.m_maxMotorTorque;
- obj.motorSpeed = this.m_motorSpeed;
- obj.enableLimit = this.m_enableLimit;
- obj.enableMotor = this.m_enableMotor;
- return obj
- }
-};
-b2RevoluteJoint._extend(b2Joint);
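-
-// Illustrative sketch: a motorized revolute joint with angle limits. Assumes
-// world is a b2World with the standard Box2D CreateJoint entry point and that
-// b2RevoluteJointDef.Initialize(bodyA, bodyB, anchor) is defined earlier in
-// this file, as in stock Box2D; the def field names mirror the _serialize
-// keys above. The helper function itself is hypothetical.
-function exampleRevoluteJoint(world, bodyA, bodyB) {
- var jd = new b2RevoluteJointDef();
- jd.Initialize(bodyA, bodyB, bodyA.GetWorldCenter());
- jd.enableLimit = true;
- jd.lowerAngle = -0.25 * b2_pi;
- jd.upperAngle = 0.25 * b2_pi;
- jd.enableMotor = true;
- jd.motorSpeed = 1;
- jd.maxMotorTorque = 10;
- return world.CreateJoint(jd)
-}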
-
-function b2MouseJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_mouseJoint;
- this.target = new b2Vec2(0, 0);
- this.maxForce = 0;
- this.frequencyHz = 5;
- this.dampingRatio = 0.7;
- Object.seal(this)
-}
-b2MouseJointDef._extend(b2JointDef);
-
-function b2MouseJoint(def) {
- this.parent.call(this, def);
- this.m_targetA = def.target.Clone();
- this.m_localAnchorB = b2MulT_t_v2(this.m_bodyB.GetTransform(), this.m_targetA);
- this.m_maxForce = def.maxForce;
- this.m_impulse = new b2Vec2();
- this.m_frequencyHz = def.frequencyHz;
- this.m_dampingRatio = def.dampingRatio;
- this.m_beta = 0;
- this.m_gamma = 0;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_rB = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassB = 0;
- this.m_invIB = 0;
- this.m_mass = new b2Mat22();
- this.m_C = new b2Vec2()
-}
-b2MouseJoint.prototype = {
- GetAnchorA: function () {
- return this.m_targetA
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- return b2Vec2.Multiply(inv_dt, this.m_impulse)
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * 0
- },
- SetTarget: function (target) {
- if (this.m_bodyB.IsAwake() == false) {
- this.m_bodyB.SetAwake(true)
- }
- this.m_targetA.Assign(target)
- },
- GetTarget: function () {
- return this.m_targetA
- },
- SetMaxForce: function (force) {
- this.m_maxForce = force
- },
- GetMaxForce: function () {
- return this.m_maxForce
- },
- SetFrequency: function (hz) {
- this.m_frequencyHz = hz
- },
- GetFrequency: function () {
- return this.m_frequencyHz
- },
- SetDampingRatio: function (ratio) {
- this.m_dampingRatio = ratio
- },
- GetDampingRatio: function () {
- return this.m_dampingRatio
- },
- ShiftOrigin: function (newOrigin) {
- this.m_targetA.Subtract(newOrigin)
- },
- InitVelocityConstraints: function (data) {
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIB = this.m_bodyB.m_invI;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qB = new b2Rot(aB);
- var mass = this.m_bodyB.GetMass();
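- // Soft-constraint setup: stiffness k = m * omega^2 and damping
- // c = 2 * m * zeta * omega give gamma = 1 / (h * (c + h * k)) and
- // beta = h * k * gamma, which feeds the target error back each step.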
- var omega = 2 * b2_pi * this.m_frequencyHz;
- var d = 2 * mass * this.m_dampingRatio * omega;
- var k = mass * (omega * omega);
- var h = data.step.dt;
- this.m_gamma = h * (d + h * k);
- if (this.m_gamma != 0) {
- this.m_gamma = 1 / this.m_gamma
- }
- this.m_beta = h * k * this.m_gamma;
- this.m_rB.Assign(b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB)));
- var K = new b2Mat22();
- K.ex.x = this.m_invMassB + this.m_invIB * this.m_rB.y * this.m_rB.y + this.m_gamma;
- K.ex.y = -this.m_invIB * this.m_rB.x * this.m_rB.y;
- K.ey.x = K.ex.y;
- K.ey.y = this.m_invMassB + this.m_invIB * this.m_rB.x * this.m_rB.x + this.m_gamma;
- this.m_mass.Assign(K.GetInverse());
- this.m_C.Assign(b2Vec2.Subtract(b2Vec2.Add(cB, this.m_rB), this.m_targetA));
- this.m_C.Multiply(this.m_beta);
- wB *= 0.98;
- if (data.step.warmStarting) {
- this.m_impulse.Multiply(data.step.dtRatio);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, this.m_impulse));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, this.m_impulse)
- } else {
- this.m_impulse.SetZero()
- }
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var Cdot = b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB));
- var impulse = b2Mul_m22_v2(this.m_mass, (b2Vec2.Add(b2Vec2.Add(Cdot, this.m_C), b2Vec2.Multiply(this.m_gamma, this.m_impulse))).Negate());
- var oldImpulse = this.m_impulse.Clone();
- this.m_impulse.Add(impulse);
- var maxImpulse = data.step.dt * this.m_maxForce;
- if (this.m_impulse.LengthSquared() > maxImpulse * maxImpulse) {
- this.m_impulse.Multiply(maxImpulse / this.m_impulse.Length())
- }
- impulse.Assign(b2Vec2.Subtract(this.m_impulse, oldImpulse));
- vB.Add(b2Vec2.Multiply(this.m_invMassB, impulse));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, impulse);
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- return true
- }
-};
-b2MouseJoint._extend(b2Joint);
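-
-// Illustrative sketch: dragging a body with a mouse joint. bodyA/bodyB come
-// from the base b2JointDef; target/maxForce are the b2MouseJointDef fields
-// above. world.CreateJoint is assumed to behave as in standard Box2D, and
-// 1000 * mass is a common (not mandatory) choice for maxForce. Call
-// SetTarget each frame with the cursor position in world coordinates.
-function exampleMouseDrag(world, groundBody, pickedBody, worldPoint) {
- var md = new b2MouseJointDef();
- md.bodyA = groundBody;
- md.bodyB = pickedBody;
- md.target.Assign(worldPoint);
- md.maxForce = 1000 * pickedBody.GetMass();
- return world.CreateJoint(md)
-}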
-
-function b2DistanceJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_distanceJoint;
- this.localAnchorA = new b2Vec2(0, 0);
- this.localAnchorB = new b2Vec2(0, 0);
- this.length = 1;
- this.frequencyHz = 0;
- this.dampingRatio = 0;
- Object.seal(this)
-}
-b2DistanceJointDef.prototype = {
- Initialize: function (b1, b2, anchor1, anchor2) {
- this.bodyA = b1;
- this.bodyB = b2;
- this.localAnchorA = this.bodyA.GetLocalPoint(anchor1);
- this.localAnchorB = this.bodyB.GetLocalPoint(anchor2);
- var d = b2Vec2.Subtract(anchor2, anchor1);
- this.length = d.Length()
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.length = data.length;
- this.frequencyHz = data.frequencyHz;
- this.dampingRatio = data.dampingRatio
- }
-};
-b2DistanceJointDef._extend(b2JointDef);
-
-function b2DistanceJoint(def) {
- this.parent.call(this, def);
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_length = def.length;
- this.m_frequencyHz = def.frequencyHz;
- this.m_dampingRatio = def.dampingRatio;
- this.m_impulse = 0;
- this.m_gamma = 0;
- this.m_bias = 0;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_u = new b2Vec2();
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_mass = 0
-}
-b2DistanceJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- var F = b2Vec2.Multiply((inv_dt * this.m_impulse), this.m_u);
- return F
- },
- GetReactionTorque: function (inv_dt) {
- return 0
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- SetLength: function (length) {
- this.m_length = length
- },
- GetLength: function () {
- return this.m_length
- },
- SetFrequency: function (hz) {
- this.m_frequencyHz = hz
- },
- GetFrequency: function () {
- return this.m_frequencyHz
- },
- SetDampingRatio: function (ratio) {
- this.m_dampingRatio = ratio
- },
- GetDampingRatio: function () {
- return this.m_dampingRatio
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- this.m_rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- this.m_u = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, this.m_rB), cA), this.m_rA);
- var length = this.m_u.Length();
- if (length > b2_linearSlop) {
- this.m_u.Multiply(1 / length)
- } else {
- this.m_u.Set(0, 0)
- }
- var crAu = b2Cross_v2_v2(this.m_rA, this.m_u);
- var crBu = b2Cross_v2_v2(this.m_rB, this.m_u);
- var invMass = this.m_invMassA + this.m_invIA * crAu * crAu + this.m_invMassB + this.m_invIB * crBu * crBu;
- this.m_mass = invMass != 0 ? 1 / invMass : 0;
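- // With frequencyHz > 0 the rod becomes a damped spring: gamma and bias
- // below implement the same soft constraint as the mouse joint, with
- // bias = C * h * k * gamma feeding back the current length error C.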
- if (this.m_frequencyHz > 0) {
- var C = length - this.m_length;
- var omega = 2 * b2_pi * this.m_frequencyHz;
- var d = 2 * this.m_mass * this.m_dampingRatio * omega;
- var k = this.m_mass * omega * omega;
- var h = data.step.dt;
- this.m_gamma = h * (d + h * k);
- this.m_gamma = this.m_gamma != 0 ? 1 / this.m_gamma : 0;
- this.m_bias = C * h * k * this.m_gamma;
- invMass += this.m_gamma;
- this.m_mass = invMass != 0 ? 1 / invMass : 0
- } else {
- this.m_gamma = 0;
- this.m_bias = 0
- }
- if (data.step.warmStarting) {
- this.m_impulse *= data.step.dtRatio;
- var P = b2Vec2.Multiply(this.m_impulse, this.m_u);
- vA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- wA -= this.m_invIA * b2Cross_v2_v2(this.m_rA, P);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, P)
- } else {
- this.m_impulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var vpA = b2Vec2.Add(vA, b2Cross_f_v2(wA, this.m_rA));
- var vpB = b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB));
- var Cdot = b2Dot_v2_v2(this.m_u, b2Vec2.Subtract(vpB, vpA));
- var impulse = -this.m_mass * (Cdot + this.m_bias + this.m_gamma * this.m_impulse);
- this.m_impulse += impulse;
- var P = b2Vec2.Multiply(impulse, this.m_u);
- vA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- wA -= this.m_invIA * b2Cross_v2_v2(this.m_rA, P);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, P);
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- if (this.m_frequencyHz > 0) {
- return true
- }
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var u = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- var length = u.Normalize();
- var C = length - this.m_length;
- C = b2Clamp(C, -b2_maxLinearCorrection, b2_maxLinearCorrection);
- var impulse = -this.m_mass * C;
- var P = b2Vec2.Multiply(impulse, u);
- cA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- aA -= this.m_invIA * b2Cross_v2_v2(rA, P);
- cB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- aB += this.m_invIB * b2Cross_v2_v2(rB, P);
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return b2Abs(C) < b2_linearSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.length = this.m_length;
- obj.frequencyHz = this.m_frequencyHz;
- obj.dampingRatio = this.m_dampingRatio;
- return obj
- }
-};
-b2DistanceJoint._extend(b2Joint);
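-
-// Illustrative sketch: a rod between two body centers, softened into a
-// damped spring by setting frequencyHz > 0 (see InitVelocityConstraints
-// above). Assumes world.CreateJoint and b2Body.GetWorldCenter as in
-// standard Box2D; the helper function is hypothetical.
-function exampleDistanceJoint(world, bodyA, bodyB) {
- var dd = new b2DistanceJointDef();
- dd.Initialize(bodyA, bodyB, bodyA.GetWorldCenter(), bodyB.GetWorldCenter());
- dd.frequencyHz = 4;
- dd.dampingRatio = 0.5;
- return world.CreateJoint(dd)
-}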
-
-function b2PrismaticJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_prismaticJoint;
- this.localAnchorA = new b2Vec2();
- this.localAnchorB = new b2Vec2();
- this.localAxisA = new b2Vec2(1, 0);
- this.referenceAngle = 0;
- this.enableLimit = false;
- this.lowerTranslation = 0;
- this.upperTranslation = 0;
- this.enableMotor = false;
- this.maxMotorForce = 0;
- this.motorSpeed = 0;
- Object.seal(this)
-}
-b2PrismaticJointDef.prototype = {
- Initialize: function (bA, bB, anchor, axis) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.localAnchorA = this.bodyA.GetLocalPoint(anchor);
- this.localAnchorB = this.bodyB.GetLocalPoint(anchor);
- this.localAxisA = this.bodyA.GetLocalVector(axis);
- this.referenceAngle = this.bodyB.GetAngle() - this.bodyA.GetAngle()
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.localAxisA._deserialize(data.localAxisA);
- this.referenceAngle = data.referenceAngle;
- this.enableLimit = data.enableLimit;
- this.lowerTranslation = data.lowerTranslation;
- this.upperTranslation = data.upperTranslation;
- this.enableMotor = data.enableMotor;
- this.maxMotorForce = data.maxMotorForce;
- this.motorSpeed = data.motorSpeed
- }
-};
-b2PrismaticJointDef._extend(b2JointDef);
-
-function b2PrismaticJoint(def) {
- this.parent.call(this, def);
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_localXAxisA = def.localAxisA.Clone();
- this.m_localXAxisA.Normalize();
- this.m_localYAxisA = b2Cross_f_v2(1, this.m_localXAxisA);
- this.m_referenceAngle = def.referenceAngle;
- this.m_impulse = new b2Vec3();
- this.m_motorMass = 0;
- this.m_motorImpulse = 0;
- this.m_lowerTranslation = def.lowerTranslation;
- this.m_upperTranslation = def.upperTranslation;
- this.m_maxMotorForce = def.maxMotorForce;
- this.m_motorSpeed = def.motorSpeed;
- this.m_enableLimit = def.enableLimit;
- this.m_enableMotor = def.enableMotor;
- this.m_limitState = b2Joint.e_inactiveLimit;
- this.m_axis = new b2Vec2();
- this.m_perp = new b2Vec2();
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_s1 = 0;
- this.m_s2 = 0;
- this.m_a1 = 0;
- this.m_a2 = 0;
- this.m_K = new b2Mat33();
- this.m_motorMass = 0
-}
-b2PrismaticJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- return b2Vec2.Multiply(inv_dt, b2Vec2.Add(b2Vec2.Multiply(this.m_impulse.x, this.m_perp), b2Vec2.Multiply((this.m_motorImpulse + this.m_impulse.z), this.m_axis)))
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_impulse.y
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- GetLocalAxisA: function () {
- return this.m_localXAxisA
- },
- GetReferenceAngle: function () {
- return this.m_referenceAngle
- },
- GetJointTranslation: function () {
- var pA = this.m_bodyA.GetWorldPoint(this.m_localAnchorA);
- var pB = this.m_bodyB.GetWorldPoint(this.m_localAnchorB);
- var d = b2Vec2.Subtract(pB, pA);
- var axis = this.m_bodyA.GetWorldVector(this.m_localXAxisA);
- var translation = b2Dot_v2_v2(d, axis);
- return translation
- },
- GetJointSpeed: function () {
- var bA = this.m_bodyA;
- var bB = this.m_bodyB;
- var rA = b2Mul_r_v2(bA.m_xf.q, b2Vec2.Subtract(this.m_localAnchorA, bA.m_sweep.localCenter));
- var rB = b2Mul_r_v2(bB.m_xf.q, b2Vec2.Subtract(this.m_localAnchorB, bB.m_sweep.localCenter));
- var p1 = b2Vec2.Add(bA.m_sweep.c, rA);
- var p2 = b2Vec2.Add(bB.m_sweep.c, rB);
- var d = b2Vec2.Subtract(p2, p1);
- var axis = b2Mul_r_v2(bA.m_xf.q, this.m_localXAxisA);
- var vA = bA.m_linearVelocity;
- var vB = bB.m_linearVelocity;
- var wA = bA.m_angularVelocity;
- var wB = bB.m_angularVelocity;
- var speed = b2Dot_v2_v2(d, b2Cross_f_v2(wA, axis)) + b2Dot_v2_v2(axis, b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, rB)), vA), b2Cross_f_v2(wA, rA)));
- return speed
- },
- IsLimitEnabled: function () {
- return this.m_enableLimit
- },
- EnableLimit: function (flag) {
- if (flag != this.m_enableLimit) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_enableLimit = flag;
- this.m_impulse.z = 0
- }
- },
- GetLowerLimit: function () {
- return this.m_lowerTranslation
- },
- GetUpperLimit: function () {
- return this.m_upperTranslation
- },
- SetLimits: function (lower, upper) {
- if (lower != this.m_lowerTranslation || upper != this.m_upperTranslation) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_lowerTranslation = lower;
- this.m_upperTranslation = upper;
- this.m_impulse.z = 0
- }
- },
- IsMotorEnabled: function () {
- return this.m_enableMotor
- },
- EnableMotor: function (flag) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_enableMotor = flag
- },
- SetMotorSpeed: function (speed) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_motorSpeed = speed
- },
- GetMotorSpeed: function () {
- return this.m_motorSpeed
- },
- SetMaxMotorForce: function (force) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_maxMotorForce = force
- },
- GetMaxMotorForce: function () {
- return this.m_maxMotorForce
- },
- GetMotorForce: function (inv_dt) {
- return inv_dt * this.m_motorImpulse
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA = this.m_bodyA.m_sweep.localCenter;
- this.m_localCenterB = this.m_bodyB.m_sweep.localCenter;
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var d = b2Vec2.Add(b2Vec2.Subtract(cB, cA), b2Vec2.Subtract(rB, rA));
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- this.m_axis = b2Mul_r_v2(qA, this.m_localXAxisA);
- this.m_a1 = b2Cross_v2_v2(b2Vec2.Add(d, rA), this.m_axis);
- this.m_a2 = b2Cross_v2_v2(rB, this.m_axis);
- this.m_motorMass = mA + mB + iA * this.m_a1 * this.m_a1 + iB * this.m_a2 * this.m_a2;
- if (this.m_motorMass > 0) {
- this.m_motorMass = 1 / this.m_motorMass
- }
- this.m_perp = b2Mul_r_v2(qA, this.m_localYAxisA);
- this.m_s1 = b2Cross_v2_v2(b2Vec2.Add(d, rA), this.m_perp);
- this.m_s2 = b2Cross_v2_v2(rB, this.m_perp);
- var k11 = mA + mB + iA * this.m_s1 * this.m_s1 + iB * this.m_s2 * this.m_s2;
- var k12 = iA * this.m_s1 + iB * this.m_s2;
- var k13 = iA * this.m_s1 * this.m_a1 + iB * this.m_s2 * this.m_a2;
- var k22 = iA + iB;
- if (k22 == 0) {
- k22 = 1
- }
- var k23 = iA * this.m_a1 + iB * this.m_a2;
- var k33 = mA + mB + iA * this.m_a1 * this.m_a1 + iB * this.m_a2 * this.m_a2;
- this.m_K.ex.Set(k11, k12, k13);
- this.m_K.ey.Set(k12, k22, k23);
- this.m_K.ez.Set(k13, k23, k33);
- if (this.m_enableLimit) {
- var jointTranslation = b2Dot_v2_v2(this.m_axis, d);
- if (b2Abs(this.m_upperTranslation - this.m_lowerTranslation) < 2 * b2_linearSlop) {
- this.m_limitState = b2Joint.e_equalLimits
- } else {
- if (jointTranslation <= this.m_lowerTranslation) {
- if (this.m_limitState != b2Joint.e_atLowerLimit) {
- this.m_limitState = b2Joint.e_atLowerLimit;
- this.m_impulse.z = 0
- }
- } else {
- if (jointTranslation >= this.m_upperTranslation) {
- if (this.m_limitState != b2Joint.e_atUpperLimit) {
- this.m_limitState = b2Joint.e_atUpperLimit;
- this.m_impulse.z = 0
- }
- } else {
- this.m_limitState = b2Joint.e_inactiveLimit;
- this.m_impulse.z = 0
- }
- }
- }
- } else {
- this.m_limitState = b2Joint.e_inactiveLimit;
- this.m_impulse.z = 0
- }
- if (this.m_enableMotor == false) {
- this.m_motorImpulse = 0
- }
- if (data.step.warmStarting) {
- this.m_impulse.Multiply(data.step.dtRatio);
- this.m_motorImpulse *= data.step.dtRatio;
- var P = b2Vec2.Add(b2Vec2.Multiply(this.m_impulse.x, this.m_perp), b2Vec2.Multiply((this.m_motorImpulse + this.m_impulse.z), this.m_axis));
- var LA = this.m_impulse.x * this.m_s1 + this.m_impulse.y + (this.m_motorImpulse + this.m_impulse.z) * this.m_a1;
- var LB = this.m_impulse.x * this.m_s2 + this.m_impulse.y + (this.m_motorImpulse + this.m_impulse.z) * this.m_a2;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB
- } else {
- this.m_impulse.SetZero();
- this.m_motorImpulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- if (this.m_enableMotor && this.m_limitState != b2Joint.e_equalLimits) {
- var Cdot = b2Dot_v2_v2(this.m_axis, b2Vec2.Subtract(vB, vA)) + this.m_a2 * wB - this.m_a1 * wA;
- var impulse = this.m_motorMass * (this.m_motorSpeed - Cdot);
- var oldImpulse = this.m_motorImpulse;
- var maxImpulse = data.step.dt * this.m_maxMotorForce;
- this.m_motorImpulse = b2Clamp(this.m_motorImpulse + impulse, -maxImpulse, maxImpulse);
- impulse = this.m_motorImpulse - oldImpulse;
- var P = b2Vec2.Multiply(impulse, this.m_axis);
- var LA = impulse * this.m_a1;
- var LB = impulse * this.m_a2;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB
- }
- var Cdot1 = new b2Vec2();
- Cdot1.x = b2Dot_v2_v2(this.m_perp, b2Vec2.Subtract(vB, vA)) + this.m_s2 * wB - this.m_s1 * wA;
- Cdot1.y = wB - wA;
- if (this.m_enableLimit && this.m_limitState != b2Joint.e_inactiveLimit) {
- var Cdot2 = b2Dot_v2_v2(this.m_axis, b2Vec2.Subtract(vB, vA)) + this.m_a2 * wB - this.m_a1 * wA;
- var Cdot = new b2Vec3(Cdot1.x, Cdot1.y, Cdot2);
- var f1 = this.m_impulse.Clone();
- var df = this.m_K.Solve33(Cdot.Negate());
- this.m_impulse.Add(df);
- if (this.m_limitState == b2Joint.e_atLowerLimit) {
- this.m_impulse.z = b2Max(this.m_impulse.z, 0)
- } else {
- if (this.m_limitState == b2Joint.e_atUpperLimit) {
- this.m_impulse.z = b2Min(this.m_impulse.z, 0)
- }
- }
- var b = b2Vec2.Subtract(Cdot1.Negate(), b2Vec2.Multiply((this.m_impulse.z - f1.z), new b2Vec2(this.m_K.ez.x, this.m_K.ez.y)));
- var f2r = b2Vec2.Add(this.m_K.Solve22(b), new b2Vec2(f1.x, f1.y));
- this.m_impulse.x = f2r.x;
- this.m_impulse.y = f2r.y;
- df = b2Vec3.Subtract(this.m_impulse, f1);
- var P = b2Vec2.Add(b2Vec2.Multiply(df.x, this.m_perp), b2Vec2.Multiply(df.z, this.m_axis));
- var LA = df.x * this.m_s1 + df.y + df.z * this.m_a1;
- var LB = df.x * this.m_s2 + df.y + df.z * this.m_a2;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB
- } else {
- var df = this.m_K.Solve22(Cdot1.Negate());
- this.m_impulse.x += df.x;
- this.m_impulse.y += df.y;
- var P = b2Vec2.Multiply(df.x, this.m_perp);
- var LA = df.x * this.m_s1 + df.y;
- var LB = df.x * this.m_s2 + df.y;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var d = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- var axis = b2Mul_r_v2(qA, this.m_localXAxisA);
- var a1 = b2Cross_v2_v2(b2Vec2.Add(d, rA), axis);
- var a2 = b2Cross_v2_v2(rB, axis);
- var perp = b2Mul_r_v2(qA, this.m_localYAxisA);
- var s1 = b2Cross_v2_v2(b2Vec2.Add(d, rA), perp);
- var s2 = b2Cross_v2_v2(rB, perp);
- var impulse = new b2Vec3();
- var C1 = new b2Vec2();
- C1.x = b2Dot_v2_v2(perp, d);
- C1.y = aB - aA - this.m_referenceAngle;
- var linearError = b2Abs(C1.x);
- var angularError = b2Abs(C1.y);
- var active = false;
- var C2 = 0;
- if (this.m_enableLimit) {
- var translation = b2Dot_v2_v2(axis, d);
- if (b2Abs(this.m_upperTranslation - this.m_lowerTranslation) < 2 * b2_linearSlop) {
- C2 = b2Clamp(translation, -b2_maxLinearCorrection, b2_maxLinearCorrection);
- linearError = b2Max(linearError, b2Abs(translation));
- active = true
- } else {
- if (translation <= this.m_lowerTranslation) {
- C2 = b2Clamp(translation - this.m_lowerTranslation + b2_linearSlop, -b2_maxLinearCorrection, 0);
- linearError = b2Max(linearError, this.m_lowerTranslation - translation);
- active = true
- } else {
- if (translation >= this.m_upperTranslation) {
- C2 = b2Clamp(translation - this.m_upperTranslation - b2_linearSlop, 0, b2_maxLinearCorrection);
- linearError = b2Max(linearError, translation - this.m_upperTranslation);
- active = true
- }
- }
- }
- }
- if (active) {
- var k11 = mA + mB + iA * s1 * s1 + iB * s2 * s2;
- var k12 = iA * s1 + iB * s2;
- var k13 = iA * s1 * a1 + iB * s2 * a2;
- var k22 = iA + iB;
- if (k22 == 0) {
- k22 = 1
- }
- var k23 = iA * a1 + iB * a2;
- var k33 = mA + mB + iA * a1 * a1 + iB * a2 * a2;
- var K = new b2Mat33();
- K.ex.Set(k11, k12, k13);
- K.ey.Set(k12, k22, k23);
- K.ez.Set(k13, k23, k33);
- var C = new b2Vec3();
- C.x = C1.x;
- C.y = C1.y;
- C.z = C2;
- impulse = K.Solve33(C.Negate())
- } else {
- var k11 = mA + mB + iA * s1 * s1 + iB * s2 * s2;
- var k12 = iA * s1 + iB * s2;
- var k22 = iA + iB;
- if (k22 == 0) {
- k22 = 1
- }
- var K = new b2Mat22();
- K.ex.Set(k11, k12);
- K.ey.Set(k12, k22);
- var impulse1 = K.Solve(C1.Negate());
- impulse.x = impulse1.x;
- impulse.y = impulse1.y;
- impulse.z = 0
- }
- var P = b2Vec2.Add(b2Vec2.Multiply(impulse.x, perp), b2Vec2.Multiply(impulse.z, axis));
- var LA = impulse.x * s1 + impulse.y + impulse.z * a1;
- var LB = impulse.x * s2 + impulse.y + impulse.z * a2;
- cA.Subtract(b2Vec2.Multiply(mA, P));
- aA -= iA * LA;
- cB.Add(b2Vec2.Multiply(mB, P));
- aB += iB * LB;
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return linearError <= b2_linearSlop && angularError <= b2_angularSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.localAxisA = this.m_localXAxisA._serialize();
- obj.referenceAngle = this.m_referenceAngle;
- obj.enableLimit = this.m_enableLimit;
- obj.lowerTranslation = this.m_lowerTranslation;
- obj.upperTranslation = this.m_upperTranslation;
- obj.enableMotor = this.m_enableMotor;
- obj.maxMotorForce = this.m_maxMotorForce;
- obj.motorSpeed = this.m_motorSpeed;
- return obj
- }
-};
-b2PrismaticJoint._extend(b2Joint);
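-
-// Illustrative sketch: a horizontal slider with translation limits and a
-// motor, using the def's Initialize(bA, bB, anchor, axis) defined above.
-// Assumes world.CreateJoint and b2Body.GetWorldCenter as in standard Box2D;
-// the helper function is hypothetical.
-function examplePrismaticJoint(world, bodyA, bodyB) {
- var pd = new b2PrismaticJointDef();
- pd.Initialize(bodyA, bodyB, bodyB.GetWorldCenter(), new b2Vec2(1, 0));
- pd.enableLimit = true;
- pd.lowerTranslation = -2;
- pd.upperTranslation = 2;
- pd.enableMotor = true;
- pd.motorSpeed = 1;
- pd.maxMotorForce = 100;
- return world.CreateJoint(pd)
-}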
-
-function b2FrictionJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_frictionJoint;
- this.localAnchorA = new b2Vec2();
- this.localAnchorB = new b2Vec2();
- this.maxForce = 0;
- this.maxTorque = 0;
- Object.seal(this)
-}
-b2FrictionJointDef.prototype = {
- Initialize: function (bA, bB, anchor) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.localAnchorA.Assign(this.bodyA.GetLocalPoint(anchor));
- this.localAnchorB.Assign(this.bodyB.GetLocalPoint(anchor))
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.maxForce = data.maxForce;
- this.maxTorque = data.maxTorque
- }
-};
-b2FrictionJointDef._extend(b2JointDef);
-
-function b2FrictionJoint(def) {
- this.parent.call(this, def);
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_linearImpulse = new b2Vec2();
- this.m_angularImpulse = 0;
- this.m_maxForce = def.maxForce;
- this.m_maxTorque = def.maxTorque;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_linearMass = new b2Mat22();
- this.m_angularMass = 0
-}
-b2FrictionJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- return b2Vec2.Multiply(inv_dt, this.m_linearImpulse)
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_angularImpulse
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- SetMaxForce: function (force) {
- this.m_maxForce = force
- },
- GetMaxForce: function () {
- return this.m_maxForce
- },
- SetMaxTorque: function (torque) {
- this.m_maxTorque = torque
- },
- GetMaxTorque: function () {
- return this.m_maxTorque
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- this.m_rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var K = new b2Mat22();
- K.ex.x = mA + mB + iA * this.m_rA.y * this.m_rA.y + iB * this.m_rB.y * this.m_rB.y;
- K.ex.y = -iA * this.m_rA.x * this.m_rA.y - iB * this.m_rB.x * this.m_rB.y;
- K.ey.x = K.ex.y;
- K.ey.y = mA + mB + iA * this.m_rA.x * this.m_rA.x + iB * this.m_rB.x * this.m_rB.x;
- this.m_linearMass = K.GetInverse();
- this.m_angularMass = iA + iB;
- if (this.m_angularMass > 0) {
- this.m_angularMass = 1 / this.m_angularMass
- }
- if (data.step.warmStarting) {
- this.m_linearImpulse.Multiply(data.step.dtRatio);
- this.m_angularImpulse *= data.step.dtRatio;
- var P = new b2Vec2(this.m_linearImpulse.x, this.m_linearImpulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + this.m_angularImpulse);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + this.m_angularImpulse)
- } else {
- this.m_linearImpulse.SetZero();
- this.m_angularImpulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var h = data.step.dt;
- var Cdot = wB - wA;
- var impulse = -this.m_angularMass * Cdot;
- var oldImpulse = this.m_angularImpulse;
- var maxImpulse = h * this.m_maxTorque;
- this.m_angularImpulse = b2Clamp(this.m_angularImpulse + impulse, -maxImpulse, maxImpulse);
- impulse = this.m_angularImpulse - oldImpulse;
- wA -= iA * impulse;
- wB += iB * impulse;
- // Relative linear velocity at the anchor: vB + wB x rB - vA - wA x rA
- var Cdot = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA));
- var impulse = b2Mul_m22_v2(this.m_linearMass, Cdot).Negate();
- var oldImpulse = this.m_linearImpulse.Clone();
- this.m_linearImpulse.Add(impulse);
- var maxImpulse = h * this.m_maxForce;
- if (this.m_linearImpulse.LengthSquared() > maxImpulse * maxImpulse) {
- this.m_linearImpulse.Normalize();
- this.m_linearImpulse.Multiply(maxImpulse)
- }
- impulse = b2Vec2.Subtract(this.m_linearImpulse, oldImpulse);
- vA.Subtract(b2Vec2.Multiply(mA, impulse));
- wA -= iA * b2Cross_v2_v2(this.m_rA, impulse);
- vB.Add(b2Vec2.Multiply(mB, impulse));
- wB += iB * b2Cross_v2_v2(this.m_rB, impulse);
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- return true
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.maxForce = this.m_maxForce;
- obj.maxTorque = this.m_maxTorque;
- return obj
- }
-};
-b2FrictionJoint._extend(b2Joint);
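-
-// Illustrative sketch: top-down "ground friction" against a static body.
-// maxForce and maxTorque bound the friction impulses applied per step (see
-// SolveVelocityConstraints above). Assumes world.CreateJoint and
-// b2Body.GetWorldCenter as in standard Box2D; the helper is hypothetical.
-function exampleTopDownFriction(world, groundBody, body) {
- var fd = new b2FrictionJointDef();
- fd.Initialize(groundBody, body, body.GetWorldCenter());
- fd.maxForce = 5;
- fd.maxTorque = 2;
- return world.CreateJoint(fd)
-}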
-
-function b2WeldJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_weldJoint;
- this.localAnchorA = new b2Vec2(0, 0);
- this.localAnchorB = new b2Vec2(0, 0);
- this.referenceAngle = 0;
- this.frequencyHz = 0;
- this.dampingRatio = 0;
- Object.seal(this)
-}
-b2WeldJointDef.prototype = {
- Initialize: function (bA, bB, anchor) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.localAnchorA.Assign(this.bodyA.GetLocalPoint(anchor));
- this.localAnchorB.Assign(this.bodyB.GetLocalPoint(anchor));
- this.referenceAngle = this.bodyB.GetAngle() - this.bodyA.GetAngle()
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.referenceAngle = data.referenceAngle;
- this.frequencyHz = data.frequencyHz;
- this.dampingRatio = data.dampingRatio
- }
-};
-b2WeldJointDef._extend(b2JointDef);
-
-function b2WeldJoint(def) {
- this.parent.call(this, def);
- this.m_bias = 0;
- this.m_gamma = 0;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_mass = new b2Mat33();
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_referenceAngle = def.referenceAngle;
- this.m_frequencyHz = def.frequencyHz;
- this.m_dampingRatio = def.dampingRatio;
- this.m_impulse = new b2Vec3()
-}
-b2WeldJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- var P = new b2Vec2(this.m_impulse.x, this.m_impulse.y);
- return b2Vec2.Multiply(inv_dt, P)
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_impulse.z
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- GetReferenceAngle: function () {
- return this.m_referenceAngle
- },
- SetFrequency: function (hz) {
- this.m_frequencyHz = hz
- },
- GetFrequency: function () {
- return this.m_frequencyHz
- },
- SetDampingRatio: function (ratio) {
- this.m_dampingRatio = ratio
- },
- GetDampingRatio: function () {
- return this.m_dampingRatio
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA.Assign(b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA)));
- this.m_rB.Assign(b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB)));
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
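- // K is the 3x3 effective-mass matrix of the weld constraint: the upper-left
- // 2x2 block is the point-to-point (linear) part, row/column z the angular part.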
- var K = new b2Mat33();
- K.ex.x = mA + mB + this.m_rA.y * this.m_rA.y * iA + this.m_rB.y * this.m_rB.y * iB;
- K.ey.x = -this.m_rA.y * this.m_rA.x * iA - this.m_rB.y * this.m_rB.x * iB;
- K.ez.x = -this.m_rA.y * iA - this.m_rB.y * iB;
- K.ex.y = K.ey.x;
- K.ey.y = mA + mB + this.m_rA.x * this.m_rA.x * iA + this.m_rB.x * this.m_rB.x * iB;
- K.ez.y = this.m_rA.x * iA + this.m_rB.x * iB;
- K.ex.z = K.ez.x;
- K.ey.z = K.ez.y;
- K.ez.z = iA + iB;
- if (this.m_frequencyHz > 0) {
- K.GetInverse22(this.m_mass);
- var invM = iA + iB;
- var m = invM > 0 ? 1 / invM : 0;
- var C = aB - aA - this.m_referenceAngle;
- var omega = 2 * b2_pi * this.m_frequencyHz;
- var d = 2 * m * this.m_dampingRatio * omega;
- var k = m * omega * omega;
- var h = data.step.dt;
- this.m_gamma = h * (d + h * k);
- this.m_gamma = this.m_gamma != 0 ? 1 / this.m_gamma : 0;
- this.m_bias = C * h * k * this.m_gamma;
- invM += this.m_gamma;
- this.m_mass.ez.z = invM != 0 ? 1 / invM : 0
- } else {
- K.GetSymInverse33(this.m_mass);
- this.m_gamma = 0;
- this.m_bias = 0
- }
- if (data.step.warmStarting) {
- this.m_impulse.Multiply(data.step.dtRatio);
- var P = new b2Vec2(this.m_impulse.x, this.m_impulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + this.m_impulse.z);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + this.m_impulse.z)
- } else {
- this.m_impulse.SetZero()
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- if (this.m_frequencyHz > 0) {
- var Cdot2 = wB - wA;
- var impulse2 = -this.m_mass.ez.z * (Cdot2 + this.m_bias + this.m_gamma * this.m_impulse.z);
- this.m_impulse.z += impulse2;
- wA -= iA * impulse2;
- wB += iB * impulse2;
- var Cdot1 = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA));
- var impulse1 = b2Mul22_m33_v2(this.m_mass, Cdot1).Negate();
- this.m_impulse.x += impulse1.x;
- this.m_impulse.y += impulse1.y;
- var P = impulse1.Clone();
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * b2Cross_v2_v2(this.m_rA, P);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * b2Cross_v2_v2(this.m_rB, P)
- } else {
- var Cdot1 = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA));
- var Cdot2 = wB - wA;
- var Cdot = new b2Vec3(Cdot1.x, Cdot1.y, Cdot2);
- var impulse = b2Mul_m33_v3(this.m_mass, Cdot).Negate();
- this.m_impulse.Add(impulse);
- var P = new b2Vec2(impulse.x, impulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + impulse.z);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + impulse.z)
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var positionError, angularError;
- var K = new b2Mat33();
- K.ex.x = mA + mB + rA.y * rA.y * iA + rB.y * rB.y * iB;
- K.ey.x = -rA.y * rA.x * iA - rB.y * rB.x * iB;
- K.ez.x = -rA.y * iA - rB.y * iB;
- K.ex.y = K.ey.x;
- K.ey.y = mA + mB + rA.x * rA.x * iA + rB.x * rB.x * iB;
- K.ez.y = rA.x * iA + rB.x * iB;
- K.ex.z = K.ez.x;
- K.ey.z = K.ez.y;
- K.ez.z = iA + iB;
- if (this.m_frequencyHz > 0) {
- var C1 = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- positionError = C1.Length();
- angularError = 0;
- var P = K.Solve22(C1).Negate();
- cA.Subtract(b2Vec2.Multiply(mA, P));
- aA -= iA * b2Cross_v2_v2(rA, P);
- cB.Add(b2Vec2.Multiply(mB, P));
- aB += iB * b2Cross_v2_v2(rB, P)
- } else {
- var C1 = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- var C2 = aB - aA - this.m_referenceAngle;
- positionError = C1.Length();
- angularError = b2Abs(C2);
- var C = new b2Vec3(C1.x, C1.y, C2);
- var impulse = K.Solve33(C).Negate();
- var P = new b2Vec2(impulse.x, impulse.y);
- cA.Subtract(b2Vec2.Multiply(mA, P));
- aA -= iA * (b2Cross_v2_v2(rA, P) + impulse.z);
- cB.Add(b2Vec2.Multiply(mB, P));
- aB += iB * (b2Cross_v2_v2(rB, P) + impulse.z)
- }
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return positionError <= b2_linearSlop && angularError <= b2_angularSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.referenceAngle = this.m_referenceAngle;
- obj.frequencyHz = this.m_frequencyHz;
- obj.dampingRatio = this.m_dampingRatio;
- return obj
- }
-};
-b2WeldJoint._extend(b2Joint);
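-
-// Illustrative sketch: rigidly gluing two bodies at a world-space anchor.
-// frequencyHz = 0 keeps the weld rigid, while a positive value softens it
-// into a damped spring (see InitVelocityConstraints above). Assumes
-// world.CreateJoint as in standard Box2D; the helper is hypothetical.
-function exampleWeldJoint(world, bodyA, bodyB, worldAnchor) {
- var wd = new b2WeldJointDef();
- wd.Initialize(bodyA, bodyB, worldAnchor);
- wd.frequencyHz = 0;
- return world.CreateJoint(wd)
-}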
-
-function b2WheelJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_wheelJoint;
- this.localAnchorA = new b2Vec2();
- this.localAnchorB = new b2Vec2();
- this.localAxisA = new b2Vec2(1, 0);
- this.enableMotor = false;
- this.maxMotorTorque = 0;
- this.motorSpeed = 0;
- this.frequencyHz = 2;
- this.dampingRatio = 0.7;
- Object.seal(this)
-}
-b2WheelJointDef.prototype = {
- Initialize: function (bA, bB, anchor, axis) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.localAnchorA.Assign(this.bodyA.GetLocalPoint(anchor));
- this.localAnchorB.Assign(this.bodyB.GetLocalPoint(anchor));
- this.localAxisA.Assign(this.bodyA.GetLocalVector(axis))
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.localAxisA._deserialize(data.localAxisA);
- this.enableMotor = data.enableMotor;
- this.maxMotorTorque = data.maxMotorTorque;
- this.motorSpeed = data.motorSpeed;
- this.frequencyHz = data.frequencyHz;
- this.dampingRatio = data.dampingRatio
- }
-};
-b2WheelJointDef._extend(b2JointDef);
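-
-// Illustrative sketch: a driven wheel on a vertical suspension axis, keeping
-// the def defaults (frequencyHz = 2, dampingRatio = 0.7) for the spring.
-// Assumes world.CreateJoint and b2Body.GetWorldCenter as in standard Box2D;
-// the helper function is hypothetical.
-function exampleWheelJoint(world, chassis, wheel) {
- var wjd = new b2WheelJointDef();
- wjd.Initialize(chassis, wheel, wheel.GetWorldCenter(), new b2Vec2(0, 1));
- wjd.enableMotor = true;
- wjd.motorSpeed = -10;
- wjd.maxMotorTorque = 20;
- return world.CreateJoint(wjd)
-}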
-
-function b2WheelJoint(def) {
- this.parent.call(this, def);
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_localXAxisA = def.localAxisA.Clone();
- this.m_localYAxisA = b2Cross_f_v2(1, this.m_localXAxisA);
- this.m_mass = 0;
- this.m_impulse = 0;
- this.m_motorMass = 0;
- this.m_motorImpulse = 0;
- this.m_springMass = 0;
- this.m_springImpulse = 0;
- this.m_maxMotorTorque = def.maxMotorTorque;
- this.m_motorSpeed = def.motorSpeed;
- this.m_enableMotor = def.enableMotor;
- this.m_frequencyHz = def.frequencyHz;
- this.m_dampingRatio = def.dampingRatio;
- this.m_bias = 0;
- this.m_gamma = 0;
- this.m_ax = new b2Vec2();
- this.m_ay = new b2Vec2();
- this.m_sAx = this.m_sBx = 0;
- this.m_sAy = this.m_sBy = 0
-}
-b2WheelJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- return b2Vec2.Multiply(inv_dt, b2Vec2.Add(b2Vec2.Multiply(this.m_impulse, this.m_ay), b2Vec2.Multiply(this.m_springImpulse, this.m_ax)))
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_motorImpulse
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- GetLocalAxisA: function () {
- return this.m_localXAxisA
- },
- GetJointTranslation: function () {
- var bA = this.m_bodyA;
- var bB = this.m_bodyB;
- var pA = bA.GetWorldPoint(this.m_localAnchorA);
- var pB = bB.GetWorldPoint(this.m_localAnchorB);
- var d = b2Vec2.Subtract(pB, pA);
- var axis = bA.GetWorldVector(this.m_localXAxisA);
- var translation = b2Dot_v2_v2(d, axis);
- return translation
- },
- GetJointSpeed: function () {
- var wA = this.m_bodyA.m_angularVelocity;
- var wB = this.m_bodyB.m_angularVelocity;
- return wB - wA
- },
- IsMotorEnabled: function () {
- return this.m_enableMotor
- },
- EnableMotor: function (flag) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_enableMotor = flag
- },
- SetMotorSpeed: function (speed) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_motorSpeed = speed
- },
- GetMotorSpeed: function () {
- return this.m_motorSpeed
- },
- SetMaxMotorTorque: function (torque) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_maxMotorTorque = torque
- },
- GetMaxMotorTorque: function () {
- return this.m_maxMotorTorque
- },
- GetMotorTorque: function (inv_dt) {
- return inv_dt * this.m_motorImpulse
- },
- SetSpringFrequencyHz: function (hz) {
- this.m_frequencyHz = hz
- },
- GetSpringFrequencyHz: function () {
- return this.m_frequencyHz
- },
- SetSpringDampingRatio: function (ratio) {
- this.m_dampingRatio = ratio
- },
- GetSpringDampingRatio: function () {
- return this.m_dampingRatio
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var d = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- this.m_ay.Assign(b2Mul_r_v2(qA, this.m_localYAxisA));
- this.m_sAy = b2Cross_v2_v2(b2Vec2.Add(d, rA), this.m_ay);
- this.m_sBy = b2Cross_v2_v2(rB, this.m_ay);
- this.m_mass = mA + mB + iA * this.m_sAy * this.m_sAy + iB * this.m_sBy * this.m_sBy;
- if (this.m_mass > 0) {
- this.m_mass = 1 / this.m_mass
- }
- this.m_springMass = 0;
- this.m_bias = 0;
- this.m_gamma = 0;
- if (this.m_frequencyHz > 0) {
- this.m_ax.Assign(b2Mul_r_v2(qA, this.m_localXAxisA));
- this.m_sAx = b2Cross_v2_v2(b2Vec2.Add(d, rA), this.m_ax);
- this.m_sBx = b2Cross_v2_v2(rB, this.m_ax);
- var invMass = mA + mB + iA * this.m_sAx * this.m_sAx + iB * this.m_sBx * this.m_sBx;
- if (invMass > 0) {
- this.m_springMass = 1 / invMass;
- var C = b2Dot_v2_v2(d, this.m_ax);
- var omega = 2 * b2_pi * this.m_frequencyHz;
- // Use a distinct name for the damping coefficient so it does not shadow the vector `d` above.
- var damp = 2 * this.m_springMass * this.m_dampingRatio * omega;
- var k = this.m_springMass * omega * omega;
- var h = data.step.dt;
- this.m_gamma = h * (damp + h * k);
- if (this.m_gamma > 0) {
- this.m_gamma = 1 / this.m_gamma
- }
- this.m_bias = C * h * k * this.m_gamma;
- this.m_springMass = invMass + this.m_gamma;
- if (this.m_springMass > 0) {
- this.m_springMass = 1 / this.m_springMass
- }
- }
- } else {
- this.m_springImpulse = 0
- }
- if (this.m_enableMotor) {
- this.m_motorMass = iA + iB;
- if (this.m_motorMass > 0) {
- this.m_motorMass = 1 / this.m_motorMass
- }
- } else {
- this.m_motorMass = 0;
- this.m_motorImpulse = 0
- }
- if (data.step.warmStarting) {
- this.m_impulse *= data.step.dtRatio;
- this.m_springImpulse *= data.step.dtRatio;
- this.m_motorImpulse *= data.step.dtRatio;
- var P = b2Vec2.Add(b2Vec2.Multiply(this.m_impulse, this.m_ay), b2Vec2.Multiply(this.m_springImpulse, this.m_ax));
- var LA = this.m_impulse * this.m_sAy + this.m_springImpulse * this.m_sAx + this.m_motorImpulse;
- var LB = this.m_impulse * this.m_sBy + this.m_springImpulse * this.m_sBx + this.m_motorImpulse;
- vA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- wA -= this.m_invIA * LA;
- vB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- wB += this.m_invIB * LB
- } else {
- this.m_impulse = 0;
- this.m_springImpulse = 0;
- this.m_motorImpulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var Cdot = b2Dot_v2_v2(this.m_ax, b2Vec2.Subtract(vB, vA)) + this.m_sBx * wB - this.m_sAx * wA;
- var impulse = -this.m_springMass * (Cdot + this.m_bias + this.m_gamma * this.m_springImpulse);
- this.m_springImpulse += impulse;
- var P = b2Vec2.Multiply(impulse, this.m_ax);
- var LA = impulse * this.m_sAx;
- var LB = impulse * this.m_sBx;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB;
- var Cdot = wB - wA - this.m_motorSpeed;
- var impulse = -this.m_motorMass * Cdot;
- var oldImpulse = this.m_motorImpulse;
- var maxImpulse = data.step.dt * this.m_maxMotorTorque;
- this.m_motorImpulse = b2Clamp(this.m_motorImpulse + impulse, -maxImpulse, maxImpulse);
- impulse = this.m_motorImpulse - oldImpulse;
- wA -= iA * impulse;
- wB += iB * impulse;
- var Cdot = b2Dot_v2_v2(this.m_ay, b2Vec2.Subtract(vB, vA)) + this.m_sBy * wB - this.m_sAy * wA;
- var impulse = -this.m_mass * Cdot;
- this.m_impulse += impulse;
- var P = b2Vec2.Multiply(impulse, this.m_ay);
- var LA = impulse * this.m_sAy;
- var LB = impulse * this.m_sBy;
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * LA;
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * LB;
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var d = b2Vec2.Add(b2Vec2.Subtract(cB, cA), b2Vec2.Subtract(rB, rA));
- var ay = b2Mul_r_v2(qA, this.m_localYAxisA);
- var sAy = b2Cross_v2_v2(b2Vec2.Add(d, rA), ay);
- var sBy = b2Cross_v2_v2(rB, ay);
- var C = b2Dot_v2_v2(d, ay);
- var k = this.m_invMassA + this.m_invMassB + this.m_invIA * this.m_sAy * this.m_sAy + this.m_invIB * this.m_sBy * this.m_sBy;
- var impulse;
- if (k != 0) {
- impulse = -C / k
- } else {
- impulse = 0
- }
- var P = b2Vec2.Multiply(impulse, ay);
- var LA = impulse * sAy;
- var LB = impulse * sBy;
- cA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- aA -= this.m_invIA * LA;
- cB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- aB += this.m_invIB * LB;
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return b2Abs(C) <= b2_linearSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.localAxisA = this.m_localXAxisA._serialize();
- obj.enableMotor = this.m_enableMotor;
- obj.maxMotorTorque = this.m_maxMotorTorque;
- obj.motorSpeed = this.m_motorSpeed;
- obj.frequencyHz = this.m_frequencyHz;
- obj.dampingRatio = this.m_dampingRatio;
- return obj
- }
-};
-b2WheelJoint._extend(b2Joint);
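-
-// Usage sketch (editor's addition, not part of the original library): a minimal
-// wheel-joint setup for a wheel suspended from a chassis. It assumes a b2World
-// `world` and two b2Body instances created elsewhere, and that
-// b2WheelJointDef.Initialize(bodyA, bodyB, anchor, axis) is available as in
-// upstream Box2D.
-function exampleWheelJoint(world, chassis, wheel) {
- var wjd = new b2WheelJointDef();
- // Anchor at the wheel center; the spring axis points up in world space.
- wjd.Initialize(chassis, wheel, wheel.GetPosition(), new b2Vec2(0, 1));
- wjd.enableMotor = true;
- wjd.motorSpeed = -10;
- wjd.maxMotorTorque = 20;
- wjd.frequencyHz = 4;
- wjd.dampingRatio = 0.7;
- return world.CreateJoint(wjd)
-}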
-
-function b2GearJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_gearJoint;
- this.joint1 = null;
- this.joint2 = null;
- this.ratio = 1;
- Object.seal(this)
-}
-b2GearJointDef.prototype = {
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.joint1 = data.joint1;
- this.joint2 = data.joint2;
- this.ratio = data.ratio
- }
-};
-b2GearJointDef._extend(b2JointDef);
-
-function b2GearJoint(def) {
- this.parent.call(this, def);
- this.m_joint1 = def.joint1;
- this.m_joint2 = def.joint2;
- this.m_typeA = this.m_joint1.GetType();
- this.m_typeB = this.m_joint2.GetType();
- var coordinateA, coordinateB;
- this.m_bodyC = this.m_joint1.GetBodyA();
- this.m_bodyA = this.m_joint1.GetBodyB();
- var xfA = this.m_bodyA.m_xf;
- var aA = this.m_bodyA.m_sweep.a;
- var xfC = this.m_bodyC.m_xf;
- var aC = this.m_bodyC.m_sweep.a;
- this.m_localAnchorA = new b2Vec2();
- this.m_localAnchorB = new b2Vec2();
- this.m_localAnchorC = new b2Vec2();
- this.m_localAnchorD = new b2Vec2();
- this.m_localAxisC = new b2Vec2();
- this.m_localAxisD = new b2Vec2();
- if (this.m_typeA == b2Joint.e_revoluteJoint) {
- var revolute = def.joint1;
- this.m_localAnchorC.Assign(revolute.m_localAnchorA);
- this.m_localAnchorA.Assign(revolute.m_localAnchorB);
- this.m_referenceAngleA = revolute.m_referenceAngle;
- this.m_localAxisC.SetZero();
- coordinateA = aA - aC - this.m_referenceAngleA
- } else {
- var prismatic = def.joint1;
- this.m_localAnchorC.Assign(prismatic.m_localAnchorA);
- this.m_localAnchorA.Assign(prismatic.m_localAnchorB);
- this.m_referenceAngleA = prismatic.m_referenceAngle;
- this.m_localAxisC.Assign(prismatic.m_localXAxisA);
- var pC = this.m_localAnchorC;
- var pA = b2MulT_r_v2(xfC.q, b2Vec2.Add(b2Mul_r_v2(xfA.q, this.m_localAnchorA), b2Vec2.Subtract(xfA.p, xfC.p)));
- coordinateA = b2Dot_v2_v2(b2Vec2.Subtract(pA, pC), this.m_localAxisC)
- }
- this.m_bodyD = this.m_joint2.GetBodyA();
- this.m_bodyB = this.m_joint2.GetBodyB();
- var xfB = this.m_bodyB.m_xf;
- var aB = this.m_bodyB.m_sweep.a;
- var xfD = this.m_bodyD.m_xf;
- var aD = this.m_bodyD.m_sweep.a;
- if (this.m_typeB == b2Joint.e_revoluteJoint) {
- var revolute = def.joint2;
- this.m_localAnchorD.Assign(revolute.m_localAnchorA);
- this.m_localAnchorB.Assign(revolute.m_localAnchorB);
- this.m_referenceAngleB = revolute.m_referenceAngle;
- this.m_localAxisD.SetZero();
- coordinateB = aB - aD - this.m_referenceAngleB
- } else {
- var prismatic = def.joint2;
- this.m_localAnchorD.Assign(prismatic.m_localAnchorA);
- this.m_localAnchorB.Assign(prismatic.m_localAnchorB);
- this.m_referenceAngleB = prismatic.m_referenceAngle;
- this.m_localAxisD.Assign(prismatic.m_localXAxisA);
- var pD = this.m_localAnchorD;
- var pB = b2MulT_r_v2(xfD.q, b2Vec2.Add(b2Mul_r_v2(xfB.q, this.m_localAnchorB), b2Vec2.Subtract(xfB.p, xfD.p)));
- coordinateB = b2Dot_v2_v2(b2Vec2.Subtract(pB, pD), this.m_localAxisD)
- }
- this.m_ratio = def.ratio;
- this.m_constant = coordinateA + this.m_ratio * coordinateB;
- this.m_impulse = 0;
- this.m_indexA = this.m_indexB = this.m_indexC = this.m_indexD = 0;
- this.m_lcA = new b2Vec2();
- this.m_lcB = new b2Vec2();
- this.m_lcC = new b2Vec2();
- this.m_lcD = new b2Vec2();
- this.m_mA = this.m_mB = this.m_mC = this.m_mD = 0;
- this.m_iA = this.m_iB = this.m_iC = this.m_iD = 0;
- this.m_JvAC = new b2Vec2();
- this.m_JvBD = new b2Vec2();
- this.m_JwA = this.m_JwB = this.m_JwC = this.m_JwD = 0;
- this.m_mass = 0
-}
-b2GearJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- var P = b2Vec2.Multiply(this.m_impulse, this.m_JvAC);
- return b2Vec2.Multiply(inv_dt, P)
- },
- GetReactionTorque: function (inv_dt) {
- var L = this.m_impulse * this.m_JwA;
- return inv_dt * L
- },
- GetJoint1: function () {
- return this.m_joint1
- },
- GetJoint2: function () {
- return this.m_joint2
- },
- SetRatio: function (ratio) {
- this.m_ratio = ratio
- },
- GetRatio: function () {
- return this.m_ratio
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_indexC = this.m_bodyC.m_islandIndex;
- this.m_indexD = this.m_bodyD.m_islandIndex;
- this.m_lcA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_lcB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_lcC.Assign(this.m_bodyC.m_sweep.localCenter);
- this.m_lcD.Assign(this.m_bodyD.m_sweep.localCenter);
- this.m_mA = this.m_bodyA.m_invMass;
- this.m_mB = this.m_bodyB.m_invMass;
- this.m_mC = this.m_bodyC.m_invMass;
- this.m_mD = this.m_bodyD.m_invMass;
- this.m_iA = this.m_bodyA.m_invI;
- this.m_iB = this.m_bodyB.m_invI;
- this.m_iC = this.m_bodyC.m_invI;
- this.m_iD = this.m_bodyD.m_invI;
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var aC = data.positions[this.m_indexC].a;
- var vC = data.velocities[this.m_indexC].v.Clone();
- var wC = data.velocities[this.m_indexC].w;
- var aD = data.positions[this.m_indexD].a;
- var vD = data.velocities[this.m_indexD].v.Clone();
- var wD = data.velocities[this.m_indexD].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB),
- qC = new b2Rot(aC),
- qD = new b2Rot(aD);
- this.m_mass = 0;
- if (this.m_typeA == b2Joint.e_revoluteJoint) {
- this.m_JvAC.SetZero();
- this.m_JwA = 1;
- this.m_JwC = 1;
- this.m_mass += this.m_iA + this.m_iC
- } else {
- var u = b2Mul_r_v2(qC, this.m_localAxisC);
- var rC = b2Mul_r_v2(qC, b2Vec2.Subtract(this.m_localAnchorC, this.m_lcC));
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_lcA));
- this.m_JvAC.Assign(u);
- this.m_JwC = b2Cross_v2_v2(rC, u);
- this.m_JwA = b2Cross_v2_v2(rA, u);
- this.m_mass += this.m_mC + this.m_mA + this.m_iC * this.m_JwC * this.m_JwC + this.m_iA * this.m_JwA * this.m_JwA
- }
- if (this.m_typeB == b2Joint.e_revoluteJoint) {
- this.m_JvBD.SetZero();
- this.m_JwB = this.m_ratio;
- this.m_JwD = this.m_ratio;
- this.m_mass += this.m_ratio * this.m_ratio * (this.m_iB + this.m_iD)
- } else {
- var u = b2Mul_r_v2(qD, this.m_localAxisD);
- var rD = b2Mul_r_v2(qD, b2Vec2.Subtract(this.m_localAnchorD, this.m_lcD));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_lcB));
- this.m_JvBD.Assign(b2Vec2.Multiply(this.m_ratio, u));
- this.m_JwD = this.m_ratio * b2Cross_v2_v2(rD, u);
- this.m_JwB = this.m_ratio * b2Cross_v2_v2(rB, u);
- this.m_mass += this.m_ratio * this.m_ratio * (this.m_mD + this.m_mB) + this.m_iD * this.m_JwD * this.m_JwD + this.m_iB * this.m_JwB * this.m_JwB
- }
- this.m_mass = this.m_mass > 0 ? 1 / this.m_mass : 0;
- if (data.step.warmStarting) {
- vA.Add(b2Vec2.Multiply((this.m_mA * this.m_impulse), this.m_JvAC));
- wA += this.m_iA * this.m_impulse * this.m_JwA;
- vB.Add(b2Vec2.Multiply((this.m_mB * this.m_impulse), this.m_JvBD));
- wB += this.m_iB * this.m_impulse * this.m_JwB;
- vC.Subtract(b2Vec2.Multiply((this.m_mC * this.m_impulse), this.m_JvAC));
- wC -= this.m_iC * this.m_impulse * this.m_JwC;
- vD.Subtract(b2Vec2.Multiply((this.m_mD * this.m_impulse), this.m_JvBD));
- wD -= this.m_iD * this.m_impulse * this.m_JwD
- } else {
- this.m_impulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB;
- data.velocities[this.m_indexC].v.Assign(vC);
- data.velocities[this.m_indexC].w = wC;
- data.velocities[this.m_indexD].v.Assign(vD);
- data.velocities[this.m_indexD].w = wD
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var vC = data.velocities[this.m_indexC].v.Clone();
- var wC = data.velocities[this.m_indexC].w;
- var vD = data.velocities[this.m_indexD].v.Clone();
- var wD = data.velocities[this.m_indexD].w;
- var Cdot = b2Dot_v2_v2(this.m_JvAC, b2Vec2.Subtract(vA, vC)) + b2Dot_v2_v2(this.m_JvBD, b2Vec2.Subtract(vB, vD));
- Cdot += (this.m_JwA * wA - this.m_JwC * wC) + (this.m_JwB * wB - this.m_JwD * wD);
- var impulse = -this.m_mass * Cdot;
- this.m_impulse += impulse;
- vA.Add(b2Vec2.Multiply((this.m_mA * impulse), this.m_JvAC));
- wA += this.m_iA * impulse * this.m_JwA;
- vB.Add(b2Vec2.Multiply((this.m_mB * impulse), this.m_JvBD));
- wB += this.m_iB * impulse * this.m_JwB;
- vC.Subtract(b2Vec2.Multiply((this.m_mC * impulse), this.m_JvAC));
- wC -= this.m_iC * impulse * this.m_JwC;
- vD.Subtract(b2Vec2.Multiply((this.m_mD * impulse), this.m_JvBD));
- wD -= this.m_iD * impulse * this.m_JwD;
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB;
- data.velocities[this.m_indexC].v.Assign(vC);
- data.velocities[this.m_indexC].w = wC;
- data.velocities[this.m_indexD].v.Assign(vD);
- data.velocities[this.m_indexD].w = wD
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var cC = data.positions[this.m_indexC].c.Clone();
- var aC = data.positions[this.m_indexC].a;
- var cD = data.positions[this.m_indexD].c.Clone();
- var aD = data.positions[this.m_indexD].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB),
- qC = new b2Rot(aC),
- qD = new b2Rot(aD);
- var linearError = 0;
- var coordinateA, coordinateB;
- var JvAC = new b2Vec2(),
- JvBD = new b2Vec2();
- var JwA, JwB, JwC, JwD;
- var mass = 0;
- if (this.m_typeA == b2Joint.e_revoluteJoint) {
- JvAC.SetZero();
- JwA = 1;
- JwC = 1;
- mass += this.m_iA + this.m_iC;
- coordinateA = aA - aC - this.m_referenceAngleA
- } else {
- var u = b2Mul_r_v2(qC, this.m_localAxisC);
- var rC = b2Mul_r_v2(qC, b2Vec2.Subtract(this.m_localAnchorC, this.m_lcC));
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_lcA));
- JvAC.Assign(u);
- JwC = b2Cross_v2_v2(rC, u);
- JwA = b2Cross_v2_v2(rA, u);
- mass += this.m_mC + this.m_mA + this.m_iC * JwC * JwC + this.m_iA * JwA * JwA;
- var pC = b2Vec2.Subtract(this.m_localAnchorC, this.m_lcC);
- var pA = b2MulT_r_v2(qC, b2Vec2.Add(rA, b2Vec2.Subtract(cA, cC)));
- coordinateA = b2Dot_v2_v2(b2Vec2.Subtract(pA, pC), this.m_localAxisC)
- }
- if (this.m_typeB == b2Joint.e_revoluteJoint) {
- JvBD.SetZero();
- JwB = this.m_ratio;
- JwD = this.m_ratio;
- mass += this.m_ratio * this.m_ratio * (this.m_iB + this.m_iD);
- coordinateB = aB - aD - this.m_referenceAngleB
- } else {
- var u = b2Mul_r_v2(qD, this.m_localAxisD);
- var rD = b2Mul_r_v2(qD, b2Vec2.Subtract(this.m_localAnchorD, this.m_lcD));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_lcB));
- JvBD.Assign(b2Vec2.Multiply(this.m_ratio, u));
- JwD = this.m_ratio * b2Cross_v2_v2(rD, u);
- JwB = this.m_ratio * b2Cross_v2_v2(rB, u);
- mass += this.m_ratio * this.m_ratio * (this.m_mD + this.m_mB) + this.m_iD * JwD * JwD + this.m_iB * JwB * JwB;
- var pD = b2Vec2.Subtract(this.m_localAnchorD, this.m_lcD);
- var pB = b2MulT_r_v2(qD, b2Vec2.Add(rB, b2Vec2.Subtract(cB, cD)));
- coordinateB = b2Dot_v2_v2(b2Vec2.Subtract(pB, pD), this.m_localAxisD)
- }
- var C = (coordinateA + this.m_ratio * coordinateB) - this.m_constant;
- var impulse = 0;
- if (mass > 0) {
- impulse = -C / mass
- }
- cA.Add(b2Vec2.Multiply(this.m_mA, b2Vec2.Multiply(impulse, JvAC)));
- aA += this.m_iA * impulse * JwA;
- cB.Add(b2Vec2.Multiply(this.m_mB, b2Vec2.Multiply(impulse, JvBD)));
- aB += this.m_iB * impulse * JwB;
- cC.Subtract(b2Vec2.Multiply(this.m_mC, b2Vec2.Multiply(impulse, JvAC)));
- aC -= this.m_iC * impulse * JwC;
- cD.Subtract(b2Vec2.Multiply(this.m_mD, b2Vec2.Multiply(impulse, JvBD)));
- aD -= this.m_iD * impulse * JwD;
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- data.positions[this.m_indexC].c.Assign(cC);
- data.positions[this.m_indexC].a = aC;
- data.positions[this.m_indexD].c.Assign(cD);
- data.positions[this.m_indexD].a = aD;
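- // Note: linearError is never accumulated above (upstream Box2D leaves the gear
- // position error unmeasured), so this check always reports success.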
- return linearError < b2_linearSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.joint1 = this.m_joint1.__temp_joint_id;
- obj.joint2 = this.m_joint2.__temp_joint_id;
- obj.ratio = this.m_ratio;
- return obj
- }
-};
-b2GearJoint._extend(b2Joint);
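-
-// Usage sketch (editor's addition): a gear joint couples two existing joints
-// (revolute or prismatic) so that coordinateA + ratio * coordinateB stays
-// constant. A minimal sketch assuming `rev1` and `rev2` are revolute joints
-// created elsewhere in `world`.
-function exampleGearJoint(world, rev1, rev2) {
- var gjd = new b2GearJointDef();
- gjd.joint1 = rev1;
- gjd.joint2 = rev2;
- // By convention the gear connects the second body of each coupled joint.
- gjd.bodyA = rev1.GetBodyB();
- gjd.bodyB = rev2.GetBodyB();
- gjd.ratio = 2;
- return world.CreateJoint(gjd)
-}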
-
-function b2MotorJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_motorJoint;
- this.linearOffset = new b2Vec2();
- this.angularOffset = 0;
- this.maxForce = 1;
- this.maxTorque = 1;
- this.correctionFactor = 0.3;
- Object.seal(this)
-}
-b2MotorJointDef.prototype = {
- Initialize: function (bA, bB) {
- this.bodyA = bA;
- this.bodyB = bB;
- var xB = this.bodyB.GetPosition();
- this.linearOffset.Assign(this.bodyA.GetLocalPoint(xB));
- var angleA = this.bodyA.GetAngle();
- var angleB = this.bodyB.GetAngle();
- this.angularOffset = angleB - angleA
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.linearOffset._deserialize(data.linearOffset);
- this.angularOffset = data.angularOffset;
- this.maxForce = data.maxForce;
- this.maxTorque = data.maxTorque;
- this.correctionFactor = data.correctionFactor
- }
-};
-b2MotorJointDef._extend(b2JointDef);
-
-function b2MotorJoint(def) {
- this.parent.call(this, def);
- this.m_linearOffset = def.linearOffset.Clone();
- this.m_angularOffset = def.angularOffset;
- this.m_linearImpulse = new b2Vec2();
- this.m_angularImpulse = 0;
- this.m_maxForce = def.maxForce;
- this.m_maxTorque = def.maxTorque;
- this.m_correctionFactor = def.correctionFactor;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_linearError = new b2Vec2();
- this.m_angularError = 0;
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_linearMass = new b2Mat22();
- this.m_angularMass = 0
-}
-b2MotorJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetPosition()
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetPosition()
- },
- GetReactionForce: function (inv_dt) {
- return b2Vec2.Multiply(inv_dt, this.m_linearImpulse)
- },
- GetReactionTorque: function (inv_dt) {
- return inv_dt * this.m_angularImpulse
- },
- SetLinearOffset: function (linearOffset) {
- if (linearOffset.x != this.m_linearOffset.x || linearOffset.y != this.m_linearOffset.y) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_linearOffset.Assign(linearOffset)
- }
- },
- GetLinearOffset: function () {
- return this.m_linearOffset
- },
- SetAngularOffset: function (angularOffset) {
- if (angularOffset != this.m_angularOffset) {
- this.m_bodyA.SetAwake(true);
- this.m_bodyB.SetAwake(true);
- this.m_angularOffset = angularOffset
- }
- },
- GetAngularOffset: function () {
- return this.m_angularOffset
- },
- SetMaxForce: function (force) {
- this.m_maxForce = force
- },
- GetMaxForce: function () {
- return this.m_maxForce
- },
- SetMaxTorque: function (torque) {
- this.m_maxTorque = torque
- },
- GetMaxTorque: function () {
- return this.m_maxTorque
- },
- SetCorrectionFactor: function (factor) {
- this.m_correctionFactor = factor
- },
- GetCorrectionFactor: function () {
- return this.m_correctionFactor
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA.Assign(b2Mul_r_v2(qA, this.m_localCenterA.Negate()));
- this.m_rB.Assign(b2Mul_r_v2(qB, this.m_localCenterB.Negate()));
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var K = new b2Mat22();
- K.ex.x = mA + mB + iA * this.m_rA.y * this.m_rA.y + iB * this.m_rB.y * this.m_rB.y;
- K.ex.y = -iA * this.m_rA.x * this.m_rA.y - iB * this.m_rB.x * this.m_rB.y;
- K.ey.x = K.ex.y;
- K.ey.y = mA + mB + iA * this.m_rA.x * this.m_rA.x + iB * this.m_rB.x * this.m_rB.x;
- this.m_linearMass.Assign(K.GetInverse());
- this.m_angularMass = iA + iB;
- if (this.m_angularMass > 0) {
- this.m_angularMass = 1 / this.m_angularMass
- }
- this.m_linearError.Assign(b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, this.m_rB), cA), this.m_rA), b2Mul_r_v2(qA, this.m_linearOffset)));
- this.m_angularError = aB - aA - this.m_angularOffset;
- if (data.step.warmStarting) {
- this.m_linearImpulse.Multiply(data.step.dtRatio);
- this.m_angularImpulse *= data.step.dtRatio;
- var P = new b2Vec2(this.m_linearImpulse.x, this.m_linearImpulse.y);
- vA.Subtract(b2Vec2.Multiply(mA, P));
- wA -= iA * (b2Cross_v2_v2(this.m_rA, P) + this.m_angularImpulse);
- vB.Add(b2Vec2.Multiply(mB, P));
- wB += iB * (b2Cross_v2_v2(this.m_rB, P) + this.m_angularImpulse)
- } else {
- this.m_linearImpulse.SetZero();
- this.m_angularImpulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var mA = this.m_invMassA,
- mB = this.m_invMassB;
- var iA = this.m_invIA,
- iB = this.m_invIB;
- var h = data.step.dt;
- var inv_h = data.step.inv_dt;
- var Cdot = wB - wA + inv_h * this.m_correctionFactor * this.m_angularError;
- var impulse = -this.m_angularMass * Cdot;
- var oldImpulse = this.m_angularImpulse;
- var maxImpulse = h * this.m_maxTorque;
- this.m_angularImpulse = b2Clamp(this.m_angularImpulse + impulse, -maxImpulse, maxImpulse);
- impulse = this.m_angularImpulse - oldImpulse;
- wA -= iA * impulse;
- wB += iB * impulse;
- var Cdot = b2Vec2.Add(b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB)), vA), b2Cross_f_v2(wA, this.m_rA)), b2Vec2.Multiply(inv_h, b2Vec2.Multiply(this.m_correctionFactor, this.m_linearError)));
- var impulse = b2Mul_m22_v2(this.m_linearMass, Cdot).Negate();
- var oldImpulse = this.m_linearImpulse;
- this.m_linearImpulse.Add(impulse);
- var maxImpulse = h * this.m_maxForce;
- if (this.m_linearImpulse.LengthSquared() > maxImpulse * maxImpulse) {
- this.m_linearImpulse.Normalize();
- this.m_linearImpulse.Multiply(maxImpulse)
- }
- impulse.Assign(b2Vec2.Subtract(this.m_linearImpulse, oldImpulse));
- vA.Subtract(b2Vec2.Multiply(mA, impulse));
- wA -= iA * b2Cross_v2_v2(this.m_rA, impulse);
- vB.Add(b2Vec2.Multiply(mB, impulse));
- wB += iB * b2Cross_v2_v2(this.m_rB, impulse);
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
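- // The motor joint corrects position drift inside the velocity pass (via
- // m_correctionFactor), so the position pass below has nothing to do.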
- SolvePositionConstraints: function (data) {
- return true
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.linearOffset = this.m_linearOffset._serialize();
- obj.angularOffset = this.m_angularOffset;
- obj.maxForce = this.m_maxForce;
- obj.maxTorque = this.m_maxTorque;
- obj.correctionFactor = this.m_correctionFactor;
- return obj
- }
-};
-b2MotorJoint._extend(b2Joint);
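-
-// Usage sketch (editor's addition): a motor joint steers `follower` toward a
-// pose relative to `anchor` without a rigid connection. A minimal sketch
-// assuming both bodies already exist in `world`.
-function exampleMotorJoint(world, anchor, follower) {
- var mjd = new b2MotorJointDef();
- mjd.Initialize(anchor, follower);
- mjd.maxForce = 50;
- mjd.maxTorque = 20;
- var mj = world.CreateJoint(mjd);
- // The target pose can be retargeted at any time:
- mj.SetLinearOffset(new b2Vec2(2, 0));
- return mj
-}
-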
-var b2_minPulleyLength = 2;
-
-function b2PulleyJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_pulleyJoint;
- this.groundAnchorA = new b2Vec2(-1, 1);
- this.groundAnchorB = new b2Vec2(1, 1);
- this.localAnchorA = new b2Vec2(-1, 0);
- this.localAnchorB = new b2Vec2(1, 0);
- this.lengthA = 0;
- this.lengthB = 0;
- this.ratio = 1;
- this.collideConnected = true;
- Object.seal(this)
-}
-b2PulleyJointDef.prototype = {
- Initialize: function (bA, bB, groundA, groundB, anchorA, anchorB, r) {
- this.bodyA = bA;
- this.bodyB = bB;
- this.groundAnchorA.Assign(groundA);
- this.groundAnchorB.Assign(groundB);
- this.localAnchorA.Assign(this.bodyA.GetLocalPoint(anchorA));
- this.localAnchorB.Assign(this.bodyB.GetLocalPoint(anchorB));
- var dA = b2Vec2.Subtract(anchorA, groundA);
- this.lengthA = dA.Length();
- var dB = b2Vec2.Subtract(anchorB, groundB);
- this.lengthB = dB.Length();
- this.ratio = r
- },
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.groundAnchorA._deserialize(data.groundAnchorA);
- this.groundAnchorB._deserialize(data.groundAnchorB);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.lengthA = data.lengthA;
- this.lengthB = data.lengthB;
- this.ratio = data.ratio
- }
-};
-b2PulleyJointDef._extend(b2JointDef);
-
-function b2PulleyJoint(def) {
- this.parent.call(this, def);
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_uA = new b2Vec2();
- this.m_uB = new b2Vec2();
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0;
- this.m_mass = 0;
- this.m_groundAnchorA = def.groundAnchorA.Clone();
- this.m_groundAnchorB = def.groundAnchorB.Clone();
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_lengthA = def.lengthA;
- this.m_lengthB = def.lengthB;
- this.m_ratio = def.ratio;
- this.m_constant = def.lengthA + this.m_ratio * def.lengthB;
- this.m_impulse = 0
-}
-b2PulleyJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- var P = b2Vec2.Multiply(this.m_impulse, this.m_uB);
- return b2Vec2.Multiply(inv_dt, P)
- },
- GetReactionTorque: function (inv_dt) {
- return 0
- },
- GetGroundAnchorA: function () {
- return this.m_groundAnchorA
- },
- GetGroundAnchorB: function () {
- return this.m_groundAnchorB
- },
- GetLengthA: function () {
- return this.m_lengthA
- },
- GetLengthB: function () {
- return this.m_lengthB
- },
- GetRatio: function () {
- return this.m_ratio
- },
- GetCurrentLengthA: function () {
- var p = this.m_bodyA.GetWorldPoint(this.m_localAnchorA);
- var s = this.m_groundAnchorA;
- var d = b2Vec2.Subtract(p, s);
- return d.Length()
- },
- GetCurrentLengthB: function () {
- var p = this.m_bodyB.GetWorldPoint(this.m_localAnchorB);
- var s = this.m_groundAnchorB;
- var d = b2Vec2.Subtract(p, s);
- return d.Length()
- },
- ShiftOrigin: function (newOrigin) {
- this.m_groundAnchorA.Subtract(newOrigin);
- this.m_groundAnchorB.Subtract(newOrigin)
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA.Assign(b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA)));
- this.m_rB.Assign(b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB)));
- this.m_uA.Assign(b2Vec2.Add(cA, b2Vec2.Subtract(this.m_rA, this.m_groundAnchorA)));
- this.m_uB.Assign(b2Vec2.Add(cB, b2Vec2.Subtract(this.m_rB, this.m_groundAnchorB)));
- var lengthA = this.m_uA.Length();
- var lengthB = this.m_uB.Length();
- if (lengthA > 10 * b2_linearSlop) {
- this.m_uA.Multiply(1 / lengthA)
- } else {
- this.m_uA.SetZero()
- }
- if (lengthB > 10 * b2_linearSlop) {
- this.m_uB.Multiply(1 / lengthB)
- } else {
- this.m_uB.SetZero()
- }
- var ruA = b2Cross_v2_v2(this.m_rA, this.m_uA);
- var ruB = b2Cross_v2_v2(this.m_rB, this.m_uB);
- var mA = this.m_invMassA + this.m_invIA * ruA * ruA;
- var mB = this.m_invMassB + this.m_invIB * ruB * ruB;
- this.m_mass = mA + this.m_ratio * this.m_ratio * mB;
- if (this.m_mass > 0) {
- this.m_mass = 1 / this.m_mass
- }
- if (data.step.warmStarting) {
- this.m_impulse *= data.step.dtRatio;
- var PA = b2Vec2.Multiply(-(this.m_impulse), this.m_uA);
- var PB = b2Vec2.Multiply((-this.m_ratio * this.m_impulse), this.m_uB);
- vA.Add(b2Vec2.Multiply(this.m_invMassA, PA));
- wA += this.m_invIA * b2Cross_v2_v2(this.m_rA, PA);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, PB));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, PB)
- } else {
- this.m_impulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var vpA = b2Vec2.Add(vA, b2Cross_f_v2(wA, this.m_rA));
- var vpB = b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB));
- var Cdot = -b2Dot_v2_v2(this.m_uA, vpA) - this.m_ratio * b2Dot_v2_v2(this.m_uB, vpB);
- var impulse = -this.m_mass * Cdot;
- this.m_impulse += impulse;
- var PA = b2Vec2.Multiply(-impulse, this.m_uA);
- var PB = b2Vec2.Multiply(-this.m_ratio, b2Vec2.Multiply(impulse, this.m_uB));
- vA.Add(b2Vec2.Multiply(this.m_invMassA, PA));
- wA += this.m_invIA * b2Cross_v2_v2(this.m_rA, PA);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, PB));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, PB);
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var uA = b2Vec2.Add(cA, b2Vec2.Subtract(rA, this.m_groundAnchorA));
- var uB = b2Vec2.Add(cB, b2Vec2.Subtract(rB, this.m_groundAnchorB));
- var lengthA = uA.Length();
- var lengthB = uB.Length();
- if (lengthA > 10 * b2_linearSlop) {
- uA.Multiply(1 / lengthA)
- } else {
- uA.SetZero()
- }
- if (lengthB > 10 * b2_linearSlop) {
- uB.Multiply(1 / lengthB)
- } else {
- uB.SetZero()
- }
- var ruA = b2Cross_v2_v2(rA, uA);
- var ruB = b2Cross_v2_v2(rB, uB);
- var mA = this.m_invMassA + this.m_invIA * ruA * ruA;
- var mB = this.m_invMassB + this.m_invIB * ruB * ruB;
- var mass = mA + this.m_ratio * this.m_ratio * mB;
- if (mass > 0) {
- mass = 1 / mass
- }
- var C = this.m_constant - lengthA - this.m_ratio * lengthB;
- var linearError = b2Abs(C);
- var impulse = -mass * C;
- var PA = b2Vec2.Multiply(-impulse, uA);
- var PB = b2Vec2.Multiply(-this.m_ratio, b2Vec2.Multiply(impulse, uB));
- cA.Add(b2Vec2.Multiply(this.m_invMassA, PA));
- aA += this.m_invIA * b2Cross_v2_v2(rA, PA);
- cB.Add(b2Vec2.Multiply(this.m_invMassB, PB));
- aB += this.m_invIB * b2Cross_v2_v2(rB, PB);
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return linearError < b2_linearSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.groundAnchorA = this.m_groundAnchorA._serialize();
- obj.groundAnchorB = this.m_groundAnchorB._serialize();
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.lengthA = this.m_lengthA;
- obj.lengthB = this.m_lengthB;
- obj.ratio = this.m_ratio;
- return obj
- }
-};
-b2PulleyJoint._extend(b2Joint);
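-
-// Usage sketch (editor's addition): two bodies hung from fixed ground anchors;
-// the pulley holds lengthA + ratio * lengthB constant, so pulling one side
-// lifts the other. A minimal sketch assuming `bodyA` and `bodyB` exist.
-function examplePulleyJoint(world, bodyA, bodyB) {
- var pjd = new b2PulleyJointDef();
- var groundA = new b2Vec2(-2, 10);
- var groundB = new b2Vec2(2, 10);
- pjd.Initialize(bodyA, bodyB, groundA, groundB, bodyA.GetPosition(), bodyB.GetPosition(), 1);
- return world.CreateJoint(pjd)
-}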
-
-function b2RopeJointDef() {
- this.parent.call(this);
- this.type = b2Joint.e_ropeJoint;
- this.localAnchorA = new b2Vec2(-1, 0);
- this.localAnchorB = new b2Vec2(1, 0);
- this.maxLength = 0;
- Object.seal(this)
-}
-b2RopeJointDef.prototype = {
- _deserialize: function (data, bodies, joints) {
- this.parent.prototype._deserialize.call(this, data, bodies, joints);
- this.localAnchorA._deserialize(data.localAnchorA);
- this.localAnchorB._deserialize(data.localAnchorB);
- this.maxLength = data.maxLength
- }
-};
-b2RopeJointDef._extend(b2JointDef);
-
-function b2RopeJoint(def) {
- this.parent.call(this, def);
- this.m_localAnchorA = def.localAnchorA.Clone();
- this.m_localAnchorB = def.localAnchorB.Clone();
- this.m_maxLength = def.maxLength;
- this.m_mass = 0;
- this.m_impulse = 0;
- this.m_state = b2Joint.e_inactiveLimit;
- this.m_length = 0;
- this.m_indexA = 0;
- this.m_indexB = 0;
- this.m_u = new b2Vec2();
- this.m_rA = new b2Vec2();
- this.m_rB = new b2Vec2();
- this.m_localCenterA = new b2Vec2();
- this.m_localCenterB = new b2Vec2();
- this.m_invMassA = 0;
- this.m_invMassB = 0;
- this.m_invIA = 0;
- this.m_invIB = 0
-}
-b2RopeJoint.prototype = {
- GetAnchorA: function () {
- return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)
- },
- GetAnchorB: function () {
- return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)
- },
- GetReactionForce: function (inv_dt) {
- var F = b2Vec2.Multiply((inv_dt * this.m_impulse), this.m_u);
- return F
- },
- GetReactionTorque: function (inv_dt) {
- return 0
- },
- GetLocalAnchorA: function () {
- return this.m_localAnchorA
- },
- GetLocalAnchorB: function () {
- return this.m_localAnchorB
- },
- SetMaxLength: function (length) {
- this.m_maxLength = length
- },
- GetMaxLength: function () {
- return this.m_maxLength
- },
- GetLimitState: function () {
- return this.m_state
- },
- InitVelocityConstraints: function (data) {
- this.m_indexA = this.m_bodyA.m_islandIndex;
- this.m_indexB = this.m_bodyB.m_islandIndex;
- this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter);
- this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter);
- this.m_invMassA = this.m_bodyA.m_invMass;
- this.m_invMassB = this.m_bodyB.m_invMass;
- this.m_invIA = this.m_bodyA.m_invI;
- this.m_invIB = this.m_bodyB.m_invI;
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- this.m_rA.Assign(b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA)));
- this.m_rB.Assign(b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB)));
- this.m_u.Assign(b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, this.m_rB), cA), this.m_rA));
- this.m_length = this.m_u.Length();
- var C = this.m_length - this.m_maxLength;
- if (C > 0) {
- this.m_state = b2Joint.e_atUpperLimit
- } else {
- this.m_state = b2Joint.e_inactiveLimit
- }
- if (this.m_length > b2_linearSlop) {
- this.m_u.Multiply(1 / this.m_length)
- } else {
- this.m_u.SetZero();
- this.m_mass = 0;
- this.m_impulse = 0;
- return
- }
- var crA = b2Cross_v2_v2(this.m_rA, this.m_u);
- var crB = b2Cross_v2_v2(this.m_rB, this.m_u);
- var invMass = this.m_invMassA + this.m_invIA * crA * crA + this.m_invMassB + this.m_invIB * crB * crB;
- this.m_mass = invMass != 0 ? 1 / invMass : 0;
- if (data.step.warmStarting) {
- this.m_impulse *= data.step.dtRatio;
- var P = b2Vec2.Multiply(this.m_impulse, this.m_u);
- vA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- wA -= this.m_invIA * b2Cross_v2_v2(this.m_rA, P);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, P)
- } else {
- this.m_impulse = 0
- }
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolveVelocityConstraints: function (data) {
- var vA = data.velocities[this.m_indexA].v.Clone();
- var wA = data.velocities[this.m_indexA].w;
- var vB = data.velocities[this.m_indexB].v.Clone();
- var wB = data.velocities[this.m_indexB].w;
- var vpA = b2Vec2.Add(vA, b2Cross_f_v2(wA, this.m_rA));
- var vpB = b2Vec2.Add(vB, b2Cross_f_v2(wB, this.m_rB));
- var C = this.m_length - this.m_maxLength;
- var Cdot = b2Dot_v2_v2(this.m_u, b2Vec2.Subtract(vpB, vpA));
- if (C < 0) {
- Cdot += data.step.inv_dt * C
- }
- var impulse = -this.m_mass * Cdot;
- var oldImpulse = this.m_impulse;
- this.m_impulse = b2Min(0, this.m_impulse + impulse);
- impulse = this.m_impulse - oldImpulse;
- var P = b2Vec2.Multiply(impulse, this.m_u);
- vA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- wA -= this.m_invIA * b2Cross_v2_v2(this.m_rA, P);
- vB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- wB += this.m_invIB * b2Cross_v2_v2(this.m_rB, P);
- data.velocities[this.m_indexA].v.Assign(vA);
- data.velocities[this.m_indexA].w = wA;
- data.velocities[this.m_indexB].v.Assign(vB);
- data.velocities[this.m_indexB].w = wB
- },
- SolvePositionConstraints: function (data) {
- var cA = data.positions[this.m_indexA].c.Clone();
- var aA = data.positions[this.m_indexA].a;
- var cB = data.positions[this.m_indexB].c.Clone();
- var aB = data.positions[this.m_indexB].a;
- var qA = new b2Rot(aA),
- qB = new b2Rot(aB);
- var rA = b2Mul_r_v2(qA, b2Vec2.Subtract(this.m_localAnchorA, this.m_localCenterA));
- var rB = b2Mul_r_v2(qB, b2Vec2.Subtract(this.m_localAnchorB, this.m_localCenterB));
- var u = b2Vec2.Subtract(b2Vec2.Subtract(b2Vec2.Add(cB, rB), cA), rA);
- var length = u.Normalize();
- var C = length - this.m_maxLength;
- C = b2Clamp(C, 0, b2_maxLinearCorrection);
- var impulse = -this.m_mass * C;
- var P = b2Vec2.Multiply(impulse, u);
- cA.Subtract(b2Vec2.Multiply(this.m_invMassA, P));
- aA -= this.m_invIA * b2Cross_v2_v2(rA, P);
- cB.Add(b2Vec2.Multiply(this.m_invMassB, P));
- aB += this.m_invIB * b2Cross_v2_v2(rB, P);
- data.positions[this.m_indexA].c.Assign(cA);
- data.positions[this.m_indexA].a = aA;
- data.positions[this.m_indexB].c.Assign(cB);
- data.positions[this.m_indexB].a = aB;
- return length - this.m_maxLength < b2_linearSlop
- },
- _serialize: function (out) {
- var obj = out || {};
- this.parent.prototype._serialize.call(this, obj);
- obj.localAnchorA = this.m_localAnchorA._serialize();
- obj.localAnchorB = this.m_localAnchorB._serialize();
- obj.maxLength = this.m_maxLength;
- return obj
- }
-};
-b2RopeJoint._extend(b2Joint);
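-
-// Usage sketch (editor's addition): a rope joint caps the distance between two
-// anchor points and only pulls back once the rope is taut. A minimal sketch
-// assuming `bodyA` and `bodyB` exist in `world`.
-function exampleRopeJoint(world, bodyA, bodyB) {
- var rjd = new b2RopeJointDef();
- rjd.bodyA = bodyA;
- rjd.bodyB = bodyB;
- rjd.localAnchorA = new b2Vec2(0, 0);
- rjd.localAnchorB = new b2Vec2(0, 0);
- rjd.maxLength = 5;
- return world.CreateJoint(rjd)
-}
-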
-var expf = Math.exp;
-
-function b2RopeDef() {
- this.vertices = null;
- this.count = 0;
- this.masses = null;
- this.gravity = new b2Vec2();
- this.damping = 0.1;
- this.k2 = 0.9;
- this.k3 = 0.1
-}
-
-function b2Rope() {
- this.m_count = 0;
- this.m_ps = null;
- this.m_p0s = null;
- this.m_vs = null;
- this.m_ims = null;
- this.m_Ls = null;
- this.m_as = null;
- this.m_damping = 0;
- this.m_gravity = new b2Vec2();
- this.m_k2 = 1;
- this.m_k3 = 0.1
-}
-b2Rope.prototype = {
- Initialize: function (def) {
- this.m_count = def.count;
- this.m_ps = new Array(this.m_count);
- this.m_p0s = new Array(this.m_count);
- this.m_vs = new Array(this.m_count);
- this.m_ims = new Array(this.m_count);
- for (var i = 0; i < this.m_count; ++i) {
- this.m_ps[i] = def.vertices[i].Clone();
- this.m_p0s[i] = def.vertices[i].Clone();
- this.m_vs[i] = new b2Vec2();
- var m = def.masses[i];
- if (m > 0) {
- this.m_ims[i] = 1 / m
- } else {
- this.m_ims[i] = 0
- }
- }
- var count2 = this.m_count - 1;
- var count3 = this.m_count - 2;
- this.m_Ls = new Array(count2);
- this.m_as = new Array(count3);
- for (var i = 0; i < count2; ++i) {
- var p1 = this.m_ps[i];
- var p2 = this.m_ps[i + 1];
- this.m_Ls[i] = b2Distance(p1, p2)
- }
- for (var i = 0; i < count3; ++i) {
- var p1 = this.m_ps[i];
- var p2 = this.m_ps[i + 1];
- var p3 = this.m_ps[i + 2];
- var d1 = b2Vec2.Subtract(p2, p1);
- var d2 = b2Vec2.Subtract(p3, p2);
- var a = b2Cross_v2_v2(d1, d2);
- var b = b2Dot_v2_v2(d1, d2);
- this.m_as[i] = b2Atan2(a, b)
- }
- this.m_gravity = def.gravity.Clone();
- this.m_damping = def.damping;
- this.m_k2 = def.k2;
- this.m_k3 = def.k3
- },
- Step: function (h, iterations) {
- if (h == 0) {
- return
- }
- var d = expf(-h * this.m_damping);
- for (var i = 0; i < this.m_count; ++i) {
- this.m_p0s[i].Assign(this.m_ps[i]);
- if (this.m_ims[i] > 0) {
- this.m_vs[i].Add(b2Vec2.Multiply(h, this.m_gravity))
- }
- this.m_vs[i].Multiply(d);
- this.m_ps[i].Add(b2Vec2.Multiply(h, this.m_vs[i]))
- }
- for (var i = 0; i < iterations; ++i) {
- this.SolveC2();
- this.SolveC3();
- this.SolveC2()
- }
- var inv_h = 1 / h;
- for (var i = 0; i < this.m_count; ++i) {
- this.m_vs[i] = b2Vec2.Multiply(inv_h, b2Vec2.Subtract(this.m_ps[i], this.m_p0s[i]))
- }
- },
- GetVertexCount: function () {
- return this.m_count
- },
- GetVertices: function () {
- return this.m_ps
- },
- Draw: function (draw) {
- var c = new b2Color(0.4, 0.5, 0.7);
- for (var i = 0; i < this.m_count - 1; ++i) {
- draw.DrawSegment(this.m_ps[i], this.m_ps[i + 1], c)
- }
- },
- SetAngle: function (angle) {
- var count3 = this.m_count - 2;
- for (var i = 0; i < count3; ++i) {
- this.m_as[i] = angle
- }
- },
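- // SolveC2 relaxes the stretch (distance) constraint between neighboring
- // vertices; SolveC3 relaxes the bend (angle) constraint across vertex triples.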
- SolveC2: function () {
- var count2 = this.m_count - 1;
- for (var i = 0; i < count2; ++i) {
- var p1 = this.m_ps[i];
- var p2 = this.m_ps[i + 1];
- var d = b2Vec2.Subtract(p2, p1);
- var L = d.Normalize();
- var im1 = this.m_ims[i];
- var im2 = this.m_ims[i + 1];
- if (im1 + im2 == 0) {
- continue
- }
- var s1 = im1 / (im1 + im2);
- var s2 = im2 / (im1 + im2);
- p1.Subtract(b2Vec2.Multiply(this.m_k2 * s1 * (this.m_Ls[i] - L), d));
- p2.Add(b2Vec2.Multiply(this.m_k2 * s2 * (this.m_Ls[i] - L), d))
- }
- },
- SolveC3: function () {
- var count3 = this.m_count - 2;
- for (var i = 0; i < count3; ++i) {
- var p1 = this.m_ps[i];
- var p2 = this.m_ps[i + 1];
- var p3 = this.m_ps[i + 2];
- var m1 = this.m_ims[i];
- var m2 = this.m_ims[i + 1];
- var m3 = this.m_ims[i + 2];
- var d1 = b2Vec2.Subtract(p2, p1);
- var d2 = b2Vec2.Subtract(p3, p2);
- var L1sqr = d1.LengthSquared();
- var L2sqr = d2.LengthSquared();
- if (L1sqr * L2sqr == 0) {
- continue
- }
- var a = b2Cross_v2_v2(d1, d2);
- var b = b2Dot_v2_v2(d1, d2);
- var angle = b2Atan2(a, b);
- var Jd1 = b2Vec2.Multiply((-1 / L1sqr), d1.Skew());
- var Jd2 = b2Vec2.Multiply((1 / L2sqr), d2.Skew());
- var J1 = b2Vec2.Negate(Jd1);
- var J2 = b2Vec2.Subtract(Jd1, Jd2);
- var J3 = Jd2;
- var mass = m1 * b2Dot_v2_v2(J1, J1) + m2 * b2Dot_v2_v2(J2, J2) + m3 * b2Dot_v2_v2(J3, J3);
- if (mass == 0) {
- continue
- }
- mass = 1 / mass;
- var C = angle - this.m_as[i];
- while (C > b2_pi) {
- angle -= 2 * b2_pi;
- C = angle - this.m_as[i]
- }
- while (C < -b2_pi) {
- angle += 2 * b2_pi;
- C = angle - this.m_as[i]
- }
- var impulse = -this.m_k3 * mass * C;
- p1.Add(b2Vec2.Multiply((m1 * impulse), J1));
- p2.Add(b2Vec2.Multiply((m2 * impulse), J2));
- p3.Add(b2Vec2.Multiply((m3 * impulse), J3))
- }
- }
-};
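-
-// Usage sketch (editor's addition): b2Rope is a standalone position-based rope
-// solver, not a joint. This pins the first vertex by giving it zero mass, then
-// advances the rope with a fixed timestep.
-function exampleRope() {
- var def = new b2RopeDef();
- def.count = 10;
- def.vertices = [];
- def.masses = [];
- for (var i = 0; i < def.count; ++i) {
- def.vertices.push(new b2Vec2(i * 0.5, 0));
- def.masses.push(i == 0 ? 0 : 1)
- }
- def.gravity = new b2Vec2(0, -10);
- var rope = new b2Rope();
- rope.Initialize(def);
- rope.Step(1 / 60, 4);
- return rope.GetVertices()
-}
-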
-var b2JsonSerializer = {
- serialize: function (world) {
- var shapes = [];
- var i;
- var serialized;
- var b;
- var f;
- var shape;
- for (b = world.GetBodyList(); b; b = b.GetNext()) {
- for (f = b.GetFixtureList(); f; f = f.GetNext()) {
- shape = f.GetShape();
- f.__temp_shape_id = shapes.length;
- shapes.push(shape._serialize())
- }
- }
- var fixtures = [];
- for (b = world.GetBodyList(); b; b = b.GetNext()) {
- b.__temp_fixture_ids = [];
- for (f = b.GetFixtureList(); f; f = f.GetNext()) {
- serialized = f._serialize();
- serialized.shape = f.__temp_shape_id;
- delete f.__temp_shape_id;
- b.__temp_fixture_ids.push(fixtures.length);
- fixtures.push(serialized)
- }
- }
- var bodies = [];
- for (b = world.GetBodyList(); b; b = b.GetNext()) {
- serialized = b._serialize();
- serialized.fixtures = [];
- for (i = 0; i < b.__temp_fixture_ids.length; ++i) {
- serialized.fixtures.push(b.__temp_fixture_ids[i])
- }
- delete b.__temp_fixture_ids;
- b.__temp_body_id = bodies.length;
- bodies.push(serialized)
- }
- var joints = [];
- var j;
- for (j = world.GetJointList(), i = 0; j; j = j.GetNext(), ++i) {
- j.__temp_joint_id = i
- }
- for (j = world.GetJointList(); j; j = j.GetNext()) {
- if (j.GetType() === b2Joint.e_mouseJoint) {
- continue
- }
- serialized = j._serialize();
- serialized.bodyA = j.GetBodyA().__temp_body_id;
- serialized.bodyB = j.GetBodyB().__temp_body_id;
- joints.push(serialized)
- }
- for (j = world.GetJointList(); j; j = j.GetNext()) {
- delete j.__temp_joint_id
- }
- for (b = world.GetBodyList(); b; b = b.GetNext()) {
- delete b.__temp_body_id
- }
- return {
- shapes: shapes,
- fixtures: fixtures,
- bodies: bodies,
- joints: joints
- }
- },
- deserialize: function (serialized, world, clear) {
- var deserialized = JSON.parse(serialized);
- if (clear) {
- for (var b = world.GetBodyList(); b;) {
- var next = b.GetNext();
- world.DestroyBody(b);
- b = next
- }
- for (var j = world.GetJointList(); j;) {
- var next = j.GetNext();
- world.DestroyJoint(j);
- j = next
- }
- }
- var shapes = [];
- for (var i = 0; i < deserialized.shapes.length; ++i) {
- var shapeData = deserialized.shapes[i];
- var shape;
- switch (shapeData.m_type) {
- case b2Shape.e_circle:
- shape = new b2CircleShape();
- break;
- case b2Shape.e_edge:
- shape = new b2EdgeShape();
- break;
- case b2Shape.e_chain:
- shape = new b2ChainShape();
- break;
- case b2Shape.e_polygon:
- shape = new b2PolygonShape();
- break;
- default:
- throw new Error("unknown shape")
- }
- shape._deserialize(shapeData);
- shapes.push(shape)
- }
- var fixtures = [];
- for (i = 0; i < deserialized.fixtures.length; ++i) {
- var fixtureData = deserialized.fixtures[i];
- var fixture = new b2FixtureDef();
- fixture._deserialize(fixtureData);
- fixture.shape = shapes[fixtureData.shape];
- fixtures.push(fixture)
- }
- var bodies = [];
- for (i = 0; i < deserialized.bodies.length; ++i) {
- var bodyData = deserialized.bodies[i];
- var def = new b2BodyDef();
- def._deserialize(bodyData);
- var body = world.CreateBody(def);
- for (var x = 0; x < bodyData.fixtures.length; ++x) {
- body.CreateFixture(fixtures[bodyData.fixtures[x]])
- }
- bodies.push(body)
- }
- var joints = [];
- var gears = [];
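- // Gear joints reference other joints by index, so their creation is deferred
- // until every other joint exists.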
- for (i = 0; i < deserialized.joints.length; ++i) {
- var jointData = deserialized.joints[i];
- var jointDef;
- switch (jointData.type) {
- case b2Joint.e_revoluteJoint:
- jointDef = new b2RevoluteJointDef();
- break;
- case b2Joint.e_prismaticJoint:
- jointDef = new b2PrismaticJointDef();
- break;
- case b2Joint.e_distanceJoint:
- jointDef = new b2DistanceJointDef();
- break;
- case b2Joint.e_pulleyJoint:
- jointDef = new b2PulleyJointDef();
- break;
- case b2Joint.e_gearJoint:
- jointDef = new b2GearJointDef();
- break;
- case b2Joint.e_wheelJoint:
- jointDef = new b2WheelJointDef();
- break;
- case b2Joint.e_weldJoint:
- jointDef = new b2WeldJointDef();
- break;
- case b2Joint.e_frictionJoint:
- jointDef = new b2FrictionJointDef();
- break;
- case b2Joint.e_ropeJoint:
- jointDef = new b2RopeJointDef();
- break;
- case b2Joint.e_motorJoint:
- jointDef = new b2MotorJointDef();
- break;
- default:
- throw new Error("unknown joint")
- }
- jointDef._deserialize(jointData, bodies);
- if (jointData.type === b2Joint.e_gearJoint) {
- gears.push([jointDef, joints.length]);
- joints.push(null)
- } else {
- var joint = world.CreateJoint(jointDef);
- joints.push(joint)
- }
- }
- for (i = 0; i < gears.length; ++i) {
- gears[i][0].joint1 = joints[gears[i][0].joint1];
- gears[i][0].joint2 = joints[gears[i][0].joint2];
- joint = world.CreateJoint(gears[i][0]);
- joints[gears[i][1]] = joint
- }
- }
-};
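-
-// Usage sketch (editor's addition): round-tripping a world through the JSON
-// serializer. Note the asymmetry in this implementation: serialize() returns a
-// plain object, while deserialize() expects a JSON string, so the snapshot has
-// to be stringified in between.
-function exampleSerializeRoundTrip(world) {
- var snapshot = JSON.stringify(b2JsonSerializer.serialize(world));
- // Later: rebuild the same bodies and joints, clearing the world first.
- b2JsonSerializer.deserialize(snapshot, world, true);
- return snapshot
-}
-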
-var b2RUBELoader = (function () {
- function parseVector(obj) {
- return new b2Vec2(obj ? (obj.x || 0) : 0, obj ? (obj.y || 0) : 0)
- }
-
- function parseVectorArray(obj) {
- var vals = new Array(obj.x.length);
- for (var i = 0; i < vals.length; ++i) {
- vals[i] = new b2Vec2(obj.x[i], obj.y[i])
- }
- return vals
- }
-
- function parseProperty(obj, instance) {
- var name = obj.name;
- var val;
- if (typeof (obj["int"]) !== "undefined") {
- val = obj["int"]
- } else if (typeof (obj["float"]) !== "undefined") {
- val = obj["float"]
- } else if (typeof (obj.string) !== "undefined") {
- val = obj.string
- } else if (typeof (obj.bool) !== "undefined") {
- val = obj.bool
- } else if (typeof (obj.vec2) !== "undefined") {
- val = parseVector(obj.vec2)
- } else {
- throw new Error("unknown property type")
- }
- if (instance.hasOwnProperty(name)) {
- throw new Error("custom property possibly overwriting an existing one")
- }
- instance[name] = val
- }
-
- function parseFixture(obj, body) {
- var def = new b2FixtureDef();
- def.density = obj.density || 0;
- def.filter.categoryBits = typeof (obj["filter-categoryBits"]) === "undefined" ? 1 : obj["filter-categoryBits"];
- def.filter.maskBits = typeof (obj["filter-maskBits"]) === "undefined" ? 65535 : obj["filter-maskBits"];
- def.filter.groupIndex = typeof (obj["filter-groupIndex"]) === "undefined" ? 0 : obj["filter-groupIndex"];
- def.friction = obj.friction || 0;
- def.restitution = obj.restitution || 0;
- def.isSensor = obj.sensor || false;
- var shape;
- if (typeof (obj.circle) !== "undefined") {
- shape = new b2CircleShape();
- shape.m_p = parseVector(obj.circle.center);
- shape.m_radius = obj.circle.radius || 0
- } else if (typeof (obj.polygon) !== "undefined") {
- var vertices = parseVectorArray(obj.polygon.vertices);
- shape = new b2PolygonShape();
- shape.Set(vertices, vertices.length)
- } else if (typeof (obj.chain) !== "undefined") {
- var vertices = parseVectorArray(obj.chain.vertices);
- shape = new b2ChainShape();
- shape.m_count = vertices.length;
- shape.m_vertices = vertices;
- shape.m_hasNextVertex = obj.chain.hasNextVertex || false;
- if (shape.m_hasNextVertex) {
- shape.m_nextVertex = parseVector(obj.chain.nextVertex)
- }
- shape.m_hasPrevVertex = obj.chain.hasPrevVertex || false;
- if (shape.m_hasPrevVertex) {
- shape.m_prevVertex = parseVector(obj.chain.prevVertex)
- }
- } else {
- throw new Error("unknown shape type")
- }
- def.shape = shape;
- var fixture = body.CreateFixture(def);
- fixture.name = obj.name;
- if (obj.customProperties) {
- for (var i = 0; i < obj.customProperties.length; ++i) {
- parseProperty(obj.customProperties[i], fixture)
- }
- }
- }
-
- function parseBody(obj, world) {
- var def = new b2BodyDef();
- def.type = obj.type || b2Body.b2_staticBody;
- def.angle = obj.angle || 0;
- def.angularDamping = obj.angularDamping || 0;
- def.angularVelocity = obj.angularVelocity || 0;
- def.awake = obj.awake || false;
- def.bullet = obj.bullet || false;
- def.fixedRotation = obj.fixedRotation || false;
- def.linearDamping = obj.linearDamping || 0;
- def.linearVelocity = parseVector(obj.linearVelocity);
- def.gravityScale = typeof (obj.gravityScale) !== "undefined" ? obj.gravityScale : 1;
- var md = new b2MassData();
- md.mass = obj["massData-mass"] || 0;
- md.center = parseVector(obj["massData-center"]);
- md.I = obj["massData-I"] || 0;
- def.position = parseVector(obj.position);
- var body = world.CreateBody(def);
- body.name = obj.name;
- body.SetMassData(md);
- if (obj.fixture) {
- for (var i = 0; i < obj.fixture.length; ++i) {
- parseFixture(obj.fixture[i], body)
- }
- }
- if (obj.customProperties) {
- for (i = 0; i < obj.customProperties.length; ++i) {
- parseProperty(obj.customProperties[i], body)
- }
- }
- return body
- }
- var jointsList = {
- revolute: b2RevoluteJointDef,
- distance: b2DistanceJointDef,
- prismatic: b2PrismaticJointDef,
- wheel: b2WheelJointDef,
- rope: b2RopeJointDef,
- motor: b2MotorJointDef,
- weld: b2WeldJointDef,
- friction: b2FrictionJointDef
- };
-
- function parseJoint(obj, world, bodies) {
- if (!jointsList[obj.type]) {
- throw new Error("unknown joint type")
- }
- var jd = new jointsList[obj.type]();
- switch (jd.type) {
- case b2Joint.e_revoluteJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.enableLimit = obj.enableLimit || false;
- jd.enableMotor = obj.enableMotor || false;
- jd.lowerAngle = obj.lowerLimit || 0;
- jd.maxMotorTorque = obj.maxMotorTorque || 0;
- jd.motorSpeed = obj.motorSpeed || 0;
- jd.referenceAngle = obj.refAngle || 0;
- jd.upperAngle = obj.upperLimit || 0;
- break;
- case b2Joint.e_distanceJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.dampingRatio = obj.dampingRatio || 0;
- jd.frequencyHz = obj.frequency || 0;
- jd.length = obj.length || 0;
- break;
- case b2Joint.e_prismaticJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.enableLimit = obj.enableLimit || false;
- jd.enableMotor = obj.enableMotor || false;
- jd.localAxisA = parseVector(obj.localAxisA);
- jd.lowerTranslation = obj.lowerLimit || 0;
- jd.maxMotorForce = obj.maxMotorForce || 0;
- jd.motorSpeed = obj.motorSpeed || 0;
- jd.referenceAngle = obj.refAngle || 0;
- jd.upperTranslation = obj.upperLimit || 0;
- break;
- case b2Joint.e_wheelJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.enableMotor = obj.enableMotor || false;
- jd.localAxisA = parseVector(obj.localAxisA);
- jd.maxMotorTorque = obj.maxMotorTorque || 0;
- jd.motorSpeed = obj.motorSpeed || 0;
- jd.dampingRatio = obj.springDampingRatio || 0;
- jd.frequencyHz = obj.springFrequency || 0;
- break;
- case b2Joint.e_ropeJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.maxLength = obj.maxLength || 0;
- break;
- case b2Joint.e_motorJoint:
- jd.linearOffset = parseVector(obj.anchorA);
- jd.angularOffset = obj.refAngle || 0;
- jd.maxForce = obj.maxForce || 0;
- jd.maxTorque = obj.maxTorque || 0;
- jd.correctionFactor = obj.correctionFactor || 0;
- break;
- case b2Joint.e_weldJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.referenceAngle = obj.refAngle || 0;
- jd.dampingRatio = obj.dampingRatio || 0;
- jd.frequencyHz = obj.frequencyHz || 0;
- break;
- case b2Joint.e_frictionJoint:
- jd.localAnchorA = parseVector(obj.anchorA);
- jd.localAnchorB = parseVector(obj.anchorB);
- jd.maxForce = obj.maxForce || 0;
- jd.maxTorque = obj.maxTorque || 0;
- break;
- default:
- throw new Error("unhandled joint type")
- }
- jd.bodyA = bodies[obj.bodyA || 0];
- jd.bodyB = bodies[obj.bodyB || 0];
- jd.collideConnected = obj.collideConnected || false;
- var joint = world.CreateJoint(jd);
- joint.name = obj.name;
- if (obj.customProperties) {
- for (var i = 0; i < obj.customProperties.length; ++i) {
- parseProperty(obj, joint)
- }
- }
- return joint
- }
-
- function b2RubeParameters() {
- this.world = null;
- this.positionIterations = 0;
- this.velocityIterations = 0;
- this.stepsPerSecond = 0;
- this.fixtures = {};
- this.bodies = {};
- this.joints = {};
- Object.seal(this)
- }
-
- function parseWorld(obj, world) {
- var params = new b2RubeParameters();
- params.world = world = world || new b2World(new b2Vec2(0, 0));
- params.positionIterations = obj.positionIterations || 0;
- params.velocityIterations = obj.velocityIterations || 0;
- params.stepsPerSecond = obj.stepsPerSecond || 0;
- if (obj.gravity) {
- world.SetGravity(parseVector(obj.gravity))
- }
- world.SetAllowSleeping(obj.allowSleep || false);
- world.SetAutoClearForces(obj.autoClearForces || false);
- world.SetWarmStarting(obj.warmStarting || false);
- world.SetContinuousPhysics(obj.continuousPhysics || false);
- world.SetSubStepping(obj.subStepping || false);
- var bodies = [];
- var bl = obj.body;
- if (bl) {
- for (var i = 0; i < bl.length; ++i) {
- var body = parseBody(bl[i], world);
- bodies.push(body);
- for (var f = body.GetFixtureList(); f; f = f.GetNext()) {
- if (!params.fixtures[f.name]) {
- params.fixtures[f.name] = []
- }
- params.fixtures[f.name].push(f)
- }
- if (!params.bodies[body.name]) {
- params.bodies[body.name] = []
- }
- params.bodies[body.name].push(body)
- }
- }
- var joints = [];
- var jl = obj.joint;
- if (jl) {
- for (i = 0; i < jl.length; ++i) {
- var joint = parseJoint(jl[i], world, bodies);
- joints.push(joint);
- if (!params.joints[joint.name]) {
- params.joints[joint.name] = []
- }
- params.joints[joint.name].push(joint)
- }
- }
- return params
- }
- return {
- parseWorld: parseWorld
- }
-})();
-var mappings = [{
- trimmed: "version",
- name: "b2_version",
- def: b2_version
-}, {
- trimmed: "Vec2",
- name: "b2Vec2",
- def: b2Vec2
-}, {
- trimmed: "Vec3",
- name: "b2Vec3",
- def: b2Vec3
-}, {
- trimmed: "Mat22",
- name: "b2Mat22",
- def: b2Mat22
-}, {
- trimmed: "Mat33",
- name: "b2Mat33",
- def: b2Mat33
-}, {
- trimmed: "Rot",
- name: "b2Rot",
- def: b2Rot
-}, {
- trimmed: "Transform",
- name: "b2Transform",
- def: b2Transform
-}, {
- trimmed: "Sweep",
- name: "b2Sweep",
- def: b2Sweep
-}, {
- trimmed: "Dot_v2_v2",
- name: "b2Dot_v2_v2",
- def: b2Dot_v2_v2
-}, {
- trimmed: "Cross_v2_v2",
- name: "b2Cross_v2_v2",
- def: b2Cross_v2_v2
-}, {
- trimmed: "Cross_v2_f",
- name: "b2Cross_v2_f",
- def: b2Cross_v2_f
-}, {
- trimmed: "Cross_f_v2",
- name: "b2Cross_f_v2",
- def: b2Cross_f_v2
-}, {
- trimmed: "Mul_m22_v2",
- name: "b2Mul_m22_v2",
- def: b2Mul_m22_v2
-}, {
- trimmed: "MulT_m22_v2",
- name: "b2MulT_m22_v2",
- def: b2MulT_m22_v2
-}, {
- trimmed: "Distance",
- name: "b2Distance",
- def: b2Distance
-}, {
- trimmed: "DistanceSquared",
- name: "b2DistanceSquared",
- def: b2DistanceSquared
-}, {
- trimmed: "Dot_v3_v3",
- name: "b2Dot_v3_v3",
- def: b2Dot_v3_v3
-}, {
- trimmed: "Cross_v3_v3",
- name: "b2Cross_v3_v3",
- def: b2Cross_v3_v3
-}, {
- trimmed: "Mul_m22_m22",
- name: "b2Mul_m22_m22",
- def: b2Mul_m22_m22
-}, {
- trimmed: "MulT_m22_m22",
- name: "b2MulT_m22_m22",
- def: b2MulT_m22_m22
-}, {
- trimmed: "Mul_m33_v3",
- name: "b2Mul_m33_v3",
- def: b2Mul_m33_v3
-}, {
- trimmed: "Mul22_m33_v2",
- name: "b2Mul22_m33_v2",
- def: b2Mul22_m33_v2
-}, {
- trimmed: "Mul_r_r",
- name: "b2Mul_r_r",
- def: b2Mul_r_r
-}, {
- trimmed: "MulT_r_r",
- name: "b2MulT_r_r",
- def: b2MulT_r_r
-}, {
- trimmed: "Mul_r_v2",
- name: "b2Mul_r_v2",
- def: b2Mul_r_v2
-}, {
- trimmed: "MulT_r_v2",
- name: "b2MulT_r_v2",
- def: b2MulT_r_v2
-}, {
- trimmed: "Mul_t_v2",
- name: "b2Mul_t_v2",
- def: b2Mul_t_v2
-}, {
- trimmed: "Min_v2",
- name: "b2Min_v2",
- def: b2Min_v2
-}, {
- trimmed: "Max_v2",
- name: "b2Max_v2",
- def: b2Max_v2
-}, {
- trimmed: "Clamp",
- name: "b2Clamp",
- def: b2Clamp
-}, {
- trimmed: "MulT_t_v2",
- name: "b2MulT_t_v2",
- def: b2MulT_t_v2
-}, {
- trimmed: "Mul_t_t",
- name: "b2Mul_t_t",
- def: b2Mul_t_t
-}, {
- trimmed: "MulT_t_t",
- name: "b2MulT_t_t",
- def: b2MulT_t_t
-}, {
- trimmed: "Clamp_v2",
- name: "b2Clamp_v2",
- def: b2Clamp_v2
-}, {
- trimmed: "NextPowerOfTwo",
- name: "b2NextPowerOfTwo",
- def: b2NextPowerOfTwo
-}, {
- trimmed: "Abs_v2",
- name: "b2Abs_v2",
- def: b2Abs_v2
-}, {
- trimmed: "Abs_m22",
- name: "b2Abs_m22",
- def: b2Abs_m22
-}, {
- trimmed: "IsPowerOfTwo",
- name: "b2IsPowerOfTwo",
- def: b2IsPowerOfTwo
-}, {
- trimmed: "RandomFloat",
- name: "b2RandomFloat",
- def: b2RandomFloat
-}, {
- trimmed: "Timer",
- name: "b2Timer",
- def: b2Timer
-}, {
- trimmed: "Color",
- name: "b2Color",
- def: b2Color
-}, {
- trimmed: "Draw",
- name: "b2Draw",
- def: b2Draw
-}, {
- trimmed: "ContactID",
- name: "b2ContactID",
- def: b2ContactID
-}, {
- trimmed: "ManifoldPoint",
- name: "b2ManifoldPoint",
- def: b2ManifoldPoint
-}, {
- trimmed: "Manifold",
- name: "b2Manifold",
- def: b2Manifold
-}, {
- trimmed: "WorldManifold",
- name: "b2WorldManifold",
- def: b2WorldManifold
-}, {
- trimmed: "GetPointStates",
- name: "b2GetPointStates",
- def: b2GetPointStates
-}, {
- trimmed: "ClipVertex",
- name: "b2ClipVertex",
- def: b2ClipVertex
-}, {
- trimmed: "RayCastInput",
- name: "b2RayCastInput",
- def: b2RayCastInput
-}, {
- trimmed: "RayCastOutput",
- name: "b2RayCastOutput",
- def: b2RayCastOutput
-}, {
- trimmed: "AABB",
- name: "b2AABB",
- def: b2AABB
-}, {
- trimmed: "CollideCircles",
- name: "b2CollideCircles",
- def: b2CollideCircles
-}, {
- trimmed: "CollidePolygonAndCircle",
- name: "b2CollidePolygonAndCircle",
- def: b2CollidePolygonAndCircle
-}, {
- trimmed: "FindMaxSeparation",
- name: "b2FindMaxSeparation",
- def: b2FindMaxSeparation
-}, {
- trimmed: "FindIncidentEdge",
- name: "b2FindIncidentEdge",
- def: b2FindIncidentEdge
-}, {
- trimmed: "CollidePolygons",
- name: "b2CollidePolygons",
- def: b2CollidePolygons
-}, {
- trimmed: "CollideEdgeAndCircle",
- name: "b2CollideEdgeAndCircle",
- def: b2CollideEdgeAndCircle
-}, {
- trimmed: "EPAxis",
- name: "b2EPAxis",
- def: b2EPAxis
-}, {
- trimmed: "TempPolygon",
- name: "b2TempPolygon",
- def: b2TempPolygon
-}, {
- trimmed: "ReferenceFace",
- name: "b2ReferenceFace",
- def: b2ReferenceFace
-}, {
- trimmed: "EPCollider",
- name: "b2EPCollider",
- def: b2EPCollider
-}, {
- trimmed: "CollideEdgeAndPolygon",
- name: "b2CollideEdgeAndPolygon",
- def: b2CollideEdgeAndPolygon
-}, {
- trimmed: "ClipSegmentToLine",
- name: "b2ClipSegmentToLine",
- def: b2ClipSegmentToLine
-}, {
- trimmed: "TestShapeOverlap",
- name: "b2TestShapeOverlap",
- def: b2TestShapeOverlap
-}, {
- trimmed: "TestOverlap",
- name: "b2TestOverlap",
- def: b2TestOverlap
-}, {
- trimmed: "Shape",
- name: "b2Shape",
- def: b2Shape
-}, {
- trimmed: "CircleShape",
- name: "b2CircleShape",
- def: b2CircleShape
-}, {
- trimmed: "EdgeShape",
- name: "b2EdgeShape",
- def: b2EdgeShape
-}, {
- trimmed: "ChainShape",
- name: "b2ChainShape",
- def: b2ChainShape
-}, {
- trimmed: "PolygonShape",
- name: "b2PolygonShape",
- def: b2PolygonShape
-}, {
- trimmed: "Pair",
- name: "b2Pair",
- def: b2Pair
-}, {
- trimmed: "PairLessThan",
- name: "b2PairLessThan",
- def: b2PairLessThan
-}, {
- trimmed: "BroadPhase",
- name: "b2BroadPhase",
- def: b2BroadPhase
-}, {
- trimmed: "DistanceProxy",
- name: "b2DistanceProxy",
- def: b2DistanceProxy
-}, {
- trimmed: "SimplexCache",
- name: "b2SimplexCache",
- def: b2SimplexCache
-}, {
- trimmed: "DistanceInput",
- name: "b2DistanceInput",
- def: b2DistanceInput
-}, {
- trimmed: "DistanceOutput",
- name: "b2DistanceOutput",
- def: b2DistanceOutput
-}, {
- trimmed: "SimplexVertex",
- name: "b2SimplexVertex",
- def: b2SimplexVertex
-}, {
- trimmed: "Simplex",
- name: "b2Simplex",
- def: b2Simplex
-}, {
- trimmed: "DistanceFunc",
- name: "b2DistanceFunc",
- def: b2DistanceFunc
-}, {
- trimmed: "TreeNode",
- name: "b2TreeNode",
- def: b2TreeNode
-}, {
- trimmed: "DynamicTree",
- name: "b2DynamicTree",
- def: b2DynamicTree
-}, {
- trimmed: "TOIInput",
- name: "b2TOIInput",
- def: b2TOIInput
-}, {
- trimmed: "TOIOutput",
- name: "b2TOIOutput",
- def: b2TOIOutput
-}, {
- trimmed: "SeparationFunction",
- name: "b2SeparationFunction",
- def: b2SeparationFunction
-}, {
- trimmed: "TimeOfImpact",
- name: "b2TimeOfImpact",
- def: b2TimeOfImpact
-}, {
- trimmed: "BodyDef",
- name: "b2BodyDef",
- def: b2BodyDef
-}, {
- trimmed: "Body",
- name: "b2Body",
- def: b2Body
-}, {
- trimmed: "Filter",
- name: "b2Filter",
- def: b2Filter
-}, {
- trimmed: "FixtureDef",
- name: "b2FixtureDef",
- def: b2FixtureDef
-}, {
- trimmed: "Fixture",
- name: "b2Fixture",
- def: b2Fixture
-}, {
- trimmed: "DestructionListener",
- name: "b2DestructionListener",
- def: b2DestructionListener
-}, {
- trimmed: "ContactFilter",
- name: "b2ContactFilter",
- def: b2ContactFilter
-}, {
- trimmed: "ContactImpulse",
- name: "b2ContactImpulse",
- def: b2ContactImpulse
-}, {
- trimmed: "ContactListener",
- name: "b2ContactListener",
- def: b2ContactListener
-}, {
- trimmed: "QueryCallback",
- name: "b2QueryCallback",
- def: b2QueryCallback
-}, {
- trimmed: "RayCastCallback",
- name: "b2RayCastCallback",
- def: b2RayCastCallback
-}, {
- trimmed: "TimeStep",
- name: "b2TimeStep",
- def: b2TimeStep
-}, {
- trimmed: "Position",
- name: "b2Position",
- def: b2Position
-}, {
- trimmed: "Velocity",
- name: "b2Velocity",
- def: b2Velocity
-}, {
- trimmed: "SolverData",
- name: "b2SolverData",
- def: b2SolverData
-}, {
- trimmed: "World",
- name: "b2World",
- def: b2World
-}, {
- trimmed: "MixFriction",
- name: "b2MixFriction",
- def: b2MixFriction
-}, {
- trimmed: "MixRestitution",
- name: "b2MixRestitution",
- def: b2MixRestitution
-}, {
- trimmed: "ContactRegister",
- name: "b2ContactRegister",
- def: b2ContactRegister
-}, {
- trimmed: "ContactEdge",
- name: "b2ContactEdge",
- def: b2ContactEdge
-}, {
- trimmed: "Contact",
- name: "b2Contact",
- def: b2Contact
-}, {
- trimmed: "CircleContact",
- name: "b2CircleContact",
- def: b2CircleContact
-}, {
- trimmed: "PolygonContact",
- name: "b2PolygonContact",
- def: b2PolygonContact
-}, {
- trimmed: "ChainAndCircleContact",
- name: "b2ChainAndCircleContact",
- def: b2ChainAndCircleContact
-}, {
- trimmed: "ChainAndPolygonContact",
- name: "b2ChainAndPolygonContact",
- def: b2ChainAndPolygonContact
-}, {
- trimmed: "EdgeAndCircleContact",
- name: "b2EdgeAndCircleContact",
- def: b2EdgeAndCircleContact
-}, {
- trimmed: "EdgeAndPolygonContact",
- name: "b2EdgeAndPolygonContact",
- def: b2EdgeAndPolygonContact
-}, {
- trimmed: "PolygonAndCircleContact",
- name: "b2PolygonAndCircleContact",
- def: b2PolygonAndCircleContact
-}, {
- trimmed: "defaultFilter",
- name: "b2_defaultFilter",
- def: b2_defaultFilter
-}, {
- trimmed: "defaultListener",
- name: "b2_defaultListener",
- def: b2_defaultListener
-}, {
- trimmed: "ContactManager",
- name: "b2ContactManager",
- def: b2ContactManager
-}, {
- trimmed: "VelocityConstraintPoint",
- name: "b2VelocityConstraintPoint",
- def: b2VelocityConstraintPoint
-}, {
- trimmed: "ContactPositionConstraint",
- name: "b2ContactPositionConstraint",
- def: b2ContactPositionConstraint
-}, {
- trimmed: "ContactVelocityConstraint",
- name: "b2ContactVelocityConstraint",
- def: b2ContactVelocityConstraint
-}, {
- trimmed: "PositionSolverManifold",
- name: "b2PositionSolverManifold",
- def: b2PositionSolverManifold
-}, {
- trimmed: "ContactSolverDef",
- name: "b2ContactSolverDef",
- def: b2ContactSolverDef
-}, {
- trimmed: "ContactSolver",
- name: "b2ContactSolver",
- def: b2ContactSolver
-}, {
- trimmed: "Island",
- name: "b2Island",
- def: b2Island
-}, {
- trimmed: "Jacobian",
- name: "b2Jacobian",
- def: b2Jacobian
-}, {
- trimmed: "JointEdge",
- name: "b2JointEdge",
- def: b2JointEdge
-}, {
- trimmed: "JointDef",
- name: "b2JointDef",
- def: b2JointDef
-}, {
- trimmed: "Joint",
- name: "b2Joint",
- def: b2Joint
-}, {
- trimmed: "RevoluteJointDef",
- name: "b2RevoluteJointDef",
- def: b2RevoluteJointDef
-}, {
- trimmed: "RevoluteJoint",
- name: "b2RevoluteJoint",
- def: b2RevoluteJoint
-}, {
- trimmed: "MouseJointDef",
- name: "b2MouseJointDef",
- def: b2MouseJointDef
-}, {
- trimmed: "MouseJoint",
- name: "b2MouseJoint",
- def: b2MouseJoint
-}, {
- trimmed: "DistanceJointDef",
- name: "b2DistanceJointDef",
- def: b2DistanceJointDef
-}, {
- trimmed: "DistanceJoint",
- name: "b2DistanceJoint",
- def: b2DistanceJoint
-}, {
- trimmed: "PrismaticJointDef",
- name: "b2PrismaticJointDef",
- def: b2PrismaticJointDef
-}, {
- trimmed: "PrismaticJoint",
- name: "b2PrismaticJoint",
- def: b2PrismaticJoint
-}, {
- trimmed: "FrictionJointDef",
- name: "b2FrictionJointDef",
- def: b2FrictionJointDef
-}, {
- trimmed: "FrictionJoint",
- name: "b2FrictionJoint",
- def: b2FrictionJoint
-}, {
- trimmed: "WeldJointDef",
- name: "b2WeldJointDef",
- def: b2WeldJointDef
-}, {
- trimmed: "WeldJoint",
- name: "b2WeldJoint",
- def: b2WeldJoint
-}, {
- trimmed: "WheelJointDef",
- name: "b2WheelJointDef",
- def: b2WheelJointDef
-}, {
- trimmed: "WheelJoint",
- name: "b2WheelJoint",
- def: b2WheelJoint
-}, {
- trimmed: "GearJointDef",
- name: "b2GearJointDef",
- def: b2GearJointDef
-}, {
- trimmed: "GearJoint",
- name: "b2GearJoint",
- def: b2GearJoint
-}, {
- trimmed: "MotorJointDef",
- name: "b2MotorJointDef",
- def: b2MotorJointDef
-}, {
- trimmed: "MotorJoint",
- name: "b2MotorJoint",
- def: b2MotorJoint
-}, {
- trimmed: "PulleyJointDef",
- name: "b2PulleyJointDef",
- def: b2PulleyJointDef
-}, {
- trimmed: "PulleyJoint",
- name: "b2PulleyJoint",
- def: b2PulleyJoint
-}, {
- trimmed: "RopeJointDef",
- name: "b2RopeJointDef",
- def: b2RopeJointDef
-}, {
- trimmed: "RopeJoint",
- name: "b2RopeJoint",
- def: b2RopeJoint
-}, {
- trimmed: "RopeDef",
- name: "b2RopeDef",
- def: b2RopeDef
-}, {
- trimmed: "Rope",
- name: "b2Rope",
- def: b2Rope
-}, {
- trimmed: "maxManifoldPoints",
- name: "b2_maxManifoldPoints",
- def: b2_maxManifoldPoints
-}, {
- trimmed: "maxPolygonVertices",
- name: "b2_maxPolygonVertices",
- def: b2_maxPolygonVertices
-}, {
- trimmed: "aabbExtension",
- name: "b2_aabbExtension",
- def: b2_aabbExtension
-}, {
- trimmed: "aabbMultiplier",
- name: "b2_aabbMultiplier",
- def: b2_aabbMultiplier
-}, {
- trimmed: "linearSlop",
- name: "b2_linearSlop",
- def: b2_linearSlop
-}, {
- trimmed: "angularSlop",
- name: "b2_angularSlop",
- def: b2_angularSlop
-}, {
- trimmed: "polygonRadius",
- name: "b2_polygonRadius",
- def: b2_polygonRadius
-}, {
- trimmed: "maxSubSteps",
- name: "b2_maxSubSteps",
- def: b2_maxSubSteps
-}, {
- trimmed: "maxTOIContacts",
- name: "b2_maxTOIContacts",
- def: b2_maxTOIContacts
-}, {
- trimmed: "velocityThreshold",
- name: "b2_velocityThreshold",
- def: b2_velocityThreshold
-}, {
- trimmed: "maxLinearCorrection",
- name: "b2_maxLinearCorrection",
- def: b2_maxLinearCorrection
-}, {
- trimmed: "maxAngularCorrection",
- name: "b2_maxAngularCorrection",
- def: b2_maxAngularCorrection
-}, {
- trimmed: "maxTranslation",
- name: "b2_maxTranslation",
- def: b2_maxTranslation
-}, {
- trimmed: "maxTranslationSquared",
- name: "b2_maxTranslationSquared",
- def: b2_maxTranslationSquared
-}, {
- trimmed: "maxRotation",
- name: "b2_maxRotation",
- def: b2_maxRotation
-}, {
- trimmed: "maxRotationSquared",
- name: "b2_maxRotationSquared",
- def: b2_maxRotationSquared
-}, {
- trimmed: "baumgarte",
- name: "b2_baumgarte",
- def: b2_baumgarte
-}, {
- trimmed: "toiBaugarte",
- name: "b2_toiBaugarte",
- def: b2_toiBaugarte
-}, {
- trimmed: "timeToSleep",
- name: "b2_timeToSleep",
- def: b2_timeToSleep
-}, {
- trimmed: "linearSleepTolerance",
- name: "b2_linearSleepTolerance",
- def: b2_linearSleepTolerance
-}, {
- trimmed: "angularSleepTolerance",
- name: "b2_angularSleepTolerance",
- def: b2_angularSleepTolerance
-}, {
- trimmed: "epsilon",
- name: "b2_epsilon",
- def: b2_epsilon
-}, {
- trimmed: "JsonSerializer",
- name: "b2JsonSerializer",
- def: b2JsonSerializer
-}, {
- trimmed: "RUBELoader",
- name: "b2RUBELoader",
- def: b2RUBELoader
-}, {
- trimmed: "Profiler",
- name: "b2Profiler",
- def: b2Profiler
-}];
-
-
-if (typeof (b2_compatibility) !== "undefined" && typeof (window) !== "undefined") {
- for (var i = 0; i < mappings.length; ++i) {
- window[mappings[i].name] = mappings[i].def
- }
-} else {
- var b2 = {};
- for (var i = 0; i < mappings.length; ++i) {
- b2[mappings[i].trimmed] = mappings[i].def
- }
- if (typeof (module) !== "undefined") {
- module.exports = b2
- } else {
- window.b2 = b2
- }
-}
\ No newline at end of file
diff --git a/spaces/fmind/resume/files/linkedin.html b/spaces/fmind/resume/files/linkedin.html
deleted file mode 100644
index 8fbd199d4c3340aa607026a51ff244dd06d0fb75..0000000000000000000000000000000000000000
--- a/spaces/fmind/resume/files/linkedin.html
+++ /dev/null
@@ -1,11335 +0,0 @@
- Médéric HURIER - Lead MLOps Engineer - Decathlon Technology | LinkedIn
-
Note: I'm not available to work on new missions until the 1st of September 2023. Thank you for your understanding.
When I worked as a teacher, I told my students that Artificial Intelligence and Machine Learning are the most effective levers to make a difference. Every day, new AI and ML solutions are released to empower companies and individuals alike. The question is: Is your business ready to provide the best AI/ML products for your customers?
I'm a professional Machine Learning Engineer, Data Scientist, and MLOps ready to assist you in this quest. I've completed a Ph.D. in Machine Learning and several high-end AI/ML certifications to help you build leading data-driven services. My past experiences include working with companies like Google, BNP Paribas, ArcelorMittal, the European Commission, and Decathlon to frame their needs, create state-of-the-art models and deliver AI/ML artifacts at scale.
I now work as a freelancer in Luxembourg, and I can carry out missions remotely in other European countries. You can get in touch with me on LinkedIn or at contact@fmind.dev. I'll be happy to collaborate with you or discuss your favored AI/ML topics in the MLOps Community.
- Tutoring adult students to become data scientists specializing in machine learning. - https://openclassrooms.com/fr/paths/793-data-scientist - https://openclassrooms.com/fr/paths/794-machine-learning-engineer - https://openclassrooms.com/fr/paths/795-ai-engineer
-
- Mission: Enhance the ARACHNE risk scoring tool (fraud detection).
Main tasks and responsibilities: - Develop a new version of Arachne using data mining techniques - Manage the development of the Arachne PoC/Project (SCRUM) - Assist data scientists in their projects (Virtual Assistant, NLP, …)
Technical stack: - Data Science: Python, PostgreSQL, SQLAlchemy, Hugging Face, HayStack - Management/Environment: Jira, Confluence, MS Office, AWS, Azure
-
- Mission: Design and implement the next ML/MLOps platform on AWS and GCP.
Main tasks and responsibilities: - Design the functional & technical architecture of the platform - Manage the MLOps@Decathlon initiative (tasks, planning) - Select the vendor solutions based on a user needs analysis - Communicate progress and successes to stakeholders - Assist data scientists in their projects (audience, forecast)
- Mission: Improve the visibility and assets of SFEIR's Data Team.
Main tasks and responsibilities: - Design and create technical interviews for recruiting data scientists. - Become a Professional Machine Learning Engineer on Google Cloud. - Propose a strategy to improve the online visibility of SFEIR data team. - Share knowledge about data trends with non-technical staff members. - Create a group to write tutorials and kata on AI/ML for SFEIR developers.
-
- Mission: Train and optimize machine learning models to recommend steel prices.
Main tasks and responsibilities: - Create and fine-tune machine-learning models (tree-based) - Evaluate the performance of the model on real datasets - Communicate the results to business stakeholders
- While the future of machine learning and MLOps is being debated, practitioners still need to attend to their machine learning models in production. This is no easy task, as ML engineers must constantly assess the quality of the data that enters and exits their pipelines, and ensure that their models generate the correct predictions. To assist ML engineers with this challenge, several AI/ML monitoring solutions have been developed.
In this article, I will discuss the nature of AI/ML monitoring and how it relates to data engineering. First, I will present the similarities between AI/ML monitoring and data engineering. Second, I will enumerate additional features that AI/ML monitoring solutions can provide. Third, I will briefly touch on the topic of AI/ML observability and its relation to AI/ML monitoring. Finally, I will provide my conclusion about the field of AI/ML monitoring and how it should be considered to ensure the success of your AI/ML project.
-
- In this article, I present the implementation of a Python package on GitHub designed to support MLOps initiatives. The goal of this package is to make the coding workflow of data scientists and ML engineers as flexible, robust, and productive as possible. First, I start by motivating the use of Python packages. Then, I provide some tools and tips you can include in your MLOps project. Finally, I explain the follow-up steps required to take this package to the next level and make it work in your environment.
-
- Large Language Model (LLM) is such an exciting topic. Since the release of ChatGPT, we have seen a surge of innovation ranging from education mentorship to finance advisory. Each week is a new opportunity for addressing new kinds of problems, increasing human productivity, or improving existing solutions. Yet, we may wonder if this is just a new hype cycle or if organizations are truly adopting LLMs at scale …
In March 2023, the MLOps Community issued a survey about LLMs in production to picture the state of adoption. The survey is full of interesting insights, but there is a catch: 80% of the questions are open-ended, which means respondents answered the survey freely, from a few keywords to full sentences. I volunteered to clean up the answers with the help of ChatGPT and let the community get a grasp of the survey responses.
In this article, I present the steps and lessons learned from my journey to shed some light on the MLOps survey on LLMs. I’m first going to present the goal and questions of the survey. Then, I will explain how I used ChatGPT to review the data and standardize the content. Finally, I’m going to evaluate the performance of ChatGPT compared to a manual review.
-
- If you work on MLOps, you must navigate an ever-growing landscape of tools and solutions. This is both an intense source of stimulation and fatigue for MLOps practitioners.
Vendors and users face the same problem: How can we combine all these tools without the combinatorial complexity of creating custom integrations?
In this article, I propose a solution analogous to POSIX to address this challenge. First, I motivate the creation of common protocols and schemas for combining MLOps tools. Second, I present a high-level architecture to support implementation. Third, I conclude with the benefits and limitations of standardizing MLOps.
-
- Kubeflow Pipelines (KFP) is a powerful platform for building machine learning pipelines at scale with Kubernetes. The platform is well supported on major cloud platforms such as GCP (Vertex AI Pipelines) or AWS (Kubeflow on AWS). However, installing KFP on Apple Silicon (macOS 12.5.1 with Apple M1 Pro) proved to be more challenging than I imagined. Thus, I wanted to share my experience and tips to install KFP as easily as possible on your shiny Mac.
In this article, I present 4 steps to install Kubeflow on Apple Silicon, using Rancher Desktop for setting up Docker/Kubernetes. In the end, I list the problems I encountered during the installation of Kubeflow Pipelines.
-
- As programmers, we are continuously looking for languages that are performant, productive, and general purpose. Is there any programming language that currently satisfies these properties? Can we ever create one?
In this article, I present a fundamental trade-off that affects the design of programming languages and the success of software projects.
-
-
-
- University of Luxembourg
-
- Mobile applications are essential for interacting with technology and other people. With more than 2 billion devices deployed all over the world, Android offers a thriving ecosystem by making accessible the work of thousands of developers on digital marketplaces such as Google Play. Nevertheless, the success of Android also exposes millions of users to malware authors who seek to siphon private information and hijack mobile devices for their own benefit.
To fight against the proliferation of Android malware, the security community embraced machine learning, a branch of artificial intelligence that powers a new generation of detection systems. Machine learning algorithms, however, require a substantial number of qualified samples to learn the classification rules enforced by security experts. Unfortunately, malware ground truths are notoriously hard to construct due to the inherent complexity of Android applications and the global lack of public information about malware. In a context where both information and human resources are limited, the security community is in demand for new approaches to aid practitioners to accurately define Android malware, automate classification decisions, and improve the comprehension of Android malware.
This dissertation proposes three solutions to assist with the creation of malware ground truths.
-
- Android malware is now pervasive and evolving rapidly. Thousands of malware samples are discovered every day with new models of attacks. The growth of these threats has come hand in hand with the proliferation of collective repositories sharing the latest specimens. Having access to a large number of samples opens new research directions aiming at efficiently vetting apps. However, automatically inferring a reference ground-truth from those repositories is not straightforward and can inadvertently lead to unforeseen misconceptions. On the one hand, samples are often mislabeled as different parties use distinct naming schemes for the same sample. On the other hand, samples are frequently misclassified due to conceptual errors made during labeling processes.
In this paper, we analyze the associations between all labels given by different vendors and we propose a system called EUPHONY to systematically unify common samples into family groups. The key novelty of our approach is that no prior knowledge of malware families is needed. We evaluate our approach using reference datasets and more than 0.4 million additional samples outside of these datasets. Results show that EUPHONY provides competitive performance against the state-of-the-art.
-
-
- There is generally a lack of consensus in Antivirus (AV) engines' decisions on a given sample. This challenges the building of authoritative ground-truth datasets. Instead, researchers and practitioners may rely on unvalidated approaches to build their ground truth, e.g., by considering decisions from a selected set of Antivirus vendors or by setting up a threshold number of positive detections before classifying a sample. Both approaches are biased as they implicitly either decide on ranking AV products, or they consider that all AV decisions have equal weights. In this paper, we extensively investigate the lack of agreement among AV engines.
To that end, we propose a set of metrics that quantitatively describe the different dimensions of this lack of consensus. We show how our metrics can bring important insights by using the detection results of 66 AV products on 2 million Android apps as a case study. Our analysis focuses not only on AV binary decision but also on the notoriously hard problem of labels that AVs associate with suspicious files, and allows to highlight biases hidden in the collection of a malware ground truth---a foundation stone of any machine learning-based malware detection approach.
-
-'''
-print(examples)
-demo = gr.Interface(fn=predict,
- inputs=[gr.Image(type='pil'),
- gr.Dropdown(["Day", "Night"], label="Time in the image above",
- info="Fire intensity depends on time of the day!")],
- outputs=[gr.Label(num_top_classes=2, label="Prediction"),
- gr.Label(num_top_classes=1, label="Fire Intensity"),
- gr.Number(label="Prediction time (s)")],
- title=title,
- description=description,
- article=article,
- examples=examples)
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dm_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
deleted file mode 100644
index 19c963923126b53ce22f60813540a35badf24b3d..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer
-
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-class DCM(nn.Module):
- """Dynamic Convolutional Module used in DMNet.
-
- Args:
- filter_size (int): The filter size of generated convolution kernel
- used in Dynamic Convolutional Module.
- fusion (bool): Add one conv to fuse DCM output feature.
- in_channels (int): Input channels.
- channels (int): Channels after modules, before conv_seg.
- conv_cfg (dict | None): Config of conv layers.
- norm_cfg (dict | None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg,
- norm_cfg, act_cfg):
- super(DCM, self).__init__()
- self.filter_size = filter_size
- self.fusion = fusion
- self.in_channels = in_channels
- self.channels = channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1,
- 0)
-
- self.input_redu_conv = ConvModule(
- self.in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- if self.norm_cfg is not None:
- self.norm = build_norm_layer(self.norm_cfg, self.channels)[1]
- else:
- self.norm = None
- self.activate = build_activation_layer(self.act_cfg)
-
- if self.fusion:
- self.fusion_conv = ConvModule(
- self.channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, x):
- """Forward function."""
- generated_filter = self.filter_gen_conv(
- F.adaptive_avg_pool2d(x, self.filter_size))
- x = self.input_redu_conv(x)
- b, c, h, w = x.shape
- # [1, b * c, h, w], c = self.channels
- x = x.view(1, b * c, h, w)
- # [b * c, 1, filter_size, filter_size]
- generated_filter = generated_filter.view(b * c, 1, self.filter_size,
- self.filter_size)
- pad = (self.filter_size - 1) // 2
- if (self.filter_size - 1) % 2 == 0:
- p2d = (pad, pad, pad, pad)
- else:
- p2d = (pad + 1, pad, pad + 1, pad)
- x = F.pad(input=x, pad=p2d, mode='constant', value=0)
- # [1, b * c, h, w]
- output = F.conv2d(input=x, weight=generated_filter, groups=b * c)
- # [b, c, h, w]
- output = output.view(b, c, h, w)
- if self.norm is not None:
- output = self.norm(output)
- output = self.activate(output)
-
- if self.fusion:
- output = self.fusion_conv(output)
-
- return output
-
-
-@HEADS.register_module()
-class DMHead(BaseDecodeHead):
- """Dynamic Multi-scale Filters for Semantic Segmentation.
-
- This head is the implementation of
- `DMNet <https://openaccess.thecvf.com/content_ICCV_2019/papers/He_Dynamic_Multi-Scale_Filters_for_Semantic_Segmentation_ICCV_2019_paper.pdf>`_.
-
- Args:
- filter_sizes (tuple[int]): The size of generated convolutional filters
- used in Dynamic Convolutional Module. Default: (1, 3, 5, 7).
- fusion (bool): Add one conv to fuse DCM output feature.
- """
-
- def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs):
- super(DMHead, self).__init__(**kwargs)
- assert isinstance(filter_sizes, (list, tuple))
- self.filter_sizes = filter_sizes
- self.fusion = fusion
- dcm_modules = []
- for filter_size in self.filter_sizes:
- dcm_modules.append(
- DCM(filter_size,
- self.fusion,
- self.in_channels,
- self.channels,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.dcm_modules = nn.ModuleList(dcm_modules)
- self.bottleneck = ConvModule(
- self.in_channels + len(filter_sizes) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- dcm_outs = [x]
- for dcm_module in self.dcm_modules:
- dcm_outs.append(dcm_module(x))
- dcm_outs = torch.cat(dcm_outs, dim=1)
- output = self.bottleneck(dcm_outs)
- output = self.cls_seg(output)
- return output
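
The heart of `DCM.forward` above is applying a different, input-generated kernel to every sample in the batch with a single grouped convolution: the batch dimension is folded into the channel axis, and `groups=b * c` makes each generated kernel convolve exactly one feature map. A minimal standalone sketch of that pattern (hypothetical sizes, independent of mmseg):

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes for illustration.
b, c, h, w = 2, 8, 16, 16
filter_size = 3
x = torch.randn(b, c, h, w)

# One (1 x k x k) kernel per sample-channel pair, as filter_gen_conv
# produces after the reshape in DCM.forward.
filters = torch.randn(b * c, 1, filter_size, filter_size)

# Fold the batch into the channel axis and pad to preserve spatial size.
x = x.view(1, b * c, h, w)
pad = (filter_size - 1) // 2
x = F.pad(x, (pad, pad, pad, pad))

# groups=b*c: each kernel sees exactly one input map, i.e. a per-sample
# dynamic convolution in a single call.
out = F.conv2d(x, filters, groups=b * c).view(b, c, h, w)
print(out.shape)  # torch.Size([2, 8, 16, 16])
```

This grouped-convolution trick is what lets DMNet use filters that depend on the input image without looping over the batch in Python.
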
diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/encoders/xception.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/encoders/xception.py
deleted file mode 100644
index 9453bd08351f78ff78450b6ff17e2d216385ba6b..0000000000000000000000000000000000000000
--- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/encoders/xception.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import re
-import torch.nn as nn
-
-from pretrainedmodels.models.xception import pretrained_settings
-from pretrainedmodels.models.xception import Xception
-
-from ._base import EncoderMixin
-
-
-class XceptionEncoder(Xception, EncoderMixin):
- def __init__(self, out_channels, *args, depth=5, **kwargs):
- super().__init__(*args, **kwargs)
-
- self._out_channels = out_channels
- self._depth = depth
- self._in_channels = 3
-
- # modify padding to maintain output shape
- self.conv1.padding = (1, 1)
- self.conv2.padding = (1, 1)
-
- del self.fc
-
- def make_dilated(self, *args, **kwargs):
- raise ValueError(
- "Xception encoder does not support dilated mode "
- "due to pooling operation for downsampling!"
- )
-
- def get_stages(self):
- return [
- nn.Identity(),
- nn.Sequential(
- self.conv1, self.bn1, self.relu, self.conv2, self.bn2, self.relu
- ),
- self.block1,
- self.block2,
- nn.Sequential(
- self.block3,
- self.block4,
- self.block5,
- self.block6,
- self.block7,
- self.block8,
- self.block9,
- self.block10,
- self.block11,
- ),
- nn.Sequential(
- self.block12, self.conv3, self.bn3, self.relu, self.conv4, self.bn4
- ),
- ]
-
- def forward(self, x):
- stages = self.get_stages()
-
- features = []
- for i in range(self._depth + 1):
- x = stages[i](x)
- features.append(x)
-
- return features
-
- def load_state_dict(self, state_dict):
- # remove linear
- state_dict.pop("fc.bias", None)
- state_dict.pop("fc.weight", None)
-
- super().load_state_dict(state_dict)
-
-
-xception_encoders = {
- "xception": {
- "encoder": XceptionEncoder,
- "pretrained_settings": pretrained_settings["xception"],
- "params": {"out_channels": (3, 64, 128, 256, 728, 2048),},
- },
-}
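
For context, `get_stages` above exposes the network as six feature stages (identity, stem, entry blocks, middle flow, exit flow), and `forward` collects one feature map per stage, which is what segmentation decoders consume. A minimal usage sketch, assuming the module above is importable and `pretrainedmodels` is installed (no pretrained weights are loaded here):

```python
import torch

# Hypothetical instantiation; out_channels mirrors the registry entry above.
encoder = XceptionEncoder(out_channels=(3, 64, 128, 256, 728, 2048), depth=5)
encoder.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)
    features = encoder(x)  # list of depth + 1 feature maps

for i, f in enumerate(features):
    print(i, tuple(f.shape))  # progressively downsampled multi-scale features
```
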
diff --git a/spaces/gligen/demo/gligen/ldm/lr_scheduler.py b/spaces/gligen/demo/gligen/ldm/lr_scheduler.py
deleted file mode 100644
index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000
--- a/spaces/gligen/demo/gligen/ldm/lr_scheduler.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-
-
-class LambdaWarmUpCosineScheduler:
- """
- note: use with a base_lr of 1.0
- """
- def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0):
- self.lr_warm_up_steps = warm_up_steps
- self.lr_start = lr_start
- self.lr_min = lr_min
- self.lr_max = lr_max
- self.lr_max_decay_steps = max_decay_steps
- self.last_lr = 0.
- self.verbosity_interval = verbosity_interval
-
- def schedule(self, n, **kwargs):
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}")
- if n < self.lr_warm_up_steps:
- lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start
- self.last_lr = lr
- return lr
- else:
- t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps)
- t = min(t, 1.0)
- lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * (
- 1 + np.cos(t * np.pi))
- self.last_lr = lr
- return lr
-
- def __call__(self, n, **kwargs):
- return self.schedule(n,**kwargs)
-
-
-class LambdaWarmUpCosineScheduler2:
- """
- supports repeated iterations, configurable via lists
- note: use with a base_lr of 1.0.
- """
- def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0):
- assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths)
- self.lr_warm_up_steps = warm_up_steps
- self.f_start = f_start
- self.f_min = f_min
- self.f_max = f_max
- self.cycle_lengths = cycle_lengths
- self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths))
- self.last_f = 0.
- self.verbosity_interval = verbosity_interval
-
- def find_in_interval(self, n):
- interval = 0
- for cl in self.cum_cycles[1:]:
- if n <= cl:
- return interval
- interval += 1
-
- def schedule(self, n, **kwargs):
- cycle = self.find_in_interval(n)
- n = n - self.cum_cycles[cycle]
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
- f"current cycle {cycle}")
- if n < self.lr_warm_up_steps[cycle]:
- f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
- self.last_f = f
- return f
- else:
- t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
- t = min(t, 1.0)
- f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
- 1 + np.cos(t * np.pi))
- self.last_f = f
- return f
-
- def __call__(self, n, **kwargs):
- return self.schedule(n, **kwargs)
-
-
-class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2):
-
- def schedule(self, n, **kwargs):
- cycle = self.find_in_interval(n)
- n = n - self.cum_cycles[cycle]
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
- f"current cycle {cycle}")
-
- if n < self.lr_warm_up_steps[cycle]:
- f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
- self.last_f = f
- return f
- else:
- f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
- self.last_f = f
- return f
-
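
These schedulers return the learning rate itself rather than a fraction, which is why the docstrings say to use them with a base_lr of 1.0: they are meant to be wrapped in `torch.optim.lr_scheduler.LambdaLR`, whose multiplier then equals the scheduled value. A minimal sketch with hypothetical step counts and rates:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

# Hypothetical schedule: warm up from 0 to 1e-4 over 100 steps, then
# cosine-decay down to 1e-6 by step 10_000.
sched_fn = LambdaWarmUpCosineScheduler(
    warm_up_steps=100, lr_min=1e-6, lr_max=1e-4, lr_start=0.0,
    max_decay_steps=10_000)

param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.Adam([param], lr=1.0)        # base_lr of 1.0, as advised
scheduler = LambdaLR(opt, lr_lambda=sched_fn)  # effective LR = 1.0 * sched_fn(n)

for step in range(1000):
    opt.step()
    scheduler.step()  # advance the schedule once per optimization step

print(opt.param_groups[0]["lr"])  # scheduled LR after 1000 steps
```
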
diff --git a/spaces/gnakan/airtable-QA/README.md b/spaces/gnakan/airtable-QA/README.md
deleted file mode 100644
index 27aed8df3b5963bbfd1ee397ac020a0494fc02bd..0000000000000000000000000000000000000000
--- a/spaces/gnakan/airtable-QA/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Q&A Your Airtable Data with Streamlit & OpenAI
-emoji: 🌍
-colorFrom: green
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Google Photo For Mac How to Enjoy Your Photos on Smart Displays.md b/spaces/gotiQspiryo/whisper-ui/examples/Google Photo For Mac How to Enjoy Your Photos on Smart Displays.md
deleted file mode 100644
index 4c9153d6fc7d9f9754a972ec3e8ca0f1093d2d3a..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Google Photo For Mac How to Enjoy Your Photos on Smart Displays.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
If you have photos or videos in a Picasa Web Album, the easiest way to still access, modify and share most of that content is to log in to Google Photos. Your photos and videos will already be there.
-
For those who have already downloaded it, it will continue to work as it does today. But we will not be developing it further, and there will be no future updates.
If you choose to switch to Google Photos, you can continue to upload photos and videos using the desktop uploader at photos.google.com/apps.
Google Photos is the de-facto place for most people's photos these days. The app comes pre-installed on Android smartphones and offers unlimited free storage at high quality and a generous 15 GB storage if you want to store photos in original resolution. This is not easily beaten at all, and only Microsoft, with its 1 TB OneDrive solution, comes close in terms of price-per-gigabyte ratio. It does not, however, come remotely close to what Google Photos is capable of, since OneDrive is a storage solution and not much else when it comes to photos and media on an Android smartphone.
-
Previously, Google Photos was merely a filter app that would sieve your photos from your Google Drive folder (specified or otherwise) and show them in the app. Deleting a photo in the folders in Google Drive would remove it from Photos, and deleting a photo while in Google Photos would remove it from Google Drive. In a sense, it was an open system. Google Photos is now a closed-loop app. Photos are uploaded from your devices to the Google Photos cloud. They may or may not take up space in your 15 GB quota. There is no way to officially export photos and media easily because Google wants to tie you into its ecosystem. This is true for every other provider as well. Since this change came into effect, people have been searching for the best ways to download photos from Google Photos, download photos from Google Photos to Mac, download photos from Google Photos to Android, iPhone, or download photos from Google Photos to PC.
-
Sure, you can use the one-by-one method to download your photos from Google Photos to your PC and Mac. You open the Google Photos website and start downloading one by one. How many photos do you have, again? Yeah, let's not get into this method. What's the best way to download all photos from Google Photos then?
-
The best way to download photos from Google Photos to your PC and Mac is by using the web browser and Google Takeout. There is only one concern with this method that will be broached later in the article.
-
This is where the concern that was previously talked about lies. It pertains to storage space. Google provides 15 GB storage for everything but gives unlimited storage for photos up to High Quality. When you set the Takeout to Add To Drive, your photos will take up space in Google Drive since you will now have a copy of the photos. So, your photos will only be backed up as far as the free space in your Google Drive storage allows. In case you have a lot of photos, you may want to use other options, such as choosing select albums to back up, copying them over, and repeating the process.
-
If you must download using the one-by-one method since you believe that would be a faster way for you (you have very few photos), then this is how you download photos from Google Photos to PC using a web browser and the Google Photos website.
-
-
Although the iPad now comes with its own iPadOS, iOS and iPadOS apps are the same for most purposes. Remember how we postulated above that Google does not want you to take your photos out of Google Photos? It does so with such alacrity and audacity that you can only download one photo at a time from Google Photos on iOS devices.
-
Downloading multiple photos from Google Photos to an Android smartphone is easier in the Google Photos app on Android, but with one catch. You can only add these photos to another app such as Google Drive, Dropbox, or OneDrive if you want to download and upload to another drive. If you add photos to Google Drive, you can then use the Drive app to download them to the device.
-
Downloading all photos from Google Photos is not as straightforward as it used to be on any device. Today, the best way to download all photos from Google Photos remains using the Google Takeout option that creates a ZIP file in your Drive or another cloud storage, or even gives you a link to download to your computer should you want it that way. This is the simplest method you can use by far, but you need to do it on a computer, regardless of the operating system. If you want to do it on a mobile phone, it is not that easy on either Android or iPhone since Google Photos doesn't export directly any longer and you have to switch apps, first to upload to Google Drive and then to download your photos from Google Photos and Google Drive.
-
You can automatically download your photos from Google Photos (I believe Google+ photos end up there, correct me if I'm wrong) fairly easily. It requires a few different steps to get it set up though. Once it's setup, the sync is done for you.
-
Note, as pointed out by @Jer, this will count against your Google Drive storage whereas if you just stored them in Google Photos, you can make use of the unlimited storage. I pay $10 a mo/ for 1TB of Google Drive storage and this works great. I organize all my photos into subdirectories, tag them and manage them via Adobe Lightroom, with Google Drive automatically syncing them to Google Photos on my phone. It also lets me easily select a folder I've organized via Lightroom and share it with other people via Google Drive, instead of having to build out albums and share via Photos.
-
The solution by Johnathon Sullinger has caused photos to sync using Google Drive, which counts towards your 15GB limit. Ended up having to delete and re-upload using photos. So sync cannot work both ways without counting towards drive storage limits as far as I can tell.
-
If I take a picture on my camera, it automatically uploads to Google photos once I'm on wifi. It automatically shows up in my "Google Photos" folder on Drive, AND it shows in my offline Google Drive's "Google Photos" folder on my Mac.
-
If I drag a photo into my offline Google Drive "Google Photos" folder on my Mac, it automatically uploads to sync with my Google Drive "Google Photos" folder in the cloud, and it shows up in Google Photos online.
-
Google has purposely blocked this by not releasing bio direction sync on any platform even the phones wont sync downloads they just show up in the viewer and you have to select them to download a copy to your phone. Google photos is for auto upload only not auto download. Until people demand this service and another vendor provides it google will never allow it.
-
Analyzing all those photos, all that metadata, is of course just more raw information to feed the all-important, all-encompassing algorithms. That analysis fuels the targeted ads, drives influenced clicks, builds up the profile, and enables Google and others to analyze you among millions of others, categorizing you with AI, to infer what it can assume about your likely behaviors and the likely behaviors of others.
-
That last point is another swipe at Google, leading to the second critical consideration for any iPhone user with Google Photos on their phone. When Apple released iOS 14 last year, it gave users the option to share only selected photos and videos with apps, rather than their entire collection. Why should an app have access to years of memories, when all you want to do is edit a few photos or videos?
-
We will show you how to create a folder on your Mac, and every image or video you add to this folder will automatically upload to Google Photos. You only have to put the pictures in this Finder folder and do nothing else! Once photos are uploaded to Google Photos, you can access them on a web browser or the Google Photos app on your smartphone.
-
It is important to note that this only backs up photos in your Photos app. You can easily back up pictures that you have stored anywhere else on your computer by following steps 11-13, and selecting your other photos in step 12 instead of the export folder.
-
-
- )}
-
- );
-};
-export default Home;
-
-export const getServerSideProps: GetServerSideProps = async ({ locale }) => {
- const defaultModelId =
- (process.env.DEFAULT_MODEL &&
- Object.values(OpenAIModelID).includes(
- process.env.DEFAULT_MODEL as OpenAIModelID,
- ) &&
- process.env.DEFAULT_MODEL) ||
- fallbackModelID;
-
- let serverSidePluginKeysSet = false;
-
- const googleApiKey = process.env.GOOGLE_API_KEY;
- const googleCSEId = process.env.GOOGLE_CSE_ID;
-
- if (googleApiKey && googleCSEId) {
- serverSidePluginKeysSet = true;
- }
-
- return {
- props: {
- serverSideApiKeyIsSet: !!process.env.OPENAI_API_KEY,
- defaultModelId,
- serverSidePluginKeysSet,
- ...(await serverSideTranslations(locale ?? 'en', [
- 'common',
- 'chat',
- 'sidebar',
- 'markdown',
- 'promptbar',
- 'settings',
- ])),
- },
- };
-};
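The `getServerSideProps` above resolves `defaultModelId` by accepting `DEFAULT_MODEL` from the environment only when it names a known model ID, and falling back otherwise. A minimal Python sketch of the same validate-or-fall-back pattern; the model IDs below are illustrative stand-ins, not values from the original app:

```python
import os

# Hypothetical set of known model IDs, standing in for OpenAIModelID.
KNOWN_MODEL_IDS = {"gpt-3.5-turbo", "gpt-4"}
FALLBACK_MODEL_ID = "gpt-3.5-turbo"

def resolve_default_model() -> str:
    """Accept DEFAULT_MODEL from the environment only if it is a known ID."""
    candidate = os.environ.get("DEFAULT_MODEL")
    if candidate in KNOWN_MODEL_IDS:
        return candidate
    return FALLBACK_MODEL_ID
```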
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/framework.h b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/framework.h
deleted file mode 100644
index 12d803caaf3210c45808dee41217c4c6c6edfe6e..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/framework.h
+++ /dev/null
@@ -1,49 +0,0 @@
-// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#pragma once
-
-// Framework-specific macros to enable code sharing.
-
-//------------------------------------------------------------------------
-// Tensorflow.
-
-#ifdef NVDR_TENSORFLOW
-#define EIGEN_USE_GPU
-#include "tensorflow/core/framework/op.h"
-#include "tensorflow/core/framework/op_kernel.h"
-#include "tensorflow/core/framework/shape_inference.h"
-#include "tensorflow/core/platform/default/logging.h"
-using namespace tensorflow;
-using namespace tensorflow::shape_inference;
-#define NVDR_CTX_ARGS OpKernelContext* _nvdr_ctx
-#define NVDR_CTX_PARAMS _nvdr_ctx
-#define NVDR_CHECK(COND, ERR) OP_REQUIRES(_nvdr_ctx, COND, errors::Internal(ERR))
-#define NVDR_CHECK_CUDA_ERROR(CUDA_CALL) OP_CHECK_CUDA_ERROR(_nvdr_ctx, CUDA_CALL)
-#define NVDR_CHECK_GL_ERROR(GL_CALL) OP_CHECK_GL_ERROR(_nvdr_ctx, GL_CALL)
-#endif
-
-//------------------------------------------------------------------------
-// PyTorch.
-
-#ifdef NVDR_TORCH
-#ifndef __CUDACC__
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <ATen/cuda/CUDAUtils.h>
-#include <c10/cuda/CUDAGuard.h>
-#include <pybind11/pybind11.h>
-#endif
-#define NVDR_CTX_ARGS int _nvdr_ctx_dummy
-#define NVDR_CTX_PARAMS 0
-#define NVDR_CHECK(COND, ERR) do { TORCH_CHECK(COND, ERR) } while(0)
-#define NVDR_CHECK_CUDA_ERROR(CUDA_CALL) do { cudaError_t err = CUDA_CALL; TORCH_CHECK(!err, "Cuda error: ", cudaGetLastError(), "[", #CUDA_CALL, ";]"); } while(0)
-#define NVDR_CHECK_GL_ERROR(GL_CALL) do { GL_CALL; GLenum err = glGetError(); TORCH_CHECK(err == GL_NO_ERROR, "OpenGL error: ", getGLErrorString(err), "[", #GL_CALL, ";]"); } while(0)
-#endif
-
-//------------------------------------------------------------------------
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py
deleted file mode 100644
index 55929854a284626862af6666d3d981e83ad486fa..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# empty
diff --git a/spaces/haakohu/deep_privacy2/dp2/gan_trainer.py b/spaces/haakohu/deep_privacy2/dp2/gan_trainer.py
deleted file mode 100644
index 149e0f6be0602e90f26b67551615d0abb96aad56..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2/dp2/gan_trainer.py
+++ /dev/null
@@ -1,325 +0,0 @@
-import atexit
-from collections import defaultdict
-import logging
-import typing
-import torch
-import time
-from dp2.utils import vis_utils
-from dp2 import utils
-from tops import logger, checkpointer
-import tops
-from easydict import EasyDict
-
-
-def accumulate_gradients(params, fp16_ddp_accumulate):
-    """Average parameter gradients across DDP workers, optionally doing the all-reduce in fp16."""
-    if len(params) == 0:
- return
- params = [param for param in params if param.grad is not None]
- flat = torch.cat([param.grad.flatten() for param in params])
- orig_dtype = flat.dtype
- if tops.world_size() > 1:
- if fp16_ddp_accumulate:
- flat = flat.half() / tops.world_size()
- else:
- flat /= tops.world_size()
- torch.distributed.all_reduce(flat)
- flat = flat.to(orig_dtype)
- grads = flat.split([param.numel() for param in params])
- for param, grad in zip(params, grads):
- param.grad = grad.reshape(param.shape)
-
-
-def accumulate_buffers(module: torch.nn.Module):
- buffers = [buf for buf in module.buffers()]
- if len(buffers) == 0:
- return
- flat = torch.cat([buf.flatten() for buf in buffers])
- if tops.world_size() > 1:
- torch.distributed.all_reduce(flat)
- flat /= tops.world_size()
- bufs = flat.split([buf.numel() for buf in buffers])
- for old, new in zip(buffers, bufs):
- old.copy_(new.reshape(old.shape), non_blocking=True)
-
-
-def check_ddp_consistency(module):
- if tops.world_size() == 1:
- return
-    assert isinstance(module, torch.nn.Module)
- params_buffs = list(module.named_parameters()) + list(module.named_buffers())
- for name, tensor in params_buffs:
- fullname = type(module).__name__ + '.' + name
- tensor = tensor.detach()
- if tensor.is_floating_point():
- tensor = torch.nan_to_num(tensor)
- other = tensor.clone()
- torch.distributed.broadcast(tensor=other, src=0)
- assert (tensor == other).all(), fullname
-
-
-class AverageMeter():
-    def __init__(self) -> None:
-        self.to_log = dict()
-        self.n = defaultdict(int)
-
- @torch.no_grad()
- def update(self, values: dict):
- for key, value in values.items():
- self.n[key] += 1
- if key in self.to_log:
- self.to_log[key] += value.mean().detach()
- else:
- self.to_log[key] = value.mean().detach()
-
- def get_average(self):
- return {key: value / self.n[key] for key, value in self.to_log.items()}
-
-
-class GANTrainer:
-
- def __init__(
- self,
- G: torch.nn.Module,
- D: torch.nn.Module,
- G_EMA: torch.nn.Module,
- D_optim: torch.optim.Optimizer,
- G_optim: torch.optim.Optimizer,
- dl_train: typing.Iterator,
- dl_val: typing.Iterable,
- scaler_D: torch.cuda.amp.GradScaler,
- scaler_G: torch.cuda.amp.GradScaler,
- ims_per_log: int,
- max_images_to_train: int,
- loss_handler,
- ims_per_val: int,
- evaluate_fn,
- batch_size: int,
- broadcast_buffers: bool,
- fp16_ddp_accumulate: bool,
- save_state: bool,
- *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- self.G = G
- self.D = D
- self.G_EMA = G_EMA
- self.D_optim = D_optim
- self.G_optim = G_optim
- self.dl_train = dl_train
- self.dl_val = dl_val
- self.scaler_D = scaler_D
- self.scaler_G = scaler_G
- self.loss_handler = loss_handler
- self.max_images_to_train = max_images_to_train
- self.images_per_val = ims_per_val
- self.images_per_log = ims_per_log
- self.evaluate_fn = evaluate_fn
- self.batch_size = batch_size
- self.broadcast_buffers = broadcast_buffers
- self.fp16_ddp_accumulate = fp16_ddp_accumulate
-
- self.train_state = EasyDict(
- next_log_step=0,
- next_val_step=ims_per_val,
- total_time=0
- )
-
- checkpointer.register_models(dict(
- generator=G, discriminator=D, EMA_generator=G_EMA,
- D_optimizer=D_optim,
- G_optimizer=G_optim,
- train_state=self.train_state,
- scaler_D=self.scaler_D,
- scaler_G=self.scaler_G
- ))
- if checkpointer.has_checkpoint():
- checkpointer.load_registered_models()
- logger.log(f"Resuming training from: global step: {logger.global_step()}")
- else:
- logger.add_dict({
- "stats/discriminator_parameters": tops.num_parameters(self.D),
- "stats/generator_parameters": tops.num_parameters(self.G),
- }, commit=False)
- if save_state:
- # If the job is unexpectedly killed, there could be a mismatch between previously saved checkpoint and the current checkpoint.
- atexit.register(checkpointer.save_registered_models)
-
- self._ims_per_log = ims_per_log
-
- self.to_log = AverageMeter()
- self.trainable_params_D = [param for param in self.D.parameters() if param.requires_grad]
- self.trainable_params_G = [param for param in self.G.parameters() if param.requires_grad]
- logger.add_dict({
- "stats/discriminator_trainable_parameters": sum(p.numel() for p in self.trainable_params_D),
- "stats/generator_trainable_parameters": sum(p.numel() for p in self.trainable_params_G),
- }, commit=False, level=logging.INFO)
- check_ddp_consistency(self.D)
- check_ddp_consistency(self.G)
- check_ddp_consistency(self.G_EMA.generator)
-
- def train_loop(self):
- self.log_time()
- while logger.global_step() <= self.max_images_to_train:
- batch = next(self.dl_train)
- self.G_EMA.update_beta()
- self.to_log.update(self.step_D(batch))
- self.to_log.update(self.step_G(batch))
- self.G_EMA.update(self.G)
-
- if logger.global_step() >= self.train_state.next_log_step:
- to_log = {f"loss/{key}": item.item() for key, item in self.to_log.get_average().items()}
- to_log.update({"amp/grad_scale_G": self.scaler_G.get_scale()})
- to_log.update({"amp/grad_scale_D": self.scaler_D.get_scale()})
- self.to_log = AverageMeter()
- logger.add_dict(to_log, commit=True)
- self.train_state.next_log_step += self.images_per_log
- if self.scaler_D.get_scale() < 1e-8 or self.scaler_G.get_scale() < 1e-8:
- print("Stopping training as gradient scale < 1e-8")
- logger.log("Stopping training as gradient scale < 1e-8")
- break
-
- if logger.global_step() >= self.train_state.next_val_step:
- self.evaluate()
- self.log_time()
- self.save_images()
- self.train_state.next_val_step += self.images_per_val
- logger.step(self.batch_size*tops.world_size())
- logger.log(f"Reached end of training at step {logger.global_step()}.")
- checkpointer.save_registered_models()
-
- def estimate_ims_per_hour(self):
- batch = next(self.dl_train)
- n_ims = int(100e3)
- n_steps = int(n_ims / (self.batch_size * tops.world_size()))
- n_ims = n_steps * self.batch_size * tops.world_size()
- for i in range(10): # Warmup
- self.G_EMA.update_beta()
- self.step_D(batch)
- self.step_G(batch)
- self.G_EMA.update(self.G)
- start_time = time.time()
- for i in utils.tqdm_(list(range(n_steps))):
- self.G_EMA.update_beta()
- self.step_D(batch)
- self.step_G(batch)
- self.G_EMA.update(self.G)
- total_time = time.time() - start_time
- ims_per_sec = n_ims / total_time
- ims_per_hour = ims_per_sec * 60*60
- ims_per_day = ims_per_hour * 24
- logger.log(f"Images per hour: {ims_per_hour/1e6:.3f}M")
- logger.log(f"Images per day: {ims_per_day/1e6:.3f}M")
- import math
- ims_per_4_day = int(math.ceil(ims_per_day / tops.world_size() * 4))
- logger.log(f"Images per 4 days: {ims_per_4_day}")
- logger.add_dict({
- "stats/ims_per_day": ims_per_day,
- "stats/ims_per_4_day": ims_per_4_day
- })
-
- def log_time(self):
- if not hasattr(self, "start_time"):
- self.start_time = time.time()
- self.last_time_step = logger.global_step()
- return
- n_images = logger.global_step() - self.last_time_step
- if n_images == 0:
- return
- n_secs = time.time() - self.start_time
- n_ims_per_sec = n_images / n_secs
- training_time_hours = n_secs / 60 / 60
- self.train_state.total_time += training_time_hours
- remaining_images = self.max_images_to_train - logger.global_step()
- remaining_time = remaining_images / n_ims_per_sec / 60 / 60
- logger.add_dict({
- "stats/n_ims_per_sec": n_ims_per_sec,
- "stats/total_traing_time_hours": self.train_state.total_time,
- "stats/remaining_time_hours": remaining_time
- })
- self.last_time_step = logger.global_step()
- self.start_time = time.time()
-
- def save_images(self):
- dl_val = iter(self.dl_val)
- batch = next(dl_val)
- # TRUNCATED visualization
- ims_to_log = 8
- self.G_EMA.eval()
- fakes_truncated = self.G_EMA.sample(**batch, truncation_value=0)["img"]
- fakes_truncated = utils.denormalize_img(fakes_truncated).mul(255).byte()[:ims_to_log].cpu()
- if "__key__" in batch:
- batch.pop("__key__")
- real = vis_utils.visualize_batch(**tops.to_cpu(batch))[:ims_to_log]
- to_vis = torch.cat((real, fakes_truncated))
- logger.add_images("images/truncated", to_vis, nrow=2)
-
- # Diverse images
- ims_diverse = 3
- batch = next(dl_val)
- to_vis = []
-
- for i in range(ims_diverse):
- z = self.G.get_z(batch["img"])[:1].repeat(batch["img"].shape[0], 1)
- fakes = utils.denormalize_img(self.G_EMA(**batch, z=z)["img"]).mul(255).byte()[:ims_to_log].cpu()
- to_vis.append(fakes)
- if "__key__" in batch:
- batch.pop("__key__")
- reals = vis_utils.visualize_batch(**tops.to_cpu(batch))[:ims_to_log]
- to_vis.insert(0, reals)
- to_vis = torch.cat(to_vis)
- logger.add_images("images/diverse", to_vis, nrow=ims_diverse+1)
-
-        self.G_EMA.train()
-
- def evaluate(self):
- logger.log("Stating evaluation.")
- self.G_EMA.eval()
- try:
- checkpointer.save_registered_models(max_keep=3)
- except Exception:
- logger.log("Could not save checkpoint.")
- if self.broadcast_buffers:
- check_ddp_consistency(self.G)
- check_ddp_consistency(self.D)
- metrics = self.evaluate_fn(generator=self.G_EMA, dataloader=self.dl_val)
- metrics = {f"metrics/{k}": v for k, v in metrics.items()}
-        logger.add_dict(metrics, level=logging.INFO)
-
- def step_D(self, batch):
- utils.set_requires_grad(self.trainable_params_D, True)
- utils.set_requires_grad(self.trainable_params_G, False)
- tops.zero_grad(self.D)
- loss, to_log = self.loss_handler.D_loss(batch, grad_scaler=self.scaler_D)
- with torch.autograd.profiler.record_function("D_step"):
- self.scaler_D.scale(loss).backward()
- accumulate_gradients(self.trainable_params_D, fp16_ddp_accumulate=self.fp16_ddp_accumulate)
- if self.broadcast_buffers:
- accumulate_buffers(self.D)
- accumulate_buffers(self.G)
- # Step will not unscale if unscale is called previously.
- self.scaler_D.step(self.D_optim)
- self.scaler_D.update()
- utils.set_requires_grad(self.trainable_params_D, False)
- utils.set_requires_grad(self.trainable_params_G, False)
- return to_log
-
- def step_G(self, batch):
- utils.set_requires_grad(self.trainable_params_D, False)
- utils.set_requires_grad(self.trainable_params_G, True)
- tops.zero_grad(self.G)
- loss, to_log = self.loss_handler.G_loss(batch, grad_scaler=self.scaler_G)
- with torch.autograd.profiler.record_function("G_step"):
- self.scaler_G.scale(loss).backward()
- accumulate_gradients(self.trainable_params_G, fp16_ddp_accumulate=self.fp16_ddp_accumulate)
- if self.broadcast_buffers:
- accumulate_buffers(self.G)
- accumulate_buffers(self.D)
- self.scaler_G.step(self.G_optim)
- self.scaler_G.update()
- utils.set_requires_grad(self.trainable_params_D, False)
- utils.set_requires_grad(self.trainable_params_G, False)
- return to_log
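`step_D` and `step_G` above both follow the standard `torch.cuda.amp` recipe: scale the loss, backpropagate, then step and update the scaler. A stripped-down sketch of that recipe for a single update; the `model`, `optim`, and `loss_fn` names are placeholders, and the `autocast` context is an assumption here (in the trainer itself the loss handler owns the forward pass):

```python
import torch

def amp_step(model: torch.nn.Module, optim: torch.optim.Optimizer,
             scaler: torch.cuda.amp.GradScaler, loss_fn, batch):
    """One mixed-precision optimizer step, mirroring GANTrainer.step_D/step_G."""
    optim.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda"):
        loss = loss_fn(model, batch)
    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optim)              # unscales gradients, skips the step on inf/NaN
    scaler.update()                 # adjusts the scale factor for the next step
    return loss.detach()
```

The trainer's check that `get_scale()` has not collapsed below `1e-8` guards against the failure mode of this recipe: repeated inf/NaN gradients shrink the scale until no step is ever taken.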
diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_ade20k_150.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_ade20k_150.py
deleted file mode 100644
index fa3cb77077513df451620dac095cf625fc40e0e7..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_ade20k_150.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets import load_sem_seg
-import copy
-
-def _get_ade20k_150_meta():
- ade20k_150_classes = ["wall", "building", "sky", "floor", "tree", "ceiling", "road", "bed ", "windowpane", "grass", "cabinet", "sidewalk", "person", "earth", "door", "table", "mountain", "plant", "curtain", "chair", "car", "water", "painting", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock", "wardrobe", "lamp", "bathtub", "railing", "cushion", "base", "box", "column", "signboard", "chest of drawers", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator", "grandstand", "path", "stairs", "runway", "case", "pool table", "pillow", "screen door", "stairway", "river", "bridge", "bookcase", "blind", "coffee table", "toilet", "flower", "book", "hill", "bench", "countertop", "stove", "palm", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel", "bus", "towel", "light", "truck", "tower", "chandelier", "awning", "streetlight", "booth", "television receiver", "airplane", "dirt track", "apparel", "pole", "land", "bannister", "escalator", "ottoman", "bottle", "buffet", "poster", "stage", "van", "ship", "fountain", "conveyer belt", "canopy", "washer", "plaything", "swimming pool", "stool", "barrel", "basket", "waterfall", "tent", "bag", "minibike", "cradle", "oven", "ball", "food", "step", "tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket", "sculpture", "hood", "sconce", "vase", "traffic light", "tray", "ashcan", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass", "clock", "flag"]
-
- ret = {
- "stuff_classes" : ade20k_150_classes,
- }
- return ret
-
-def register_ade20k_150(root):
- root = os.path.join(root, "ADEChallengeData2016")
- meta = _get_ade20k_150_meta()
- for name, image_dirname, sem_seg_dirname in [
- ("test", "images/validation", "annotations_detectron2/validation"),
- ]:
- image_dir = os.path.join(root, image_dirname)
- gt_dir = os.path.join(root, sem_seg_dirname)
- name = f"ade20k_150_{name}_sem_seg"
- DatasetCatalog.register(name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext='png', image_ext='jpg'))
-        MetadataCatalog.get(name).set(image_root=image_dir, sem_seg_root=gt_dir, evaluator_type="sem_seg", ignore_label=255, **meta,)
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_ade20k_150(_root)
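Once `register_ade20k_150` has run, the dataset can be retrieved from detectron2's catalogs by the name built in the loop. Note the registration lambda binds the paths through default arguments (`x=image_dir, y=gt_dir`), capturing them at registration time rather than at call time, which avoids Python's late-binding closure pitfall inside the loop. A quick usage sketch:

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

dataset_dicts = DatasetCatalog.get("ade20k_150_test_sem_seg")
metadata = MetadataCatalog.get("ade20k_150_test_sem_seg")
print(len(dataset_dicts), "images;", len(metadata.stuff_classes), "classes")
```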
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/config/singleton.py b/spaces/hamelcubsfan/AutoGPT/autogpt/config/singleton.py
deleted file mode 100644
index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/config/singleton.py
+++ /dev/null
@@ -1,24 +0,0 @@
-"""The singleton metaclass for ensuring only one instance of a class."""
-import abc
-
-
-class Singleton(abc.ABCMeta, type):
- """
- Singleton metaclass for ensuring only one instance of a class.
- """
-
- _instances = {}
-
- def __call__(cls, *args, **kwargs):
- """Call method for the singleton metaclass."""
- if cls not in cls._instances:
- cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
- return cls._instances[cls]
-
-
-class AbstractSingleton(abc.ABC, metaclass=Singleton):
- """
- Abstract singleton class for ensuring only one instance of a class.
- """
-
- pass
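A minimal usage sketch of the `Singleton` metaclass above: every instantiation of a class that uses it returns the same object. The `Config` class here is a hypothetical example, not part of the original module.

```python
class Config(metaclass=Singleton):
    def __init__(self) -> None:
        self.debug_mode = False

a = Config()
b = Config()
assert a is b  # both names point at the single shared instance
```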
diff --git a/spaces/hands012/gpt-academic/multi_language.py b/spaces/hands012/gpt-academic/multi_language.py
deleted file mode 100644
index 6c7259836e69d7bc5724a301883a9dbf1526589a..0000000000000000000000000000000000000000
--- a/spaces/hands012/gpt-academic/multi_language.py
+++ /dev/null
@@ -1,510 +0,0 @@
-"""
-    Translate this project into other languages (experimental; please open an issue if you find any bugs)
-
-
- Usage:
- 1. modify LANG
- LANG = "English"
-
- 2. modify TransPrompt
- TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
-
- 3. Run `python multi_language.py`.
- Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
-
- 4. Find the translated program in `multi-language\English\*`
-
- P.S.
-
-    - The translation mapping will be stored in `docs/translation_xxxx.json`; you can revise mistaken translations there.
-
-    - If you would like to share your `docs/translation_xxxx.json` (so that everyone can use the cached & revised translation mapping), please open a Pull Request.
-
-    - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request.
-
- - Welcome any Pull Request, regardless of language
-"""
-
-import os
-import json
-import functools
-import re
-import pickle
-import time
-
-CACHE_FOLDER = "gpt_log"
-blacklist = ['multi-language', 'gpt_log', '.git', 'private_upload', 'multi_language.py']
-
-# LANG = "TraditionalChinese"
-# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #."
-
-# LANG = "Japanese"
-# TransPrompt = f"Replace each json value `#` with translated results in Japanese, e.g., \"原始文本\":\"テキストの翻訳\". Keep Json format. Do not answer #."
-
-LANG = "English"
-TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #."
-
-
-if not os.path.exists(CACHE_FOLDER):
- os.makedirs(CACHE_FOLDER)
-
-
-def lru_file_cache(maxsize=128, ttl=None, filename=None):
- """
- Decorator that caches a function's return value after being called with given arguments.
- It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache.
- maxsize: Maximum size of the cache. Defaults to 128.
- ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache.
- filename: Name of the file to store the cache in. If not supplied, the function name + ".cache" will be used.
- """
- cache_path = os.path.join(CACHE_FOLDER, f"{filename}.cache") if filename is not None else None
-
- def decorator_function(func):
- cache = {}
- _cache_info = {
- "hits": 0,
- "misses": 0,
- "maxsize": maxsize,
- "currsize": 0,
- "ttl": ttl,
- "filename": cache_path,
- }
-
- @functools.wraps(func)
- def wrapper_function(*args, **kwargs):
- key = str((args, frozenset(kwargs)))
- if key in cache:
- if _cache_info["ttl"] is None or (cache[key][1] + _cache_info["ttl"]) >= time.time():
- _cache_info["hits"] += 1
- print(f'Warning, reading cache, last read {(time.time()-cache[key][1])//60} minutes ago'); time.sleep(2)
- cache[key][1] = time.time()
- return cache[key][0]
- else:
- del cache[key]
-
- result = func(*args, **kwargs)
- cache[key] = [result, time.time()]
- _cache_info["misses"] += 1
- _cache_info["currsize"] += 1
-
- if _cache_info["currsize"] > _cache_info["maxsize"]:
- oldest_key = None
- for k in cache:
- if oldest_key is None:
- oldest_key = k
- elif cache[k][1] < cache[oldest_key][1]:
- oldest_key = k
- del cache[oldest_key]
- _cache_info["currsize"] -= 1
-
- if cache_path is not None:
- with open(cache_path, "wb") as f:
- pickle.dump(cache, f)
-
- return result
-
- def cache_info():
- return _cache_info
-
- wrapper_function.cache_info = cache_info
-
- if cache_path is not None and os.path.exists(cache_path):
- with open(cache_path, "rb") as f:
- cache = pickle.load(f)
- _cache_info["currsize"] = len(cache)
-
- return wrapper_function
-
- return decorator_function
-
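A usage sketch for the `lru_file_cache` decorator defined above, assuming it runs in the same module (where `time` is already imported); the decorated function and cache filename are hypothetical examples. Results persist to `gpt_log/<filename>.cache` between runs:

```python
@lru_file_cache(maxsize=64, ttl=3600, filename="demo")
def slow_square(x):
    time.sleep(1)  # stand-in for an expensive call, e.g. an API request
    return x * x

slow_square(3)   # computed, then written to gpt_log/demo.cache
slow_square(3)   # served from the cache on this and later runs
print(slow_square.cache_info())
```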
-def contains_chinese(string):
- """
- Returns True if the given string contains Chinese characters, False otherwise.
- """
- chinese_regex = re.compile(u'[\u4e00-\u9fff]+')
- return chinese_regex.search(string) is not None
-
-def split_list(lst, n_each_req):
- """
- Split a list into smaller lists, each with a maximum number of elements.
- :param lst: the list to split
- :param n_each_req: the maximum number of elements in each sub-list
- :return: a list of sub-lists
- """
- result = []
- for i in range(0, len(lst), n_each_req):
- result.append(lst[i:i + n_each_req])
- return result
-
-def map_to_json(map, language):
- dict_ = read_map_from_json(language)
- dict_.update(map)
- with open(f'docs/translate_{language.lower()}.json', 'w', encoding='utf8') as f:
- json.dump(dict_, f, indent=4, ensure_ascii=False)
-
-def read_map_from_json(language):
- if os.path.exists(f'docs/translate_{language.lower()}.json'):
- with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f:
- res = json.load(f)
- res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)}
- return res
- return {}
-
-def advanced_split(splitted_string, spliter, include_spliter=False):
- splitted_string_tmp = []
- for string_ in splitted_string:
- if spliter in string_:
- splitted = string_.split(spliter)
- for i, s in enumerate(splitted):
- if include_spliter:
- if i != len(splitted)-1:
- splitted[i] += spliter
- splitted[i] = splitted[i].strip()
- for i in reversed(range(len(splitted))):
- if not contains_chinese(splitted[i]):
- splitted.pop(i)
- splitted_string_tmp.extend(splitted)
- else:
- splitted_string_tmp.append(string_)
- splitted_string = splitted_string_tmp
- return splitted_string_tmp
-
-cached_translation = read_map_from_json(language=LANG)
-
-def trans(word_to_translate, language, special=False):
- if len(word_to_translate) == 0: return {}
- from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- from toolbox import get_conf, ChatBotWithCookies
- proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
- llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':0.4,
- }
- import random
- N_EACH_REQ = random.randint(16, 32)
- word_to_translate_split = split_list(word_to_translate, N_EACH_REQ)
- inputs_array = [str(s) for s in word_to_translate_split]
- inputs_show_user_array = inputs_array
- history_array = [[] for _ in inputs_array]
- if special: # to English using CamelCase Naming Convention
- sys_prompt_array = [f"Translate following names to English with CamelCase naming convention. Keep original format" for _ in inputs_array]
- else:
- sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" for _ in inputs_array]
- chatbot = ChatBotWithCookies(llm_kwargs)
- gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array,
- inputs_show_user_array,
- llm_kwargs,
- chatbot,
- history_array,
- sys_prompt_array,
- )
- while True:
- try:
- gpt_say = next(gpt_say_generator)
- print(gpt_say[1][0][1])
- except StopIteration as e:
- result = e.value
- break
- translated_result = {}
- for i, r in enumerate(result):
- if i%2 == 1:
- try:
- res_before_trans = eval(result[i-1])
- res_after_trans = eval(result[i])
- if len(res_before_trans) != len(res_after_trans):
- raise RuntimeError
- for a,b in zip(res_before_trans, res_after_trans):
- translated_result[a] = b
- except:
- # try:
- # res_before_trans = word_to_translate_split[(i-1)//2]
- # res_after_trans = [s for s in result[i].split("', '")]
- # for a,b in zip(res_before_trans, res_after_trans):
- # translated_result[a] = b
- # except:
- print('GPT answers with unexpected format, some words may not be translated, but you can try again later to increase translation coverage.')
- res_before_trans = eval(result[i-1])
- for a in res_before_trans:
- translated_result[a] = None
- return translated_result
-
-
-def trans_json(word_to_translate, language, special=False):
- if len(word_to_translate) == 0: return {}
- from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- from toolbox import get_conf, ChatBotWithCookies
- proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
- llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':0.1,
- }
- import random
- N_EACH_REQ = random.randint(16, 32)
- random.shuffle(word_to_translate)
- word_to_translate_split = split_list(word_to_translate, N_EACH_REQ)
- inputs_array = [{k:"#" for k in s} for s in word_to_translate_split]
- inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array]
-
- inputs_show_user_array = inputs_array
- history_array = [[] for _ in inputs_array]
- sys_prompt_array = [TransPrompt for _ in inputs_array]
- chatbot = ChatBotWithCookies(llm_kwargs)
- gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array,
- inputs_show_user_array,
- llm_kwargs,
- chatbot,
- history_array,
- sys_prompt_array,
- )
- while True:
- try:
- gpt_say = next(gpt_say_generator)
- print(gpt_say[1][0][1])
- except StopIteration as e:
- result = e.value
- break
- translated_result = {}
- for i, r in enumerate(result):
- if i%2 == 1:
- try:
- translated_result.update(json.loads(result[i]))
- except:
- print(result[i])
- print(result)
- return translated_result
-
-
-def step_1_core_key_translate():
- def extract_chinese_characters(file_path):
- syntax = []
- with open(file_path, 'r', encoding='utf-8') as f:
- content = f.read()
- import ast
- root = ast.parse(content)
- for node in ast.walk(root):
- if isinstance(node, ast.Name):
- if contains_chinese(node.id): syntax.append(node.id)
- if isinstance(node, ast.Import):
- for n in node.names:
- if contains_chinese(n.name): syntax.append(n.name)
- elif isinstance(node, ast.ImportFrom):
- for n in node.names:
- if contains_chinese(n.name): syntax.append(n.name)
- for k in node.module.split('.'):
- if contains_chinese(k): syntax.append(k)
- return syntax
-
- def extract_chinese_characters_from_directory(directory_path):
- chinese_characters = []
- for root, dirs, files in os.walk(directory_path):
- if any([b in root for b in blacklist]):
- continue
- for file in files:
- if file.endswith('.py'):
- file_path = os.path.join(root, file)
- chinese_characters.extend(extract_chinese_characters(file_path))
- return chinese_characters
-
- directory_path = './'
- chinese_core_names = extract_chinese_characters_from_directory(directory_path)
- chinese_core_keys = [name for name in chinese_core_names]
- chinese_core_keys_norepeat = []
- for d in chinese_core_keys:
- if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d)
- need_translate = []
- cached_translation = read_map_from_json(language=LANG)
- cached_translation_keys = list(cached_translation.keys())
- for d in chinese_core_keys_norepeat:
- if d not in cached_translation_keys:
- need_translate.append(d)
-
- need_translate_mapping = trans(need_translate, language=LANG, special=True)
- map_to_json(need_translate_mapping, language=LANG)
- cached_translation = read_map_from_json(language=LANG)
- cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
-
- chinese_core_keys_norepeat_mapping = {}
- for k in chinese_core_keys_norepeat:
- chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]})
- chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0])))
-
- # ===============================================
- # copy
- # ===============================================
- def copy_source_code():
-
- from toolbox import get_conf
- import shutil
- import os
- try: shutil.rmtree(f'./multi-language/{LANG}/')
- except: pass
- os.makedirs(f'./multi-language', exist_ok=True)
- backup_dir = f'./multi-language/{LANG}/'
- shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist)
- copy_source_code()
-
- # ===============================================
- # primary key replace
- # ===============================================
- directory_path = f'./multi-language/{LANG}/'
- for root, dirs, files in os.walk(directory_path):
- for file in files:
- if file.endswith('.py'):
- file_path = os.path.join(root, file)
- syntax = []
- # read again
- with open(file_path, 'r', encoding='utf-8') as f:
- content = f.read()
-
- for k, v in chinese_core_keys_norepeat_mapping.items():
- content = content.replace(k, v)
-
- with open(file_path, 'w', encoding='utf-8') as f:
- f.write(content)
-
-
-def step_2_core_key_translate():
-
- # =================================================================================================
- # step2
- # =================================================================================================
-
- def load_string(strings, string_input):
- string_ = string_input.strip().strip(',').strip().strip('.').strip()
- if string_.startswith('[Local Message]'):
- string_ = string_.replace('[Local Message]', '')
- string_ = string_.strip().strip(',').strip().strip('.').strip()
- splitted_string = [string_]
- # --------------------------------------
-        for spliter in [",", "。", ")", "(", "(", ")", "<", ">", "[", "]",
-                        "【", "】", "?", ":", ":", ",", "#", "\n", ";", "`",
-                        " ", "- ", "---"]:
-            splitted_string = advanced_split(splitted_string, spliter=spliter, include_spliter=False)
-
- # --------------------------------------
- for j, s in enumerate(splitted_string): # .com
- if '.com' in s: continue
- if '\'' in s: continue
- if '\"' in s: continue
- strings.append([s,0])
-
-
- def get_strings(node):
- strings = []
- # recursively traverse the AST
- for child in ast.iter_child_nodes(node):
- node = child
- if isinstance(child, ast.Str):
- if contains_chinese(child.s):
- load_string(strings=strings, string_input=child.s)
- elif isinstance(child, ast.AST):
- strings.extend(get_strings(child))
- return strings
-
- string_literals = []
- directory_path = f'./multi-language/{LANG}/'
- for root, dirs, files in os.walk(directory_path):
- for file in files:
- if file.endswith('.py'):
- file_path = os.path.join(root, file)
- syntax = []
- with open(file_path, 'r', encoding='utf-8') as f:
- content = f.read()
- # comments
- comments_arr = []
- for code_sp in content.splitlines():
- comments = re.findall(r'#.*$', code_sp)
- for comment in comments:
- load_string(strings=comments_arr, string_input=comment)
- string_literals.extend(comments_arr)
-
- # strings
- import ast
- tree = ast.parse(content)
-                    res = get_strings(tree)
- string_literals.extend(res)
-
- [print(s) for s in string_literals]
- chinese_literal_names = []
- chinese_literal_names_norepeat = []
- for string, offset in string_literals:
- chinese_literal_names.append(string)
- chinese_literal_names_norepeat = []
- for d in chinese_literal_names:
- if d not in chinese_literal_names_norepeat: chinese_literal_names_norepeat.append(d)
- need_translate = []
- cached_translation = read_map_from_json(language=LANG)
- cached_translation_keys = list(cached_translation.keys())
- for d in chinese_literal_names_norepeat:
- if d not in cached_translation_keys:
- need_translate.append(d)
-
-
- up = trans_json(need_translate, language=LANG, special=False)
- map_to_json(up, language=LANG)
- cached_translation = read_map_from_json(language=LANG)
- cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
-
- # ===============================================
- # literal key replace
- # ===============================================
- directory_path = f'./multi-language/{LANG}/'
- for root, dirs, files in os.walk(directory_path):
- for file in files:
- if file.endswith('.py'):
- file_path = os.path.join(root, file)
- syntax = []
- # read again
- with open(file_path, 'r', encoding='utf-8') as f:
- content = f.read()
-
- for k, v in cached_translation.items():
- if v is None: continue
- if '"' in v:
- v = v.replace('"', "`")
- if '\'' in v:
- v = v.replace('\'', "`")
- content = content.replace(k, v)
-
- with open(file_path, 'w', encoding='utf-8') as f:
- f.write(content)
-
- if file.strip('.py') in cached_translation:
- file_new = cached_translation[file.strip('.py')] + '.py'
- file_path_new = os.path.join(root, file_new)
- with open(file_path_new, 'w', encoding='utf-8') as f:
- f.write(content)
- os.remove(file_path)
-
-step_1_core_key_translate()
-step_2_core_key_translate()
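Both steps share one flat JSON cache mapping original Chinese strings to their translations. A sketch of the round-trip through `map_to_json` and `read_map_from_json`, assuming it runs in this module (the sample entries are illustrative):

```python
sample = {"原始文本": "Original text", "翻译": "Translation"}
map_to_json(sample, language=LANG)          # merges into docs/translate_english.json
merged = read_map_from_json(language=LANG)  # reloads only Chinese-keyed, non-null entries
assert all(contains_chinese(k) for k in merged)
```

Because `read_map_from_json` drops `None` values, entries that `trans` marked as failed are simply retried on the next run, which is why running the script multiple times increases coverage.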
diff --git a/spaces/hezhaoqia/vits-simple-api/bert_vits2/modules.py b/spaces/hezhaoqia/vits-simple-api/bert_vits2/modules.py
deleted file mode 100644
index 9206f95b0037251225eddc1d64b60f749155135c..0000000000000000000000000000000000000000
--- a/spaces/hezhaoqia/vits-simple-api/bert_vits2/modules.py
+++ /dev/null
@@ -1,459 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from bert_vits2 import commons
-from bert_vits2.commons import init_weights, get_padding
-from bert_vits2.transforms import piecewise_rational_quadratic_transform
-from bert_vits2.attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert (kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset:cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, :self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels:, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout,
- gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2 * self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
-
-
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels=0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow=True,
- gin_channels=gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
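The coupling layers above are normalizing-flow building blocks: the forward direction returns `(y, logdet)` and `reverse=True` inverts the mapping. A small sketch checking that invertibility on `ElementwiseAffine`, the simplest flow in this file, assuming it runs in the same module:

```python
import torch

flow = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 10)        # [batch, channels, time]
x_mask = torch.ones(2, 1, 10)    # keep every frame

y, logdet = flow(x, x_mask)            # forward pass also returns the log-determinant
x_rec = flow(y, x_mask, reverse=True)  # reverse pass undoes the affine map
assert torch.allclose(x, x_rec, atol=1e-6)
```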
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/postprocessing/consolidate_postprocessing_simple.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/postprocessing/consolidate_postprocessing_simple.py
deleted file mode 100644
index bd36b2dce628046c65d9e4a4828c35592b0f5806..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/postprocessing/consolidate_postprocessing_simple.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import argparse
-from nnunet.postprocessing.consolidate_postprocessing import consolidate_folds
-from nnunet.utilities.folder_names import get_output_folder_name
-from nnunet.utilities.task_name_id_conversion import convert_id_to_task_name
-from nnunet.paths import default_cascade_trainer, default_trainer, default_plans_identifier
-
-
-def main():
-    argparser = argparse.ArgumentParser(usage="Used to determine the postprocessing for a trained model. Useful "
-                                              "when the best configuration (2d, 3d_fullres, etc.) was selected manually.")
- argparser.add_argument("-m", type=str, required=True, help="U-Net model (2d, 3d_lowres, 3d_fullres or "
- "3d_cascade_fullres)")
- argparser.add_argument("-t", type=str, required=True, help="Task name or id")
- argparser.add_argument("-tr", type=str, required=False, default=None,
- help="nnUNetTrainer class. Default: %s, unless 3d_cascade_fullres "
- "(then it's %s)" % (default_trainer, default_cascade_trainer))
- argparser.add_argument("-pl", type=str, required=False, default=default_plans_identifier,
- help="Plans name, Default=%s" % default_plans_identifier)
- argparser.add_argument("-val", type=str, required=False, default="validation_raw",
- help="Validation folder name. Default: validation_raw")
-
- args = argparser.parse_args()
- model = args.m
- task = args.t
- trainer = args.tr
- plans = args.pl
- val = args.val
-
- if not task.startswith("Task"):
- task_id = int(task)
- task = convert_id_to_task_name(task_id)
-
- if trainer is None:
- if model == "3d_cascade_fullres":
- trainer = "nnUNetTrainerV2CascadeFullRes"
- else:
- trainer = "nnUNetTrainerV2"
-
- folder = get_output_folder_name(model, task, trainer, plans, None)
-
- consolidate_folds(folder, val, folds=(0,))
-
-
-if __name__ == "__main__":
- main()
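The script is a thin CLI wrapper; the same consolidation can be done programmatically with the helpers it imports. A sketch under the assumption that a 3d_fullres model has already been trained; the task name and plans identifier below are illustrative:

```python
from nnunet.postprocessing.consolidate_postprocessing import consolidate_folds
from nnunet.utilities.folder_names import get_output_folder_name

folder = get_output_folder_name("3d_fullres", "Task004_Hippocampus",
                                "nnUNetTrainerV2", "nnUNetPlansv2.1", None)
consolidate_folds(folder, "validation_raw", folds=(0,))
```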
diff --git a/spaces/huangjiefree/bingo/README.md b/spaces/huangjiefree/bingo/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/huangjiefree/bingo/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI. It is accessible from mainland China, compatible with most Microsoft Bing AI features, and can be self-hosted.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For issues and feedback, please visit https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/huggingfacejs/doc-vis-qa/README.md b/spaces/huggingfacejs/doc-vis-qa/README.md
deleted file mode 100644
index 06ad95b6cdd528aada4569d52775dbbc0490c664..0000000000000000000000000000000000000000
--- a/spaces/huggingfacejs/doc-vis-qa/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Document and visual question answering
-emoji: ❓
-colorFrom: pink
-colorTo: indigo
-sdk: static
-pinned: false
-license: mit
-description: Showcase document & visual question answering using huggingface.js
-duplicated_from: vvmnnnkv/doc-vis-qa
----
-
-Showcase document & visual question answering using the `@huggingface/inference` JS lib.
-
-Default models for inference:
- * Documents: https://huggingface.co/impira/layoutlm-document-qa
- * Images: https://huggingface.co/dandelin/vilt-b32-finetuned-vqa
\ No newline at end of file
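The Space's JS demo has a rough Python counterpart via `huggingface_hub`'s `InferenceClient`, sketched here against the two default models named above; the local file names are placeholders, and the exact return shapes may vary by library version:

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

# Document QA against the default document model
doc_answers = client.document_question_answering(
    image="invoice.png",
    question="What is the invoice number?",
    model="impira/layoutlm-document-qa",
)

# Visual QA against the default image model
vqa_answers = client.visual_question_answering(
    image="photo.jpg",
    question="How many cats are in the picture?",
    model="dandelin/vilt-b32-finetuned-vqa",
)
print(doc_answers[0], vqa_answers[0])
```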
diff --git a/spaces/hysts-duplicates/comparing-captioning-models/app.py b/spaces/hysts-duplicates/comparing-captioning-models/app.py
deleted file mode 100644
index 4c8298ed83ab51691d6d17a224c9455e20105dba..0000000000000000000000000000000000000000
--- a/spaces/hysts-duplicates/comparing-captioning-models/app.py
+++ /dev/null
@@ -1,161 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-from gradio_client import Client
-
-DESCRIPTION = "# Comparing image captioning models"
-ORIGINAL_SPACE_INFO = """\
-- [GIT-large fine-tuned on COCO](https://huggingface.co/spaces/library-samples/image-captioning-with-git)
-- [BLIP-large](https://huggingface.co/spaces/library-samples/image-captioning-with-blip)
-- [BLIP-2 OPT 6.7B](https://huggingface.co/spaces/merve/BLIP2-with-transformers)
-- [BLIP-2 T5-XXL](https://huggingface.co/spaces/hysts/BLIP2-with-transformers)
-- [InstructBLIP](https://huggingface.co/spaces/library-samples/InstructBLIP)
-- [Fuyu-8B](https://huggingface.co/spaces/adept/fuyu-8b-demo)
-"""
-
-
-def generate_caption_git(image_path: str) -> str:
- try:
- client = Client("library-samples/image-captioning-with-git")
- return client.predict(image_path, api_name="/caption")
- except Exception:
- gr.Warning("The GIT-large Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_caption_blip(image_path: str) -> str:
- try:
- client = Client("library-samples/image-captioning-with-blip")
- return client.predict(image_path, "A picture of", api_name="/caption")
- except Exception:
- gr.Warning("The BLIP-large Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_caption_blip2_opt(image_path: str) -> str:
- try:
- client = Client("merve/BLIP2-with-transformers")
- return client.predict(
- image_path,
- "Beam search",
- 1, # temperature
- 1, # length penalty
- 1.5, # repetition penalty
- api_name="/caption",
- )
- except Exception:
- gr.Warning("The BLIP2 OPT6.7B Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_caption_blip2_t5xxl(image_path: str) -> str:
- try:
- client = Client("hysts/BLIP2-with-transformers")
- return client.predict(
- image_path,
- "Beam search",
- 1, # temperature
- 1, # length penalty
- 1.5, # repetition penalty
- 50, # max length
- 1, # min length
- 5, # number of beams
- 0.9, # top p
- api_name="/caption",
- )
- except Exception:
- gr.Warning("The BLIP2 T5-XXL Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_caption_instructblip(image_path: str) -> str:
- try:
- client = Client("library-samples/InstructBLIP")
- return client.predict(
- image_path,
- "Describe the image.",
- "Beam search",
- 5, # beam size
- 256, # max length
- 1, # min length
- 0.9, # top p
- 1.5, # repetition penalty
- 1.0, # length penalty
- 1.0, # temperature
- api_name="/run",
- )
- except Exception:
- gr.Warning("The InstructBLIP Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_caption_fuyu(image_path: str) -> str:
- try:
- client = Client("adept/fuyu-8b-demo")
- return client.predict(image_path, "Generate a coco style caption.", fn_index=3)
- except Exception:
- gr.Warning("The Fuyu-8B Space is currently unavailable. Please try again later.")
- return ""
-
-
-def generate_captions(image_path: str) -> tuple[str, str, str, str, str, str]:
- return (
- generate_caption_git(image_path),
- generate_caption_blip(image_path),
- generate_caption_blip2_opt(image_path),
- generate_caption_blip2_t5xxl(image_path),
- generate_caption_instructblip(image_path),
- generate_caption_fuyu(image_path),
- )
-
-
-with gr.Blocks(css="style.css") as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(type="filepath")
- run_button = gr.Button("Caption")
- with gr.Column():
- out_git = gr.Textbox(label="GIT-large fine-tuned on COCO")
- out_blip = gr.Textbox(label="BLIP-large")
- out_blip2_opt = gr.Textbox(label="BLIP-2 OPT 6.7B")
- out_blip2_t5xxl = gr.Textbox(label="BLIP-2 T5-XXL")
- out_instructblip = gr.Textbox(label="InstructBLIP")
- out_fuyu = gr.Textbox(label="Fuyu-8B")
-
- outputs = [
- out_git,
- out_blip,
- out_blip2_opt,
- out_blip2_t5xxl,
- out_instructblip,
- out_fuyu,
- ]
- gr.Examples(
- examples=[
- "http://images.cocodataset.org/val2017/000000039769.jpg",
- "https://huggingface.co/datasets/nielsr/textcaps-sample/resolve/main/stop_sign.png",
- "https://cdn.openai.com/dall-e-2/demos/text2im/astronaut/horse/photo/0.jpg",
- ],
- inputs=input_image,
- outputs=outputs,
- fn=generate_captions,
- cache_examples=os.getenv("CACHE_EXAMPLES") == "1",
- )
-
- with gr.Accordion(label="The original Spaces can be found here:", open=False):
- gr.Markdown(ORIGINAL_SPACE_INFO)
-
- run_button.click(
- fn=generate_captions,
- inputs=input_image,
- outputs=outputs,
- api_name="caption",
- )
-
-if __name__ == "__main__":
- demo.queue(max_size=20).launch()
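
A minimal sketch of how this Space could be queried over its own API while it was deployed (the Space name is taken from the file path above; "cats.jpg" is an assumed local image). It exercises the api_name="caption" endpoint registered on run_button.click:

    from gradio_client import Client

    client = Client("hysts-duplicates/comparing-captioning-models")
    captions = client.predict("cats.jpg", api_name="/caption")
    # Six outputs, in the same order as the Textbox components above.
    labels = ["GIT-large", "BLIP-large", "BLIP-2 OPT 6.7B",
              "BLIP-2 T5-XXL", "InstructBLIP", "Fuyu-8B"]
    for label, caption in zip(labels, captions):
        print(f"{label}: {caption}")
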
diff --git a/spaces/hysts/Kandinsky-2-2/style.css b/spaces/hysts/Kandinsky-2-2/style.css
deleted file mode 100644
index 8b30b66115a5108682001d933e6296c2a3cf43bd..0000000000000000000000000000000000000000
--- a/spaces/hysts/Kandinsky-2-2/style.css
+++ /dev/null
@@ -1,16 +0,0 @@
-h1 {
- text-align: center;
-}
-
-#component-0 {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
-}
-
-#duplicate-button {
- margin: auto;
- color: white;
- background: #1565c0;
- border-radius: 100vh;
-}
diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/template_model.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/template_model.py
deleted file mode 100644
index 650d38cdc1bf61ed481c9d25fb709dc46483f757..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/template_model.py
+++ /dev/null
@@ -1,105 +0,0 @@
-"""Model class template
-
-This module provides a template for users to implement custom models.
-You can specify '--model template' to use this model.
-The class name should be consistent with both the filename and its model option.
-The filename should be <model>_model.py
-The class name should be <Model>Model
-It implements a simple image-to-image translation baseline based on regression loss.
-Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss:
- min_<netG> ||netG(data_A) - data_B||_1
-You need to implement the following functions:
- <modify_commandline_options>: Add model-specific options and rewrite default values for existing options.
- <__init__>: Initialize this model class.
- <set_input>: Unpack input data and perform data pre-processing.
- <forward>: Run forward pass. This will be called by both <optimize_parameters> and <test>.
- <optimize_parameters>: Update network weights; it will be called in every training iteration.
-"""
-import numpy as np
-import torch
-
-from . import networks
-from .base_model import BaseModel
-
-
-class TemplateModel(BaseModel):
- @staticmethod
- def modify_commandline_options(parser, is_train=True):
- """Add new model-specific options and rewrite default values for existing options.
-
- Parameters:
- parser -- the option parser
- is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
- """
- parser.set_defaults(
- dataset_mode="aligned"
- ) # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset.
- if is_train:
- parser.add_argument(
- "--lambda_regression", type=float, default=1.0, help="weight for the regression loss"
- ) # You can define new arguments for this model.
-
- return parser
-
- def __init__(self, opt):
- """Initialize this model class.
-
- Parameters:
- opt -- training/test options
-
- A few things can be done here.
- - (required) call the initialization function of BaseModel
- - define loss function, visualization images, model names, and optimizers
- """
- BaseModel.__init__(self, opt) # call the initialization method of BaseModel
- # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk.
- self.loss_names = ["loss_G"]
- # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images.
- self.visual_names = ["data_A", "data_B", "output"]
- # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks.
- # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them.
- self.model_names = ["G"]
- # define networks; you can use opt.isTrain to specify different behaviors for training and test.
- self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids)
- if self.isTrain: # only defined during training time
- # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss.
- # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device)
- self.criterionLoss = torch.nn.L1Loss()
- # define and initialize optimizers. You can define one optimizer for each network.
- # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
- self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
- self.optimizers = [self.optimizer]
-
- # Our program will automatically call <model.setup> to define schedulers, load networks, and print networks
-
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input: a dictionary that contains the data itself and its metadata information.
- """
- AtoB = self.opt.direction == "AtoB" # use <direction> to swap data_A and data_B
- self.data_A = input["A" if AtoB else "B"].to(self.device) # get image data A
- self.data_B = input["B" if AtoB else "A"].to(self.device) # get image data B
- self.image_paths = input["A_paths" if AtoB else "B_paths"] # get image paths
-
- def forward(self):
- """Run forward pass. This will be called by both functions and ."""
- self.output = self.netG(self.data_A) # generate output image given the input data_A
-
- def backward(self):
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
- # calculate the intermediate results if necessary; here self.output has been computed during the <forward> function
- # calculate loss given the input and intermediate results
- self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression
- self.loss_G.backward() # calculate gradients of network G w.r.t. loss_G
-
- def optimize_parameters(self):
- """Update network weights; it will be called in every training iteration."""
- self.forward() # first call forward to calculate intermediate results
- self.optimizer.zero_grad() # clear network G's existing gradients
- self.backward() # calculate gradients for network G
- self.optimizer.step() # update gradients for network G
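
As a usage note, a minimal training-loop sketch for the template above; the `opt` namespace and the `dataset` iterable are assumptions (in the original repo they come from TrainOptions and create_dataset):

    model = TemplateModel(opt)         # builds netG and, when training, the Adam optimizer
    for data in dataset:               # dicts with "A", "B" tensors and "A_paths"/"B_paths"
        model.set_input(data)          # move data_A / data_B onto the device
        model.optimize_parameters()    # forward -> L1 loss -> backward -> optimizer step
    print(model.get_current_losses())  # loss values named in self.loss_names, via BaseModel
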
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cube Iq 4 Crack Full Version __HOT__.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cube Iq 4 Crack Full Version __HOT__.md
deleted file mode 100644
index 928a3ccbb3be7c4d3097d1cfa5f5369bc66c303a..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cube Iq 4 Crack Full Version __HOT__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-An AI system teaches itself to solve the Rubik's Cube more quickly than any human. ... An artificial intelligence system created by researchers at the University ... so a deep learning machine that can crack such a puzzle is getting closer to ... Bouquets of roses are for sale for Valentine's Day at a flower shop.
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ernesto Gutierrez Y Gonzalez El Patrimonio Pdf 38.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ernesto Gutierrez Y Gonzalez El Patrimonio Pdf 38.md
deleted file mode 100644
index 5b6eada12a38531b24339f3f1671343809769e41..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ernesto Gutierrez Y Gonzalez El Patrimonio Pdf 38.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
the court ruled thursday to order a new trial for 57-year-old ernesto gonzalez, formerly the president of the vagos chapter in nicaragua. gonzalez didn't get a fair trial because a district court erred when it declined to answer a jury question about a conspiracy charge and gave the jury confusing i
-
searching for his daughter who was abducted by the cult. the film stars oscar nominee octavia spencer (the author's mother) as the bordertown matriarch. ernesto gonzalez has been arrested since he was indicted in april, but is not in custody. ernesto gonzalez has not publicly commented on the
carnival-biker drag racing competition that just ended. the two main characters, ernesto gonzalez and jonathan leaird, are the sons of two biker families, the cocons and the posse. they compete for money and glory to become the chapter president of the los angeles vagos biker gang. the film is based on the true story of ernesto gonzalez and his gang, the vagos, and tells the story of a
-
himself. ernesto gonzalez was arrested in june and faces conspiracy to commit murder, use of a firearm and drug charges. but the charge against ernesto is also based on the accusation that he conspired with ivan s. gutierrez, a man who has been a fugitive for more than a decade. he was wanted on charges of forgery, identity theft and
-
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Cara Memasang Mod Motogp 13 17.md b/spaces/inreVtussa/clothingai/Examples/Cara Memasang Mod Motogp 13 17.md
deleted file mode 100644
index ba818bc9f5e66b024f187b593185cd18cd3474de..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Cara Memasang Mod Motogp 13 17.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-
-
-So the race is approaching!
-
-It's a little over a month since we first contacted at least one large manufacturer about creating a new racing circuit. The negotiations are still ongoing, but to help you pass your time until the announcement of the circuit, we've created a brand new show to show you all of the latest developments in the world of B-Racing. With a little help from the guys at FormulaE, we've created a video series of all the latest drivers and cars we're able to get a hold of!
-
-All of the following videos will cover everything to do with the FormulaE car, including footage from inside the cockpit of the sport's only one-make series.
-
-We hope you enjoy watching and supporting the new series as much as we will be producing it!
-
-As we reported yesterday, we now have confirmation from the Department of Motor Vehicles in the state of New Jersey that the New Jersey Motorsports Park has received its special permit, allowing the venue to host the Motogp event on June 27th.
-
-The announcement follows New Jersey Governor Christie’s statement in which he formally granted the permit, with the racing venue located in Englishtown, NJ.
-
-The announcement also comes two days after the race was rescheduled from the original date of June 4th to the Saturday of the 27th due to a scheduling conflict with the F1 Grand Prix in Brazil.
-
-We have previously reported that the race will have 17 laps, the same distance as last year, run on the 3.8-mile (6km) circuit. The 2015 edition of the race was won by Tito Rabat with a time of 2m 11.583s, with a lap of 5.18.208.
-
-The 2017 event will be the sixth time the event has been held at the venue, having previously hosted the BSB, BRSCC, F3 British Championship, the British Formula Ford Championship, the BNL 1000, the Indy Lights and the Porsche Carrera Cup.
-
-The event will take place on the Saturday of the 26th - 27th June, with the first day of practice set to take place on Friday, June 25th.
-
-The F3 British Championship was to have been held at the venue on April 29th, however it was announced earlier this month that the race would be held at Silverstone instead, leaving the NJMP event with no racing event in 2016.
-
-
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Change Serial Number In Bios Hp Elitebook HOT!.md b/spaces/inreVtussa/clothingai/Examples/Change Serial Number In Bios Hp Elitebook HOT!.md
deleted file mode 100644
index fe25d050ef33c79c3e3e4aaac10cac7950a01021..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Change Serial Number In Bios Hp Elitebook HOT!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-free change serial number in bios hp elitebook 2530p software. Whenever your Windows OS formats a hard disk drive, it assigns a newly generated serial number to ...
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Chessbase Light 2009 Serial Key Activation Key.rar [BETTER].md b/spaces/inreVtussa/clothingai/Examples/Chessbase Light 2009 Serial Key Activation Key.rar [BETTER].md
deleted file mode 100644
index 96bb297da4ad24872f992f7bbe3fe704f52703fe..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Chessbase Light 2009 Serial Key Activation Key.rar [BETTER].md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Activate Chessbase Light 2009 with Serial Key
-
Chessbase Light 2009 is a powerful chess program that allows you to access a huge online database of games, analyze your own games, train your tactics and play chess online. However, to unlock all the features of Chessbase Light 2009, you need to activate it with a serial key.
-
A serial key is a unique code that you can purchase from the official Chessbase website or from other authorized dealers. The serial key will be sent to you by email after you complete the payment. You can also find the serial key in the CD case if you bought a physical copy of Chessbase Light 2009.
-
To activate Chessbase Light 2009 with your serial key, you need to follow these steps:
-
-
Install Chessbase Light 2009 on your computer. You can download it from the official Chessbase website or use the CD that came with your purchase.
-
Run Chessbase Light 2009 and click on the "Activate" button on the start screen. You will see a dialog box asking for your serial number and your hardware key.
-
Enter your serial number in the first field. It should be a 16-digit code that looks like this: XXXX-XXXX-XXXX-XXXX.
-
Enter your hardware key in the second field. It should be a 12-digit code that looks like this: XXXX-XXXX-XXXX. You can find your hardware key by clicking on the "Hardware Key" button on the same dialog box. It will display your computer's unique identification number.
-
Click on the "OK" button to confirm your activation. You will see a message saying that your activation was successful and that you can now enjoy all the features of Chessbase Light 2009.
-
-
If you have any problems with your activation, you can contact Chessbase support by email at support@chessbase.com or by phone at +49-40-63 90 60-0. They will help you resolve any issues with your serial key or hardware key.
-
Chessbase Light 2009 is a great tool for chess enthusiasts of all levels. By activating it with your serial key, you can access millions of games online, improve your skills with interactive training, and play against other players from around the world. Don't miss this opportunity to take your chess game to the next level!
-
-
What are the features of Chessbase Light 2009?
-
Chessbase Light 2009 is a database program that gives you access to game collections and allows you to manage them with ease. You can retrieve games according to openings, players and tournaments, generate tournament cross tables, and view full graphic statistics of players or openings. You can also merge games on-the-fly into an opening tree and see the most popular moves and variations at a glance.
-
Chessbase Light 2009 also lets you play chess online on the Playchess server, where you can find opponents of any level, watch live broadcasts of top tournaments, and chat with other chess fans. You can also improve your chess skills with interactive training modules, such as tactics, endgames, and openings. You can even create your own training questions and answers with the Chess Media System.
-
Chessbase Light 2009 is compatible with Windows XP, Vista and Windows 7. It requires a minimum of 256 MB RAM, 1 GB hard disk space, and a DVD-ROM drive. It also supports UCI engines, such as Fritz, Rybka, Houdini, and Stockfish.
-
-
Why should you choose Chessbase Light 2009?
-
Chessbase Light 2009 is the ideal choice for chess enthusiasts who want to enjoy the benefits of a powerful chess software without spending too much money or time. Chessbase Light 2009 is easy to use and has a user-friendly interface. It is also fast and reliable, thanks to its new database technology that allows you to search and sort millions of games in seconds.
-
Chessbase Light 2009 is also flexible and versatile, as it can be upgraded to Chessbase 10 or Chessbase 11 with a serial key. You can also extend its functionality with add-ons, such as Mega Database 2011, Powerbook 2011, or Endgame Turbo 4. With these add-ons, you can access the largest and most up-to-date chess database in the world, get the best opening book for your engine analysis, or solve any endgame position instantly.
-
Chessbase Light 2009 is more than just a chess program. It is a chess companion that will help you learn, play, and enjoy chess more than ever before.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Crack Vcds 11 11 1 Fr Extra Quality.md b/spaces/inreVtussa/clothingai/Examples/Crack Vcds 11 11 1 Fr Extra Quality.md
deleted file mode 100644
index 38417caedd6eeec364fe4f04dc2e0775ff8efd61..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Crack Vcds 11 11 1 Fr Extra Quality.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Based on this, I will explain the practical steps to crack VCDS. Firstly, you can crack the software through a “virtual drive”. In this case, you obtain the cracked software from the “virtual drive”, which makes it very easy to get for free. Most crack sellers will ask you for a serial number, which can be obtained directly. After cracking, you can save the cracked software in any folder.
Secondly, you can crack the software through “commonly used software”. In this case, you obtain the cracked software from the “commonly used software”. After you crack the software, you will be asked for serial numbers.
-
In the cases of using a virtual drive or Auto Update, the software you crack will be available on the internet. This means that, in addition to cracking the software for free, you will also be able to obtain cracks for most other software.
-
If you use Auto Update to update an online VCDS crack interface without a serial number, please be advised that the software you download and use is unregistered and illegal. If you continue to use it, your cracked software may be reported, and your VCDS crack interface will have no protection. For this reason, we strongly advise you to replace the VCDS crack interface with a legal copy.
-
-
In addition, it is not possible to perform Auto Update on a VCDS crack interface with a serial number. After you open a VCDS crack interface with a serial number, you must crack the software; it is not possible to restore the cracked software without the serial number.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Descargar Prensa Diaria Pdf Downloadl [UPD].md b/spaces/inreVtussa/clothingai/Examples/Descargar Prensa Diaria Pdf Downloadl [UPD].md
deleted file mode 100644
index 07c7de0e8c0abb30775ff6f28b7982642b3ef146..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Descargar Prensa Diaria Pdf Downloadl [UPD].md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
Descargar Prensa Diaria Pdf Downloadl: How to Read the Newspapers for Free on Your Device
-
-
If you like to stay informed about what is happening in the world, but don't want to spend money on buying printed newspapers, there is a very practical and convenient solution: downloading the daily press in PDF. It is a way to access the digital versions of the main news outlets in PDF format, which you can read on your phone, tablet or computer without needing an internet connection.
In this article we will explain how to download the daily press in PDF easily and safely, using some Telegram channels that are dedicated to sharing download links for the most popular newspapers. We will also give you some tips so you can enjoy your reading without legal or technical problems.
-
-
What does downloading the daily press in PDF mean?
-
-
It is a way to obtain the PDF files of the newspapers published every day in different countries and regions. These files contain the same pages as the print editions, with the news, reports, interviews, opinion pieces, advertisements and the other sections that characterize each outlet.
-
-
By downloading the daily press in PDF, you can save the files on your device and open them with a PDF reader whenever you want. You can read the newspapers without depending on an internet connection, without taking up physical space and without generating paper waste. You can also share the files with other people or send them to other devices.
-
-
How do you download the daily press in PDF?
-
-
One of the simplest and fastest ways is to use some Telegram channels that are dedicated to sharing newspaper download links. Telegram is an instant messaging application that lets you create public channels where messages with multimedia content can be broadcast to a large number of users.
-
-
There are several Telegram channels that specialize in offering free downloads of newspapers in PDF, both national and international, covering different topics and styles. Some of the most popular are:
-
-
-
-
Periódicos y Revistas en PDF: This channel shares every day the PDF versions of many of the most popular newspapers and magazines from Spain and other countries. Just click on the link to download them and read them in your usual PDF reader.
-
Portadas Prensa: This channel shows you the front pages of the day's main newspapers, so you can get an idea of what they contain. If one interests you, you can request it from the channel administrator, who will send you the download link.
-
Portadas Deportivas: This channel offers you the front pages of the main sports dailies from Spain and other countries. If you are a sports lover, here you can find all the information about your favorite teams and competitions.
-
El Jueves: This channel brings you both the covers and the best cartoons of the popular satirical magazine El Jueves. If you like sharp, critical humor, this is your channel.
-
-
-
To download the newspapers from these channels, you just have to follow these steps:
-
-
-
Download and install the Telegram app on your device.
-
Find the channel you are interested in using the search box or by following the link we provide.
-
Subscribe to the channel by pressing the "Join" button.
-
Browse the channel's messages until you find the newspaper you want to download.
-
Tap on the download link and wait for the process to complete.
-
Open the PDF file with your preferred reader and enjoy your reading.
-
-
-
Is downloading the daily press in PDF legal?
-
-
Now that you know how to download the daily press in PDF, you may wonder whether it is legal to do so. The answer is not simple, since it depends on the country where you live, the outlet you want to download and the use you are going to give the file.
-
-
In general, it can be said that reading the press in PDF without having paid for it is not legal, since it violates copyright and means lost revenue for publishers. For that reason, most of the channels dedicated to this only share the front pages or external links, not the complete files.
-
-
However, there are also cases in which the outlets themselves offer free or partial downloads of their editions in PDF, as a way to promote themselves or build reader loyalty. For example, the newspaper La Prensa allowed free downloads of its print editions during the COVID-19 emergency. In these cases, downloading the daily press in PDF is legal as long as you respect the conditions set by the outlet.
-
-
In any case, we recommend that you use common sense and ethics when downloading the daily press in PDF. If you want to support independent, quality journalism, the best thing is to pay for your subscription or buy the newspaper in print when you can. That way you will contribute to keeping alive a sector that is vital for democracy and society.
-
What advantages does downloading the daily press in PDF have?
-
-
Downloading the daily press in PDF has many advantages for readers who want to keep up with what is happening in the world. Some of them are:
-
-
-
Saving money: you don't have to pay for subscriptions or printed copies, which means considerable savings in the long run.
-
Saving space: you don't have to store the newspapers at home or in your office, which gives you more available space and avoids clutter.
-
Saving time: you don't have to go to the newsstand or the point of sale to look for the newspapers you are interested in, which saves you time and trips.
-
Saving resources: you contribute to reducing paper consumption and waste generation, which helps protect the environment and natural resources.
-
Access to more information: you can reach a greater variety of outlets and content, both national and international, on different topics and in different styles, which broadens your perspective and your judgment.
-
Access to more convenience: you can read the newspapers on the device you prefer, at the time you want and in the place you choose, which gives you more comfort and flexibility.
-
-
-
What disadvantages does downloading the daily press in PDF have?
-
-
Although downloading the daily press in PDF has many advantages, it also has some disadvantages to keep in mind. Some of them are:
-
-
-
Loss of quality: the quality of the images and texts may not be the same as in the printed newspapers, which can affect the reading experience.
-
Loss of legality: you may be infringing the copyright and intellectual property of the outlets and the authors, which can bring you legal or ethical problems.
-
Loss of security: you may expose yourself to viruses or malware that can damage your device or steal your personal information, which can compromise your security and privacy.
-
Loss of support: you may be withdrawing support from independent, quality journalism, which can harm the sector and society in general.
-
-
-
Recommendations for downloading the daily press in PDF
-
-
If you want to download the daily press in PDF without problems or risks, we recommend that you follow these guidelines:
-
-
-
Use only trustworthy and safe Telegram channels that do not contain malicious or fake links.
-
Do not download or share newspapers that have not been authorized by their publishers or authors; respect their rights and their work.
-
Do not abuse this practice; limit the number and frequency of your downloads.
-
Do not use this practice as a substitute for paying for a subscription or buying the newspapers, but as an occasional complement or alternative.
-
Do not use this practice as your only source of information; check the news against other sources and outlets.
-
-
-
Conclusion
-
-
Downloading the daily press in PDF is a practical and convenient way to access the digital versions of the newspapers published every day. You can do it using some Telegram channels that are dedicated to sharing the download links. However, you must be careful with the legality and ethics of this practice, since it can harm the outlets and the authors. If you want to support professional, responsible journalism, the best thing is to pay for your subscription or buy the newspaper in print when you can.
-
-
\ No newline at end of file
diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/README.md b/spaces/ispast/Genshin_MB_VITS_TTS/README.md
deleted file mode 100644
index d1028fbc08d08059d62d5ce8c988e782b06fe32f..0000000000000000000000000000000000000000
--- a/spaces/ispast/Genshin_MB_VITS_TTS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Genshin TTS
-emoji: 🔥
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-duplicated_from: CikeyQI/Genshin_MB_VITS_TTS
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jacinthes/PubMed-fact-checker/README.md b/spaces/jacinthes/PubMed-fact-checker/README.md
deleted file mode 100644
index fdeeec48e3da319e635955ee8b7ec2d6b3e507d3..0000000000000000000000000000000000000000
--- a/spaces/jacinthes/PubMed-fact-checker/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PubMed Fact Checker
-emoji: 🌍
-colorFrom: indigo
-colorTo: red
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jiaxianustc/mbp/UltraFlow/layers/egnn.py b/spaces/jiaxianustc/mbp/UltraFlow/layers/egnn.py
deleted file mode 100644
index e5c11743138d389e5fb3e2e5534805829f272cd9..0000000000000000000000000000000000000000
--- a/spaces/jiaxianustc/mbp/UltraFlow/layers/egnn.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import dgl.function as fn
-
-class EGNNConv(nn.Module):
-
- def __init__(self, in_size, hidden_size, out_size, edge_feat_size=0):
- super(EGNNConv, self).__init__()
-
- self.in_size = in_size
- self.hidden_size = hidden_size
- self.out_size = out_size
- self.edge_feat_size = edge_feat_size
- act_fn = nn.SiLU()
-
- # \phi_e
- self.edge_mlp = nn.Sequential(
- # +1 for the radial feature: ||x_i - x_j||^2
- nn.Linear(in_size * 2 + edge_feat_size + 1, hidden_size),
- act_fn,
- nn.Linear(hidden_size, hidden_size),
- act_fn
- )
-
- # \phi_h
- self.node_mlp = nn.Sequential(
- nn.Linear(in_size + hidden_size, hidden_size),
- act_fn,
- nn.Linear(hidden_size, out_size)
- )
-
- # \phi_x
- self.coord_mlp = nn.Sequential(
- nn.Linear(hidden_size, hidden_size),
- act_fn,
- nn.Linear(hidden_size, 1, bias=False)
- )
-
- def message(self, edges):
- """message function for EGNN"""
- # concat features for edge mlp
- if self.edge_feat_size > 0:
- f = torch.cat(
- [edges.src['h'], edges.dst['h'], edges.data['radial'], edges.data['a']],
- dim=-1
- )
- else:
- f = torch.cat([edges.src['h'], edges.dst['h'], edges.data['radial']], dim=-1)
-
- msg_h = self.edge_mlp(f)
- msg_x = self.coord_mlp(msg_h) * edges.data['x_diff']
-
- return {'msg_x': msg_x, 'msg_h': msg_h}
-
- def forward(self, graph, node_feat, coord_feat, edge_feat=None):
-
- with graph.local_scope():
- # node feature
- graph.ndata['h'] = node_feat
- # coordinate feature
- graph.ndata['x'] = coord_feat
- # edge feature
- if self.edge_feat_size > 0:
- assert edge_feat is not None, "Edge features must be provided."
- graph.edata['a'] = edge_feat
- # get coordinate diff & radial features
- graph.apply_edges(fn.u_sub_v('x', 'x', 'x_diff'))
- graph.edata['radial'] = graph.edata['x_diff'].square().sum(dim=1).unsqueeze(-1)
- # normalize coordinate difference
- graph.edata['x_diff'] = graph.edata['x_diff'] / (graph.edata['radial'].sqrt() + 1e-30)
- graph.apply_edges(self.message)
- graph.update_all(fn.copy_e('msg_x', 'm'), fn.mean('m', 'x_neigh'))
- graph.update_all(fn.copy_e('msg_h', 'm'), fn.sum('m', 'h_neigh'))
-
- h_neigh, x_neigh = graph.ndata['h_neigh'], graph.ndata['x_neigh']
-
- h = self.node_mlp(
- torch.cat([node_feat, h_neigh], dim=-1)
- )
- x = coord_feat + x_neigh
-
- return h, x
-
-class EGNN(nn.Module):
- def __init__(self, input_node_dim, input_edge_dim, hidden_dim, num_layers, dropout, JK='sum'):
- super(EGNN, self).__init__()
-
- self.num_layers = num_layers
-
- # List of MLPs
- self.egnn_layers = torch.nn.ModuleList()
- self.batch_norms = torch.nn.ModuleList()
-
- for layer in range(self.num_layers - 1):
- if layer == 0:
- self.egnn_layers.append(EGNNConv(input_node_dim, hidden_dim, hidden_dim, input_edge_dim))
- else:
- self.egnn_layers.append(EGNNConv(hidden_dim, hidden_dim, hidden_dim, input_edge_dim))
-
- self.batch_norms.append(nn.BatchNorm1d(hidden_dim))
-
- self.drop = nn.Dropout(dropout)
- self.JK = JK
-
- def forward(self, g, Perturb=None):
- hidden_rep = []
- node_feats = g.ndata.pop('h').float()
- edge_feats = g.edata['e']
- coord_feats = g.ndata['pos']
- for idx, egnn in enumerate(self.egnn_layers):
- if idx == 0 and Perturb is not None:
- node_feats = node_feats + Perturb
- node_feats, coord_feats = egnn(g, node_feats, coord_feats, edge_feats)
- node_feats = self.batch_norms[idx](node_feats)
- node_feats = F.relu(node_feats)
- node_feats = self.drop(node_feats)
- hidden_rep.append(node_feats)
-
- if self.JK == 'sum':
- hidden_rep = [h.unsqueeze(0) for h in hidden_rep]
- return torch.sum(torch.cat(hidden_rep, dim=0), dim=0)
- elif self.JK == 'max':
- hidden_rep = [h.unsqueeze(0) for h in hidden_rep]
- return torch.max(torch.cat(hidden_rep, dim=0), dim=0)[0]
- elif self.JK == 'concat':
- return torch.cat(hidden_rep, dim=1)
- elif self.JK == 'last':
- return hidden_rep[-1]
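
A minimal sketch exercising the EGNN above on a toy DGL graph; all feature sizes here are assumptions, not values from this repo:

    import dgl
    import torch

    g = dgl.rand_graph(10, 40)           # 10 nodes, 40 random directed edges
    g.ndata['h'] = torch.randn(10, 16)   # input node features
    g.ndata['pos'] = torch.randn(10, 3)  # 3D coordinates used for the radial term
    g.edata['e'] = torch.randn(40, 4)    # edge features

    model = EGNN(input_node_dim=16, input_edge_dim=4, hidden_dim=32,
                 num_layers=4, dropout=0.1, JK='sum')
    out = model(g)                       # (10, 32) node embeddings, summed over layers
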
diff --git a/spaces/jirufengyu/face_recognition/video.py b/spaces/jirufengyu/face_recognition/video.py
deleted file mode 100644
index 290992e3e5cc11f01e8f20cb95a355401f3250c9..0000000000000000000000000000000000000000
--- a/spaces/jirufengyu/face_recognition/video.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import face_recognition
-import cv2
-import numpy as np
-
-# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
-# other example, but it includes some basic performance tweaks to make things run a lot faster:
-# 1. Process each video frame at 1/4 resolution (though still display it at full resolution)
-# 2. Only detect faces in every other frame of video.
-
-# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
-# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
-# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
-
-# Get a reference to webcam #0 (the default one)
-video_capture = cv2.VideoCapture(0)
-
-# Load a sample picture and learn how to recognize it.
-obama_image = face_recognition.load_image_file("obama.jpg")
-obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
-
-# Load a second sample picture and learn how to recognize it.
-biden_image = face_recognition.load_image_file("biden.jpg")
-biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
-
-# Load a second sample picture and learn how to recognize it.
-me = face_recognition.load_image_file("me.jpg")
-me_face_encoding = face_recognition.face_encodings(me)[0]
-
-wang = face_recognition.load_image_file("wang.jpg")
-wang_face_encoding = face_recognition.face_encodings(wang)[0]
-
-# Create arrays of known face encodings and their names
-known_face_encodings = [
- obama_face_encoding,
- biden_face_encoding,
- me_face_encoding,
- wang_face_encoding
-]
-known_face_names = [
- "Barack Obama",
- "Joe Biden",
- "me",
- "wang"
-]
-
-# Initialize some variables
-face_locations = []
-face_encodings = []
-face_names = []
-process_this_frame = True
-
-while True:
- # Grab a single frame of video
- ret, frame = video_capture.read()
-
- # Only process every other frame of video to save time
- if process_this_frame:
- # Resize frame of video to 1/4 size for faster face recognition processing
- small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
-
- # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
- rgb_small_frame = np.ascontiguousarray(small_frame[:, :, ::-1])  # contiguous copy; newer dlib builds reject negative-stride views
-
- # Find all the faces and face encodings in the current frame of video
- face_locations = face_recognition.face_locations(rgb_small_frame)
- face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
-
- face_names = []
- for face_encoding in face_encodings:
- # See if the face is a match for the known face(s)
- matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
- name = "Unknown"
-
- # # If a match was found in known_face_encodings, just use the first one.
- # if True in matches:
- # first_match_index = matches.index(True)
- # name = known_face_names[first_match_index]
-
- # Or instead, use the known face with the smallest distance to the new face
- face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
- best_match_index = np.argmin(face_distances)
- if matches[best_match_index]:
- name = known_face_names[best_match_index]
-
- face_names.append(name)
-
- process_this_frame = not process_this_frame
-
-
- # Display the results
- for (top, right, bottom, left), name in zip(face_locations, face_names):
- # Scale back up face locations since the frame we detected in was scaled to 1/4 size
- top *= 4
- right *= 4
- bottom *= 4
- left *= 4
-
- # Draw a box around the face
- cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
-
- # Draw a label with a name below the face
- cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
- font = cv2.FONT_HERSHEY_DUPLEX
- cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
-
- # Display the resulting image
- cv2.imshow('Video', frame)
-
- # Hit 'q' on the keyboard to quit!
- if cv2.waitKey(1) & 0xFF == ord('q'):
- break
-
-# Release handle to the webcam
-video_capture.release()
-cv2.destroyAllWindows()
\ No newline at end of file
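
The same matching logic on a single still image, without OpenCV or a webcam; a minimal sketch assuming "obama.jpg" and "unknown.jpg" exist on disk (0.6 is the library's documented default tolerance):

    import face_recognition

    known = face_recognition.face_encodings(
        face_recognition.load_image_file("obama.jpg"))[0]
    unknown_image = face_recognition.load_image_file("unknown.jpg")
    for encoding in face_recognition.face_encodings(unknown_image):
        distance = face_recognition.face_distance([known], encoding)[0]
        print("match" if distance < 0.6 else "no match", f"(distance {distance:.2f})")
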
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/PKCS1_v1_5.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/PKCS1_v1_5.py
deleted file mode 100644
index ac888edb497bd42c5c70ded0501e418ea3d1ce3e..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/PKCS1_v1_5.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-"""
-Legacy module for PKCS#1 v1.5 signatures.
-
-:undocumented: __package__
-"""
-
-import types
-
-from Crypto.Signature import pkcs1_15
-
-def _pycrypto_verify(self, hash_object, signature):
- try:
- self._verify(hash_object, signature)
- except (ValueError, TypeError):
- return False
- return True
-
-def new(rsa_key):
- pkcs1 = pkcs1_15.new(rsa_key)
- pkcs1._verify = pkcs1.verify
- pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1)
- return pkcs1
-
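
A minimal sketch of the legacy behavior this shim restores: verify() returns a boolean instead of raising, as the old PyCrypto API did. The key size is an arbitrary choice:

    from Crypto.PublicKey import RSA
    from Crypto.Hash import SHA256
    from Crypto.Signature import PKCS1_v1_5

    key = RSA.generate(2048)
    h = SHA256.new(b"message")
    signature = PKCS1_v1_5.new(key).sign(h)  # sign passes through to pkcs1_15
    assert PKCS1_v1_5.new(key.publickey()).verify(h, signature) is True
    assert PKCS1_v1_5.new(key.publickey()).verify(SHA256.new(b"tampered"), signature) is False
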
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py
deleted file mode 100644
index 57163d726c1a5e850eabe8ec72a44c9ec514b715..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py
+++ /dev/null
@@ -1,164 +0,0 @@
-""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT)
-tool to store its hinting source data.
-
-TSI1 contains the text of the glyph programs in the form of low-level assembly
-code, as well as the 'extra' programs 'fpgm', 'ppgm' (i.e. 'prep'), and 'cvt'.
-"""
-from . import DefaultTable
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.misc.textTools import strjoin, tobytes, tostr
-
-
-class table_T_S_I__1(LogMixin, DefaultTable.DefaultTable):
-
- extras = {0xFFFA: "ppgm", 0xFFFB: "cvt", 0xFFFC: "reserved", 0xFFFD: "fpgm"}
-
- indextable = "TSI0"
-
- def decompile(self, data, ttFont):
- totalLength = len(data)
- indextable = ttFont[self.indextable]
- for indices, isExtra in zip(
- (indextable.indices, indextable.extra_indices), (False, True)
- ):
- programs = {}
- for i, (glyphID, textLength, textOffset) in enumerate(indices):
- if isExtra:
- name = self.extras[glyphID]
- else:
- name = ttFont.getGlyphName(glyphID)
- if textOffset > totalLength:
- self.log.warning("textOffset > totalLength; %r skipped" % name)
- continue
- if textLength < 0x8000:
- # If the length stored in the record is less than 32768, then use
- # that as the length of the record.
- pass
- elif textLength == 0x8000:
- # If the length is 32768, compute the actual length as follows:
- isLast = i == (len(indices) - 1)
- if isLast:
- if isExtra:
- # For the last "extra" record (the very last record of the
- # table), the length is the difference between the total
- # length of the TSI1 table and the textOffset of the final
- # record.
- nextTextOffset = totalLength
- else:
- # For the last "normal" record (the last record just prior
- # to the record containing the "magic number"), the length
- # is the difference between the textOffset of the record
- # following the "magic number" (0xFFFE) record (i.e. the
- # first "extra" record), and the textOffset of the last
- # "normal" record.
- nextTextOffset = indextable.extra_indices[0][2]
- else:
- # For all other records with a length of 0x8000, the length is
- # the difference between the textOffset of the record in
- # question and the textOffset of the next record.
- nextTextOffset = indices[i + 1][2]
- assert nextTextOffset >= textOffset, "entries not sorted by offset"
- if nextTextOffset > totalLength:
- self.log.warning(
- "nextTextOffset > totalLength; %r truncated" % name
- )
- nextTextOffset = totalLength
- textLength = nextTextOffset - textOffset
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "%r textLength (%d) must not be > 32768" % (name, textLength)
- )
- text = data[textOffset : textOffset + textLength]
- assert len(text) == textLength
- text = tostr(text, encoding="utf-8")
- if text:
- programs[name] = text
- if isExtra:
- self.extraPrograms = programs
- else:
- self.glyphPrograms = programs
-
- def compile(self, ttFont):
- if not hasattr(self, "glyphPrograms"):
- self.glyphPrograms = {}
- self.extraPrograms = {}
- data = b""
- indextable = ttFont[self.indextable]
- glyphNames = ttFont.getGlyphOrder()
-
- indices = []
- for i in range(len(glyphNames)):
- if len(data) % 2:
- data = (
- data + b"\015"
- ) # align on 2-byte boundaries, fill with return chars. Yum.
- name = glyphNames[i]
- if name in self.glyphPrograms:
- text = tobytes(self.glyphPrograms[name], encoding="utf-8")
- else:
- text = b""
- textLength = len(text)
- if textLength >= 0x8000:
- textLength = 0x8000
- indices.append((i, textLength, len(data)))
- data = data + text
-
- extra_indices = []
- codes = sorted(self.extras.items())
- for i in range(len(codes)):
- if len(data) % 2:
- data = (
- data + b"\015"
- ) # align on 2-byte boundaries, fill with return chars.
- code, name = codes[i]
- if name in self.extraPrograms:
- text = tobytes(self.extraPrograms[name], encoding="utf-8")
- else:
- text = b""
- textLength = len(text)
- if textLength >= 0x8000:
- textLength = 0x8000
- extra_indices.append((code, textLength, len(data)))
- data = data + text
- indextable.set(indices, extra_indices)
- return data
-
- def toXML(self, writer, ttFont):
- names = sorted(self.glyphPrograms.keys())
- writer.newline()
- for name in names:
- text = self.glyphPrograms[name]
- if not text:
- continue
- writer.begintag("glyphProgram", name=name)
- writer.newline()
- writer.write_noindent(text.replace("\r", "\n"))
- writer.newline()
- writer.endtag("glyphProgram")
- writer.newline()
- writer.newline()
- extra_names = sorted(self.extraPrograms.keys())
- for name in extra_names:
- text = self.extraPrograms[name]
- if not text:
- continue
- writer.begintag("extraProgram", name=name)
- writer.newline()
- writer.write_noindent(text.replace("\r", "\n"))
- writer.newline()
- writer.endtag("extraProgram")
- writer.newline()
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "glyphPrograms"):
- self.glyphPrograms = {}
- self.extraPrograms = {}
- lines = strjoin(content).replace("\r", "\n").split("\n")
- text = "\r".join(lines[1:-1])
- if name == "glyphProgram":
- self.glyphPrograms[attrs["name"]] = text
- elif name == "extraProgram":
- self.extraPrograms[attrs["name"]] = text
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_v_t.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_v_t.py
deleted file mode 100644
index 7f94677522e4b8b8a4e55c079f618e6046b045b8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_c_v_t.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-import sys
-import array
-
-
-class table__c_v_t(DefaultTable.DefaultTable):
- def decompile(self, data, ttFont):
- values = array.array("h")
- values.frombytes(data)
- if sys.byteorder != "big":
- values.byteswap()
- self.values = values
-
- def compile(self, ttFont):
- values = self.values[:]
- if sys.byteorder != "big":
- values.byteswap()
- return values.tobytes()
-
- def toXML(self, writer, ttFont):
- for i in range(len(self.values)):
- value = self.values[i]
- writer.simpletag("cv", value=value, index=i)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "values"):
- self.values = array.array("h")
- if name == "cv":
- index = safeEval(attrs["index"])
- value = safeEval(attrs["value"])
- for i in range(1 + index - len(self.values)):
- self.values.append(0)
- self.values[index] = value
-
- def __len__(self):
- return len(self.values)
-
- def __getitem__(self, index):
- return self.values[index]
-
- def __setitem__(self, index, value):
- self.values[index] = value
-
- def __delitem__(self, index):
- del self.values[index]
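
A minimal sketch using the sequence protocol the class above implements; "font.ttf" is an assumed TrueType font with a control-value table:

    from fontTools.ttLib import TTFont

    font = TTFont("font.ttf")
    cvt = font["cvt "]       # note the trailing space in the four-byte table tag
    print(len(cvt), cvt[0])  # number of control values and the first entry
    cvt[0] = 42              # __setitem__ writes back into the underlying array
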
diff --git a/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSecretFingerprint.ts b/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSecretFingerprint.ts
deleted file mode 100644
index e4d543979a59b6d4ddba0091a422251c359ccb97..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/ai-comic-factory/src/lib/computeSecretFingerprint.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-import { computeSha256 } from "./computeSha256"
-
-const secretFingerprint = `${process.env.SECRET_FINGERPRINT || ""}`
-
-export function computeSecretFingerprint(input: string) {
- return computeSha256(`${secretFingerprint}_${input}`)
-}
\ No newline at end of file
diff --git a/spaces/josStorer/ChatGLM-6B-Int4-API-OpenAI-Compatible/models/models--silver--chatglm-6b-int4-slim/snapshots/02e096b3805c579caf5741a6d8eddd5ba7a74e0d/tokenization_chatglm.py b/spaces/josStorer/ChatGLM-6B-Int4-API-OpenAI-Compatible/models/models--silver--chatglm-6b-int4-slim/snapshots/02e096b3805c579caf5741a6d8eddd5ba7a74e0d/tokenization_chatglm.py
deleted file mode 100644
index 1c57bda213f264f03d4f09866dd2eafdf0f790c1..0000000000000000000000000000000000000000
--- a/spaces/josStorer/ChatGLM-6B-Int4-API-OpenAI-Compatible/models/models--silver--chatglm-6b-int4-slim/snapshots/02e096b3805c579caf5741a6d8eddd5ba7a74e0d/tokenization_chatglm.py
+++ /dev/null
@@ -1,336 +0,0 @@
-"""Tokenization classes for ChatGLM."""
-import sys
-import unicodedata
-from typing import List, Optional, Union
-from functools import lru_cache
-import os
-import collections
-import re
-
-from transformers.tokenization_utils import PreTrainedTokenizer
-from icetk.text_tokenizer import TextTokenizer
-from icetk.utils import auto_create
-import icetk.sentencepiece_model_pb2 as sp_model
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
- "silver/chatglm-6b-int4-slim": 2048,
-}
-
-
-class SPTokenizer:
- def __init__(
- self,
- vocab_file,
- max_blank_length=80,
- byte_fallback=True,
- ):
- assert vocab_file is not None
- self.vocab_file = vocab_file
- self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "<sop>", "<eop>", "<ENC>", "<dBLOCK>"]
- self.max_blank_length = max_blank_length
- self.byte_fallback = byte_fallback
- self.text_tokenizer = self._build_text_tokenizer(encode_special_tokens=False)
- self.special_text_tokenizer = self._build_text_tokenizer(encode_special_tokens=True)
-
- @staticmethod
- def _configure_tokenizer(
- text_tokenizer: TextTokenizer,
- special_tokens: List[str],
- max_blank_length: int,
- byte_fallback: bool,
- encode_special_tokens=False,
- ):
- # special token
- special_token_type = 4 if encode_special_tokens else 3 # 3 - CONTROL, 4 - USER_DEFINE
- for token in special_tokens:
- text_tokenizer.proto.pieces.append(
- sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=special_token_type)
- )
- # whitespaces
- for token in [SPTokenizer.get_tab_token()] + [
- SPTokenizer.get_blank_token(i) for i in range(2, max_blank_length + 1)
- ]:
- text_tokenizer.proto.pieces.append(sp_model.ModelProto.SentencePiece(piece=token, score=0.0, type=4))
- # byte fallback
- if byte_fallback:
- text_tokenizer.proto.trainer_spec.byte_fallback = True
- for i in range(256):
- text_tokenizer.proto.pieces.append(
- sp_model.ModelProto.SentencePiece(piece="<0x{:02X}>".format(i), score=0.0, type=6)
- )
- text_tokenizer.refresh()
-
- def _build_text_tokenizer(self, encode_special_tokens=False):
- tokenizer = TextTokenizer(self.vocab_file)
- self._configure_tokenizer(
- tokenizer, self.special_tokens, self.max_blank_length, self.byte_fallback, encode_special_tokens
- )
- return tokenizer
-
- def _get_text_tokenizer(self, encode_special_tokens=False):
- if encode_special_tokens:
- return self.special_text_tokenizer
- else:
- return self.text_tokenizer
-
- @staticmethod
- def get_blank_token(length: int):
- assert length >= 2
- return f"<|blank_{length}|>"
-
- @staticmethod
- def get_tab_token():
- return f"<|tab|>"
-
- @property
- def num_text_tokens(self):
- return self.text_tokenizer.num_tokens
-
- @property
- def num_tokens(self):
- return self.num_text_tokens
-
- @staticmethod
- def _encode_whitespaces(text: str, max_len: int = 80):
- text = text.replace("\t", SPTokenizer.get_tab_token())
- for i in range(max_len, 1, -1):
- text = text.replace(" " * i, SPTokenizer.get_blank_token(i))
- return text
-
- def _preprocess(self, text: str, linebreak=True, whitespaces=True):
- if linebreak:
- text = text.replace("\n", "<n>")
- if whitespaces:
- text = self._encode_whitespaces(text, max_len=self.max_blank_length)
- return text
-
- def encode(
- self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
- ) -> List[int]:
- """
- @param text: Text to encode.
- @param linebreak: Whether to encode newline (\n) in text.
- @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
- @param special_tokens: Whether to encode special token ([MASK], [gMASK], etc.) in text.
- @param add_dummy_prefix: Whether to add dummy blank space in the beginning.
- """
- text = self._preprocess(text, linebreak, whitespaces)
- if not add_dummy_prefix:
- text = "" + text
- tmp = self._get_text_tokenizer(encode_special_tokens=special_tokens).encode(text)
- tokens = [x for x in tmp]
- return tokens if add_dummy_prefix else tokens[2:]
-
- def decode(self, text_ids: List[int], special_tokens=False) -> str:
- ids = [int(_id) for _id in text_ids]
- ids = [_id for _id in ids if _id >= 0]
- text = self._get_text_tokenizer(encode_special_tokens=special_tokens).decode(ids)
- text = text.replace("<n>", "\n")
- text = text.replace(SPTokenizer.get_tab_token(), "\t")
- for i in range(2, self.max_blank_length + 1):
- text = text.replace(self.get_blank_token(i), " " * i)
- return text
-
- def tokenize(
- self, text: str, linebreak=True, whitespaces=True, special_tokens=False, add_dummy_prefix=True
- ) -> List[str]:
- """
- @param text: Text to encode.
- @param linebreak: Whether to encode newline (\n) in text.
- @param whitespaces: Whether to encode multiple whitespaces or tab in text, useful for source code encoding.
- @param special_tokens: Whether to encode special token ([MASK], [gMASK], etc.) in text.
- @param add_dummy_prefix: Whether to add dummy blank space in the beginning.
- """
- text = self._preprocess(text, linebreak, whitespaces)
- if not add_dummy_prefix:
- text = "" + text
- tokens = self._get_text_tokenizer(encode_special_tokens=special_tokens).tokenize(text)
- return tokens if add_dummy_prefix else tokens[2:]
-
- def __getitem__(self, x: Union[int, str]):
- if isinstance(x, int):
- return self.text_tokenizer.convert_id_to_token(x)
- elif isinstance(x, str):
- return self.text_tokenizer.convert_token_to_id(x)
- else:
- raise ValueError("The key should be str or int.")
-
-
-class ChatGLMTokenizer(PreTrainedTokenizer):
- """
- Construct a ChatGLM tokenizer. Based on byte-level Byte-Pair-Encoding.
-
- Args:
- vocab_file (`str`):
- Path to the vocabulary file.
- """
-
- vocab_files_names = {"vocab_file": "ice_text.model"}
- max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
- model_input_names = ["input_ids"]
-
- def __init__(
- self,
- vocab_file,
- do_lower_case=False,
- remove_space=False,
- bos_token='sop',
- eos_token='eos',
- eop_token='eop',
- mask_token='[MASK]',
- gmask_token='[gMASK]',
- padding_side="left",
- **kwargs
- ) -> None:
- super().__init__(
- do_lower_case=do_lower_case,
- remove_space=remove_space,
- padding_side=padding_side,
- **kwargs
- )
-
- self.do_lower_case = do_lower_case
- self.remove_space = remove_space
- self.vocab_file = vocab_file
-
- self.bos_token = bos_token
- self.eos_token = eos_token
- self.eop_token = eop_token
- self.mask_token = mask_token
- self.gMASK_token = gmask_token
-
- self.sp_tokenizer = SPTokenizer(vocab_file)
-
- """ Initialisation """
-
- @property
- def eop_token_id(self) -> Optional[int]:
- """
- `Optional[int]`: Id of the end of sentence token in the vocabulary. Returns `None` if the token has not been
- set.
- """
- if self.eop_token is None:
- return None
- return self.convert_tokens_to_ids(self.eop_token)
-
- @property
- def vocab_size(self):
- """ Returns vocab size """
- return self.sp_tokenizer.num_tokens
-
- def get_vocab(self):
- """ Returns vocab as a dict """
- vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
- vocab.update(self.added_tokens_encoder)
- return vocab
-
- def preprocess_text(self, inputs):
- if self.remove_space:
- outputs = " ".join(inputs.strip().split())
- else:
- outputs = inputs
-
- if self.do_lower_case:
- outputs = outputs.lower()
-
- return outputs
-
- def _tokenize(self, text, **kwargs):
- """ Returns a tokenized string. """
- text = self.preprocess_text(text)
-
- seq = self.sp_tokenizer.tokenize(text)
-
- return seq
-
- def decode(
- self,
- token_ids: Union[List[int], List[List[int]]],
- skip_special_tokens: bool = False,
- clean_up_tokenization_spaces: bool = True,
- spaces_between_special_tokens: bool = True,
- **kwargs
- ) -> str:
- if isinstance(token_ids[0], list):
- tokens = []
- for single_token_ids in token_ids:
- if self.pad_token_id in single_token_ids: # remove pad
- single_token_ids = list(filter((self.pad_token_id).__ne__, single_token_ids))
- tokens.append(self.sp_tokenizer.decode(single_token_ids))
- return (tokens)
- else:
- if self.pad_token_id in token_ids: # remove pad
- token_ids = list(filter((self.pad_token_id).__ne__, token_ids))
- return self.sp_tokenizer.decode(token_ids)
-
- def _convert_token_to_id(self, token):
- """ Converts a token (str) in an id using the vocab. """
- return self.sp_tokenizer[token]
-
- def _convert_id_to_token(self, index):
- """Converts an index (integer) in a token (str) using the vocab."""
- return self.sp_tokenizer[index]
-
- def save_vocabulary(self, save_directory, filename_prefix=None):
- """
- Save the vocabulary and special tokens file to a directory.
-
- Args:
- save_directory (`str`):
- The directory in which to save the vocabulary.
- filename_prefix (`str`, *optional*):
- An optional prefix to add to the names of the saved files.
-
- Returns:
- `Tuple(str)`: Paths to the files saved.
- """
- if os.path.isdir(save_directory):
- vocab_file = os.path.join(
- save_directory, self.vocab_files_names["vocab_file"]
- )
- else:
- vocab_file = save_directory
-
- with open(self.vocab_file, 'rb') as fin:
- proto_str = fin.read()
-
- with open(vocab_file, "wb") as writer:
- writer.write(proto_str)
-
- return (vocab_file,)
-
- def build_inputs_with_special_tokens(
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
- ) -> List[int]:
- """
- Build model inputs from a sequence or a pair of sequences by concatenating them and
- adding special tokens. A ChatGLM sequence appends a `[gMASK]` (if no mask token is
- already present), an `eos` (if the sequence does not already end in a mask token)
- and finally a `sop`:
-
- - single sequence: `X [gMASK] sop`
- - pair of sequences: `A B [gMASK] sop`
-
- Args:
- token_ids_0 (`List[int]`):
- List of IDs to which the special tokens will be added.
- token_ids_1 (`List[int]`, *optional*):
- Optional second list of IDs for sequence pairs.
-
- Returns:
- `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
- """
- if token_ids_1 is not None:
- token_ids_0 += token_ids_1
- mask_ids = self.sp_tokenizer[self.mask_token]
- gmask_ids = self.sp_tokenizer[self.gMASK_token]
- if mask_ids not in token_ids_0 and gmask_ids not in token_ids_0:
- token_ids_0 += [gmask_ids]
-
- if token_ids_0[-1] != mask_ids and token_ids_0[-1] != gmask_ids:
- token_ids_0 += [self.sp_tokenizer[self.eos_token]]
-
- token_ids_0 += [self.sp_tokenizer[self.bos_token]]
-
- return token_ids_0
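The whitespace handling above is the most distinctive part of this tokenizer: tabs and runs of 2 to 80 spaces are rewritten to dedicated `<|tab|>` and `<|blank_N|>` pieces before SentencePiece runs, and `decode` undoes the mapping. A minimal standalone sketch of that round trip (the helper functions and sample string here are illustrative, not part of the deleted module):

```python
def encode_whitespaces(text: str, max_len: int = 80) -> str:
    # Mirrors SPTokenizer._encode_whitespaces: tabs become <|tab|>, runs of
    # 2..max_len spaces become <|blank_N|> (longest runs replaced first).
    text = text.replace("\t", "<|tab|>")
    for i in range(max_len, 1, -1):
        text = text.replace(" " * i, f"<|blank_{i}|>")
    return text


def decode_whitespaces(text: str, max_len: int = 80) -> str:
    # Inverse mapping, as done at the end of SPTokenizer.decode.
    text = text.replace("<|tab|>", "\t")
    for i in range(2, max_len + 1):
        text = text.replace(f"<|blank_{i}|>", " " * i)
    return text


src = "def f(x):\n\treturn x    + 1"
encoded = encode_whitespaces(src.replace("\n", "<n>"))
print(encoded)  # def f(x):<n><|tab|>return x<|blank_4|>+ 1
assert decode_whitespaces(encoded).replace("<n>", "\n") == src
```

This is why the docstrings above call the scheme "useful for source code encoding": indentation collapses to single tokens instead of long runs of space pieces.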
diff --git a/spaces/josegabmuz/gradio-test/README.md b/spaces/josegabmuz/gradio-test/README.md
deleted file mode 100644
index 7046494951f241b48026f538911db5e61df880f0..0000000000000000000000000000000000000000
--- a/spaces/josegabmuz/gradio-test/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gradio Test
-emoji: 🏢
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/byt5/__init__.py b/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/byt5/__init__.py
deleted file mode 100644
index da022c16301721a096a208e8bdb2a71bb87f9788..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/t5/byt5/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright 2022 The T5X Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This empty file is needed for loading the gin files in this directory.
diff --git a/spaces/justest/mdn-chatbot/README.md b/spaces/justest/mdn-chatbot/README.md
deleted file mode 100644
index 1a43b4350a6de8fd21b10195fbf61dd3c4e696f0..0000000000000000000000000000000000000000
--- a/spaces/justest/mdn-chatbot/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: MDN Chatbot
-emoji: 🔮
-colorFrom: blue
-colorTo: white
-sdk: docker
-pinned: false
-app_port: 7860
----
-
-This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app).
-
-## Getting Started
-
-First, run the development server:
-
-```bash
-npm run dev
-# or
-yarn dev
-# or
-pnpm dev
-```
-
-Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
-
-You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
-
-This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.
-
-## Learn More
-
-To learn more about Next.js, take a look at the following resources:
-
-- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
-- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
-
-You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!
-
-## Deploy on Vercel
-
-The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
-
-Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
diff --git a/spaces/kcagle/AutoGPT/autogpt/memory/weaviate.py b/spaces/kcagle/AutoGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
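For reference, `get_relevant` above is a thin wrapper around a single near-vector query. A sketch of the same query shape against a hypothetical local instance, using the v3 `weaviate-client` API this class targets (the URL, the `Autogpt` class name, and the placeholder embedding are assumptions, not values from this repo):

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # hypothetical local instance

query_embedding = [0.1] * 1536  # stands in for get_ada_embedding(data)
results = (
    client.query.get("Autogpt", ["raw_text"])
    .with_near_vector({"vector": query_embedding, "certainty": 0.7})
    .with_limit(5)
    .do()
)
print([item["raw_text"] for item in results["data"]["Get"]["Autogpt"]])
```

The `certainty: 0.7` threshold matches the hard-coded value in `get_relevant`: results below that similarity are dropped, which is why the method can return an empty list even on a populated index.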
diff --git a/spaces/kevinwang676/Bark-New-Version/parseinput.py b/spaces/kevinwang676/Bark-New-Version/parseinput.py
deleted file mode 100644
index 0795e9065cf97b62b8cf276dc309877f95dad5da..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bark-New-Version/parseinput.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import re
-import xml.etree.ElementTree as ET
-from xml.sax import saxutils
-#import nltk
-
-# Chunked generation originally from https://github.com/serp-ai/bark-with-voice-clone
-def split_and_recombine_text(text, desired_length=150, max_length=200):
- # return nltk.sent_tokenize(text)
-
- # from https://github.com/neonbjb/tortoise-tts
- """Split text it into chunks of a desired length trying to keep sentences intact."""
- # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii
- text = re.sub(r"\n\n+", "\n", text)
- text = re.sub(r"\s+", " ", text)
- text = re.sub(r"[“”]", '"', text)
-
- rv = []
- in_quote = False
- current = ""
- split_pos = []
- pos = -1
- end_pos = len(text) - 1
-
- def seek(delta):
- nonlocal pos, in_quote, current
- is_neg = delta < 0
- for _ in range(abs(delta)):
- if is_neg:
- pos -= 1
- current = current[:-1]
- else:
- pos += 1
- current += text[pos]
- if text[pos] == '"':
- in_quote = not in_quote
- return text[pos]
-
- def peek(delta):
- p = pos + delta
- return text[p] if p < end_pos and p >= 0 else ""
-
- def commit():
- nonlocal rv, current, split_pos
- rv.append(current)
- current = ""
- split_pos = []
-
- while pos < end_pos:
- c = seek(1)
- # do we need to force a split?
- if len(current) >= max_length:
- if len(split_pos) > 0 and len(current) > (desired_length / 2):
- # we have at least one sentence and we are over half the desired length, seek back to the last split
- d = pos - split_pos[-1]
- seek(-d)
- else:
- # no full sentences, seek back until we are not in the middle of a word and split there
- while c not in "!?.\n " and pos > 0 and len(current) > desired_length:
- c = seek(-1)
- commit()
- # check for sentence boundaries
- elif not in_quote and (c in "!?\n" or (c == "." and peek(1) in "\n ")):
- # seek forward if we have consecutive boundary markers but still within the max length
- while (
- pos < len(text) - 1 and len(current) < max_length and peek(1) in "!?."
- ):
- c = seek(1)
- split_pos.append(pos)
- if len(current) >= desired_length:
- commit()
- # treat end of quote as a boundary if it's followed by a space or newline
- elif in_quote and peek(1) == '"' and peek(2) in "\n ":
- seek(2)
- split_pos.append(pos)
- rv.append(current)
-
- # clean up, remove lines with only whitespace or punctuation
- rv = [s.strip() for s in rv]
- rv = [s for s in rv if len(s) > 0 and not re.match(r"^[\s\.,;:!?]*$", s)]
-
- return rv
-
-def is_ssml(value):
- try:
- ET.fromstring(value)
- except ET.ParseError:
- return False
- return True
-
-def build_ssml(rawtext, selected_voice):
- texts = rawtext.split("\n")
- joinedparts = ""
- for textpart in texts:
- joinedparts = joinedparts + f"\n<voice name=\"{selected_voice}\">{saxutils.escape(textpart)}</voice>"
- ssml = f"""
- <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis">
- {joinedparts}
- </speak>
- """
- return ssml
-
-def create_clips_from_ssml(ssmlinput):
- # Parse the XML
- tree = ET.ElementTree(ET.fromstring(ssmlinput))
- root = tree.getroot()
-
- # Create an empty list
- voice_list = []
-
- # Loop through all voice tags
- for voice in root.iter('{http://www.w3.org/2001/10/synthesis}voice'):
- # Extract the voice name attribute and the content text
- voice_name = voice.attrib['name']
- voice_content = voice.text.strip() if voice.text else ''
- if(len(voice_content) > 0):
- parts = split_and_recombine_text(voice_content)
- for p in parts:
- # add to tuple list
- voice_list.append((voice_name, p))
-
- return voice_list
\ No newline at end of file
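`create_clips_from_ssml` above keys everything off the namespace-qualified `voice` tag. A small self-contained sketch of that parse (the SSML snippet and speaker names are made up for illustration):

```python
import xml.etree.ElementTree as ET

SSML = """<?xml version="1.0"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis">
<voice name="speaker_0">Hello there.</voice>
<voice name="speaker_1">General Kenobi!</voice>
</speak>"""

# Same namespace-qualified tag the parser above iterates over.
root = ET.fromstring(SSML)
for voice in root.iter("{http://www.w3.org/2001/10/synthesis}voice"):
    print(voice.attrib["name"], "->", voice.text.strip())
# speaker_0 -> Hello there.
# speaker_1 -> General Kenobi!
```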
diff --git a/spaces/kevinwang676/test-1/infer_pack/transforms.py b/spaces/kevinwang676/test-1/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/test-1/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
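The spline above is invertible by construction: applying it forward and then with `inverse=True` returns the original inputs, and the two log-determinants cancel. A quick sketch, assuming the deleted module is importable as `infer_pack.transforms` and using arbitrary random parameters:

```python
import torch

from infer_pack.transforms import piecewise_rational_quadratic_transform

torch.manual_seed(0)
num_bins = 10
inputs = torch.rand(8) * 2 - 1     # points inside the tail_bound=1.0 interval
uw = torch.randn(8, num_bins)      # unnormalized bin widths
uh = torch.randn(8, num_bins)      # unnormalized bin heights
ud = torch.randn(8, num_bins - 1)  # interior derivatives; "linear" tails pad the ends

y, logdet = piecewise_rational_quadratic_transform(
    inputs, uw, uh, ud, inverse=False, tails="linear", tail_bound=1.0
)
x, inv_logdet = piecewise_rational_quadratic_transform(
    y, uw, uh, ud, inverse=True, tails="linear", tail_bound=1.0
)
print(torch.allclose(x, inputs, atol=1e-4))            # True: round-trips
print(torch.allclose(logdet, -inv_logdet, atol=1e-4))  # True: log-dets cancel
```

That exact invertibility is what makes the spline usable inside the normalizing-flow blocks of the VITS-style model this file belongs to.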
diff --git a/spaces/knotdgaf/gradiotest/app.py b/spaces/knotdgaf/gradiotest/app.py
deleted file mode 100644
index e8e46cdcb97c2e1f4553a725e61264f1d6cd7698..0000000000000000000000000000000000000000
--- a/spaces/knotdgaf/gradiotest/app.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import time
-
-from theme_dropdown import create_theme_dropdown # noqa: F401
-
-import gradio as gr
-
-dropdown, js = create_theme_dropdown()
-
-with gr.Blocks(theme='knotdgaf/gradiotest') as demo:
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=10):
- gr.Markdown(
- """
- # Theme preview: `gradiotest`
- To use this theme, set `theme='knotdgaf/gradiotest'` in `gr.Blocks()` or `gr.Interface()`.
- You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version
- of this theme.
- """
- )
- with gr.Column(scale=3):
- with gr.Box():
- dropdown.render()
- toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True)
-
- dropdown.change(None, dropdown, None, _js=js)
- toggle_dark.click(
- None,
- _js="""
- () => {
- document.body.classList.toggle('dark');
- document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)'
- }
- """,
- )
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False)
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://gradio.app/assets/img/header-image.jpg", label="Image"
- ).style(height=320)
- with gr.Row():
- go_btn = gr.Button("Go", label="Primary Button", variant="primary")
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://gradio.app/assets/img/header-image.jpg"
-
- go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go")
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1").style(size="sm")
- btn2 = gr.UploadButton().style(size="sm")
- stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style(
- size="sm"
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON"
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4")
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ]
- ).style(height="200px", grid=2)
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
-
-if __name__ == "__main__":
- demo.queue().launch()
diff --git a/spaces/kukuhtw/VToonify/vtoonify_model.py b/spaces/kukuhtw/VToonify/vtoonify_model.py
deleted file mode 100644
index 8a506c2da195acafa2e6a18b3ef0874a58b5b15f..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/VToonify/vtoonify_model.py
+++ /dev/null
@@ -1,284 +0,0 @@
-from __future__ import annotations
-import gradio as gr
-import pathlib
-import sys
-sys.path.insert(0, 'vtoonify')
-
-from util import load_psp_standalone, get_video_crop_parameter, tensor2cv2
-import torch
-import torch.nn as nn
-import numpy as np
-import dlib
-import cv2
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-import torch.nn.functional as F
-from torchvision import transforms
-from model.encoder.align_all_parallel import align_face
-import gc
-import huggingface_hub
-import os
-
-MODEL_REPO = 'PKUWilliamYang/VToonify'
-
-class Model():
- def __init__(self, device):
- super().__init__()
-
- self.device = device
- self.style_types = {
- 'cartoon1': ['vtoonify_d_cartoon/vtoonify_s026_d0.5.pt', 26],
- 'cartoon1-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 26],
- 'cartoon2-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 64],
- 'cartoon3-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 153],
- 'cartoon4': ['vtoonify_d_cartoon/vtoonify_s299_d0.5.pt', 299],
- 'cartoon4-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 299],
- 'cartoon5-d': ['vtoonify_d_cartoon/vtoonify_s_d.pt', 8],
- 'comic1-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 28],
- 'comic2-d': ['vtoonify_d_comic/vtoonify_s_d.pt', 18],
- 'arcane1': ['vtoonify_d_arcane/vtoonify_s000_d0.5.pt', 0],
- 'arcane1-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 0],
- 'arcane2': ['vtoonify_d_arcane/vtoonify_s077_d0.5.pt', 77],
- 'arcane2-d': ['vtoonify_d_arcane/vtoonify_s_d.pt', 77],
- 'caricature1': ['vtoonify_d_caricature/vtoonify_s039_d0.5.pt', 39],
- 'caricature2': ['vtoonify_d_caricature/vtoonify_s068_d0.5.pt', 68],
- 'pixar': ['vtoonify_d_pixar/vtoonify_s052_d0.5.pt', 52],
- 'pixar-d': ['vtoonify_d_pixar/vtoonify_s_d.pt', 52],
- 'illustration1-d': ['vtoonify_d_illustration/vtoonify_s054_d_c.pt', 54],
- 'illustration2-d': ['vtoonify_d_illustration/vtoonify_s004_d_c.pt', 4],
- 'illustration3-d': ['vtoonify_d_illustration/vtoonify_s009_d_c.pt', 9],
- 'illustration4-d': ['vtoonify_d_illustration/vtoonify_s043_d_c.pt', 43],
- 'illustration5-d': ['vtoonify_d_illustration/vtoonify_s086_d_c.pt', 86],
- }
-
- self.landmarkpredictor = self._create_dlib_landmark_model()
- self.parsingpredictor = self._create_parsing_model()
- self.pspencoder = self._load_encoder()
- self.transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- self.vtoonify, self.exstyle = self._load_default_model()
- self.color_transfer = False
- self.style_name = 'cartoon1'
- self.video_limit_cpu = 100
- self.video_limit_gpu = 300
-
- @staticmethod
- def _create_dlib_landmark_model():
- return dlib.shape_predictor(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/shape_predictor_68_face_landmarks.dat'))
-
- def _create_parsing_model(self):
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO, 'models/faceparsing.pth'),
- map_location=lambda storage, loc: storage))
- parsingpredictor.to(self.device).eval()
- return parsingpredictor
-
- def _load_encoder(self) -> nn.Module:
- style_encoder_path = huggingface_hub.hf_hub_download(MODEL_REPO,'models/encoder.pt')
- return load_psp_standalone(style_encoder_path, self.device)
-
- def _load_default_model(self) -> tuple[VToonify, torch.Tensor]:
- vtoonify = VToonify(backbone = 'dualstylegan')
- vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,
- 'models/vtoonify_d_cartoon/vtoonify_s026_d0.5.pt'),
- map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(self.device)
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/vtoonify_d_cartoon/exstyle_code.npy'), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[26]]).to(self.device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
- return vtoonify, exstyle
-
- def load_model(self, style_type: str) -> tuple[torch.Tensor, str]:
- if 'illustration' in style_type:
- self.color_transfer = True
- else:
- self.color_transfer = False
- if style_type not in self.style_types.keys():
- return None, 'Oops, wrong Style Type. Please select a valid model.'
- self.style_name = style_type
- model_path, ind = self.style_types[style_type]
- style_path = os.path.join('models',os.path.dirname(model_path),'exstyle_code.npy')
- self.vtoonify.load_state_dict(torch.load(huggingface_hub.hf_hub_download(MODEL_REPO,'models/'+model_path),
- map_location=lambda storage, loc: storage)['g_ema'])
- tmp = np.load(huggingface_hub.hf_hub_download(MODEL_REPO, style_path), allow_pickle=True).item()
- exstyle = torch.tensor(tmp[list(tmp.keys())[ind]]).to(self.device)
- with torch.no_grad():
- exstyle = self.vtoonify.zplus2wplus(exstyle)
- return exstyle, 'Model of %s loaded.'%(style_type)
-
- def detect_and_align(self, frame, top, bottom, left, right, return_para=False):
- message = 'Error: no face detected! Please retry or change the photo.'
- paras = get_video_crop_parameter(frame, self.landmarkpredictor, [left, right, top, bottom])
- instyle = None
- h, w, scale = 0, 0, 0
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- with torch.no_grad():
- I = align_face(frame, self.landmarkpredictor)
- if I is not None:
- I = self.transform(I).unsqueeze(dim=0).to(self.device)
- instyle = self.pspencoder(I)
- instyle = self.vtoonify.zplus2wplus(instyle)
- message = 'Successfully rescaled the frame to (%d, %d)'%(bottom-top, right-left)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- else:
- frame = np.zeros((256,256,3), np.uint8)
- if return_para:
- return frame, instyle, message, w, h, top, bottom, left, right, scale
- return frame, instyle, message
-
- #@torch.inference_mode()
- def detect_and_align_image(self, image: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if image is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load empty file.'
- frame = cv2.imread(image)
- if frame is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load the image.'
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # cv2.imread returns BGR; convert to RGB
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_video(self, video: str, top: int, bottom: int, left: int, right: int
- ) -> tuple[np.ndarray, torch.Tensor, str]:
- if video is None:
- return np.zeros((256,256,3), np.uint8), None, 'Error: fail to load empty file.'
- video_cap = cv2.VideoCapture(video)
- if video_cap.get(7) == 0: # get(7) is CAP_PROP_FRAME_COUNT; 0 frames means the video failed to load
- video_cap.release()
- return np.zeros((256,256,3), np.uint8), torch.zeros(1,18,512).to(self.device), 'Error: fail to load the video.'
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- video_cap.release()
- return self.detect_and_align(frame, top, bottom, left, right)
-
- def detect_and_align_full_video(self, video: str, top: int, bottom: int, left: int, right: int) -> tuple[str, torch.Tensor, str]:
- message = 'Error: no face detected! Please retry or change the video.'
- instyle = None
- if video is None:
- return 'default.mp4', instyle, 'Error: fail to load empty file.'
- video_cap = cv2.VideoCapture(video)
- if video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', instyle, 'Error: fail to load the video.'
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- frame, instyle, message, w, h, top, bottom, left, right, scale = self.detect_and_align(frame, top, bottom, left, right, True)
- if instyle is None:
- return 'default.mp4', instyle, message
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter('input.mp4', fourcc, video_cap.get(5), (int(right-left), int(bottom-top)))
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- for i in range(num-1):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- videoWriter.release()
- video_cap.release()
-
- return 'input.mp4', instyle, 'Successfully rescaled the video to (%d, %d)'%(bottom-top, right-left)
-
- def image_toonify(self, aligned_face: np.ndarray, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[np.ndarray, str]:
- #print(style_type + ' ' + self.style_name)
- if instyle is None or aligned_face is None:
- return np.zeros((256,256,3), np.uint8), 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Image/First Frame again.'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- if exstyle is None:
- return np.zeros((256,256,3), np.uint8), 'Oops, something went wrong with the style type. Please go to Step 1 and load model again.'
- with torch.no_grad():
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
-
- x = self.transform(aligned_face).unsqueeze(dim=0).to(self.device)
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- print('*** Toonify %dx%d image with style of %s'%(y_tilde.shape[2], y_tilde.shape[3], style_type))
- return ((y_tilde[0].cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8), 'Successfully toonified the image with the style of %s'%(self.style_name)
-
- def video_tooniy(self, aligned_video: str, instyle: torch.Tensor, exstyle: torch.Tensor, style_degree: float, style_type: str) -> tuple[str, str]:
- #print(style_type + ' ' + self.style_name)
- if aligned_video is None:
- return 'default.mp4', 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Video again.'
- video_cap = cv2.VideoCapture(aligned_video)
- if instyle is None or aligned_video is None or video_cap.get(7) == 0:
- video_cap.release()
- return 'default.mp4', 'Oops, something went wrong with the input. Please go to Step 2 and Rescale Video again.'
- if self.style_name != style_type:
- exstyle, _ = self.load_model(style_type)
- if exstyle is None:
- return 'default.mp4', 'Oops, something went wrong with the style type. Please go to Step 1 and load model again.'
- num = min(self.video_limit_gpu, int(video_cap.get(7)))
- if self.device == 'cpu':
- num = min(self.video_limit_cpu, num)
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- # get(5)/get(3)/get(4) are FPS, frame width and frame height; the writer is sized at 4x because VToonify upscales
- videoWriter = cv2.VideoWriter('output.mp4', fourcc,
- video_cap.get(5), (int(video_cap.get(3)*4),
- int(video_cap.get(4)*4)))
-
- batch_frames = []
- if video_cap.get(3) != 0:
- if self.device == 'cpu':
- batch_size = max(1, int(4 * 256* 256/ video_cap.get(3) / video_cap.get(4)))
- else:
- batch_size = min(max(1, int(4 * 400 * 360/ video_cap.get(3) / video_cap.get(4))), 4)
- else:
- batch_size = 1
- print('*** Toonify using batch size of %d on %dx%d video of %d frames with style of %s'%(batch_size, int(video_cap.get(3)*4), int(video_cap.get(4)*4), num, style_type))
- with torch.no_grad():
- if self.color_transfer:
- s_w = exstyle
- else:
- s_w = instyle.clone()
- s_w[:,:7] = exstyle[:,:7]
- for i in range(num):
- success, frame = video_cap.read()
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
- batch_frames += [self.transform(frame).unsqueeze(dim=0).to(self.device)]
- if len(batch_frames) == batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
- x_p = F.interpolate(self.parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- inputs = torch.cat((x, x_p/16.), dim=1)
- y_tilde = self.vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter.write(tensor2cv2(y_tilde[k].cpu()))
- gc.collect()
-
- videoWriter.release()
- video_cap.release()
- return 'output.mp4', 'Successfully toonified video of %d frames with the style of %s'%(num, self.style_name)
-
-
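The batch size in `video_tooniy` is a pixel-budget heuristic: roughly four 256x256 frames' worth of pixels per batch on CPU, four 400x360 frames' worth on GPU with a hard cap of 4 frames. A standalone sketch of that calculation (the function name is illustrative):

```python
def toonify_batch_size(width: float, height: float, device: str) -> int:
    # Mirrors the branch in Model.video_tooniy above.
    if width == 0 or height == 0:
        return 1
    if device == "cpu":
        return max(1, int(4 * 256 * 256 / width / height))
    return min(max(1, int(4 * 400 * 360 / width / height)), 4)


print(toonify_batch_size(1280, 720, "cpu"))  # 1
print(toonify_batch_size(320, 240, "cuda"))  # 4
```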
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py
deleted file mode 100644
index c7c32252b87f95abd3fe655983055563aa824457..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FtexImagePlugin.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""
-A Pillow loader for .ftc and .ftu files (FTEX)
-Jerome Leclanche
-
-The contents of this file are hereby released in the public domain (CC0)
-Full text of the CC0 license:
- https://creativecommons.org/publicdomain/zero/1.0/
-
-Independence War 2: Edge Of Chaos - Texture File Format - 16 October 2001
-
-The textures used for 3D objects in Independence War 2: Edge Of Chaos are in a
-packed custom format called FTEX. This file format uses file extensions FTC
-and FTU.
-* FTC files are compressed textures (using standard texture compression).
-* FTU files are not compressed.
-Texture File Format
-The FTC and FTU texture files both use the same format. This
-has the following structure:
-{header}
-{format_directory}
-{data}
-Where:
-{header} = {
- u32:magic,
- u32:version,
- u32:width,
- u32:height,
- u32:mipmap_count,
- u32:format_count
-}
-
-* The "magic" number is "FTEX".
-* "width" and "height" are the dimensions of the texture.
-* "mipmap_count" is the number of mipmaps in the texture.
-* "format_count" is the number of texture formats (different versions of the
-same texture) in this file.
-
-{format_directory} = format_count * { u32:format, u32:where }
-
-The format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB
-uncompressed textures.
-The texture data for a format starts at the position "where" in the file.
-
-Each set of texture data in the file has the following structure:
-{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } }
-* "mipmap_size" is the number of bytes in that mip level. For compressed
-textures this is the size of the texture data compressed with DXT1. For 24 bit
-uncompressed textures, this is 3 * width * height. Following this are the image
-bytes for that mipmap level.
-
-Note: All data is stored in little-endian (Intel) byte order.
-"""
-
-import struct
-from enum import IntEnum
-from io import BytesIO
-
-from . import Image, ImageFile
-from ._deprecate import deprecate
-
-MAGIC = b"FTEX"
-
-
-class Format(IntEnum):
- DXT1 = 0
- UNCOMPRESSED = 1
-
-
-def __getattr__(name):
- for enum, prefix in {Format: "FORMAT_"}.items():
- if name.startswith(prefix):
- name = name[len(prefix) :]
- if name in enum.__members__:
- deprecate(f"{prefix}{name}", 10, f"{enum.__name__}.{name}")
- return enum[name]
- msg = f"module '{__name__}' has no attribute '{name}'"
- raise AttributeError(msg)
-
-
-class FtexImageFile(ImageFile.ImageFile):
- format = "FTEX"
- format_description = "Texture File Format (IW2:EOC)"
-
- def _open(self):
- if not _accept(self.fp.read(4)):
- msg = "not an FTEX file"
- raise SyntaxError(msg)
- struct.unpack("<i", self.fp.read(4)) # version
- self._size = struct.unpack("<2i", self.fp.read(8))
- mipmap_count, format_count = struct.unpack("<2i", self.fp.read(8))
-
- self.mode = "RGB"
-
- # Only support single-format files; multi-format files are not known to exist.
- (format,) = struct.unpack("<i", self.fp.read(4))
- self.fp.read(4) # where
- (mipmap_size,) = struct.unpack("<i", self.fp.read(4))
-
- data = self.fp.read(mipmap_size)
-
- if format == Format.DXT1:
- self.mode = "RGBA"
- self.tile = [("bcn", (0, 0) + self.size, 0, 1)]
- elif format == Format.UNCOMPRESSED:
- self.tile = [("raw", (0, 0) + self.size, 0, ("RGB", 0, 1))]
- else:
- msg = f"Invalid texture compression format: {repr(format)}"
- raise ValueError(msg)
-
- self.fp.close()
- self.fp = BytesIO(data)
-
- def load_seek(self, pos):
- pass
-
-
-def _accept(prefix):
- return prefix[:4] == MAGIC
-
-
-Image.register_open(FtexImageFile.format, FtexImageFile, _accept)
-Image.register_extensions(FtexImageFile.format, [".ftc", ".ftu"])
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/_compat.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/_compat.py
deleted file mode 100644
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/click/_compat.py
+++ /dev/null
-import codecs
-import io
-import os
-import re
-import sys
-import typing as t
-from weakref import WeakKeyDictionary
-
-CYGWIN = sys.platform.startswith("cygwin")
-WIN = sys.platform.startswith("win")
-auto_wrap_for_ansi: t.Optional[t.Callable[[t.TextIO], t.TextIO]] = None
-_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]")
-
-
-def get_filesystem_encoding() -> str:
- return sys.getfilesystemencoding() or sys.getdefaultencoding()
-
-
-def _make_text_stream(
- stream: t.BinaryIO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
- force_writable: bool = False,
-) -> t.TextIO:
- if encoding is None:
- encoding = get_best_encoding(stream)
- if errors is None:
- errors = "replace"
- return _NonClosingTextIOWrapper(
- stream,
- encoding,
- errors,
- line_buffering=True,
- force_readable=force_readable,
- force_writable=force_writable,
- )
-
-
-def is_ascii_encoding(encoding: str) -> bool:
- """Checks if a given encoding is ascii."""
- try:
- return codecs.lookup(encoding).name == "ascii"
- except LookupError:
- return False
-
-
-def get_best_encoding(stream: t.IO) -> str:
- """Returns the default stream encoding if not found."""
- rv = getattr(stream, "encoding", None) or sys.getdefaultencoding()
- if is_ascii_encoding(rv):
- return "utf-8"
- return rv
-
-
-class _NonClosingTextIOWrapper(io.TextIOWrapper):
- def __init__(
- self,
- stream: t.BinaryIO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
- force_writable: bool = False,
- **extra: t.Any,
- ) -> None:
- self._stream = stream = t.cast(
- t.BinaryIO, _FixupStream(stream, force_readable, force_writable)
- )
- super().__init__(stream, encoding, errors, **extra)
-
- def __del__(self) -> None:
- try:
- self.detach()
- except Exception:
- pass
-
- def isatty(self) -> bool:
- # https://bitbucket.org/pypy/pypy/issue/1803
- return self._stream.isatty()
-
-
-class _FixupStream:
- """The new io interface needs more from streams than streams
- traditionally implement. As such, this fix-up code is necessary in
- some circumstances.
-
- The forcing of readable and writable flags is there because some tools
- put badly patched objects on sys (one such offender is certain versions
- of jupyter notebook).
- """
-
- def __init__(
- self,
- stream: t.BinaryIO,
- force_readable: bool = False,
- force_writable: bool = False,
- ):
- self._stream = stream
- self._force_readable = force_readable
- self._force_writable = force_writable
-
- def __getattr__(self, name: str) -> t.Any:
- return getattr(self._stream, name)
-
- def read1(self, size: int) -> bytes:
- f = getattr(self._stream, "read1", None)
-
- if f is not None:
- return t.cast(bytes, f(size))
-
- return self._stream.read(size)
-
- def readable(self) -> bool:
- if self._force_readable:
- return True
- x = getattr(self._stream, "readable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.read(0)
- except Exception:
- return False
- return True
-
- def writable(self) -> bool:
- if self._force_writable:
- return True
- x = getattr(self._stream, "writable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.write("") # type: ignore
- except Exception:
- try:
- self._stream.write(b"")
- except Exception:
- return False
- return True
-
- def seekable(self) -> bool:
- x = getattr(self._stream, "seekable", None)
- if x is not None:
- return t.cast(bool, x())
- try:
- self._stream.seek(self._stream.tell())
- except Exception:
- return False
- return True
-
-
-def _is_binary_reader(stream: t.IO, default: bool = False) -> bool:
- try:
- return isinstance(stream.read(0), bytes)
- except Exception:
- # This happens in some cases where the stream was already
- # closed. In this case, we assume the default.
- return default
-
-
-def _is_binary_writer(stream: t.IO, default: bool = False) -> bool:
- try:
- stream.write(b"")
- except Exception:
- try:
- stream.write("")
- return False
- except Exception:
- pass
- return default
- return True
-
-
-def _find_binary_reader(stream: t.IO) -> t.Optional[t.BinaryIO]:
- # We need to figure out if the given stream is already binary.
- # This can happen because the official docs recommend detaching
- # the streams to get binary streams. Some code might do this, so
- # we need to deal with this case explicitly.
- if _is_binary_reader(stream, False):
- return t.cast(t.BinaryIO, stream)
-
- buf = getattr(stream, "buffer", None)
-
- # Same situation here; this time we assume that the buffer is
- # actually binary in case it's closed.
- if buf is not None and _is_binary_reader(buf, True):
- return t.cast(t.BinaryIO, buf)
-
- return None
-
-
-def _find_binary_writer(stream: t.IO) -> t.Optional[t.BinaryIO]:
- # We need to figure out if the given stream is already binary.
- # This can happen because the official docs recommend detaching
- # the streams to get binary streams. Some code might do this, so
- # we need to deal with this case explicitly.
- if _is_binary_writer(stream, False):
- return t.cast(t.BinaryIO, stream)
-
- buf = getattr(stream, "buffer", None)
-
- # Same situation here; this time we assume that the buffer is
- # actually binary in case it's closed.
- if buf is not None and _is_binary_writer(buf, True):
- return t.cast(t.BinaryIO, buf)
-
- return None
-
-
-def _stream_is_misconfigured(stream: t.TextIO) -> bool:
- """A stream is misconfigured if its encoding is ASCII."""
- # If the stream does not have an encoding set, we assume it's set
- # to ASCII. This appears to happen in certain unittest
- # environments. It's not quite clear what the correct behavior is
- # but this at least will force Click to recover somehow.
- return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii")
-
-
-def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: t.Optional[str]) -> bool:
- """A stream attribute is compatible if it is equal to the
- desired value or the desired value is unset and the attribute
- has a value.
- """
- stream_value = getattr(stream, attr, None)
- return stream_value == value or (value is None and stream_value is not None)
-
-
-def _is_compatible_text_stream(
- stream: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
-) -> bool:
- """Check if a stream's encoding and errors attributes are
- compatible with the desired values.
- """
- return _is_compat_stream_attr(
- stream, "encoding", encoding
- ) and _is_compat_stream_attr(stream, "errors", errors)
-
-
-def _force_correct_text_stream(
- text_stream: t.IO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- is_binary: t.Callable[[t.IO, bool], bool],
- find_binary: t.Callable[[t.IO], t.Optional[t.BinaryIO]],
- force_readable: bool = False,
- force_writable: bool = False,
-) -> t.TextIO:
- if is_binary(text_stream, False):
- binary_reader = t.cast(t.BinaryIO, text_stream)
- else:
- text_stream = t.cast(t.TextIO, text_stream)
- # If the stream looks compatible, and won't default to a
- # misconfigured ascii encoding, return it as-is.
- if _is_compatible_text_stream(text_stream, encoding, errors) and not (
- encoding is None and _stream_is_misconfigured(text_stream)
- ):
- return text_stream
-
- # Otherwise, get the underlying binary reader.
- possible_binary_reader = find_binary(text_stream)
-
- # If that's not possible, silently use the original reader
- # and get mojibake instead of exceptions.
- if possible_binary_reader is None:
- return text_stream
-
- binary_reader = possible_binary_reader
-
- # Default errors to replace instead of strict in order to get
- # something that works.
- if errors is None:
- errors = "replace"
-
- # Wrap the binary stream in a text stream with the correct
- # encoding parameters.
- return _make_text_stream(
- binary_reader,
- encoding,
- errors,
- force_readable=force_readable,
- force_writable=force_writable,
- )
-
-
-def _force_correct_text_reader(
- text_reader: t.IO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_readable: bool = False,
-) -> t.TextIO:
- return _force_correct_text_stream(
- text_reader,
- encoding,
- errors,
- _is_binary_reader,
- _find_binary_reader,
- force_readable=force_readable,
- )
-
-
-def _force_correct_text_writer(
- text_writer: t.IO,
- encoding: t.Optional[str],
- errors: t.Optional[str],
- force_writable: bool = False,
-) -> t.TextIO:
- return _force_correct_text_stream(
- text_writer,
- encoding,
- errors,
- _is_binary_writer,
- _find_binary_writer,
- force_writable=force_writable,
- )
-
-
-def get_binary_stdin() -> t.BinaryIO:
- reader = _find_binary_reader(sys.stdin)
- if reader is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stdin.")
- return reader
-
-
-def get_binary_stdout() -> t.BinaryIO:
- writer = _find_binary_writer(sys.stdout)
- if writer is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stdout.")
- return writer
-
-
-def get_binary_stderr() -> t.BinaryIO:
- writer = _find_binary_writer(sys.stderr)
- if writer is None:
- raise RuntimeError("Was not able to determine binary stream for sys.stderr.")
- return writer
-
-
-def get_text_stdin(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stdin, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True)
-
-
-def get_text_stdout(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stdout, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True)
-
-
-def get_text_stderr(
- encoding: t.Optional[str] = None, errors: t.Optional[str] = None
-) -> t.TextIO:
- rv = _get_windows_console_stream(sys.stderr, encoding, errors)
- if rv is not None:
- return rv
- return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True)
-
-
-def _wrap_io_open(
- file: t.Union[str, os.PathLike, int],
- mode: str,
- encoding: t.Optional[str],
- errors: t.Optional[str],
-) -> t.IO:
- """Handles not passing ``encoding`` and ``errors`` in binary mode."""
- if "b" in mode:
- return open(file, mode)
-
- return open(file, mode, encoding=encoding, errors=errors)
-
-
-def open_stream(
- filename: str,
- mode: str = "r",
- encoding: t.Optional[str] = None,
- errors: t.Optional[str] = "strict",
- atomic: bool = False,
-) -> t.Tuple[t.IO, bool]:
- binary = "b" in mode
-
- # Standard streams first. These are simple because they ignore the
- # atomic flag. Use fsdecode to handle Path("-").
- if os.fsdecode(filename) == "-":
- if any(m in mode for m in ["w", "a", "x"]):
- if binary:
- return get_binary_stdout(), False
- return get_text_stdout(encoding=encoding, errors=errors), False
- if binary:
- return get_binary_stdin(), False
- return get_text_stdin(encoding=encoding, errors=errors), False
-
- # Non-atomic writes directly go out through the regular open functions.
- if not atomic:
- return _wrap_io_open(filename, mode, encoding, errors), True
-
- # Some usability stuff for atomic writes
- if "a" in mode:
- raise ValueError(
- "Appending to an existing file is not supported, because that"
- " would involve an expensive `copy`-operation to a temporary"
- " file. Open the file in normal `w`-mode and copy explicitly"
- " if that's what you're after."
- )
- if "x" in mode:
- raise ValueError("Use the `overwrite`-parameter instead.")
- if "w" not in mode:
- raise ValueError("Atomic writes only make sense with `w`-mode.")
-
- # Atomic writes are more complicated. They work by opening a file
- # as a proxy in the same folder and then using the fdopen
- # functionality to wrap it in a Python file. Then we wrap it in an
- # atomic file that moves the file over on close.
- import errno
- import random
-
- try:
- perm: t.Optional[int] = os.stat(filename).st_mode
- except OSError:
- perm = None
-
- flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
-
- if binary:
- flags |= getattr(os, "O_BINARY", 0)
-
- while True:
- tmp_filename = os.path.join(
- os.path.dirname(filename),
- f".__atomic-write{random.randrange(1 << 32):08x}",
- )
- try:
- fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm)
- break
- except OSError as e:
- if e.errno == errno.EEXIST or (
- os.name == "nt"
- and e.errno == errno.EACCES
- and os.path.isdir(e.filename)
- and os.access(e.filename, os.W_OK)
- ):
- continue
- raise
-
- if perm is not None:
- os.chmod(tmp_filename, perm) # in case perm includes bits in umask
-
- f = _wrap_io_open(fd, mode, encoding, errors)
- af = _AtomicFile(f, tmp_filename, os.path.realpath(filename))
- return t.cast(t.IO, af), True
-
-
-class _AtomicFile:
- def __init__(self, f: t.IO, tmp_filename: str, real_filename: str) -> None:
- self._f = f
- self._tmp_filename = tmp_filename
- self._real_filename = real_filename
- self.closed = False
-
- @property
- def name(self) -> str:
- return self._real_filename
-
- def close(self, delete: bool = False) -> None:
- if self.closed:
- return
- self._f.close()
- if delete:
- # Abort: drop the temp file instead of committing it. __exit__
- # requests this when the with-block raised an exception, so a
- # partial write never replaces the real file.
- os.remove(self._tmp_filename)
- else:
- os.replace(self._tmp_filename, self._real_filename)
- self.closed = True
-
- def __getattr__(self, name: str) -> t.Any:
- return getattr(self._f, name)
-
- def __enter__(self) -> "_AtomicFile":
- return self
-
- def __exit__(self, exc_type, exc_value, tb): # type: ignore
- self.close(delete=exc_type is not None)
-
- def __repr__(self) -> str:
- return repr(self._f)
-
-
-def strip_ansi(value: str) -> str:
- return _ansi_re.sub("", value)
-
-
-def _is_jupyter_kernel_output(stream: t.IO) -> bool:
- while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)):
- stream = stream._stream
-
- return stream.__class__.__module__.startswith("ipykernel.")
-
-
-def should_strip_ansi(
- stream: t.Optional[t.IO] = None, color: t.Optional[bool] = None
-) -> bool:
- if color is None:
- if stream is None:
- stream = sys.stdin
- return not isatty(stream) and not _is_jupyter_kernel_output(stream)
- return not color
-
-
-# On Windows, wrap the output streams with colorama to support ANSI
-# color codes.
-# NOTE: double check is needed so mypy does not analyze this on Linux
-if sys.platform.startswith("win") and WIN:
- from ._winconsole import _get_windows_console_stream
-
- def _get_argv_encoding() -> str:
- import locale
-
- return locale.getpreferredencoding()
-
- _ansi_stream_wrappers: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
-
- def auto_wrap_for_ansi(
- stream: t.TextIO, color: t.Optional[bool] = None
- ) -> t.TextIO:
- """Support ANSI color and style codes on Windows by wrapping a
- stream with colorama.
- """
- try:
- cached = _ansi_stream_wrappers.get(stream)
- except Exception:
- cached = None
-
- if cached is not None:
- return cached
-
- import colorama
-
- strip = should_strip_ansi(stream, color)
- ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip)
- rv = t.cast(t.TextIO, ansi_wrapper.stream)
- _write = rv.write
-
- def _safe_write(s):
- try:
- return _write(s)
- except BaseException:
- ansi_wrapper.reset_all()
- raise
-
- rv.write = _safe_write
-
- try:
- _ansi_stream_wrappers[stream] = rv
- except Exception:
- pass
-
- return rv
-
-else:
-
- def _get_argv_encoding() -> str:
- return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding()
-
- def _get_windows_console_stream(
- f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str]
- ) -> t.Optional[t.TextIO]:
- return None
-
-
-def term_len(x: str) -> int:
- return len(strip_ansi(x))
-
-
-def isatty(stream: t.IO) -> bool:
- try:
- return stream.isatty()
- except Exception:
- return False
-
-
-def _make_cached_stream_func(
- src_func: t.Callable[[], t.TextIO], wrapper_func: t.Callable[[], t.TextIO]
-) -> t.Callable[[], t.TextIO]:
- cache: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary()
-
- def func() -> t.TextIO:
- stream = src_func()
- try:
- rv = cache.get(stream)
- except Exception:
- rv = None
- if rv is not None:
- return rv
- rv = wrapper_func()
- try:
- cache[stream] = rv
- except Exception:
- pass
- return rv
-
- return func
-
-
-_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin)
-_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout)
-_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr)
-
-
-binary_streams: t.Mapping[str, t.Callable[[], t.BinaryIO]] = {
- "stdin": get_binary_stdin,
- "stdout": get_binary_stdout,
- "stderr": get_binary_stderr,
-}
-
-text_streams: t.Mapping[
- str, t.Callable[[t.Optional[str], t.Optional[str]], t.TextIO]
-] = {
- "stdin": get_text_stdin,
- "stdout": get_text_stdout,
- "stderr": get_text_stderr,
-}
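
The `open_stream` helper above routes `-` to the standard streams and, with `atomic=True`, stages writes in a hidden temp file that `_AtomicFile.close()` moves into place with `os.replace()`. A minimal sketch of that flow, assuming click is installed; note that `click._compat` is a private module, so its interface here is an assumption based on the code shown in this diff:

```python
from click._compat import open_stream

# "-" resolves to the process's stdout (or stdin in read modes);
# should_close is False, so we must not close the shared stream.
stream, should_close = open_stream("-", "w")
stream.write("hello via stdout\n")

# With atomic=True the writes land in a hidden ".__atomic-write*" temp
# file in the same directory; os.replace() swaps it over the target
# only when close() runs, so readers never see a half-written file.
f, should_close = open_stream("settings.txt", "w", atomic=True)
try:
    f.write("committed all-or-nothing\n")
finally:
    if should_close:
        f.close()  # commit: the temp file replaces settings.txt
```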
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py
deleted file mode 100644
index d833cc89b90b38937aa0e21c26bc7e7e84f5ee7d..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/boundsPen.py
+++ /dev/null
@@ -1,100 +0,0 @@
-from fontTools.misc.arrayTools import updateBounds, pointInRect, unionRect
-from fontTools.misc.bezierTools import calcCubicBounds, calcQuadraticBounds
-from fontTools.pens.basePen import BasePen
-
-
-__all__ = ["BoundsPen", "ControlBoundsPen"]
-
-
-class ControlBoundsPen(BasePen):
-
- """Pen to calculate the "control bounds" of a shape. This is the
- bounding box of all control points, so may be larger than the
- actual bounding box if there are curves that don't have points
- on their extremes.
-
- When the shape has been drawn, the bounds are available as the
- ``bounds`` attribute of the pen object. It's a 4-tuple::
-
- (xMin, yMin, xMax, yMax).
-
- If ``ignoreSinglePoints`` is True, single points are ignored.
- """
-
- def __init__(self, glyphSet, ignoreSinglePoints=False):
- BasePen.__init__(self, glyphSet)
- self.ignoreSinglePoints = ignoreSinglePoints
- self.init()
-
- def init(self):
- self.bounds = None
- self._start = None
-
- def _moveTo(self, pt):
- self._start = pt
- if not self.ignoreSinglePoints:
- self._addMoveTo()
-
- def _addMoveTo(self):
- if self._start is None:
- return
- bounds = self.bounds
- if bounds:
- self.bounds = updateBounds(bounds, self._start)
- else:
- x, y = self._start
- self.bounds = (x, y, x, y)
- self._start = None
-
- def _lineTo(self, pt):
- self._addMoveTo()
- self.bounds = updateBounds(self.bounds, pt)
-
- def _curveToOne(self, bcp1, bcp2, pt):
- self._addMoveTo()
- bounds = self.bounds
- bounds = updateBounds(bounds, bcp1)
- bounds = updateBounds(bounds, bcp2)
- bounds = updateBounds(bounds, pt)
- self.bounds = bounds
-
- def _qCurveToOne(self, bcp, pt):
- self._addMoveTo()
- bounds = self.bounds
- bounds = updateBounds(bounds, bcp)
- bounds = updateBounds(bounds, pt)
- self.bounds = bounds
-
-
-class BoundsPen(ControlBoundsPen):
-
- """Pen to calculate the bounds of a shape. It calculates the
- correct bounds even when the shape contains curves that don't
- have points on their extremes. This is somewhat slower to compute
- than the "control bounds".
-
- When the shape has been drawn, the bounds are available as the
- ``bounds`` attribute of the pen object. It's a 4-tuple::
-
- (xMin, yMin, xMax, yMax)
- """
-
- def _curveToOne(self, bcp1, bcp2, pt):
- self._addMoveTo()
- bounds = self.bounds
- bounds = updateBounds(bounds, pt)
- if not pointInRect(bcp1, bounds) or not pointInRect(bcp2, bounds):
- bounds = unionRect(
- bounds, calcCubicBounds(self._getCurrentPoint(), bcp1, bcp2, pt)
- )
- self.bounds = bounds
-
- def _qCurveToOne(self, bcp, pt):
- self._addMoveTo()
- bounds = self.bounds
- bounds = updateBounds(bounds, pt)
- if not pointInRect(bcp, bounds):
- bounds = unionRect(
- bounds, calcQuadraticBounds(self._getCurrentPoint(), bcp, pt)
- )
- self.bounds = bounds
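
The difference between the two pens is easiest to see on a curve whose off-curve points overshoot its on-curve extremes. A short sketch, assuming fontTools is installed (the empty glyphSet is just a placeholder, since no components are drawn):

```python
from fontTools.pens.boundsPen import BoundsPen, ControlBoundsPen

for pen_cls in (ControlBoundsPen, BoundsPen):
    pen = pen_cls(glyphSet={})
    pen.moveTo((0, 0))
    # Off-curve points at y=200, but the cubic itself only reaches y=150.
    pen.curveTo((30, 200), (70, 200), (100, 0))
    pen.closePath()
    print(pen_cls.__name__, pen.bounds)

# ControlBoundsPen (0, 0, 100, 200)   <- includes the control points
# BoundsPen (0, 0, 100, 150.0)        <- true extreme of the curve
```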
diff --git a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/static/controlnetlora.html b/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/static/controlnetlora.html
deleted file mode 100644
index a543ce32b3a93803650335209ecf801bbb4a1988..0000000000000000000000000000000000000000
--- a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/static/controlnetlora.html
+++ /dev/null
@@ -1,412 +0,0 @@
-Real-Time Latent Consistency Model ControlNet
\ No newline at end of file
diff --git a/spaces/leilevy/bingo/src/pages/api/healthz.ts b/spaces/leilevy/bingo/src/pages/api/healthz.ts
deleted file mode 100644
index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000
--- a/spaces/leilevy/bingo/src/pages/api/healthz.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- res.status(200).end('ok')
-}
diff --git a/spaces/leogabraneth/text-generation-webui-main/docs/README.md b/spaces/leogabraneth/text-generation-webui-main/docs/README.md
deleted file mode 100644
index d2efbf1df6a450993ef2a3e5106b75e61357dbf5..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/docs/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-These files are a mirror of the documentation at:
-
-# https://github.com/oobabooga/text-generation-webui/wiki
-
-It is recommended to browse it there. Contributions can be sent here and will later be synced with the wiki.
diff --git a/spaces/lewispons/GrammarGuru/src/models/gensim_vect_v2.py b/spaces/lewispons/GrammarGuru/src/models/gensim_vect_v2.py
deleted file mode 100644
index 4144b6dc4d2929c8470cd85d1f8c7fc09d914ad1..0000000000000000000000000000000000000000
--- a/spaces/lewispons/GrammarGuru/src/models/gensim_vect_v2.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import pandas as pd
-from gensim import corpora
-from gensim import similarities
-from gensim.models import TfidfModel
-from gensim.parsing import strip_tags, strip_numeric, \
- strip_multiple_whitespaces, stem_text, strip_punctuation, \
- remove_stopwords, preprocess_string
-import re
-
-from typing import List
-from utils.constants import TEST_INPUTS
-import argparse
-from random import choice
-
-transform_to_lower = lambda s: s.lower()
-remove_single_char = lambda s: re.sub(r'\s+\w{1}\s+', '', s)
-
-class PaperRecommender:
- def __init__(self,
- num_samples=3000,
- corpus_dictionary_path="30Ktokens",
- arxiv_dataset_path="/Users/luis.morales/Desktop/arxiv-paper-recommender/data/processed/reduced_arxiv_papers.parquet.gzip",
- save_dict=False,
- query=""):
- self.num_samples = num_samples
- self.corpus_dictionary_path = corpus_dictionary_path
- self.arxiv_dataset_path = arxiv_dataset_path
- self.save_dict = save_dict
- self.query = query
- self.cleaning_filters = [
- strip_tags,
- strip_numeric,
- strip_punctuation,
- strip_multiple_whitespaces,
- transform_to_lower,
- remove_stopwords,
- remove_single_char
- ]
- self.dictionary = None
- self.index = None
- self.tfidf_model = None
- self.df = None
-
- def gensim_tokenizer(self, docs: List[str]):
- tokenized_docs = list()
- for doc in docs:
- processed_words = preprocess_string(doc, self.cleaning_filters)
- tokenized_docs.append(processed_words)
- return tokenized_docs
-
- def cleaning_pipe(self, document):
- processed_words = preprocess_string(document, self.cleaning_filters)
- return processed_words
-
- def get_gensim_dictionary(self, tokenized_docs: List[List[str]], dict_name: str = "corpus"):
- dictionary = corpora.Dictionary(tokenized_docs)
- if self.save_dict:
- parent_folder = "/Users/luis.morales/Desktop/arxiv-paper-recommender/models/nlp_dictionaries"
- dictionary.save(f'{parent_folder}/{dict_name}.dict')
- return dictionary
-
- def get_closest_n(self, query: str, n: int):
- query_document = self.cleaning_pipe(query)
- query_bow = self.dictionary.doc2bow(query_document)
- sims = self.index[self.tfidf_model[query_bow]]
- top_idx = sims.argsort()[-1 * n:][::-1]
- return top_idx
-
- def get_recommendations_metadata(self, query: str, n: int):
- recommendations_idxs = self.get_closest_n(query, n)
- recommendations_metadata = self.df.iloc[recommendations_idxs]
- recommendations_metadata = recommendations_metadata.reset_index(drop=True)
- return recommendations_metadata
-
- def run_recommender(self, n: int = 5):
- if self.num_samples is None:
- self.df = pd.read_parquet(self.arxiv_dataset_path)
- else:
- self.df = pd.read_parquet(self.arxiv_dataset_path).sample(self.num_samples).reset_index(drop=True)
- corpus = self.df['cleaned_abstracts'].to_list()
-
- tokenized_corpus = self.gensim_tokenizer(corpus)
- self.dictionary = self.get_gensim_dictionary(tokenized_docs=tokenized_corpus, dict_name=self.corpus_dictionary_path)
-
- BoW_corpus = [self.dictionary.doc2bow(doc, allow_update=True) for doc in tokenized_corpus]
-
- self.tfidf_model = TfidfModel(BoW_corpus)
- self.index = similarities.SparseMatrixSimilarity(self.tfidf_model[BoW_corpus], num_features=len(self.dictionary))
- # Fall back to a random test query when none was provided.
- if not self.query:
- self.query = choice(TEST_INPUTS)
- return self.get_recommendations_metadata(self.query, n)
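
With the `run_recommender` fix above returning the top matches, usage would look roughly like this. The parquet path and the `title`/`abstract` columns are this repo's assumptions, not general gensim API, so treat them as hypothetical:

```python
recommender = PaperRecommender(
    num_samples=1000,
    arxiv_dataset_path="data/processed/reduced_arxiv_papers.parquet.gzip",  # hypothetical local path
    query="tf-idf based document similarity for scientific abstracts",
)
top_papers = recommender.run_recommender(n=5)
print(top_papers[["title", "abstract"]])  # assumes these columns exist in the dataset
```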
diff --git a/spaces/lewiswu1209/MockingBird/mkgui/base/api/fastapi_utils.py b/spaces/lewiswu1209/MockingBird/mkgui/base/api/fastapi_utils.py
deleted file mode 100644
index adf582a7c33c2d68ed32fb8b3382fdeb388db0d0..0000000000000000000000000000000000000000
--- a/spaces/lewiswu1209/MockingBird/mkgui/base/api/fastapi_utils.py
+++ /dev/null
@@ -1,102 +0,0 @@
-"""Collection of utilities for FastAPI apps."""
-
-import inspect
-from typing import Any, Type
-
-from fastapi import FastAPI, Form
-from pydantic import BaseModel
-
-
-def as_form(cls: Type[BaseModel]) -> Any:
- """Adds an as_form class method to decorated models.
-
- The as_form class method can be used with FastAPI endpoints
- """
- new_params = [
- inspect.Parameter(
- field.alias,
- inspect.Parameter.POSITIONAL_ONLY,
- default=(Form(field.default) if not field.required else Form(...)),
- )
- for field in cls.__fields__.values()
- ]
-
- async def _as_form(**data): # type: ignore
- return cls(**data)
-
- sig = inspect.signature(_as_form)
- sig = sig.replace(parameters=new_params)
- _as_form.__signature__ = sig # type: ignore
- setattr(cls, "as_form", _as_form)
- return cls
-
-
-def patch_fastapi(app: FastAPI) -> None:
- """Patch function to allow relative url resolution.
-
- This patch is required to make fastapi fully functional with a relative url path.
- This code snippet can be copy-pasted to any Fastapi application.
- """
- from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
- from starlette.requests import Request
- from starlette.responses import HTMLResponse
-
- async def redoc_ui_html(req: Request) -> HTMLResponse:
- assert app.openapi_url is not None
- redoc_ui = get_redoc_html(
- openapi_url="./" + app.openapi_url.lstrip("/"),
- title=app.title + " - Redoc UI",
- )
-
- return HTMLResponse(redoc_ui.body.decode("utf-8"))
-
- async def swagger_ui_html(req: Request) -> HTMLResponse:
- assert app.openapi_url is not None
- swagger_ui = get_swagger_ui_html(
- openapi_url="./" + app.openapi_url.lstrip("/"),
- title=app.title + " - Swagger UI",
- oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url,
- )
-
- # Insert a request interceptor so all requests run on a relative path
- request_interceptor = (
- "requestInterceptor: (e) => {"
- "\n\t\t\tvar url = window.location.origin + window.location.pathname"
- '\n\t\t\turl = url.substring( 0, url.lastIndexOf( "/" ) + 1);'
- "\n\t\t\turl = e.url.replace(/http(s)?:\/\/[^/]*\//i, url);" # noqa: W605
- "\n\t\t\te.contextUrl = url"
- "\n\t\t\te.url = url"
- "\n\t\t\treturn e;}"
- )
-
- return HTMLResponse(
- swagger_ui.body.decode("utf-8").replace(
- "dom_id: '#swagger-ui',",
- "dom_id: '#swagger-ui',\n\t\t" + request_interceptor + ",",
- )
- )
-
- # remove old docs route and add our patched route
- routes_new = []
- for app_route in app.routes:
- if app_route.path == "/docs": # type: ignore
- continue
-
- if app_route.path == "/redoc": # type: ignore
- continue
-
- routes_new.append(app_route)
-
- app.router.routes = routes_new
-
- assert app.docs_url is not None
- app.add_route(app.docs_url, swagger_ui_html, include_in_schema=False)
- assert app.redoc_url is not None
- app.add_route(app.redoc_url, redoc_ui_html, include_in_schema=False)
-
- # Make graphql relative
- from starlette import graphql
-
- graphql.GRAPHIQL = graphql.GRAPHIQL.replace(
- "({{REQUEST_PATH}}", '("." + {{REQUEST_PATH}}'
- )
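
To see what `as_form` buys you, here is a hedged sketch of an endpoint that uses it; the `LoginForm` model and `/login` route are illustrative, not from the diff, and form parsing additionally requires the `python-multipart` package:

```python
from fastapi import Depends, FastAPI
from pydantic import BaseModel

app = FastAPI()

@as_form  # the decorator defined above rewrites the signature into Form(...) params
class LoginForm(BaseModel):
    username: str
    password: str

@app.post("/login")
async def login(form: LoginForm = Depends(LoginForm.as_form)):
    # FastAPI reads the replaced signature and pulls each field from the
    # posted form body instead of expecting a JSON payload.
    return {"user": form.username}
```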
diff --git a/spaces/lightli/bingo-newbing/Dockerfile b/spaces/lightli/bingo-newbing/Dockerfile
deleted file mode 100644
index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000
--- a/spaces/lightli/bingo-newbing/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME
-
-# Switch to the "user" user
-USER user
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app/
-
-RUN npm run build
-
-ENV PORT 7860
-EXPOSE 7860
-
-CMD npm start
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Contoh Proposal Usaha Peternakan Ayam 65.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Contoh Proposal Usaha Peternakan Ayam 65.md
deleted file mode 100644
index c40188faa64bf7f6ec40afd54f292994734f97f6..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Contoh Proposal Usaha Peternakan Ayam 65.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-Originating from the land, a number of caterpillars are found in several regions. These caterpillars affect tens of millions of residents in several regions (read more) and have always been a transport choice. In some regions there is even parachute contamination (read more). That contamination only implies an unfortunate scale of life, so it is almost forgotten (read more). Besides the lack of safeguards, the rules apply only at the peak of the month.
-
-In the past, the decision would allow investors to earn income from business systems designated for rivers and houses. Investment, on the other hand, withdraws resources that are treated as company income (read more).
-The advantage of investing in Indonesia, in Indonesian magazines: you are estimated to make a profit, so the government's decision can become the subject of an observation below this article (read more).
-
-Leasehold houses in Jakarta: under the law on public property, house ownership should not exceed a period of 35 years, so the operator of a leasehold house should strive to maintain the use status of that house; of course, the house owner can also open their own house, known as a "grand enterprise" (read more).
-
-Do not force relocation to a study location that is not suitable for the study, so that the results and data analysis in the designated location can be handled more easily and effectively (read more).
-
-The site of the housing development in Tanjung Pantur lies in a very deep valley, which makes it very tricky to develop housing structures there. Putra (as the investor) also needs to consider the health of the environment, water, and land-use planning (read more).
-899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/JASF Janes Advanced Strike Fighters Serial Numberrar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/JASF Janes Advanced Strike Fighters Serial Numberrar.md
deleted file mode 100644
index 453eae1f1b2eb1b7aee06363adb2c546d6cedab8..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/JASF Janes Advanced Strike Fighters Serial Numberrar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-JASF Janes Advanced Strike Fighters Serial Numberrar
-
-See more of J.A.S.F. - Jane's Advanced Strike Fighters on Facebook. I cannot seem to find the serial, or key, that I can enter into STEAM. 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Manchali Padosan 1080p Movie Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Manchali Padosan 1080p Movie Torrent.md
deleted file mode 100644
index 3289e363e0729e461c187c75c8272021bcd9a205..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Manchali Padosan 1080p Movie Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-