diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md deleted file mode 100644 index 99784d78e4eda2e62afa2bd155730617c546e398..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyberlink PowerDirector 11 Full Version with Crack Download and Install Guide.md +++ /dev/null @@ -1,93 +0,0 @@ - -

Cyberlink PowerDirector 11 Full Version with Crack: A Comprehensive Review

-

If you are looking for powerful and easy-to-use video editing software, you might have heard of Cyberlink PowerDirector. It is one of the most popular and versatile video editors on the market, with a range of features and tools that can help you create stunning videos for any purpose. But what if you want to use the full version of Cyberlink PowerDirector without paying for it? Is there a way to get the Cyberlink PowerDirector 11 full version with a crack?

-

cyberlink powerdirector 11 full version with crack


DOWNLOAD >>>>> https://byltly.com/2uKyY2



-

In this article, we will review Cyberlink PowerDirector 11, its features, pros and cons, and how to download and install it with a crack. We will also compare it with some alternatives that you might want to consider. By the end of this article, you will have a clear idea of whether Cyberlink PowerDirector 11 is the right video editor for you and how to get it for free.

-

Features of Cyberlink PowerDirector 11

-

Cyberlink PowerDirector 11 is a video editing program that was released in 2012. It is designed for both beginners and professionals, with a user-friendly interface and a comprehensive set of features. Here are some of the main features of Cyberlink PowerDirector 11:

-

Express Video Creation

-

If you want to create a video quickly and easily, you can use the Express Project module. This module allows you to choose from a variety of templates that are suitable for different types of videos, such as travel, wedding, sports, etc. You can then drag and drop your clips into the timeline, add transitions, effects, music, and titles, and produce your video in minutes.

-

Action Camera Center

-

If you are into action sports or adventure videos, you will love the Action Camera Center. This feature lets you edit your footage from action cameras like GoPro, DJI, or Sony. You can apply effects such as slow motion, freeze frame, zoom, pan, or rotate. You can also correct lens distortion, stabilize shaky videos, or remove background noise.

-

Simplified Color Adjustment

-

If you want to enhance the color and tone of your videos, you can use the Simplified Color Adjustment feature. This feature lets you adjust the brightness, contrast, saturation, hue, temperature, tint, and exposure of your videos with simple sliders. You can also use one-click color correction tools such as Auto Tone or White Balance. If you want more control over your color grading, you can use advanced tools such as Color Director or Color Match.

-

Customizable Design Tools

-

If you want to add some creativity and style to your videos, you can use the Customizable Design Tools. These tools let you create and edit titles, transitions, PiP objects (picture-in-picture), masks, subtitles, etc. You can also use the new Brush Tool to draw shapes or masks on your videos. You can customize these elements with different fonts, colors, sizes, animations, etc.

-

New Effects and Enhancements

-

If you want to spice up your videos with some special effects, you can use the New Effects and Enhancements feature. This feature lets you access a large library of effects that are categorized into themes such as Bloggers' Social Media Pack, Holiday Pack Vol 11, Travel Pack 6, Wedding Pack, etc. You can also use third-party plug-ins from sources such as NewBlueFX, proDAD, or BorisFX to add more effects to your videos.

-

360 Video Stabilization and Editing

-

If you want to edit your videos in 360 degrees, you can use the 360 Video Stabilization and Editing feature. This feature lets you import and edit your footage from 360 cameras such as the Samsung Gear 360, Ricoh Theta, or Kodak Pixpro. You can apply effects such as stabilization, trimming, splitting, or adding titles to your 360 videos. You can also use the True360 View Designer to convert your 360 videos into standard videos with different perspectives.

-


-

Pros and Cons of Cyberlink PowerDirector 11

-

As with any software, Cyberlink PowerDirector 11 has its advantages and disadvantages. Here are some of them:

-

Pros

- -

Cons

- -

How to Download and Install Cyberlink PowerDirector 11 Full Version with Crack

-

If you are interested in using Cyberlink PowerDirector 11 full version with crack, you will need to follow these steps:

-

Step 1: Download the setup file and the crack file from a reliable source

-

You will need to find a website that offers both the setup file and the crack file for Cyberlink PowerDirector 11. You can search for them on Google or other search engines, but be careful not to download any malware or viruses along with them. You should also check the reviews and ratings of the website before downloading anything.

-

One possible website that offers both files is FileCR.com. You can download them from these links:

- -

Note that these links are only for reference purposes and we do not endorse or guarantee their safety or legality.

-

Step 2: Run the setup file and follow the instructions to install the program

-

Once you have downloaded both files, you will need to run the setup file and follow the instructions on the screen to install Cyberlink PowerDirector 11 on your computer. You may need to agree to some terms and conditions and choose some options such as language, destination folder, etc. You may also need to enter a serial number or activation code that comes with the setup file. You should not launch or run the program after installation.

-

Step 3: Copy the crack file and paste it into the installation folder

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md deleted file mode 100644 index ce083e534ae4ba818da4006b7ac5c3734ef8791b..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Project 32 Bit Full Crack What You Need to Know.md +++ /dev/null @@ -1,29 +0,0 @@ -
-

How to Download Microsoft Project 32 Bit Full Crack for Free

-

Microsoft Project is powerful project management software that helps you plan, track, and manage your projects. It allows you to create schedules, assign tasks, monitor progress, manage resources, and collaborate with your team. Microsoft Project is widely used by professionals and organizations in various fields and industries.

-

However, Microsoft Project is not cheap. It requires a subscription to Microsoft 365 or a one-time purchase of a standalone license. If you want to use Microsoft Project without paying for it, you might be tempted to download a Microsoft Project 32 bit full crack for free from the internet.

-

download microsoft project 32 bit full crack


Download Zip ✪✪✪ https://byltly.com/2uKvbI



-

A crack is a modified version of a program that bypasses its security and activation features. By using a crack, you can run the software without a valid license or product key. However, downloading and using a Microsoft Project 32 bit full crack is not a good idea for several reasons.

-

The Risks of Downloading Microsoft Project 32 Bit Full Crack

-

Downloading Microsoft Project 32 bit full crack from the internet is risky and illegal. Here are some of the dangers and disadvantages of doing so:

- -

The Benefits of Using Genuine Microsoft Project

-

Instead of downloading Microsoft Project 32 bit full crack, you should consider using genuine Microsoft Project. Here are some of the benefits of doing so:

- -

How to Get Genuine Microsoft Project

-

If you want to get genuine Microsoft Project for your computer, you have two options:

-
    -
1. Subscribe to Microsoft 365. Microsoft 365 is a cloud-based service that gives you access to various Microsoft applications such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive, Teams, and more; Microsoft Project is offered as a companion cloud subscription with its own plans. You can choose from different plans depending on your needs and budget. You can pay monthly or yearly for the subscription and enjoy all the benefits of Microsoft 365.
2. Purchase a standalone license. A standalone license is a one-time purchase that gives you the right to use Microsoft Project permanently on one computer, without a recurring subscription.

    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md b/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md deleted file mode 100644 index 07b13592fa748314bd28952b8038a7e7b18280cd..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadgametrainsimulatormodindonesia.md +++ /dev/null @@ -1,110 +0,0 @@ -
    -

Download Game Train Simulator Mod Indonesia: How to Enjoy a Realistic and Immersive Trainz Experience

    - -

If you are a fan of train simulation games and want a realistic and immersive trainz experience, you might want to download game train simulator mod Indonesia. This is a custom mod that allows you to play various train simulators with Indonesian trains, stations, routes, and scenery. But what exactly is game train simulator mod Indonesia, and how can you download and install it? In this article, we will answer these questions and more.

    -

    downloadgametrainsimulatormodindonesia


    Download 🌟 https://imgfil.com/2uy21n



    - -

    What is Game Train Simulator Mod Indonesia?

    - -

    Game train simulator mod Indonesia is a custom mod that adds Indonesian content to various train simulators. It is developed by the Indonesian Trainz Community, a group of passionate trainz fans who have been around since 2009. The mod aims to provide a realistic and immersive trainz experience, with features such as:

    - -
      -
    • A variety of Indonesian locomotives and coaches, such as GE U18C, GE U20C, GE CC206, passenger and freight coaches.
    • -
    • A selection of Indonesian stations, such as Gambir, Karawang, Purwakarta, Bandung.
    • -
    • A number of Indonesian routes, such as Jakarta-Bandung, Jakarta-Surabaya, Jakarta-Medan.
    • -
    • A realistic and detailed Indonesian scenery, with custom maps, buildings, trees, roads, bridges, etc.
    • -
    • A dynamic and interactive Indonesian environment, with weather, time, traffic, signals, etc.
    • -
    - -

Game train simulator mod Indonesia is compatible with several train simulators, such as Trainz Simulator 2009, Trainz Simulator 2012, Trainz Simulator 2019, Indonesian Train Simulator (Android), etc. It is also constantly updated and improved by the developers based on community feedback.

    - -

    How to Download and Install Game Train Simulator Mod Indonesia?

    - -

    If you want to play with game train simulator mod Indonesia, you will need to download and install the mod for the train simulator you want to play. Here are the steps to do so:

    - -
      -
1. Download and install the train simulator of your choice from their official websites or app stores.
2. Download the latest version of game train simulator mod Indonesia from the YouTube channel or website of the Indonesian Trainz Community, where you can find the links for the different train simulators.
3. Extract the downloaded files to your train simulator folder (a small extraction sketch follows this list).
4. Launch your train simulator and select game train simulator mod Indonesia as your content.
5. Create your scenario and start playing!
    - -
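Mod packages like this are usually distributed as zip archives, so step 3 is typically just an unzip into the simulator's folder. Here is a minimal sketch in Python; the archive name and destination path are hypothetical examples, so substitute the actual file you downloaded and your own install directory.

```python
# Minimal sketch: unpack a downloaded mod archive into the simulator folder.
# The archive name and destination below are hypothetical examples.
import zipfile
from pathlib import Path

archive = Path("trainz_mod_indonesia.zip")               # downloaded mod file
dest = Path(r"C:\Program Files\Trainz Simulator 2019")   # your install folder

with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()   # list of files inside the archive
    zf.extractall(dest)     # unpack everything into the game folder

print(f"Extracted {len(names)} files to {dest}")
```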

    If you encounter any issues or need any help with the installation process, you can contact the Indonesian Trainz Community on their YouTube channel or website. They will be happy to assist you.

    - -


    -

    What are the Tips and Tricks for Game Train Simulator Mod Indonesia?

    - -

    Game train simulator mod Indonesia is a fun and enjoyable mod for train simulators, but it can also be challenging and tricky at times. If you want to master the game and have a smooth and satisfying trainz experience, you might want to follow some tips and tricks. Here are some of them:

    -

    - -
      -
    • Read the instructions and tutorials carefully before playing. They will help you understand the basics and features of the game.
    • -
    • Adjust the settings and preferences according to your device and preferences. You can change the graphics, sound, controls, etc. to suit your needs.
    • -
    • Choose the right locomotive and coach for your scenario. Different locomotives and coaches have different characteristics, such as speed, power, capacity, etc. Choose the ones that match your objectives and preferences.
    • -
    • Follow the rules and regulations of the game. Respect the signals, speed limits, timetables, etc. They will help you avoid accidents and penalties.
    • -
    • Plan your route and strategy ahead. Use the map, GPS, route planner, etc. to plan your route and strategy. They will help you avoid traffic, delays, detours, etc.
    • -
    • Use the camera angles wisely. Switch between different camera angles to get a better view of your surroundings and situation. They will help you avoid obstacles, hazards, errors, etc.
    • -
• Save your progress frequently. Use the save and load functions so that you don't lose your progress in case of crashes, errors, etc.
    • -
    • Have fun and enjoy the game. Don't take the game too seriously or stress yourself too much. Remember that it is just a game and the main purpose is to have fun and enjoy the trainz experience.
    • -
    - -

    Game train simulator mod Indonesia is a mod that can offer you a lot of fun and enjoyment, but also a lot of challenge and difficulty. If you want to overcome the challenge and difficulty and have a smooth and satisfying trainz experience, you might want to follow these tips and tricks. They will help you improve your skills and performance in game train simulator mod Indonesia.

    - -

    How to Get More Content for Game Train Simulator Mod Indonesia?

    - -

    If you want to get more content for game train simulator mod Indonesia, such as more locomotives, coaches, stations, routes, scenery, etc., you have two options:

    - -
      -
1. You can download more content from the Indonesian Trainz Community website or YouTube channel. They have a lot of content available for free download. You can find the links for different train simulators in the previous sections.
2. You can create your own content using the content creation tools provided by the train simulators. You can use the surveyor tool, asset editor tool, script editor tool, etc. to create your own content. You can also share your content with other players on the Indonesian Trainz Community website or YouTube channel.
    - -

    Game train simulator mod Indonesia is a mod that has a lot of content available for you to enjoy, but it also allows you to get more content or create your own content if you want to. If you want to get more content or create your own content for game train simulator mod Indonesia, you can follow these options. They will help you expand your trainz experience with game train simulator mod Indonesia.

    - -


    -

    What are the Reviews and Ratings for Game Train Simulator Mod Indonesia?

    - -

    Game train simulator mod Indonesia is a popular and well-received mod for train simulators. It has received a lot of positive reviews and ratings from the players and critics. Here are some of them:

    - -
      -
    • On Google Play, game train simulator mod Indonesia has a rating of 4.4 out of 5 stars, based on 187K reviews. Most of the reviews praise the game for its realism, graphics, sound, gameplay, etc.
    • -
    • On APKPure, game train simulator mod Indonesia has a rating of 8.5 out of 10, based on 18 reviews. Most of the reviews commend the game for its quality, features, content, etc.
    • -
    • On YouTube, game train simulator mod Indonesia has a lot of videos showcasing the game and its features. Most of the videos have a lot of views, likes, comments, and subscriptions.
    • -
    • On the Indonesian Trainz Community website and YouTube channel, game train simulator mod Indonesia has a lot of feedback and suggestions from the players and fans. Most of the feedback and suggestions are positive and constructive.
    • -
    - -

    Game train simulator mod Indonesia is a mod that has a lot of fans and supporters. It has received a lot of positive feedback and recognition from the players and critics. If you want to see what other people think about game train simulator mod Indonesia, you can check out these reviews and ratings.

    - -

    How to Support Game Train Simulator Mod Indonesia?

    - -

    If you like game train simulator mod Indonesia and want to support it, you have several ways to do so. Here are some of them:

    - -
      -
1. You can rate and review the game on Google Play, APKPure, or any other platform where you downloaded it. This will help the game get more exposure and recognition.
2. You can share the game with your friends and family who are interested in train simulation games. This will help the game get more downloads and players.
3. You can subscribe to the Indonesian Trainz Community website or YouTube channel. This will help you get more updates and information about the game and its development.
4. You can donate to the Indonesian Trainz Community website or YouTube channel. This will help them cover the costs of developing and maintaining the game.
5. You can provide feedback and suggestions to the Indonesian Trainz Community website or YouTube channel. This will help them improve and enhance the game according to your preferences and needs.
    - -

    Game train simulator mod Indonesia is a mod that deserves your support and appreciation. It is a mod that provides you with a realistic and immersive trainz experience for free. If you want to support game train simulator mod Indonesia, you can follow these ways. They will help you show your gratitude and respect to the developers and creators of game train simulator mod Indonesia.

    - -


    Conclusion

    - -

Game train simulator mod Indonesia is a custom mod that allows you to play various train simulators with Indonesian trains, stations, routes, and scenery. It is designed to provide a realistic and immersive trainz experience for train simulation fans. You can download and install it for free from the Indonesian Trainz Community's website or YouTube channel, where you can also find more information. Have fun playing!

    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md deleted file mode 100644 index 8460b1cfe3069254d0d7350108fa852483f97a33..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Auto Report FB How to Automate Reporting on Facebook with a Simple App.md +++ /dev/null @@ -1,164 +0,0 @@ - -

    Auto Report Facebook APK: What Is It and How to Use It

    -

    Facebook is one of the most popular social media platforms in the world, with billions of users and millions of posts every day. However, not all of these posts are appropriate or respectful. Some of them may contain spam, hate speech, violence, nudity, or other violations of Facebook's community standards. If you encounter such posts, you can report them to Facebook and hope that they will take action. But what if you want to report multiple posts or profiles at once, without wasting time and effort? This is where auto report facebook apk comes in handy.

    -

    Introduction

    -

    What is auto report facebook apk?

    -

    Auto report facebook apk is an Android application that allows you to automatically report any Facebook profile or post that you want. It is not an official app from Facebook, but a third-party tool developed by independent developers. It works by using your Facebook account to send multiple reports to Facebook's servers, with the aim of getting the target profile or post removed or banned.

    -

    auto report facebook apk


    DOWNLOAD » https://urlin.us/2uT1Co



    -

    Why would you want to use it?

    -

    There are many reasons why you might want to use auto report facebook apk. For example, you might want to:

    -
      -
    • Report a fake or impersonating profile that is trying to scam or harass you or your friends.
    • -
    • Report a spammy or malicious post that is spreading false information or harmful links.
    • -
    • Report a hateful or abusive post that is targeting you or someone else based on their identity, beliefs, or opinions.
    • -
    • Report a violent or graphic post that is showing disturbing images or videos.
    • -
    • Report a nude or sexual post that is violating your privacy or consent.
    • -
    -

    What are the benefits and drawbacks of using it?

    -

    Using auto report facebook apk can have some benefits and drawbacks. Some of the benefits are:

    -
      -
    • You can save time and energy by reporting multiple profiles or posts at once, instead of doing it manually one by one.
    • -
    • You can increase the chances of getting the target profile or post removed or banned, by sending more reports than usual.
    • -
    • You can protect yourself and others from harmful or offensive content on Facebook, by reducing its visibility and reach.
    • -
    -

    Some of the drawbacks are:

    -
      -
    • You may risk violating Facebook's terms of service, by using an unauthorized app that manipulates their system.
    • -
    • You may risk losing your Facebook account, by logging in with your credentials on a third-party app that may not be secure or trustworthy.
    • -
    • You may risk reporting innocent profiles or posts, by using the app incorrectly or irresponsibly.
    • -
    -

    How to download and install auto report facebook apk

    -

    Step 1: Find a reliable source for the apk file

    -

    The first step to use auto report facebook apk is to find a reliable source for the apk file. The apk file is the installation package for Android applications. You can search for it online, but be careful not to download it from shady or unknown websites that may contain viruses or malware. One of the sources that you can try is GitHub, where you can find the project page for auto report facebook apk. There, you can see the latest version of the app, its features, and its instructions. You can also download the apk file from there by clicking on the "Releases" tab and then on the "Assets" section. Make sure you download the file that ends with ".apk".

    -

    Step 2: Enable unknown sources on your device

    -

    The second step to use auto report facebook apk is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To do this, you need to go to your device's settings, then to security or privacy, and then to unknown sources or install unknown apps. There, you need to toggle on the option that allows you to install apps from unknown sources. You may also need to grant permission to the browser or file manager that you are using to download the apk file.

    -

    Step 3: Download and install the apk file

    -

    The third step to use auto report facebook apk is to download and install the apk file. To do this, you need to open the browser or file manager that you used to download the apk file, and then locate the file in your downloads folder or wherever you saved it. Then, you need to tap on the file and follow the instructions on the screen to install the app. You may need to grant some permissions to the app, such as access to your storage, contacts, and location. Once the installation is complete, you can find the app icon on your home screen or app drawer.

    -

    -

    How to use auto report facebook apk

    -

    Step 1: Launch the app and log in with your Facebook account

    -

    The first step to use auto report facebook apk is to launch the app and log in with your Facebook account. To do this, you need to tap on the app icon and wait for it to load. Then, you need to enter your Facebook email or phone number and password, and tap on "Log In". You may also need to enter a verification code or confirm your identity if prompted by Facebook. Once you are logged in, you will see the main interface of the app, which consists of a menu bar at the top and a list of profiles or posts at the bottom.

    -

    Step 2: Select the target profile or post that you want to report

    -

    The second step to use auto report facebook apk is to select the target profile or post that you want to report. To do this, you need to tap on the menu bar at the top and choose one of the options: "Report Profile" or "Report Post". Then, you need to enter the URL or ID of the profile or post that you want to report in the text box, and tap on "Search". You will see a preview of the profile or post below, along with some information such as name, date, and content. You can also tap on "Load More" to see more profiles or posts that match your search criteria.

    -

    Step 3: Choose the reason for reporting and submit

    -

    The third step to use auto report facebook apk is to choose the reason for reporting and submit. To do this, you need to tap on the profile or post that you want to report from the list below, and then tap on "Report". You will see a pop-up window with a list of reasons for reporting, such as spam, fake account, hate speech, violence, nudity, or other. You can select one or more reasons that apply to your case, and then tap on "Submit". You will see a confirmation message that your report has been sent successfully. You can repeat this process for as many profiles or posts as you want.

    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have explained what auto report facebook apk is and how to use it. Auto report facebook apk is an Android application that allows you to automatically report any Facebook profile or post that you want. It can help you save time and energy by reporting multiple profiles or posts at once, increase the chances of getting them removed or banned, and protect yourself and others from harmful or offensive content on Facebook. However, it also has some drawbacks, such as violating Facebook's terms of service, risking losing your Facebook account, and reporting innocent profiles or posts.

    -

    Call to action and disclaimer

    -

    If you want to try auto report facebook apk for yourself, you can download it from GitHub and follow the steps that we have outlined above. However, we advise you to use it with caution and responsibility, as we are not responsible for any consequences that may arise from using it. We also recommend that you respect Facebook's community standards and only report profiles or posts that truly violate them. Remember that reporting is a serious matter and should not be abused for personal vendetta or malicious intent.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Reason for reporting | Description |
| --- | --- |
| Spam | The profile or post is unsolicited, repetitive, or irrelevant. |
| Fake account | The profile is not representing a real person or entity. |
| Hate speech | The profile or post is attacking or discriminating against a group or individual based on their race, ethnicity, religion, gender, sexual orientation, disability, or other characteristic. |
| Violence | The profile or post is promoting or showing physical harm, threats, or cruelty to oneself or others. |
| Nudity | The profile or post is displaying or soliciting sexual or explicit content that violates Facebook's policies. |
| Other | The profile or post is violating Facebook's community standards in some other way. |
    -

The table above summarizes the different reasons you can choose when reporting a profile or post on Facebook.

    -

    FAQs

    -

    What is the difference between auto report facebook apk and the report feature on Facebook?

    -

    The report feature on Facebook is the official way to report a profile or post that violates Facebook's community standards. You can access it by clicking on the three dots icon on the top right corner of any profile or post, and then selecting "Report". You can then choose the reason for reporting and follow the instructions. The report feature on Facebook allows you to report one profile or post at a time, and it may take some time for Facebook to review and act on your report.

    -

    Auto report facebook apk is an unofficial app that allows you to automatically report multiple profiles or posts at once. You can download it from GitHub and install it on your Android device. You can then log in with your Facebook account and enter the URL or ID of the profile or post that you want to report. You can then choose the reason for reporting and submit. Auto report facebook apk sends multiple reports to Facebook's servers, with the aim of getting the profile or post removed or banned faster.

    -

    Is auto report facebook apk safe to use?

    -

    Auto report facebook apk is not a safe app to use, as it may pose some risks to your device and your Facebook account. Some of the risks are:

    -
      -
    • It may contain viruses or malware that can harm your device or steal your data.
    • -
    • It may not be secure or trustworthy, as it requires you to log in with your Facebook credentials on a third-party app that may not protect your privacy.
    • -
    • It may violate Facebook's terms of service, as it manipulates their system and abuses their report feature.
    • -
    • It may result in your Facebook account being suspended or banned, as Facebook may detect your abnormal activity and flag you as a spammer or a violator.
    • -
    • It may report innocent profiles or posts, as it may not be accurate or responsible in selecting the target profile or post.
    • -
    -

    Therefore, we advise you to use auto report facebook apk with caution and responsibility, and at your own risk. We also recommend that you use the official report feature on Facebook instead, as it is safer and more reliable.

    -

    How can I avoid being reported by auto report facebook apk?

    -

    The best way to avoid being reported by auto report facebook apk is to follow Facebook's community standards and not post anything that violates them. Some of the things that you should avoid posting are:

    -
      -
    • Fake or impersonating profiles that try to scam or harass others.
    • -
    • Spammy or malicious posts that spread false information or harmful links.
    • -
    • Hateful or abusive posts that target others based on their identity, beliefs, or opinions.
    • -
    • Violent or graphic posts that show disturbing images or videos.
    • -
    • Nude or sexual posts that violate others' privacy or consent.
    • -
    -

    If you follow these guidelines, you will not only avoid being reported by auto report facebook apk, but also create a positive and respectful environment on Facebook for yourself and others.

    -

    How can I report a profile or post that is using auto report facebook apk?

    -

    If you suspect that a profile or post is using auto report facebook apk to report others unfairly or maliciously, you can report them to Facebook using the official report feature. To do this, you need to click on the three dots icon on the top right corner of the profile or post, and then select "Report". You can then choose the reason for reporting, such as "It's spam" or "It's abusive or harmful". You can also provide more details or feedback to Facebook, such as "This profile or post is using auto report facebook apk to report others". Facebook will then review your report and take appropriate action.

    -

    What are some alternatives to auto report facebook apk?

    -

    If you are looking for some alternatives to auto report facebook apk, you can try some of these options:

    -
      -
    • Use the official report feature on Facebook, as it is safer and more reliable.
    • -
    • Use the block or unfriend feature on Facebook, as it will prevent you from seeing or interacting with the profile or post that you don't like.
    • -
    • Use the hide or snooze feature on Facebook, as it will reduce the visibility or frequency of the profile or post that you don't want to see.
    • -
    • Use the mute or unfollow feature on Facebook, as it will stop the notifications or updates from the profile or post that you are not interested in.
    • -
    • Use the feedback or rating feature on Facebook, as it will help Facebook improve their content quality and relevance.
    • -
    -

    These options will help you manage your Facebook experience better, without resorting to auto report facebook apk.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md b/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md deleted file mode 100644 index b6023506142d59c9cb4b7c81474ae560f1dbc47e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Create Convert and Edit PDF Files with PrimoPDF - A Free and Reliable PDF Creator.md +++ /dev/null @@ -1,92 +0,0 @@ - -

    How to Download PrimoPDF: A Free PDF Converter and Creator

    -

    If you are looking for a free and easy way to convert or create PDF files from any Windows application, you might want to try PrimoPDF. PrimoPDF is a free tool provided by Nitro Software, Inc that offers high-quality conversion to PDF, comprising a user-friendly interface that enables printing to PDF from virtually any Windows application.

    -

    In this article, we will show you how to download PrimoPDF, how to use it to convert and create PDF files, and some tips and tricks for using it effectively. We will also answer some frequently asked questions about PrimoPDF.

    -

    download primopdf


    Downloadhttps://jinyurl.com/2uNMvq



    -

    Benefits of PrimoPDF

    -

    PrimoPDF has many benefits that make it a great choice for PDF productivity. Here are some of the main benefits of PrimoPDF:

    -
      -
    • High-quality conversion: PrimoPDF can convert any type of file to PDF with high fidelity and accuracy. You can choose from four output quality settings: Screen, eBook, Print, and Prepress. You can also customize the PDF settings to suit your needs.
    • -
• User-friendly interface: PrimoPDF has a simple and intuitive interface that allows you to print to PDF from any Windows application. You just need to select PrimoPDF as the printer and click on OK. You can also drag and drop files to the PrimoPDF desktop icon to convert them to PDF. Because it behaves like a normal printer, you can even drive it from a script (see the sketch after this list).
    • -
    • Security features: PrimoPDF can protect your PDF files with 128-bit encryption and password protection. You can also add watermarks and stamps to your PDF files to prevent unauthorized copying or editing.
    • -
    • Free and easy to use: PrimoPDF is completely free and does not require any registration or subscription. It is also easy to install and use, with no ads or pop-ups.
    • -
    -
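As a quick illustration of the point about printing from any application: because PrimoPDF installs as a regular Windows printer, a script can drive it too. Below is a minimal sketch in Python using the pywin32 package and the Windows shell "printto" verb; it assumes the printer is named "PrimoPDF" (the default) and that the file type has a registered print handler. The file path is a hypothetical example.

```python
# Minimal sketch: send a document to the PrimoPDF virtual printer.
# Assumes: pip install pywin32, and a printer named "PrimoPDF".
import win32api

def print_to_primopdf(path, printer_name="PrimoPDF"):
    # "printto" asks the file's default application to print it to the
    # named printer; PrimoPDF then pops up its save-as-PDF dialog.
    result = win32api.ShellExecute(
        0,                    # no parent window
        "printto",            # shell verb
        path,                 # document to print
        f'"{printer_name}"',  # target printer, quoted for the shell
        ".",                  # working directory
        0,                    # don't show a window
    )
    if result <= 32:          # ShellExecute returns > 32 on success
        raise RuntimeError(f"printto failed with code {result}")

print_to_primopdf(r"C:\docs\example.txt")  # hypothetical file path
```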

    Steps to Download PrimoPDF

    -

    To download PrimoPDF, you need to follow these steps:

    -
      -
    1. Visit the official website of PrimoPDF. You will see a page like this:
-
    -

    PrimoPDF homepage

    -
      -
2. Click on the "Download Now" button. You will be redirected to a page where you can choose between two options: "Premium Upgrade" or "Free Download".
-
    -

    PrimoPDF download options

    -

    -
      -
3. If you want to upgrade to Nitro Pro, a more advanced PDF application that offers more features and functions, you can click on the "Premium Upgrade" button. You will be able to enjoy a free trial of Nitro Pro for 14 days before deciding whether to purchase it or not.
-
4. If you want to download PrimoPDF for free, you can click on the "Free Download" button. You will see a dialog box like this:
-
    -

    PrimoPDF save dialog box

    -
      -
5. Choose a location on your computer where you want to save the PrimoPDF installer file and click on "Save". The file name is "PrimoSetup.exe" and the file size is about 7 MB.
-
6. Once the download is complete, locate the PrimoPDF installer file on your computer and double-click on it. You will see a window like this:
-
    -

    PrimoPDF install window

    -
      -
7. Follow the instructions to install PrimoPDF on your computer. You will need to accept the license agreement, choose the installation folder, and select the components you want to install. You can also opt out of installing any additional software or toolbars that may be offered by PrimoPDF.
-
8. Once the installation is complete, you will see a window like this:
-
    -

    PrimoPDF finish window

    -
      -
9. Click on "Finish" to exit the installer. You have successfully downloaded and installed PrimoPDF on your computer.
-

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md b/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md deleted file mode 100644 index fdf6a5a44ac9c44c7e67b569041a1453980a144b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Kaash Paige Love Songs MP3 and Listen Offline.md +++ /dev/null @@ -1,105 +0,0 @@ - -

    Download Kaash Paige Love Songs MP3: How to Enjoy the Best of R&B

    -

    If you are a fan of R&B music, you have probably heard of Kaash Paige, the rising star who has captivated millions of listeners with her soulful voice and relatable lyrics. Kaash Paige is known for her love songs, which express her feelings and experiences with romance, heartbreak, and self-love. In this article, we will tell you more about who Kaash Paige is, why you should listen to her love songs, and how to download them in mp3 format for free. We will also give you some tips on how to enjoy her music on different devices and create your own playlists and mixtapes.

    -

    Who is Kaash Paige and why you should listen to her love songs

    -

    Kaash Paige is a 20-year-old singer and songwriter from Dallas, Texas, who started making music when she was 14. She rose to fame in 2018 with her viral hit "Love Songs", which sampled Brent Faiyaz's "Poison". Since then, she has released two EPs, Parked Car Convos and Teenage Fever, and collaborated with artists like Don Toliver, Isaiah Rashad, K Camp, and 6LACK. She is currently signed to Def Jam Recordings and is working on her debut album.

    -

    download kaash paige love songs mp3


    DOWNLOADhttps://jinyurl.com/2uNOOz



    -

    Kaash Paige's background and musical influences

    -

    Kaash Paige was born as Kaashara Bostic on January 8, 2001, in Dallas, Texas. She grew up in a musical family, as her father was a DJ and her mother was a singer. She was exposed to various genres of music, such as hip-hop, soul, jazz, rock, and gospel. She cites Lauryn Hill, Erykah Badu, Frank Ocean, Drake, Jhené Aiko, SZA, and Brent Faiyaz as some of her main influences. She also draws inspiration from anime, movies, books, and nature.

    -

    Kaash Paige's style and themes

    -

    Kaash Paige's style can be described as a blend of R&B, soul, hip-hop, and alternative. She has a smooth and soothing voice that can switch from singing to rapping effortlessly. She writes her own lyrics, which are honest, vulnerable, and poetic. She often sings about love, relationships, emotions, self-discovery, and growing up. Some of her recurring themes are nostalgia, loneliness, intimacy, infatuation, and empowerment.

    -

    Kaash Paige's most popular love songs

    -

    Kaash Paige has released many love songs that have resonated with her fans and critics alike. Some of her most popular ones are:

    -
      -
    • "Love Songs": This is the song that put Kaash Paige on the map. It is a catchy and melodic tune that samples Brent Faiyaz's "Poison". It talks about missing someone who used to make you feel special and sing love songs with you.
    • -
    • "64'": This is a laid-back and nostalgic song that features rapper K Camp. It reminisces about riding in a 1964 Chevrolet Impala with a lover and enjoying the simple moments.
    • -
    • "Break Up Song": This is a sad and emotional song that deals with. the end of a relationship and the pain of letting go. It features a sample of Drake's "Doing It Wrong".
    • -
    • "Soul Ties": This is a smooth and sensual song that explores the concept of soul ties, which are the emotional and spiritual bonds that form between people who have sex. It features rapper 6LACK, who adds his own perspective on the topic.
    • -
    • "London": This is a dreamy and romantic song that expresses the desire to travel to London with a lover and escape from reality. It has a lo-fi and atmospheric vibe that matches the mood of the lyrics.
    • -
    -

    How to download Kaash Paige love songs mp3 for free

    -

    If you want to enjoy Kaash Paige's love songs offline, you might want to download them in mp3 format. Mp3 is a popular and widely supported audio file format that can be played on various devices and platforms. Mp3 files are also smaller in size than other formats, which means they take up less storage space and bandwidth. Plus, mp3 files can be easily edited, converted, and transferred.
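As a small illustration of how easily mp3 files can be created and converted, here is a minimal sketch using the Python pydub library (pip install pydub), which relies on FFmpeg being installed; the file names are hypothetical examples.

```python
# Minimal sketch: convert a WAV recording to mp3 with pydub.
# Assumes: pip install pydub, plus FFmpeg available on your PATH.
from pydub import AudioSegment

song = AudioSegment.from_wav("recording.wav")   # hypothetical input file
song.export(
    "recording.mp3",       # hypothetical output file
    format="mp3",
    bitrate="192k",        # much smaller than WAV, still good quality
)
```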

    -

    The benefits of downloading mp3 files

    -

    Downloading mp3 files has many benefits, such as:

    -
      -
    • You can listen to your favorite songs anytime and anywhere, without relying on an internet connection or streaming service.
    • -
    • You can save money on data charges and subscription fees, as you don't have to stream music online.
    • -
    • You can create your own music library and organize it according to your preferences.
    • -
• You can customize your songs by adding metadata, artwork, lyrics, and tags (a small tagging sketch follows this list).
    • -
    • You can share your songs with your friends and family via email, Bluetooth, or social media.
    • -
    -
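As a quick example of the tagging point above, here is a minimal sketch using the Python mutagen library (pip install mutagen), a common open-source tagger; the file name and tag values are just illustrative.

```python
# Minimal sketch: set basic ID3 tags on an mp3 file with mutagen.
# Assumes: pip install mutagen. The file name below is hypothetical.
from mutagen.mp3 import MP3
from mutagen.easyid3 import EasyID3

audio = MP3("love_songs.mp3", ID3=EasyID3)
if audio.tags is None:
    audio.add_tags()            # the file had no ID3 header yet
audio["title"] = "Love Songs"
audio["artist"] = "Kaash Paige"
audio["album"] = "Parked Car Convos"
audio.save()
```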

    The best websites and apps to download Kaash Paige love songs mp3

    -

    There are many websites and apps that allow you to download Kaash Paige love songs mp3 for free. However, not all of them are safe, legal, or reliable. Some of them might contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them might also violate the copyright laws and infringe on the rights of the artists and labels. Therefore, you should be careful and choose only the reputable and trustworthy sources. Here are some of the best ones:

    -

    YouTube

    -

    YouTube is the most popular video-sharing platform in the world, where you can find millions of music videos, including Kaash Paige's love songs. You can watch them online or download them in mp3 format using a YouTube to mp3 converter. There are many online converters available, such as YTMP3, Y2Mate, 4K Video Downloader, etc. You just need to copy the URL of the video you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.

    -


    -

    Spotify

    -

    Spotify is one of the most popular music streaming services in the world, where you can listen to millions of songs, including Kaash Paige's love songs. You can access Spotify for free with ads or pay for a premium subscription that offers more features and benefits. One of them is the ability to download songs for offline listening. However, this feature is only available for premium users and only works within the Spotify app. You cannot transfer or play the downloaded songs on other devices or apps. If you want to download Spotify songs in mp3 format, you will need a Spotify to mp3 converter. There are some online converters available, such as SpotiFlyer, AudKit Spotify Music Converter, TuneFab Spotify Music Converter, etc. You just need to copy the URL of the song or playlist you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.

    SoundCloud

    -

    SoundCloud is another popular music streaming service in the world, where you can discover and listen to millions of songs, including Kaash Paige's love songs. You can access SoundCloud for free with ads or pay for a premium subscription that offers more features and benefits. One of them is the ability to download songs for offline listening. However, this feature is only available for some songs and only works within the SoundCloud app. You cannot transfer or play the downloaded songs on other devices or apps. If you want to download SoundCloud songs in mp3 format, you will need a SoundCloud to mp3 converter. There are some online converters available, such as SCDL, SoundCloud Downloader, KlickAud, etc. You just need to copy the URL of the song you want to download, paste it into the converter's website or app, choose the mp3 format and quality, and click on the download button. The process is fast and easy, but you should be aware of the potential risks of downloading from unverified sources.

    -

    Audiomack

    -

    Audiomack is a music streaming and discovery platform where artists upload their music and fans listen for free, and it hosts many songs by Kaash Paige alongside other genres and artists. It is free with ads, or you can pay for a premium subscription that, among other benefits, enables offline listening. However, offline downloads are only available for some songs and play only inside the Audiomack app; you cannot transfer them to other devices or apps. To get Audiomack songs as mp3 files, you need an Audiomack-to-mp3 converter such as Audiomack Downloader, MP3FY, or MP3Juices. Copy the URL of the song, paste it into the converter's website or app, choose the mp3 format and quality, and click the download button. As always, be careful when downloading from unverified sources.


    How to enjoy Kaash Paige love songs mp3 on different devices


    Once you have downloaded Kaash Paige's love songs in mp3 format from any of the sources above, you can enjoy them on different devices, such as your phone, tablet, computer, or laptop. Depending on the device and the app you use, you may need to transfer or play the files differently. Here are some tips on how to do that:


    How to transfer mp3 files to your phone or tablet


    If you have downloaded mp3 files on your computer or laptop, you can transfer them to your phone or tablet using a USB cable or a wireless method. Here are the steps for each method:

    • USB cable: Connect your phone or tablet to your computer using a USB cable, make sure the device is unlocked, and select the file-transfer option on its screen. On the computer, open the folder where you saved the mp3 files and drag and drop them into your device's music folder (or script the copy, as sketched after this list). Once the transfer is complete, disconnect the device and open the music app of your choice.
    • Wireless method: Apps such as SHAREit, Xender, and Zapya transfer files wirelessly between devices over Wi-Fi or Bluetooth. Install the app on both devices, then follow its prompts to pair them and send the files.
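
    If you prefer to script the copy instead of dragging and dropping, a small Python sketch is below. It assumes the phone is mounted as a normal drive in file-transfer mode; both paths are hypothetical examples you would adjust:

    ```python
    import shutil
    from pathlib import Path

    # Hypothetical paths: adjust to where your downloads live and where the phone mounts.
    SOURCE_DIR = Path.home() / "Downloads"
    PHONE_MUSIC_DIR = Path("E:/Music")  # e.g. /Volumes/Phone/Music on macOS

    PHONE_MUSIC_DIR.mkdir(parents=True, exist_ok=True)

    for mp3_file in SOURCE_DIR.glob("*.mp3"):
        # copy2 keeps timestamps and metadata along with the file contents
        shutil.copy2(mp3_file, PHONE_MUSIC_DIR / mp3_file.name)
        print(f"Copied {mp3_file.name}")
    ```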

    How to play mp3 files on your computer or laptop


    If you have downloaded mp3 files on your computer or laptop, you can play them with any media player that supports the mp3 format, such as Windows Media Player, VLC Media Player, or iTunes. Open the player of your choice and browse to the folder where you saved the files. You can also create playlists and edit metadata within the media player.
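
    You can also hand an mp3 straight to the system's default player from a script. The sketch below uses only standard-library calls; the filename is a placeholder:

    ```python
    import os
    import subprocess
    import sys
    from pathlib import Path

    def play_with_default_player(path: str) -> None:
        """Open the mp3 with whatever media player the OS associates with it."""
        file = Path(path)
        if sys.platform.startswith("win"):
            os.startfile(str(file))  # Windows: opens with the associated app
        elif sys.platform == "darwin":
            subprocess.run(["open", str(file)], check=True)  # macOS
        else:
            subprocess.run(["xdg-open", str(file)], check=True)  # most Linux desktops

    play_with_default_player("love-songs.mp3")  # placeholder filename
    ```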


    How to create playlists and mixtapes with Kaash Paige love songs mp3


    To build playlists and mixtapes from Kaash Paige's love songs, use any music app that supports custom playlists, such as Spotify, SoundCloud, or Audiomack. Open the app, choose the option to create a new playlist or mixtape, then add the songs from your device's storage or from the app's library. Most apps also let you rearrange, rename, delete, or share your playlists and mixtapes.
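
    If you keep the mp3 files on disk, you can also build a playlist outside any app: most players, including VLC and Windows Media Player, understand the plain-text .m3u format. Here is a minimal sketch with hypothetical folder and playlist names:

    ```python
    from pathlib import Path

    MUSIC_DIR = Path("Music/KaashPaige")   # hypothetical folder of downloaded mp3s
    PLAYLIST_FILE = Path("love-songs.m3u")

    tracks = sorted(MUSIC_DIR.glob("*.mp3"))

    with open(PLAYLIST_FILE, "w", encoding="utf-8") as f:
        f.write("#EXTM3U\n")                  # extended m3u header
        for track in tracks:
            f.write(f"{track.as_posix()}\n")  # one file path per line

    print(f"Wrote {len(tracks)} tracks to {PLAYLIST_FILE}")
    ```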

    Conclusion and FAQs

    In conclusion, Kaash Paige is a talented and promising R&B singer with plenty of love songs to enjoy. You can download them in mp3 format for free from sources such as YouTube, Spotify, SoundCloud, and Audiomack, transfer and play them on your phone, tablet, computer, or laptop, and build your own playlists and mixtapes to share with friends and family. Her love songs suit any mood or occasion, whether you are feeling happy, sad, romantic, or nostalgic, so if you are looking for quality R&B, they are well worth checking out.


    Here are some FAQs that you might have about Kaash Paige and her love songs:

    • Q: What does Kaash Paige mean?
    • A: Kaash Paige is a stage name that stands for Kill All Arrogance Stop Hatred. Paige is her middle name.
    • Q: What is Kaash Paige's real name?
    • A: Kaash Paige's real name is Kaashara Bostic.
    • Q: How old is Kaash Paige?
    • A: Kaash Paige was born on January 8, 2001, which made her 20 years old at the time of writing.
    • Q: Where is Kaash Paige from?
    • A: Kaash Paige is from Dallas, Texas.
    • Q: Is Kaash Paige single?
    • A: Kaash Paige has not publicly confirmed her relationship status. She has been linked to rapper Don Toliver in the past, but the two have never confirmed a romance.
    \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py b/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py deleted file mode 100644 index 4023a6d4086acba9bc88e079f625194d324d7c9e..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/util/visualizer.py +++ /dev/null @@ -1,227 +0,0 @@ -"""This script defines the visualizer for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import os -import sys -import ntpath -import time -from . import util, html -from subprocess import Popen, PIPE -from torch.utils.tensorboard import SummaryWriter - -def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256): - """Save images to the disk. - - Parameters: - webpage (the HTML class) -- the HTML webpage class that stores these imaegs (see html.py for more details) - visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs - image_path (str) -- the string is used to create image paths - aspect_ratio (float) -- the aspect ratio of saved images - width (int) -- the images will be resized to width x width - - This function will save images stored in 'visuals' to the HTML file specified by 'webpage'. - """ - image_dir = webpage.get_image_dir() - short_path = ntpath.basename(image_path[0]) - name = os.path.splitext(short_path)[0] - - webpage.add_header(name) - ims, txts, links = [], [], [] - - for label, im_data in visuals.items(): - im = util.tensor2im(im_data) - image_name = '%s/%s.png' % (label, name) - os.makedirs(os.path.join(image_dir, label), exist_ok=True) - save_path = os.path.join(image_dir, image_name) - util.save_image(im, save_path, aspect_ratio=aspect_ratio) - ims.append(image_name) - txts.append(label) - links.append(image_name) - webpage.add_images(ims, txts, links, width=width) - - -class Visualizer(): - """This class includes several functions that can display/save images and print/save logging information. - - It uses a Python library tensprboardX for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images. - """ - - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saveing HTML filters - Step 4: create a logging file to store training losses - """ - self.opt = opt # cache the option - self.use_html = opt.isTrain and not opt.no_html - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name)) - self.win_size = opt.display_winsize - self.name = opt.name - self.saved = False - if self.use_html: # create an HTML object at /web/; images will be saved under /web/images/ - self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web') - self.img_dir = os.path.join(self.web_dir, 'images') - print('create web directory %s...' 
% self.web_dir) - util.mkdirs([self.web_dir, self.img_dir]) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - def reset(self): - """Reset the self.saved status""" - self.saved = False - - - def display_current_results(self, visuals, total_iters, epoch, save_result): - """Display current results on tensorboad; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - save_result (bool) - - if save the current results to an HTML file - """ - for label, image in visuals.items(): - self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC') - - if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved. - self.saved = True - # save images to the disk - for label, image in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims, txts, links = [], [], [] - - for label, image_numpy in visuals.items(): - image_numpy = util.tensor2im(image) - img_path = 'epoch%.3d_%s.png' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - webpage.add_images(ims, txts, links, width=self.win_size) - webpage.save() - - def plot_current_losses(self, total_iters, losses): - # G_loss_collection = {} - # D_loss_collection = {} - # for name, value in losses.items(): - # if 'G' in name or 'NCE' in name or 'idt' in name: - # G_loss_collection[name] = value - # else: - # D_loss_collection[name] = value - # self.writer.add_scalars('G_collec', G_loss_collection, total_iters) - # self.writer.add_scalars('D_collec', D_loss_collection, total_iters) - for name, value in losses.items(): - self.writer.add_scalar(name, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message - - -class MyVisualizer: - def __init__(self, opt): - """Initialize the Visualizer class - - Parameters: - opt -- stores all the experiment flags; needs to be a subclass of BaseOptions - Step 1: Cache the training/test options - Step 2: create a tensorboard writer - Step 3: create an HTML object for saveing HTML filters - Step 4: create a 
logging file to store training losses - """ - self.opt = opt # cache the optio - self.name = opt.name - self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results') - - if opt.phase != 'test': - self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs')) - # create a logging file to store training losses - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - - def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None, - add_image=True): - """Display current results on tensorboad; save current results to an HTML file. - - Parameters: - visuals (OrderedDict) - - dictionary of images to display or save - total_iters (int) -- total iterations - epoch (int) - - the current epoch - dataset (str) - - 'train' or 'val' or 'test' - """ - # if (not add_image) and (not save_results): return - - for label, image in visuals.items(): - for i in range(image.shape[0]): - image_numpy = util.tensor2im(image[i]) - if add_image: - self.writer.add_image(label + '%s_%02d'%(dataset, i + count), - image_numpy, total_iters, dataformats='HWC') - - if save_results: - save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d'%(epoch, total_iters)) - if not os.path.isdir(save_path): - os.makedirs(save_path) - - if name is not None: - img_path = os.path.join(save_path, '%s.png' % name) - else: - img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count)) - util.save_image(image_numpy, img_path) - - - def plot_current_losses(self, total_iters, losses, dataset='train'): - for name, value in losses.items(): - self.writer.add_scalar(name + '/%s'%dataset, value, total_iters) - - # losses: same format as |losses| of plot_current_losses - def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'): - """print current losses on console; also save the losses to the disk - - Parameters: - epoch (int) -- current epoch - iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch) - losses (OrderedDict) -- training losses stored in the format of (name, float) pairs - t_comp (float) -- computational time per data point (normalized by batch_size) - t_data (float) -- data loading time per data point (normalized by batch_size) - """ - message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % ( - dataset, epoch, iters, t_comp, t_data) - for k, v in losses.items(): - message += '%s: %.3f ' % (k, v) - - print(message) # print the message - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) # save the message diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py deleted file mode 100644 index 54bd0a485d7e8dbeaaac91d049f63ebd136cb074..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/t2m_trans.py +++ /dev/null @@ -1,211 +0,0 @@ -import math -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.distributions import Categorical -import models.pos_encoding as pos_encoding - -class Text2Motion_Transformer(nn.Module): - - def __init__(self, - num_vq=1024, - embed_dim=512, - clip_dim=512, - block_size=16, - num_layers=2, - n_head=8, - drop_out_rate=0.1, - fc_rate=4): - 
super().__init__() - self.trans_base = CrossCondTransBase(num_vq, embed_dim, clip_dim, block_size, num_layers, n_head, drop_out_rate, fc_rate) - self.trans_head = CrossCondTransHead(num_vq, embed_dim, block_size, num_layers, n_head, drop_out_rate, fc_rate) - self.block_size = block_size - self.num_vq = num_vq - - def get_block_size(self): - return self.block_size - - def forward(self, idxs, clip_feature): - feat = self.trans_base(idxs, clip_feature) - logits = self.trans_head(feat) - return logits - - def sample(self, clip_feature, if_categorial=False): - for k in range(self.block_size): - if k == 0: - x = [] - else: - x = xs - logits = self.forward(x, clip_feature) - logits = logits[:, -1, :] - probs = F.softmax(logits, dim=-1) - if if_categorial: - dist = Categorical(probs) - idx = dist.sample() - if idx == self.num_vq: - break - idx = idx.unsqueeze(-1) - else: - _, idx = torch.topk(probs, k=1, dim=-1) - if idx[0] == self.num_vq: - break - # append to the sequence and continue - if k == 0: - xs = idx - else: - xs = torch.cat((xs, idx), dim=1) - - if k == self.block_size - 1: - return xs[:, :-1] - return xs - -class CausalCrossConditionalSelfAttention(nn.Module): - - def __init__(self, embed_dim=512, block_size=16, n_head=8, drop_out_rate=0.1): - super().__init__() - assert embed_dim % 8 == 0 - # key, query, value projections for all heads - self.key = nn.Linear(embed_dim, embed_dim) - self.query = nn.Linear(embed_dim, embed_dim) - self.value = nn.Linear(embed_dim, embed_dim) - - self.attn_drop = nn.Dropout(drop_out_rate) - self.resid_drop = nn.Dropout(drop_out_rate) - - self.proj = nn.Linear(embed_dim, embed_dim) - # causal mask to ensure that attention is only applied to the left in the input sequence - self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)).view(1, 1, block_size, block_size)) - self.n_head = n_head - - def forward(self, x): - B, T, C = x.size() - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) - att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))) - att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf')) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs) - y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_drop(self.proj(y)) - return y - -class Block(nn.Module): - - def __init__(self, embed_dim=512, block_size=16, n_head=8, drop_out_rate=0.1, fc_rate=4): - super().__init__() - self.ln1 = nn.LayerNorm(embed_dim) - self.ln2 = nn.LayerNorm(embed_dim) - self.attn = CausalCrossConditionalSelfAttention(embed_dim, block_size, n_head, drop_out_rate) - self.mlp = nn.Sequential( - nn.Linear(embed_dim, fc_rate * embed_dim), - nn.GELU(), - nn.Linear(fc_rate * embed_dim, embed_dim), - nn.Dropout(drop_out_rate), - ) - - def forward(self, x): - x = x + self.attn(self.ln1(x)) - x = x + self.mlp(self.ln2(x)) - return x - -class CrossCondTransBase(nn.Module): - - def __init__(self, - num_vq=1024, - embed_dim=512, - clip_dim=512, - block_size=16, - num_layers=2, - n_head=8, - 
drop_out_rate=0.1, - fc_rate=4): - super().__init__() - self.tok_emb = nn.Embedding(num_vq + 2, embed_dim) - self.cond_emb = nn.Linear(clip_dim, embed_dim) - self.pos_embedding = nn.Embedding(block_size, embed_dim) - self.drop = nn.Dropout(drop_out_rate) - # transformer block - self.blocks = nn.Sequential(*[Block(embed_dim, block_size, n_head, drop_out_rate, fc_rate) for _ in range(num_layers)]) - self.pos_embed = pos_encoding.PositionEmbedding(block_size, embed_dim, 0.0, False) - - self.block_size = block_size - - self.apply(self._init_weights) - - def get_block_size(self): - return self.block_size - - def _init_weights(self, module): - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=0.02) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def forward(self, idx, clip_feature): - if len(idx) == 0: - token_embeddings = self.cond_emb(clip_feature).unsqueeze(1) - else: - b, t = idx.size() - assert t <= self.block_size, "Cannot forward, model block size is exhausted." - # forward the Trans model - token_embeddings = self.tok_emb(idx) - token_embeddings = torch.cat([self.cond_emb(clip_feature).unsqueeze(1), token_embeddings], dim=1) - - x = self.pos_embed(token_embeddings) - x = self.blocks(x) - - return x - - -class CrossCondTransHead(nn.Module): - - def __init__(self, - num_vq=1024, - embed_dim=512, - block_size=16, - num_layers=2, - n_head=8, - drop_out_rate=0.1, - fc_rate=4): - super().__init__() - - self.blocks = nn.Sequential(*[Block(embed_dim, block_size, n_head, drop_out_rate, fc_rate) for _ in range(num_layers)]) - self.ln_f = nn.LayerNorm(embed_dim) - self.head = nn.Linear(embed_dim, num_vq + 1, bias=False) - self.block_size = block_size - - self.apply(self._init_weights) - - def get_block_size(self): - return self.block_size - - def _init_weights(self, module): - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=0.02) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def forward(self, x): - x = self.blocks(x) - x = self.ln_f(x) - logits = self.head(x) - return logits - - - - - - diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile b/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile deleted file mode 100644 index b1064a04362a0c4372fae351f99ed3bd9f82ff92..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/docs/Makefile +++ /dev/null @@ -1,23 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -SOURCEDIR = source -BUILDDIR = build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -clean: - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - rm -rf ./source/generated/* - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
-%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py deleted file mode 100644 index dbb190dd6cfb938071de77ffecda560c9ddecc85..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/trainer.py +++ /dev/null @@ -1,561 +0,0 @@ -import random -import subprocess -import traceback -from datetime import datetime - -from torch.cuda.amp import GradScaler, autocast -import numpy as np -import torch.optim -import torch.utils.data -import copy -import logging -import os -import re -import sys -import torch -import torch.distributed as dist -import torch.multiprocessing as mp -import tqdm - -from text_to_speech.utils.commons.ckpt_utils import get_last_checkpoint, get_all_ckpts -from text_to_speech.utils.commons.ddp_utils import DDP -from text_to_speech.utils.commons.hparams import hparams -from text_to_speech.utils.commons.tensor_utils import move_to_cuda -from text_to_speech.utils.os_utils import remove_file - - -class Tee(object): - def __init__(self, name, mode): - self.file = open(name, mode) - self.stdout = sys.stdout - sys.stdout = self - - def __del__(self): - sys.stdout = self.stdout - self.file.close() - - def write(self, data): - self.file.write(data) - self.stdout.write(data) - - def flush(self): - self.file.flush() - - -class Trainer: - def __init__( - self, - work_dir, - default_save_path=None, - accumulate_grad_batches=1, - max_updates=160000, - print_nan_grads=False, - val_check_interval=2000, - num_sanity_val_steps=5, - amp=False, - # tb logger - log_save_interval=100, - tb_log_interval=10, - # checkpoint - monitor_key='val_loss', - monitor_mode='min', - num_ckpt_keep=5, - save_best=True, - resume_from_checkpoint=0, - seed=1234, - debug=False, - ): - os.makedirs(work_dir, exist_ok=True) - self.work_dir = work_dir - self.accumulate_grad_batches = accumulate_grad_batches - self.max_updates = max_updates - self.num_sanity_val_steps = num_sanity_val_steps - self.print_nan_grads = print_nan_grads - self.default_save_path = default_save_path - self.resume_from_checkpoint = resume_from_checkpoint if resume_from_checkpoint > 0 else None - self.seed = seed - self.debug = debug - # model and optm - self.task = None - self.optimizers = [] - - # trainer state - self.testing = False - self.global_step = 0 - self.current_epoch = 0 - self.total_batches = 0 - - # configure checkpoint - self.monitor_key = monitor_key - self.num_ckpt_keep = num_ckpt_keep - self.save_best = save_best - self.monitor_op = np.less if monitor_mode == 'min' else np.greater - self.best_val_results = np.Inf if monitor_mode == 'min' else -np.Inf - self.mode = 'min' - - # allow int, string and gpu list - self.all_gpu_ids = [ - int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != ''] - self.num_gpus = len(self.all_gpu_ids) - self.on_gpu = self.num_gpus > 0 - self.root_gpu = 0 - logging.info(f'GPU available: {torch.cuda.is_available()}, GPU used: {self.all_gpu_ids}') - self.use_ddp = self.num_gpus > 1 - self.proc_rank = 0 - # Tensorboard logging - self.log_save_interval = log_save_interval - self.val_check_interval = val_check_interval - self.tb_log_interval = tb_log_interval - self.amp = amp - self.amp_scalar = GradScaler() - - def test(self, task_cls): - self.testing = True - self.fit(task_cls) - - def fit(self, task_cls): - if len(self.all_gpu_ids) > 1: - mp.spawn(self.ddp_run, 
nprocs=self.num_gpus, args=(task_cls, copy.deepcopy(hparams))) - else: - self.task = task_cls() - self.task.trainer = self - self.run_single_process(self.task) - return 1 - - def ddp_run(self, gpu_idx, task_cls, hparams_): - hparams.update(hparams_) - self.proc_rank = gpu_idx - self.init_ddp_connection(self.proc_rank, self.num_gpus) - if dist.get_rank() != 0 and not self.debug: - sys.stdout = open(os.devnull, "w") - sys.stderr = open(os.devnull, "w") - task = task_cls() - task.trainer = self - torch.cuda.set_device(gpu_idx) - self.root_gpu = gpu_idx - self.task = task - self.run_single_process(task) - - def run_single_process(self, task): - """Sanity check a few things before starting actual training. - - :param task: - """ - # build model, optm and load checkpoint - if self.proc_rank == 0: - self.save_terminal_logs() - if not self.testing: - self.save_codes() - - model = task.build_model() - if model is not None: - task.model = model - checkpoint, _ = get_last_checkpoint(self.work_dir, self.resume_from_checkpoint) - if checkpoint is not None: - self.restore_weights(checkpoint) - elif self.on_gpu: - task.cuda(self.root_gpu) - if not self.testing: - self.optimizers = task.configure_optimizers() - self.fisrt_epoch = True - if checkpoint is not None: - self.restore_opt_state(checkpoint) - del checkpoint - # clear cache after restore - if self.on_gpu: - torch.cuda.empty_cache() - - if self.use_ddp: - self.task = self.configure_ddp(self.task) - dist.barrier() - - task_ref = self.get_task_ref() - task_ref.trainer = self - task_ref.testing = self.testing - # link up experiment object - if self.proc_rank == 0: - task_ref.build_tensorboard(save_dir=self.work_dir, name='tb_logs') - else: - os.makedirs('tmp', exist_ok=True) - task_ref.build_tensorboard(save_dir='tmp', name='tb_tmp') - self.logger = task_ref.logger - try: - if self.testing: - self.run_evaluation(test=True) - else: - self.train() - except KeyboardInterrupt as e: - traceback.print_exc() - task_ref.on_keyboard_interrupt() - - #################### - # valid and test - #################### - def run_evaluation(self, test=False): - eval_results = self.evaluate(self.task, test, tqdm_desc='Valid' if not test else 'test', - max_batches=hparams['eval_max_batches']) - if eval_results is not None and 'tb_log' in eval_results: - tb_log_output = eval_results['tb_log'] - self.log_metrics_to_tb(tb_log_output) - if self.proc_rank == 0 and not test: - self.save_checkpoint(epoch=self.current_epoch, logs=eval_results) - - def evaluate(self, task, test=False, tqdm_desc='Valid', max_batches=None): - if max_batches == -1: - max_batches = None - # enable eval mode - task.zero_grad() - task.eval() - torch.set_grad_enabled(False) - - task_ref = self.get_task_ref() - if test: - ret = task_ref.test_start() - if ret == 'EXIT': - return - else: - task_ref.validation_start() - outputs = [] - dataloader = task_ref.test_dataloader() if test else task_ref.val_dataloader() - pbar = tqdm.tqdm(dataloader, desc=tqdm_desc, total=max_batches, dynamic_ncols=True, unit='step', - disable=self.root_gpu > 0) - # give model a chance to do something with the outputs (and method defined) - for batch_idx, batch in enumerate(pbar): - if batch is None: # pragma: no cover - continue - # stop short when on fast_dev_run (sets max_batch=1) - if max_batches is not None and batch_idx >= max_batches: - break - - # make dataloader_idx arg in validation_step optional - if self.on_gpu: - batch = move_to_cuda(batch, self.root_gpu) - args = [batch, batch_idx] - if self.use_ddp: - output = 
task(*args) - else: - if test: - output = task_ref.test_step(*args) - else: - output = task_ref.validation_step(*args) - # track outputs for collation - outputs.append(output) - # give model a chance to do something with the outputs (and method defined) - if test: - eval_results = task_ref.test_end(outputs) - else: - eval_results = task_ref.validation_end(outputs) - # enable train mode again - task.train() - torch.set_grad_enabled(True) - return eval_results - - #################### - # train - #################### - def train(self): - task_ref = self.get_task_ref() - task_ref.on_train_start() - if self.num_sanity_val_steps > 0: - # run tiny validation (if validation defined) to make sure program won't crash during val - self.evaluate(self.task, False, 'Sanity Val', max_batches=self.num_sanity_val_steps) - # clear cache before training - if self.on_gpu: - torch.cuda.empty_cache() - dataloader = task_ref.train_dataloader() - epoch = self.current_epoch - # run all epochs - while True: - # set seed for distributed sampler (enables shuffling for each epoch) - if self.use_ddp and hasattr(dataloader.sampler, 'set_epoch'): - dataloader.sampler.set_epoch(epoch) - # update training progress in trainer and model - task_ref.current_epoch = epoch - self.current_epoch = epoch - # total batches includes multiple val checks - self.batch_loss_value = 0 # accumulated grads - # before epoch hook - task_ref.on_epoch_start() - - # run epoch - train_pbar = tqdm.tqdm(dataloader, initial=self.global_step, total=float('inf'), - dynamic_ncols=True, unit='step', disable=self.root_gpu > 0) - for batch_idx, batch in enumerate(train_pbar): - if self.global_step % self.val_check_interval == 0 and not self.fisrt_epoch: - self.run_evaluation() - pbar_metrics, tb_metrics = self.run_training_batch(batch_idx, batch) - train_pbar.set_postfix(**pbar_metrics) - self.fisrt_epoch = False - # when metrics should be logged - if (self.global_step + 1) % self.tb_log_interval == 0: - # logs user requested information to logger - self.log_metrics_to_tb(tb_metrics) - - self.global_step += 1 - task_ref.global_step = self.global_step - if self.global_step > self.max_updates: - print("| Training end..") - break - # epoch end hook - task_ref.on_epoch_end() - epoch += 1 - if self.global_step > self.max_updates: - break - task_ref.on_train_end() - - def run_training_batch(self, batch_idx, batch): - if batch is None: - return {} - all_progress_bar_metrics = [] - all_log_metrics = [] - task_ref = self.get_task_ref() - for opt_idx, optimizer in enumerate(self.optimizers): - if optimizer is None: - continue - # make sure only the gradients of the current optimizer's paramaters are calculated - # in the training step to prevent dangling gradients in multiple-optimizer setup. 
- if len(self.optimizers) > 1: - for param in task_ref.parameters(): - param.requires_grad = False - for group in optimizer.param_groups: - for param in group['params']: - param.requires_grad = True - - # forward pass - with autocast(enabled=self.amp): - if self.on_gpu: - batch = move_to_cuda(copy.copy(batch), self.root_gpu) - args = [batch, batch_idx, opt_idx] - if self.use_ddp: - output = self.task(*args) - else: - output = task_ref.training_step(*args) - loss = output['loss'] - if loss is None: - continue - progress_bar_metrics = output['progress_bar'] - log_metrics = output['tb_log'] - # accumulate loss - loss = loss / self.accumulate_grad_batches - - # backward pass - if loss.requires_grad: - if self.amp: - self.amp_scalar.scale(loss).backward() - else: - loss.backward() - - # track progress bar metrics - all_log_metrics.append(log_metrics) - all_progress_bar_metrics.append(progress_bar_metrics) - - if loss is None: - continue - - # nan grads - if self.print_nan_grads: - has_nan_grad = False - for name, param in task_ref.named_parameters(): - if (param.grad is not None) and torch.isnan(param.grad.float()).any(): - print("| NaN params: ", name, param, param.grad) - has_nan_grad = True - if has_nan_grad: - exit(0) - - # gradient update with accumulated gradients - if (self.global_step + 1) % self.accumulate_grad_batches == 0: - grad_norm_dict = task_ref.on_before_optimization(opt_idx) - if grad_norm_dict is not None: - all_log_metrics[-1].update(grad_norm_dict) - if self.amp: - self.amp_scalar.step(optimizer) - self.amp_scalar.update() - else: - optimizer.step() - optimizer.zero_grad() - task_ref.on_after_optimization(self.current_epoch, batch_idx, optimizer, opt_idx) - - # collapse all metrics into one dict - all_progress_bar_metrics = {k: v for d in all_progress_bar_metrics for k, v in d.items()} - all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()} - return all_progress_bar_metrics, all_log_metrics - - #################### - # load and save checkpoint - #################### - def restore_weights(self, checkpoint): - # load model state - task_ref = self.get_task_ref() - - for k, v in checkpoint['state_dict'].items(): - getattr(task_ref, k).load_state_dict(v) - - if self.on_gpu: - task_ref.cuda(self.root_gpu) - # load training state (affects trainer only) - self.best_val_results = checkpoint['checkpoint_callback_best'] - self.global_step = checkpoint['global_step'] - self.current_epoch = checkpoint['epoch'] - task_ref.global_step = self.global_step - - # wait for all models to restore weights - if self.use_ddp: - # wait for all processes to catch up - dist.barrier() - - def restore_opt_state(self, checkpoint): - if self.testing: - return - # restore the optimizers - optimizer_states = checkpoint['optimizer_states'] - for optimizer, opt_state in zip(self.optimizers, optimizer_states): - if optimizer is None: - return - try: - optimizer.load_state_dict(opt_state) - # move optimizer to GPU 1 weight at a time - if self.on_gpu: - for state in optimizer.state.values(): - for k, v in state.items(): - if isinstance(v, torch.Tensor): - state[k] = v.cuda(self.root_gpu) - except ValueError: - print("| WARMING: optimizer parameters not match !!!") - try: - if dist.is_initialized() and dist.get_rank() > 0: - return - except Exception as e: - print(e) - return - did_restore = True - return did_restore - - def save_checkpoint(self, epoch, logs=None): - monitor_op = np.less - ckpt_path = f'{self.work_dir}/model_ckpt_steps_{self.global_step}.ckpt' - logging.info(f'Epoch 
{epoch:05d}@{self.global_step}: saving model to {ckpt_path}') - self._atomic_save(ckpt_path) - for old_ckpt in get_all_ckpts(self.work_dir)[self.num_ckpt_keep:]: - remove_file(old_ckpt) - logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}') - current = None - if logs is not None and self.monitor_key in logs: - current = logs[self.monitor_key] - if current is not None and self.save_best: - if monitor_op(current, self.best_val_results): - best_filepath = f'{self.work_dir}/model_ckpt_best.pt' - self.best_val_results = current - logging.info( - f'Epoch {epoch:05d}@{self.global_step}: {self.monitor_key} reached {current:0.5f}. ' - f'Saving model to {best_filepath}') - self._atomic_save(best_filepath) - - def _atomic_save(self, filepath): - checkpoint = self.dump_checkpoint() - tmp_path = str(filepath) + ".part" - torch.save(checkpoint, tmp_path, _use_new_zipfile_serialization=False) - os.replace(tmp_path, filepath) - - def dump_checkpoint(self): - checkpoint = {'epoch': self.current_epoch, 'global_step': self.global_step, - 'checkpoint_callback_best': self.best_val_results} - # save optimizers - optimizer_states = [] - for i, optimizer in enumerate(self.optimizers): - if optimizer is not None: - optimizer_states.append(optimizer.state_dict()) - - checkpoint['optimizer_states'] = optimizer_states - task_ref = self.get_task_ref() - checkpoint['state_dict'] = { - k: v.state_dict() for k, v in task_ref.named_children() if len(list(v.parameters())) > 0} - return checkpoint - - #################### - # DDP - #################### - def configure_ddp(self, task): - task = DDP(task, device_ids=[self.root_gpu], find_unused_parameters=True) - random.seed(self.seed) - np.random.seed(self.seed) - return task - - def init_ddp_connection(self, proc_rank, world_size): - root_node = '127.0.0.1' - root_node = self.resolve_root_node_address(root_node) - os.environ['MASTER_ADDR'] = root_node - dist.init_process_group('nccl', rank=proc_rank, world_size=world_size) - - def resolve_root_node_address(self, root_node): - if '[' in root_node: - name = root_node.split('[')[0] - number = root_node.split(',')[0] - if '-' in number: - number = number.split('-')[0] - number = re.sub('[^0-9]', '', number) - root_node = name + number - return root_node - - #################### - # utils - #################### - def get_task_ref(self): - from text_to_speech.utils.commons.base_task import BaseTask - task: BaseTask = self.task.module if isinstance(self.task, DDP) else self.task - return task - - def log_metrics_to_tb(self, metrics, step=None): - """Logs the metric dict passed in. 
- - :param metrics: - """ - # turn all tensors to scalars - scalar_metrics = self.metrics_to_scalars(metrics) - - step = step if step is not None else self.global_step - # log actual metrics - if self.proc_rank == 0: - self.log_metrics(self.logger, scalar_metrics, step=step) - - @staticmethod - def log_metrics(logger, metrics, step=None): - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - logger.add_scalar(k, v, step) - - def metrics_to_scalars(self, metrics): - new_metrics = {} - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - - if type(v) is dict: - v = self.metrics_to_scalars(v) - - new_metrics[k] = v - - return new_metrics - - def save_terminal_logs(self): - t = datetime.now().strftime('%Y%m%d%H%M%S') - os.makedirs(f'{self.work_dir}/terminal_logs', exist_ok=True) - Tee(f'{self.work_dir}/terminal_logs/log_{t}.txt', 'w') - - def save_codes(self): - if len(hparams['save_codes']) > 0: - t = datetime.now().strftime('%Y%m%d%H%M%S') - code_dir = f'{self.work_dir}/codes/{t}' - subprocess.check_call(f'mkdir -p "{code_dir}"', shell=True) - for c in hparams['save_codes']: - if os.path.exists(c): - subprocess.check_call( - f'rsync -aR ' - f'--include="*.py" ' - f'--include="*.yaml" ' - f'--exclude="__pycache__" ' - f'--include="*/" ' - f'--exclude="*" ' - f'"./{c}" "{code_dir}/"', - shell=True) - print(f"| Copied codes to {code_dir}.") diff --git a/spaces/AIGText/GlyphControl/annotator/canny/__init__.py b/spaces/AIGText/GlyphControl/annotator/canny/__init__.py deleted file mode 100644 index cb0da951dc838ec9dec2131007e036113281800b..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/annotator/canny/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -import cv2 - - -class CannyDetector: - def __call__(self, img, low_threshold, high_threshold): - return cv2.Canny(img, low_threshold, high_threshold) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js deleted file mode 100644 index 3aaf79a43af266f73e993e3f5abc2f48856debdf..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/sequence.js +++ /dev/null @@ -1,2 +0,0 @@ -import Sequence from './logic/runcommands/sequence/Sequence.js'; -export default Sequence; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts deleted file mode 100644 index 8116e2f82a5b38a7394908436a6f4aea5339dc07..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/CustomProgress.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import CustomProgress from '../../../plugins/customprogress'; -export default CustomProgress; \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py b/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py deleted file mode 100644 index 675f01682ddf5e31b6cc341735378c6f3b242e49..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/visualizers/base.py +++ /dev/null @@ -1,73 +0,0 @@ -import abc -from typing import Dict, List - -import numpy as np -import torch -from skimage import color -from skimage.segmentation import mark_boundaries - -from . 
import colors - -COLORS, _ = colors.generate_colors(151) # 151 - max classes for semantic segmentation - - -class BaseVisualizer: - @abc.abstractmethod - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - """ - Take a batch, make an image from it and visualize - """ - raise NotImplementedError() - - -def visualize_mask_and_images(images_dict: Dict[str, np.ndarray], keys: List[str], - last_without_mask=True, rescale_keys=None, mask_only_first=None, - black_mask=False) -> np.ndarray: - mask = images_dict['mask'] > 0.5 - result = [] - for i, k in enumerate(keys): - img = images_dict[k] - img = np.transpose(img, (1, 2, 0)) - - if rescale_keys is not None and k in rescale_keys: - img = img - img.min() - img /= img.max() + 1e-5 - if len(img.shape) == 2: - img = np.expand_dims(img, 2) - - if img.shape[2] == 1: - img = np.repeat(img, 3, axis=2) - elif (img.shape[2] > 3): - img_classes = img.argmax(2) - img = color.label2rgb(img_classes, colors=COLORS) - - if mask_only_first: - need_mark_boundaries = i == 0 - else: - need_mark_boundaries = i < len(keys) - 1 or not last_without_mask - - if need_mark_boundaries: - if black_mask: - img = img * (1 - mask[0][..., None]) - img = mark_boundaries(img, - mask[0], - color=(1., 0., 0.), - outline_color=(1., 1., 1.), - mode='thick') - result.append(img) - return np.concatenate(result, axis=1) - - -def visualize_mask_and_images_batch(batch: Dict[str, torch.Tensor], keys: List[str], max_items=10, - last_without_mask=True, rescale_keys=None) -> np.ndarray: - batch = {k: tens.detach().cpu().numpy() for k, tens in batch.items() - if k in keys or k == 'mask'} - - batch_size = next(iter(batch.values())).shape[0] - items_to_vis = min(batch_size, max_items) - result = [] - for i in range(items_to_vis): - cur_dct = {k: tens[i] for k, tens in batch.items()} - result.append(visualize_mask_and_images(cur_dct, keys, last_without_mask=last_without_mask, - rescale_keys=rescale_keys)) - return np.concatenate(result, axis=0) diff --git a/spaces/AlexWortega/Kandinsky2.0/app.py b/spaces/AlexWortega/Kandinsky2.0/app.py deleted file mode 100644 index c432b9d2555914993a84662e1966119530f32106..0000000000000000000000000000000000000000 --- a/spaces/AlexWortega/Kandinsky2.0/app.py +++ /dev/null @@ -1,215 +0,0 @@ - -import gradio as gr -import torch -from torch import autocast -from kandinsky2 import get_kandinsky2 -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -model = get_kandinsky2(device, task_type='text2img') - - - - -def infer(prompt): - images = model.generate_text2img(prompt, batch_size=4, h=512, w=512, num_steps=75, denoised_type='dynamic_threshold', dynamic_threshold_v=99.5, sampler='ddim_sampler', ddim_eta=0.05, guidance_scale=10) - return images - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - 
box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - #container-advanced-btns{ - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; - } - #share-btn * { - all: unset; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #generated_id{ - min-height: 700px - } -""" -block = gr.Blocks(css=css) - -examples = [ - [ - 'Красная площадь' - ], - [ - 'Thinking man in anime style' - ], - [ - 'אבוקדו' - ], -] - -with block as demo: - gr.Markdown(""" - - -[![Framework: PyTorch](https://img.shields.io/badge/Framework-PyTorch-orange.svg)](https://pytorch.org/) [![Huggingface space](https://img.shields.io/badge/🤗-Huggingface-yello.svg)](https://huggingface.co/sberbank-ai/Kandinsky_2.0) - - - -## Model architecture: - -It is a latent diffusion model with two multilingual text encoders: -* mCLIP-XLMR 560M parameters -* mT5-encoder-small 146M parameters - -These encoders and multilingual training datasets unveil the real multilingual text-to-image generation experience! - -**Kandinsky 2.0** was trained on a large 1B multilingual set, including samples that we used to train Kandinsky. - -In terms of diffusion architecture Kandinsky 2.0 implements UNet with 1.2B parameters. 
- -**Kandinsky 2.0** architecture overview: -![](NatallE.png) - - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - - text = gr.Textbox( - label="Enter your prompt", show_label=False, max_lines=1 - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Run").style( - margin=False, - rounded=(False, True, True, False), - ) - - gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="generated_id").style( - grid=[2], height="auto" - ) - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text], outputs=gallery, cache_examples=True) - ex.dataset.headers = [""] - - text.submit(infer, inputs=[text], outputs=gallery) - btn.click(infer, inputs=[text], outputs=gallery) -gr.Markdown(""" - - -# Authors - -+ Arseniy Shakhmatov: [Github](https://github.com/cene555), [Blog](https://t.me/gradientdip) -+ Anton Razzhigaev: [Github](https://github.com/razzant), [Blog](https://t.me/abstractDL) -+ Aleksandr Nikolich: [Github](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers) -+ Vladimir Arkhipkin: [Github](https://github.com/oriBetelgeuse) -+ Igor Pavlov: [Github](https://github.com/boomb0om) -+ Andrey Kuznetsov: [Github](https://github.com/kuznetsoffandrey) -+ Denis Dimitrov: [Github](https://github.com/denndimitrov) - - """ - ) - -demo.queue(max_size=25).launch() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md deleted file mode 100644 index f97a9ad09ac507a122de9e93cc80f1d07793ac0a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_examples.md +++ /dev/null @@ -1,282 +0,0 @@ - - -# Community pipelines - -[[open-in-colab]] - -> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).** - -**Community** examples consist of both inference and training examples that have been added by the community. -Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out. -If a community doesn't work as expected, please open an issue and ping the author on it. 
- -| Example | Description | Code Example | Colab | Author | -|:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------:| -| CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) | -| One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) | -| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) | -| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) | -| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) | -| Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech) - -To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly. 
-```py -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder" -) -``` - -## Example usages - -### CLIP Guided Stable Diffusion - -CLIP guided stable diffusion can help to generate more realistic images -by guiding stable diffusion at every denoising step with an additional CLIP model. - -The following code requires roughly 12GB of GPU RAM. - -```python -from diffusers import DiffusionPipeline -from transformers import CLIPImageProcessor, CLIPModel -import torch - - -feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K") -clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16) - - -guided_pipeline = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - custom_pipeline="clip_guided_stable_diffusion", - clip_model=clip_model, - feature_extractor=feature_extractor, - torch_dtype=torch.float16, -) -guided_pipeline.enable_attention_slicing() -guided_pipeline = guided_pipeline.to("cuda") - -prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" - -generator = torch.Generator(device="cuda").manual_seed(0) -images = [] -for i in range(4): - image = guided_pipeline( - prompt, - num_inference_steps=50, - guidance_scale=7.5, - clip_guidance_scale=100, - num_cutouts=4, - use_cutouts=False, - generator=generator, - ).images[0] - images.append(image) - -# save images locally -for i, img in enumerate(images): - img.save(f"./clip_guided_sd/image_{i}.png") -``` - -The `images` list contains a list of PIL images that can be saved locally or displayed directly in a google colab. -Generated images tend to be of higher qualtiy than natively using stable diffusion. E.g. the above script generates the following images: - -![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg). - -### One Step Unet - -The dummy "one-step-unet" can be run as follows: - -```python -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") -pipe() -``` - -**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841). - -### Stable Diffusion Interpolation - -The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes. - -```python -from diffusers import DiffusionPipeline -import torch - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - torch_dtype=torch.float16, - safety_checker=None, # Very important for videos...lots of false positives while interpolating - custom_pipeline="interpolate_stable_diffusion", -).to("cuda") -pipe.enable_attention_slicing() - -frame_filepaths = pipe.walk( - prompts=["a dog", "a cat", "a horse"], - seeds=[42, 1337, 1234], - num_interpolation_steps=16, - output_dir="./dreams", - batch_size=4, - height=512, - width=512, - guidance_scale=8.5, - num_inference_steps=50, -) -``` - -The output of the `walk(...)` function returns a list of images saved under the folder as defined in `output_dir`. 
-You can use these images to create videos of stable diffusion, as sketched above.
-
-> **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more detailed information on how to create videos using stable diffusion, as well as more feature-complete functionality.**
-
-### Stable Diffusion Mega
-
-The Stable Diffusion Mega pipeline lets you use the main use cases of the Stable Diffusion pipeline in a single class.
-
-```python
-#!/usr/bin/env python3
-from diffusers import DiffusionPipeline
-import PIL
-import requests
-from io import BytesIO
-import torch
-
-
-def download_image(url):
-    response = requests.get(url)
-    return PIL.Image.open(BytesIO(response.content)).convert("RGB")
-
-
-pipe = DiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-4",
-    custom_pipeline="stable_diffusion_mega",
-    torch_dtype=torch.float16,
-)
-pipe.to("cuda")
-pipe.enable_attention_slicing()
-
-
-### Text-to-Image
-
-images = pipe.text2img("An astronaut riding a horse").images
-
-### Image-to-Image
-
-init_image = download_image(
-    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
-)
-
-prompt = "A fantasy landscape, trending on artstation"
-
-images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
-
-### Inpainting
-
-img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
-mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-init_image = download_image(img_url).resize((512, 512))
-mask_image = download_image(mask_url).resize((512, 512))
-
-prompt = "a cat sitting on a bench"
-images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
-```
-
-As shown above, this one pipeline can run "text-to-image", "image-to-image", and "inpainting" in a single class.
-
-### Long Prompt Weighting Stable Diffusion
-
-This pipeline lets you input prompts without the 77-token length limit. You can increase a word's weight by wrapping it in "()" or decrease it by wrapping it in "[]".
-The pipeline also lets you use the main use cases of the Stable Diffusion pipeline in a single class.
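-
-As an illustration of the weighting syntax (this follows the common A1111-style convention; the exact multiplier is an assumption, so check the pipeline source for the precise parsing rules):
-
-```python
-# "(word)" multiplies the token weight by roughly 1.1, "((word))" applies it twice,
-# "[word]" divides by the same factor, and "(word:1.5)" sets the weight explicitly.
-prompt = "a portrait of a (cat:1.4) wearing an (ornate:1.2) [plain] hat"
-```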
-
-#### pytorch
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
-    "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16
-)
-pipe = pipe.to("cuda")
-
-prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
-neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
-
-pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
-```
-
-#### onnxruntime
-
-```python
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-4",
-    custom_pipeline="lpw_stable_diffusion_onnx",
-    revision="onnx",
-    provider="CUDAExecutionProvider",
-)
-
-prompt = "a photo of an astronaut riding a horse on mars, best quality"
-neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
-
-pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
-```
-
-If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ). Running this sequence through the model will result in indexing errors`, do not worry: it is expected, since this pipeline handles prompts longer than 77 tokens itself.
-
-### Speech to Image
-
-The following code generates an image from an audio sample, using the pre-trained OpenAI whisper-small model together with Stable Diffusion.
-
-```python
-import torch
-
-import matplotlib.pyplot as plt
-from datasets import load_dataset
-from diffusers import DiffusionPipeline
-from transformers import (
-    WhisperForConditionalGeneration,
-    WhisperProcessor,
-)
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
-
-audio_sample = ds[3]
-
-text = audio_sample["text"].lower()
-speech_data = audio_sample["audio"]["array"]
-
-model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
-processor = WhisperProcessor.from_pretrained("openai/whisper-small")
-
-diffuser_pipeline = DiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-4",
-    custom_pipeline="speech_to_image_diffusion",
-    speech_model=model,
-    speech_processor=processor,
-    torch_dtype=torch.float16,
-)
-
-diffuser_pipeline.enable_attention_slicing()
-diffuser_pipeline = diffuser_pipeline.to(device)
-
-output = diffuser_pipeline(speech_data)
-plt.imshow(output.images[0])
-```
-This example produces the following image:
-
-![image](https://user-images.githubusercontent.com/45072645/196901736-77d9c6fc-63ee-4072-90b0-dc8b903d63e3.png)
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md
deleted file mode 100644
index 948127bc09b93839f4717253d64d0a50da6b1c3d..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text_inversion.md
+++ /dev/null
@@ -1,275 +0,0 @@
-
-
-
-
-# Textual-Inversion
-
-[[open-in-colab]]
-
-[Textual-inversion](https://arxiv.org/abs/2208.01618) is a technique for capturing new concepts from a small number of example images. It was originally demonstrated with [Latent Diffusion](https://github.com/CompVis/latent-diffusion), but has since been applied to other similar models such as [Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/conceptual/stable_diffusion). The learned concept can be used to better control the images generated by a text-to-image pipeline. The model learns a new "word" in the embedding space of the text encoder, which can then be used in text prompts for personalized image generation.
-
-![Textual Inversion example](https://textual-inversion.github.io/static/images/editing/colorful_teapot.JPG)
-By using just 3-5 images you can teach new concepts to a model such as Stable Diffusion for personalized image generation (image source).
-
-This guide explains how to train the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with textual-inversion. All of the textual-inversion training scripts used in this guide can be found [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion); refer to them if you would like to take a closer look at how things work internally.
-
-The [Stable Diffusion Textual Inversion Concepts Library](https://huggingface.co/sd-concepts-library) contains textual-inversion models trained by the community. More concepts will be added over time, making it an increasingly useful resource!
-
-Before you begin, install the training dependencies:
-
-```bash
-pip install diffusers accelerate transformers
-```
-
-Once the dependencies are installed, initialize a [🤗Accelerate](https://github.com/huggingface/accelerate/) environment:
-
-```bash
-accelerate config
-```
-
-To set up a default 🤗Accelerate environment without picking any particular configuration:
-
-```bash
-accelerate config default
-```
-
-Or, if your environment does not support an interactive shell (for example, a notebook), you can use:
-
-```py
-from accelerate.utils import write_basic_config
-
-write_basic_config()
-```
-
-Finally, install [xFormers](https://huggingface.co/docs/diffusers/main/en/training/optimization/xformers) to reduce memory usage with memory-efficient attention. After installing xFormers, add the `--enable_xformers_memory_efficient_attention` argument to the training script. xFormers is not supported for Flax.
-
-## Uploading the model to the Hub
-
-To store your model on the Hub, add the following argument to the training script:
-
-```bash
---push_to_hub
-```
-
-## Saving and loading checkpoints
-
-It is a good idea to regularly save checkpoints of the model during training. That way, if training is interrupted for any reason, you can resume from a saved checkpoint. Pass the following argument to the training script to save the full training state as a checkpoint in a subfolder of `output_dir` every 500 steps:
-
-```bash
---checkpointing_steps=500
-```
-
-To resume training from a saved checkpoint, pass the following argument to the training script along with the specific checkpoint to resume from:
-
-```bash
---resume_from_checkpoint="checkpoint-1500"
-```
-
-## Fine-tuning
-
-Download the [cat toy dataset](https://huggingface.co/datasets/diffusers/cat_toy_example) as the training dataset and store it in a directory. If you want to use your own dataset, take a look at the [Create a dataset for training](https://huggingface.co/docs/diffusers/training/create_dataset) guide.
-
-```py
-from huggingface_hub import snapshot_download
-
-local_dir = "./cat"
-snapshot_download(
-    "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes"
-)
-```
-
-Assign the model's repository ID (or the path to a directory containing the model weights) to the `MODEL_NAME` environment variable and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. Then assign the path to the directory containing the images to the `DATA_DIR` environment variable.
-
-You can now launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py). The script creates the following files and saves them to your repository:
-
-- `learned_embeds.bin`
-- `token_identifier.txt`
-- `type_of_concept.txt`
-
-💡 A full training run takes up to one hour on a single V100 GPU. While you wait for training to finish, feel free to read about [how textual-inversion works](https://huggingface.co/docs/diffusers/training/text_inversion#how-it-works) in the section below!
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export DATA_DIR="./cat"
-
-accelerate launch textual_inversion.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$DATA_DIR \
-  --learnable_property="object" \
-  --placeholder_token="<cat-toy>" --initializer_token="toy" \
-  --resolution=512 \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=4 \
-  --max_train_steps=3000 \
-  --learning_rate=5.0e-04 --scale_lr \
-  --lr_scheduler="constant" \
-  --lr_warmup_steps=0 \
-  --output_dir="textual_inversion_cat" \
-  --push_to_hub
-```
-
-💡 To improve training performance, you can also consider representing the placeholder token (`<cat-toy>`) with multiple embedding vectors instead of a single one. This trick can help the model better capture the style of more complex images (that is, the concept mentioned above). To enable training of multiple embedding vectors, pass the following option:
-
-```bash
---num_vectors=5
-```
-
-If you have access to a TPU, try the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py) to train the model even faster. (It also works on GPUs.) With the same configuration, the Flax training script should be at least 70% faster than the PyTorch one! ⚡️
-
-Before you start, install the Flax dependencies:
-
-```bash
-pip install -U -r requirements_flax.txt
-```
-
-Assign the model's repository ID (or the path to a directory containing the model weights) to the `MODEL_NAME` environment variable and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument.
-
-You can then launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py):
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export DATA_DIR="./cat"
-
-python textual_inversion_flax.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$DATA_DIR \
-  --learnable_property="object" \
-  --placeholder_token="<cat-toy>" --initializer_token="toy" \
-  --resolution=512 \
-  --train_batch_size=1 \
-  --max_train_steps=3000 \
-  --learning_rate=5.0e-04 --scale_lr \
-  --output_dir="textual_inversion_cat" \
-  --push_to_hub
-```
-
-### Intermediate logging
-
-If you want to track your model's training progress, you can save images generated during the training process. Add the following arguments to the training script to enable intermediate logging:
-
-- `validation_prompt`: the prompt used to generate samples (defaults to `None`, in which case intermediate logging is disabled)
-- `num_validation_images`: the number of sample images to generate
-- `validation_steps`: the number of steps to wait before generating sample images from `validation_prompt`
-
-```bash
---validation_prompt="A <cat-toy> backpack"
---num_validation_images=4
---validation_steps=100
-```
-
-## Inference
-
-Once you have trained a model, you can use it for inference with the [`StableDiffusionPipeline`].
-
-By default, the textual-inversion script only saves the embedding vectors obtained through textual-inversion. They are added to the embedding matrix of the text encoder.
-
-💡 The community has created a large library of textual-inversion embedding vectors called [sd-concepts-library](https://huggingface.co/sd-concepts-library). Instead of training textual-inversion embeddings from scratch, it is worth checking whether the embedding you are looking for is already in the library.
-
-To load a textual-inversion embedding vector, you first need to load the model that was used to train it. Here we assume the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) model was used:
-
-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-```
-
-Next, load the textual-inversion embedding vector via the `TextualInversionLoaderMixin.load_textual_inversion` function. Here we load the embedding from the earlier `<cat-toy>` example:
-
-```python
-pipe.load_textual_inversion("sd-concepts-library/cat-toy")
-```
-
-Now you can run the pipeline and check that the placeholder token (`<cat-toy>`) works:
-
-```python
-prompt = "A <cat-toy> backpack"
-
-image = pipe(prompt, num_inference_steps=50).images[0]
-image.save("cat-backpack.png")
-```
-
-`TextualInversionLoaderMixin.load_textual_inversion` can load not only textual embedding vectors saved in the Diffusers format, but also embedding vectors saved in the [Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) format. To do this, first download an embedding vector from [civitAI](https://civitai.com/models/3036?modelVersionId=8387) and then load it locally:
-
-```python
-pipe.load_textual_inversion("./charturnerv2.pt")
-```
-
-There is currently no `load_textual_inversion` function for Flax, so after training you have to make sure the textual-inversion embedding vector was saved as part of the model. The model can then be run like any other Flax model:
-
-```python
-import jax
-import numpy as np
-from flax.jax_utils import replicate
-from flax.training.common_utils import shard
-from diffusers import FlaxStableDiffusionPipeline
-
-model_path = "path-to-your-trained-model"
-pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16)
-
-prompt = "A <cat-toy> backpack"
-prng_seed = jax.random.PRNGKey(0)
-num_inference_steps = 50
-
-num_samples = jax.device_count()
-prompt = num_samples * [prompt]
-prompt_ids = pipeline.prepare_inputs(prompt)
-
-# shard inputs and rng
-params = replicate(params)
-prng_seed = jax.random.split(prng_seed, jax.device_count())
-prompt_ids = shard(prompt_ids)
-
-images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
-images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
-images[0].save("cat-backpack.png")
-```
-
-## How it works
-
-![Diagram from the paper showing overview](https://textual-inversion.github.io/static/images/training/training.JPG)
-Architecture overview from the Textual Inversion blog post.
-
-Usually, text prompts are tokenized into embeddings before being passed to the model. Textual-inversion does something similar, but it learns a new token embedding, `v*`, from the special token `S*` in the diagram above. The model output is used to condition the diffusion model, which helps the diffusion model quickly understand new concepts from just a few example images.
-
-To do this, textual-inversion uses a generator model and noised versions of the training images. The generator tries to predict less noisy versions of the images, and the token embedding `v*` is optimized based on how well the generator performs. If the token embedding successfully captures the new concept, it gives more useful information to the diffusion model and helps create clearer images with less noise. This optimization process typically takes thousands of steps of exposure to a variety of prompts and image variants.
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
deleted file mode 100644
index fa00a0d201af350dcc56584e19a222150f978132..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
+++ /dev/null
@@ -1,621 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - StableDiffusionInpaintPipelineLegacy, - UNet2DConditionModel, - UNet2DModel, - VQModel, -) -from diffusers.utils import floats_tensor, load_image, nightly, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, load_numpy, preprocess_image, require_torch_gpu - - -enable_full_determinism() - - -class StableDiffusionInpaintLegacyPipelineFastTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @property - def dummy_image(self): - batch_size = 1 - num_channels = 3 - sizes = (32, 32) - - image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device) - return image - - @property - def dummy_uncond_unet(self): - torch.manual_seed(0) - model = UNet2DModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=3, - out_channels=3, - down_block_types=("DownBlock2D", "AttnDownBlock2D"), - up_block_types=("AttnUpBlock2D", "UpBlock2D"), - ) - return model - - @property - def dummy_cond_unet(self): - torch.manual_seed(0) - model = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - return model - - @property - def dummy_cond_unet_inpaint(self): - torch.manual_seed(0) - model = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=9, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - return model - - @property - def dummy_vq_model(self): - torch.manual_seed(0) - model = VQModel( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=3, - ) - return model - - @property - def dummy_vae(self): - torch.manual_seed(0) - model = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - return model - - @property - def dummy_text_encoder(self): - torch.manual_seed(0) - config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - return CLIPTextModel(config) - - @property - def dummy_extractor(self): - def extract(*args, **kwargs): - class Out: - def __init__(self): - self.pixel_values = torch.ones([0]) - - def to(self, device): - self.pixel_values.to(device) - return self - - return Out() - - return extract - - def test_stable_diffusion_inpaint_legacy(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - unet = self.dummy_cond_unet - scheduler = PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = 
self.dummy_text_encoder - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB") - mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32)) - - # make sure here that pndm scheduler skips prk - sd_pipe = StableDiffusionInpaintPipelineLegacy( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - generator = torch.Generator(device=device).manual_seed(0) - output = sd_pipe( - [prompt], - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - ) - - image = output.images - - generator = torch.Generator(device=device).manual_seed(0) - image_from_tuple = sd_pipe( - [prompt], - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.4941, 0.5396, 0.4689, 0.6338, 0.5392, 0.4094, 0.5477, 0.5904, 0.5165]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_inpaint_legacy_batched(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - unet = self.dummy_cond_unet - scheduler = PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = self.dummy_text_encoder - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB") - init_images_tens = preprocess_image(init_image, batch_size=2) - init_masks_tens = init_images_tens + 4 - - # make sure here that pndm scheduler skips prk - sd_pipe = StableDiffusionInpaintPipelineLegacy( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - generator = torch.Generator(device=device).manual_seed(0) - images = sd_pipe( - [prompt] * 2, - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_images_tens, - mask_image=init_masks_tens, - ).images - - assert images.shape == (2, 32, 32, 3) - - image_slice_0 = images[0, -3:, -3:, -1].flatten() - image_slice_1 = images[1, -3:, -3:, -1].flatten() - - expected_slice_0 = np.array([0.4697, 0.3770, 0.4096, 0.4653, 0.4497, 0.4183, 0.3950, 0.4668, 0.4672]) - expected_slice_1 = np.array([0.4105, 0.4987, 0.5771, 0.4921, 0.4237, 0.5684, 0.5496, 0.4645, 0.5272]) - - assert np.abs(expected_slice_0 - image_slice_0).max() < 1e-2 - assert np.abs(expected_slice_1 - image_slice_1).max() < 1e-2 - - def test_stable_diffusion_inpaint_legacy_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - unet = self.dummy_cond_unet - scheduler = 
PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = self.dummy_text_encoder - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB") - mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32)) - - # make sure here that pndm scheduler skips prk - sd_pipe = StableDiffusionInpaintPipelineLegacy( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - negative_prompt = "french fries" - generator = torch.Generator(device=device).manual_seed(0) - output = sd_pipe( - prompt, - negative_prompt=negative_prompt, - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - ) - - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.4941, 0.5396, 0.4689, 0.6338, 0.5392, 0.4094, 0.5477, 0.5904, 0.5165]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_inpaint_legacy_num_images_per_prompt(self): - device = "cpu" - unet = self.dummy_cond_unet - scheduler = PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = self.dummy_text_encoder - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB") - mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((32, 32)) - - # make sure here that pndm scheduler skips prk - sd_pipe = StableDiffusionInpaintPipelineLegacy( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - - # test num_images_per_prompt=1 (default) - images = sd_pipe( - prompt, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - ).images - - assert images.shape == (1, 32, 32, 3) - - # test num_images_per_prompt=1 (default) for batch of prompts - batch_size = 2 - images = sd_pipe( - [prompt] * batch_size, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - ).images - - assert images.shape == (batch_size, 32, 32, 3) - - # test num_images_per_prompt for single prompt - num_images_per_prompt = 2 - images = sd_pipe( - prompt, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - num_images_per_prompt=num_images_per_prompt, - ).images - - assert images.shape == (num_images_per_prompt, 32, 32, 3) - - # test num_images_per_prompt for batch of prompts - batch_size = 2 - images = sd_pipe( - [prompt] * batch_size, - num_inference_steps=2, - output_type="np", - image=init_image, - mask_image=mask_image, - num_images_per_prompt=num_images_per_prompt, - ).images - - assert images.shape == (batch_size * num_images_per_prompt, 32, 32, 3) - - -@slow -@require_torch_gpu -class StableDiffusionInpaintLegacyPipelineSlowTests(unittest.TestCase): - def tearDown(self): - 
super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, generator_device="cpu", seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - init_image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint/input_bench_image.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint/input_bench_mask.png" - ) - inputs = { - "prompt": "A red cat sitting on a park bench", - "image": init_image, - "mask_image": mask_image, - "generator": generator, - "num_inference_steps": 3, - "strength": 0.75, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_inpaint_legacy_pndm(self): - pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, 253:256, 253:256, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.5665, 0.6117, 0.6430, 0.4057, 0.4594, 0.5658, 0.1596, 0.3106, 0.4305]) - - assert np.abs(expected_slice - image_slice).max() < 3e-3 - - def test_stable_diffusion_inpaint_legacy_batched(self): - pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - inputs["prompt"] = [inputs["prompt"]] * 2 - inputs["image"] = preprocess_image(inputs["image"], batch_size=2) - - mask = inputs["mask_image"].convert("L") - mask = np.array(mask).astype(np.float32) / 255.0 - mask = torch.from_numpy(1 - mask) - masks = torch.vstack([mask[None][None]] * 2) - inputs["mask_image"] = masks - - image = pipe(**inputs).images - assert image.shape == (2, 512, 512, 3) - - image_slice_0 = image[0, 253:256, 253:256, -1].flatten() - image_slice_1 = image[1, 253:256, 253:256, -1].flatten() - - expected_slice_0 = np.array( - [0.52093095, 0.4176447, 0.32752383, 0.6175223, 0.50563973, 0.36470804, 0.65460044, 0.5775188, 0.44332123] - ) - expected_slice_1 = np.array( - [0.3592432, 0.4233033, 0.3914635, 0.31014425, 0.3702293, 0.39412856, 0.17526966, 0.2642669, 0.37480092] - ) - - assert np.abs(expected_slice_0 - image_slice_0).max() < 3e-3 - assert np.abs(expected_slice_1 - image_slice_1).max() < 3e-3 - - def test_stable_diffusion_inpaint_legacy_k_lms(self): - pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None - ) - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, 253:256, 253:256, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.4534, 0.4467, 0.4329, 0.4329, 0.4339, 0.4220, 0.4244, 0.4332, 0.4426]) - - assert np.abs(expected_slice - image_slice).max() < 3e-3 - - def test_stable_diffusion_inpaint_legacy_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps 
+= 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.5977, 1.5449, 1.0586, -0.3250, 0.7383, -0.0862, 0.4631, -0.2571, -1.1289]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.5190, 1.1621, 0.6885, 0.2424, 0.3337, -0.1617, 0.6914, -0.1957, -0.5474]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3 - - callback_fn.has_been_called = False - - pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == 2 - - -@nightly -@require_torch_gpu -class StableDiffusionInpaintLegacyPipelineNightlyTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): - generator = torch.Generator(device=generator_device).manual_seed(seed) - init_image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint/input_bench_image.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint/input_bench_mask.png" - ) - inputs = { - "prompt": "A red cat sitting on a park bench", - "image": init_image, - "mask_image": mask_image, - "generator": generator, - "num_inference_steps": 50, - "strength": 0.75, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_inpaint_pndm(self): - sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5") - sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_pndm.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_inpaint_ddim(self): - sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5") - sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_ddim.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_inpaint_lms(self): - sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5") - sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - image = sd_pipe(**inputs).images[0] - - 
expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_lms.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 - - def test_inpaint_dpm(self): - sd_pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5") - sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config) - sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(torch_device) - inputs["num_inference_steps"] = 30 - image = sd_pipe(**inputs).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" - "/stable_diffusion_inpaint_legacy/stable_diffusion_1_5_dpm_multi.npy" - ) - max_diff = np.abs(expected_image - image).max() - assert max_diff < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 5c623eb56836760694b50f3e4e66aa0f1fc069df..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './danet_r50-d8_512x512_40k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css deleted file mode 100644 index 11f1afddd620f6a0fa198ebd1d627ab6b6afbd75..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/main.css +++ /dev/null @@ -1,612 +0,0 @@ -.tabs.svelte-710i53 { - margin-top: 0 -} - -.py-6 { - padding-top: 2.5rem -} - -.small-button { - min-width: 0 !important; - max-width: 171px; - height: 39.594px; - align-self: end; -} - -.refresh-button { - max-width: 4.4em; - min-width: 2.2em !important; - height: 39.594px; - align-self: end; - line-height: 1em; - border-radius: 0.5em; - flex: none; -} - -.refresh-button-small { - max-width: 2.2em; -} - -.button_nowrap { - white-space: nowrap; -} - -#slim-column { - flex: none !important; - min-width: 0 !important; -} - -.slim-dropdown { - background-color: transparent !important; - border: none !important; - padding: 0 !important; -} - -#download-label, #upload-label { - min-height: 0 -} - -.dark svg { - fill: white; -} - -.dark a { - color: white !important; -} - -ol li p, ul li p { - display: inline-block; -} - -#chat-tab, #default-tab, #notebook-tab, #parameters, #chat-settings, #lora, #training-tab, #model-tab, #session-tab { - border: 0; -} - -.gradio-container-3-18-0 .prose * h1, h2, h3, h4 { - color: white; -} - -.gradio-container { - max-width: 100% !important; - padding-top: 0 !important; -} - -#extensions { - margin-top: 5px; - margin-bottom: 35px; -} - -.extension-tab { - border: 0 !important; -} - -span.math.inline { - font-size: 27px; - vertical-align: baseline !important; -} - -div.svelte-15lo0d8 > *, div.svelte-15lo0d8 > .form > * { - flex-wrap: nowrap; -} - -.header_bar { - background-color: #f7f7f7; - margin-bottom: 19px; - display: inline !important; - overflow-x: scroll; - margin-left: calc(-1 * var(--size-4)); - margin-right: calc(-1 * var(--size-4)); -} - -.dark .header_bar { - border: none !important; - background-color: 
#8080802b; -} - -.header_bar button.selected { - border-radius: 0; -} - -.textbox_default textarea { - height: calc(100dvh - 271px); -} - -.textbox_default_output textarea { - height: calc(100dvh - 185px); -} - -.textbox textarea { - height: calc(100dvh - 241px); -} - -.textbox_logits textarea { - height: calc(100dvh - 236px); -} - -.textbox_logits_notebook textarea { - height: calc(100dvh - 292px); -} - -.monospace textarea { - font-family: monospace; -} - -.textbox_default textarea, -.textbox_default_output textarea, -.textbox_logits textarea, -.textbox_logits_notebook textarea, -.textbox textarea { - font-size: 16px !important; - color: #46464A !important; -} - -.dark textarea { - color: #efefef !important; -} - -@media screen and (max-width: 711px) { - .textbox_default textarea { - height: calc(100dvh - 259px); - } - - div .default-token-counter { - top: calc( 0.5 * (100dvh - 236px) ) !important; - } - - .transparent-substring { - display: none; - } - - .hover-menu { - min-width: 250px !important; - } -} - -/* Hide the gradio footer*/ -footer { - display: none !important; -} - -button { - font-size: 14px !important; -} - -.file-saver { - position: fixed !important; - top: 50%; - left: 50%; - transform: translate(-50%, -50%); /* center horizontally */ - max-width: 500px; - background-color: var(--input-background-fill); - border: 2px solid black !important; - z-index: 1000; -} - -.dark .file-saver { - border: 2px solid white !important; -} - -.checkboxgroup-table label { - background: none !important; - padding: 0 !important; - border: 0 !important; -} - -.checkboxgroup-table div { - display: grid !important; -} - -.markdown ul ol { - font-size: 100% !important; -} - -.pretty_scrollbar::-webkit-scrollbar { - width: 5px; -} - -.pretty_scrollbar::-webkit-scrollbar-track { - background: transparent; -} - -.pretty_scrollbar::-webkit-scrollbar-thumb, -.pretty_scrollbar::-webkit-scrollbar-thumb:hover { - background: #c5c5d2; -} - -.dark .pretty_scrollbar::-webkit-scrollbar-thumb, -.dark .pretty_scrollbar::-webkit-scrollbar-thumb:hover { - background: #374151; -} - -.pretty_scrollbar::-webkit-resizer { - background: #c5c5d2; -} - -.dark .pretty_scrollbar::-webkit-resizer { - background: #374151; -} - -audio { - max-width: 100%; -} - -/* Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui */ -.token-counter { - position: absolute !important; - top: calc( 0.5 * (100dvh - 218px) ) !important; - right: 2px; - z-index: 100; - background: var(--input-background-fill) !important; - min-height: 0 !important; -} - -.default-token-counter { - top: calc( 0.5 * (100dvh - 248px) ) !important; -} - -.token-counter span { - padding: 1px; - box-shadow: 0 0 0 0.3em rgba(192,192,192,0.15), inset 0 0 0.6em rgba(192,192,192,0.075); - border: 2px solid rgba(192,192,192,0.4) !important; - border-radius: 0.4em; -} - -.no-background { - background: var(--background-fill-primary) !important; - padding: 0px !important; -} - -/*---------------------------------------------- - Chat tab -----------------------------------------------*/ -.h-\[40vh\], .wrap.svelte-byatnx.svelte-byatnx.svelte-byatnx { - height: 66.67vh -} - -.gradio-container { - margin-left: auto !important; - margin-right: auto !important; -} - -.w-screen { - width: unset -} - -div.svelte-362y77>*, div.svelte-362y77>.form>* { - flex-wrap: nowrap -} - -.pending.svelte-1ed2p3z { - opacity: 1; -} - -.wrap.svelte-6roggh.svelte-6roggh { - max-height: 92.5%; -} - -/* This is for the microphone button in the whisper extension */ 
-.sm.svelte-1ipelgc { - width: 100%; -} - -#chat-tab button#Generate, #chat-tab button#stop { - width: 89.3438px !important; -} - -#chat-tab button, #notebook-tab button, #default-tab button { - min-width: 0 !important; -} - -#chat-tab > :first-child, #extensions { - max-width: 880px; - margin-left: auto; - margin-right: auto; -} - -@media screen and (max-width: 688px) { - #chat-tab { - padding-left: 0px; - padding-right: 0px; - } - - .chat-parent { - height: calc(100dvh - 179px) !important; - } - - .old-ui .chat-parent { - height: calc(100dvh - 310px) !important; - } -} - -.chat { - margin-left: auto; - margin-right: auto; - max-width: 880px; - height: 100%; - overflow-y: auto; - padding-right: 15px; - display: flex; - flex-direction: column; - word-break: break-word; - overflow-wrap: anywhere; -} - -.chat-parent { - height: calc(100dvh - 181px); - overflow: auto !important; -} - -.old-ui .chat-parent { - height: calc(100dvh - 270px); -} - -.chat-parent.bigchat { - height: calc(100dvh - 181px) !important; -} - -.chat > .messages { - display: flex; - flex-direction: column; -} - -.chat .message:last-child { - margin-bottom: 0px !important; - padding-bottom: 0px !important; -} - -.message-body li { - margin-top: 0 !important; - margin-bottom: 0 !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body ul, .message-body ol { - font-size: 15px !important; -} - -.message-body ul { - list-style-type: disc !important; -} - -.message-body pre { - margin-bottom: 1.25em !important; -} - -.message-body code { - white-space: pre-wrap !important; - word-wrap: break-word !important; -} - -.message-body :not(pre) > code { - white-space: normal !important; -} - -#chat-input { - padding: 0; - padding-top: 18px; - background: transparent; - border: none; -} - -#chat-input textarea:focus { - box-shadow: none !important; -} - -@media print { - body { - visibility: hidden; - } - - .chat { - visibility: visible; - position: absolute; - left: 0; - top: 0; - max-width: unset; - max-height: unset; - width: 100%; - overflow-y: visible; - } - - .message { - break-inside: avoid; - } - - .gradio-container { - overflow: visible; - } - - .tab-nav { - display: none !important; - } - - #chat-tab > :first-child { - max-width: unset; - } -} - -#show-controls { - position: absolute; - height: 100%; - background-color: var(--background-fill-primary); - border: 0px; - border-radius: 0px; -} - -#show-controls label { - z-index: 1000; - position: absolute; - left: calc(100% - 168px); -} - -#typing-container { - display: none; - position: absolute; - background-color: transparent; - left: -2px; - padding: var(--block-padding); -} - -.typing { - position: relative; -} - -.visible-dots #typing-container { - display: block; -} - -.typing span { - content: ''; - animation: blink 1.5s infinite; - animation-fill-mode: both; - height: 10px; - width: 10px; - background: #3b5998;; - position: absolute; - left:0; - top:0; - border-radius: 50%; -} - -.typing .dot1 { - animation-delay: .2s; - margin-left: calc(10px * 1.5); -} - -.typing .dot2 { - animation-delay: .4s; - margin-left: calc(10px * 3); -} - -@keyframes blink { - 0% { - opacity: .1; - } - 20% { - opacity: 1; - } - 100% { - opacity: .1; - } -} - -#chat-tab .generating { - display: none !important; -} - -.hover-element { - position: relative; - font-size: 24px; -} - -.hover-menu { - display: none; - position: absolute; - bottom: 80%; - left: 0; - background-color: var(--background-fill-secondary); - box-shadow: 0 0 10px rgba(0, 0, 0, 0.5); - 
z-index: 10000; - min-width: 330px; - flex-direction: column; -} - -.hover-menu button { - width: 100%; - background: transparent !important; - border-radius: 0px !important; - justify-content: space-between; - margin: 0 !important; - height: 36px; -} - -.hover-menu button:not(#clear-history-confirm) { - border-bottom: 0 !important; -} - -.hover-menu button:not(#clear-history-confirm):last-child { - border-bottom: var(--button-border-width) solid var(--button-secondary-border-color) !important; -} - -.hover-menu button:hover { - background: var(--button-secondary-background-fill-hover) !important; -} - -.transparent-substring { - opacity: 0.333; -} - -#chat-tab:not(.old-ui) #chat-buttons { - display: none !important; -} - -#gr-hover-container { - min-width: 0 !important; - display: flex; - flex-direction: column-reverse; - padding-right: 20px; - padding-bottom: 3px; - flex-grow: 0 !important; -} - -#generate-stop-container { - min-width: 0 !important; - display: flex; - flex-direction: column-reverse; - padding-bottom: 3px; - flex: 0 auto !important; -} - -#chat-input-container { - min-width: 0 !important; -} - -#chat-input-container > .form { - background: transparent; - border: none; -} - -#chat-input-row { - padding-bottom: 20px; -} - -.old-ui #chat-input-row, #chat-input-row.bigchat { - padding-bottom: 0px !important; -} - -#chat-col { - padding-bottom: 115px; -} - -.old-ui #chat-col, #chat-col.bigchat { - padding-bottom: 95px !important; -} - -.old-ui #chat-buttons #clear-history-confirm { - order: -1; -} - -.chat ol, .chat ul { - margin-top: 6px !important; -} - -/*---------------------------------------------- - Past chats menus -----------------------------------------------*/ -#past-chats-row { - margin-bottom: calc( -1 * var(--layout-gap) ); -} - -#rename-row label { - margin-top: var(--layout-gap); -} - -/*---------------------------------------------- - Keep dropdown menus above errored components -----------------------------------------------*/ -.options { - z-index: 100 !important; -} diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py deleted file mode 100644 index c52dda18b41705705b47dd0e995b124048c16fba..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/multi_scale_deform_attn.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd.function import Function, once_differentiable - -from annotator.uniformer.mmcv import deprecated_api_warning -from annotator.uniformer.mmcv.cnn import constant_init, xavier_init -from annotator.uniformer.mmcv.cnn.bricks.registry import ATTENTION -from annotator.uniformer.mmcv.runner import BaseModule -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['ms_deform_attn_backward', 'ms_deform_attn_forward']) - - -class MultiScaleDeformableAttnFunction(Function): - - @staticmethod - def forward(ctx, value, value_spatial_shapes, value_level_start_index, - sampling_locations, attention_weights, im2col_step): - """GPU version of multi-scale deformable attention. 
-
-        Args:
-            value (Tensor): The value has shape
-                (bs, num_keys, num_heads, embed_dims//num_heads)
-            value_spatial_shapes (Tensor): Spatial shape of
-                each feature map, has shape (num_levels, 2),
-                the last dimension 2 represents (h, w)
-            value_level_start_index (Tensor): The start index of each level,
-                has shape (num_levels, )
-            sampling_locations (Tensor): The location of sampling points,
-                has shape
-                (bs, num_queries, num_heads, num_levels, num_points, 2),
-                the last dimension 2 represents (x, y).
-            attention_weights (Tensor): The weight of sampling points used
-                when calculating the attention, has shape
-                (bs, num_queries, num_heads, num_levels, num_points)
-            im2col_step (int): The step used in image to column.
-
-        Returns:
-            Tensor: has shape (bs, num_queries, embed_dims)
-        """
-
-        ctx.im2col_step = im2col_step
-        output = ext_module.ms_deform_attn_forward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            im2col_step=ctx.im2col_step)
-        ctx.save_for_backward(value, value_spatial_shapes,
-                              value_level_start_index, sampling_locations,
-                              attention_weights)
-        return output
-
-    @staticmethod
-    @once_differentiable
-    def backward(ctx, grad_output):
-        """GPU version of backward function.
-
-        Args:
-            grad_output (Tensor): Gradient of the output tensor of forward.
-
-        Returns:
-            Tuple[Tensor]: Gradient of the input tensors in forward.
-        """
-        value, value_spatial_shapes, value_level_start_index,\
-            sampling_locations, attention_weights = ctx.saved_tensors
-        grad_value = torch.zeros_like(value)
-        grad_sampling_loc = torch.zeros_like(sampling_locations)
-        grad_attn_weight = torch.zeros_like(attention_weights)
-
-        ext_module.ms_deform_attn_backward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            grad_output.contiguous(),
-            grad_value,
-            grad_sampling_loc,
-            grad_attn_weight,
-            im2col_step=ctx.im2col_step)
-
-        return grad_value, None, None, \
-            grad_sampling_loc, grad_attn_weight, None
-
-
-def multi_scale_deformable_attn_pytorch(value, value_spatial_shapes,
-                                        sampling_locations, attention_weights):
-    """CPU version of multi-scale deformable attention.
-
-    Args:
-        value (Tensor): The value has shape
-            (bs, num_keys, num_heads, embed_dims//num_heads)
-        value_spatial_shapes (Tensor): Spatial shape of
-            each feature map, has shape (num_levels, 2),
-            the last dimension 2 represents (h, w)
-        sampling_locations (Tensor): The location of sampling points,
-            has shape
-            (bs, num_queries, num_heads, num_levels, num_points, 2),
-            the last dimension 2 represents (x, y).
-        attention_weights (Tensor): The weight of sampling points used
-            when calculating the attention, has shape
-            (bs, num_queries, num_heads, num_levels, num_points),
-
-    Returns:
-        Tensor: has shape (bs, num_queries, embed_dims)
-    """
-
-    bs, _, num_heads, embed_dims = value.shape
-    _, num_queries, num_heads, num_levels, num_points, _ =\
-        sampling_locations.shape
-    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
-                             dim=1)
-    sampling_grids = 2 * sampling_locations - 1
-    sampling_value_list = []
-    for level, (H_, W_) in enumerate(value_spatial_shapes):
-        # bs, H_*W_, num_heads, embed_dims ->
-        # bs, H_*W_, num_heads*embed_dims ->
-        # bs, num_heads*embed_dims, H_*W_ ->
-        # bs*num_heads, embed_dims, H_, W_
-        value_l_ = value_list[level].flatten(2).transpose(1, 2).reshape(
-            bs * num_heads, embed_dims, H_, W_)
-        # bs, num_queries, num_heads, num_points, 2 ->
-        # bs, num_heads, num_queries, num_points, 2 ->
-        # bs*num_heads, num_queries, num_points, 2
-        sampling_grid_l_ = sampling_grids[:, :, :,
-                                          level].transpose(1, 2).flatten(0, 1)
-        # bs*num_heads, embed_dims, num_queries, num_points
-        sampling_value_l_ = F.grid_sample(
-            value_l_,
-            sampling_grid_l_,
-            mode='bilinear',
-            padding_mode='zeros',
-            align_corners=False)
-        sampling_value_list.append(sampling_value_l_)
-    # (bs, num_queries, num_heads, num_levels, num_points) ->
-    # (bs, num_heads, num_queries, num_levels, num_points) ->
-    # (bs, num_heads, 1, num_queries, num_levels*num_points)
-    attention_weights = attention_weights.transpose(1, 2).reshape(
-        bs * num_heads, 1, num_queries, num_levels * num_points)
-    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2) *
-              attention_weights).sum(-1).view(bs, num_heads * embed_dims,
-                                              num_queries)
-    return output.transpose(1, 2).contiguous()
-
-
-@ATTENTION.register_module()
-class MultiScaleDeformableAttention(BaseModule):
-    """An attention module used in Deformable-Detr.
-
-    `Deformable DETR: Deformable Transformers for End-to-End Object Detection.
-    <https://arxiv.org/abs/2010.04159>`_.
-
-    Args:
-        embed_dims (int): The embedding dimension of Attention.
-            Default: 256.
-        num_heads (int): Parallel attention heads. Default: 8.
-        num_levels (int): The number of feature maps used in
-            Attention. Default: 4.
-        num_points (int): The number of sampling points for
-            each query in each head. Default: 4.
-        im2col_step (int): The step used in image_to_column.
-            Default: 64.
-        dropout (float): A Dropout layer on `inp_identity`.
-            Default: 0.1.
-        batch_first (bool): Whether Key, Query and Value have shape
-            (batch, n, embed_dim)
-            or (n, batch, embed_dim). Default: False.
-        norm_cfg (dict): Config dict for the normalization layer.
-            Default: None.
-        init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
-            Default: None.
- """ - - def __init__(self, - embed_dims=256, - num_heads=8, - num_levels=4, - num_points=4, - im2col_step=64, - dropout=0.1, - batch_first=False, - norm_cfg=None, - init_cfg=None): - super().__init__(init_cfg) - if embed_dims % num_heads != 0: - raise ValueError(f'embed_dims must be divisible by num_heads, ' - f'but got {embed_dims} and {num_heads}') - dim_per_head = embed_dims // num_heads - self.norm_cfg = norm_cfg - self.dropout = nn.Dropout(dropout) - self.batch_first = batch_first - - # you'd better set dim_per_head to a power of 2 - # which is more efficient in the CUDA implementation - def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError( - 'invalid input for _is_power_of_2: {} (type: {})'.format( - n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - if not _is_power_of_2(dim_per_head): - warnings.warn( - "You'd better set embed_dims in " - 'MultiScaleDeformAttention to make ' - 'the dimension of each attention head a power of 2 ' - 'which is more efficient in our CUDA implementation.') - - self.im2col_step = im2col_step - self.embed_dims = embed_dims - self.num_levels = num_levels - self.num_heads = num_heads - self.num_points = num_points - self.sampling_offsets = nn.Linear( - embed_dims, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dims, - num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dims, embed_dims) - self.output_proj = nn.Linear(embed_dims, embed_dims) - self.init_weights() - - def init_weights(self): - """Default initialization for Parameters of Module.""" - constant_init(self.sampling_offsets, 0.) - thetas = torch.arange( - self.num_heads, - dtype=torch.float32) * (2.0 * math.pi / self.num_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / - grid_init.abs().max(-1, keepdim=True)[0]).view( - self.num_heads, 1, 1, - 2).repeat(1, self.num_levels, self.num_points, 1) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - - self.sampling_offsets.bias.data = grid_init.view(-1) - constant_init(self.attention_weights, val=0., bias=0.) - xavier_init(self.value_proj, distribution='uniform', bias=0.) - xavier_init(self.output_proj, distribution='uniform', bias=0.) - self._is_init = True - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiScaleDeformableAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_padding_mask=None, - reference_points=None, - spatial_shapes=None, - level_start_index=None, - **kwargs): - """Forward Function of MultiScaleDeformAttention. - - Args: - query (Tensor): Query of Transformer with shape - (num_query, bs, embed_dims). - key (Tensor): The key tensor with shape - `(num_key, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_key, bs, embed_dims)`. - identity (Tensor): The tensor used for addition, with the - same shape as `query`. Default None. If None, - `query` will be used. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. Default - None. - reference_points (Tensor): The normalized reference - points with shape (bs, num_query, num_levels, 2), - all elements is range in [0, 1], top-left (0,0), - bottom-right (1, 1), including padding area. - or (N, Length_{query}, num_levels, 4), add - additional two dimensions is (w, h) to - form reference boxes. 
-            key_padding_mask (Tensor): ByteTensor for `query`, with
-                shape [bs, num_key].
-            spatial_shapes (Tensor): Spatial shape of features in
-                different levels. With shape (num_levels, 2),
-                last dimension represents (h, w).
-            level_start_index (Tensor): The start index of each level.
-                A tensor has shape ``(num_levels, )`` and can be represented
-                as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...].
-
-        Returns:
-            Tensor: forwarded results with shape [num_query, bs, embed_dims].
-        """
-
-        if value is None:
-            value = query
-
-        if identity is None:
-            identity = query
-        if query_pos is not None:
-            query = query + query_pos
-        if not self.batch_first:
-            # change to (bs, num_query, embed_dims)
-            query = query.permute(1, 0, 2)
-            value = value.permute(1, 0, 2)
-
-        bs, num_query, _ = query.shape
-        bs, num_value, _ = value.shape
-        assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value
-
-        value = self.value_proj(value)
-        if key_padding_mask is not None:
-            value = value.masked_fill(key_padding_mask[..., None], 0.0)
-        value = value.view(bs, num_value, self.num_heads, -1)
-        sampling_offsets = self.sampling_offsets(query).view(
-            bs, num_query, self.num_heads, self.num_levels, self.num_points, 2)
-        attention_weights = self.attention_weights(query).view(
-            bs, num_query, self.num_heads, self.num_levels * self.num_points)
-        attention_weights = attention_weights.softmax(-1)
-
-        attention_weights = attention_weights.view(bs, num_query,
-                                                   self.num_heads,
-                                                   self.num_levels,
-                                                   self.num_points)
-        if reference_points.shape[-1] == 2:
-            offset_normalizer = torch.stack(
-                [spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
-            sampling_locations = reference_points[:, :, None, :, None, :] \
-                + sampling_offsets \
-                / offset_normalizer[None, None, None, :, None, :]
-        elif reference_points.shape[-1] == 4:
-            sampling_locations = reference_points[:, :, None, :, None, :2] \
-                + sampling_offsets / self.num_points \
-                * reference_points[:, :, None, :, None, 2:] \
-                * 0.5
-        else:
-            raise ValueError(
-                f'Last dim of reference_points must be'
-                f' 2 or 4, but get {reference_points.shape[-1]} instead.')
-        if torch.cuda.is_available() and value.is_cuda:
-            output = MultiScaleDeformableAttnFunction.apply(
-                value, spatial_shapes, level_start_index, sampling_locations,
-                attention_weights, self.im2col_step)
-        else:
-            output = multi_scale_deformable_attn_pytorch(
-                value, spatial_shapes, sampling_locations, attention_weights)
-
-        output = self.output_proj(output)
-
-        if not self.batch_first:
-            # (num_query, bs, embed_dims)
-            output = output.permute(1, 0, 2)
-
-        return self.dropout(output) + identity
diff --git a/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md b/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
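To make the description above concrete, here is a minimal, illustrative sketch of generating audio with the open-source audiocraft package; the model identifier, the `set_generation_params` arguments, and the `audio_write` helper reflect the public audiocraft releases as best we recall, and may differ between versions:

```python
# Illustrative sketch only: assumes the public audiocraft package;
# model names and helper signatures may differ across versions.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the small text-to-music variant (300M parameters).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per sample

# Generate one clip per text description.
descriptions = ['lo-fi hip hop beat with warm piano chords']
wav = model.generate(descriptions)  # shape: (batch, channels, samples)

for idx, one_wav in enumerate(wav):
    # Write each clip to disk with loudness normalization.
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate,
                strategy='loudness')
```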
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details:** See [our paper][arxiv].
-
-**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand the current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data. We believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates the end of songs, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/__init__.py b/spaces/Arnx/MusicGenXvAKN/tests/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
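The HT-Demucs mitigation mentioned in the model card above relies on music source separation. A minimal sketch of stripping vocals with the open-source demucs package follows; the `--two-stems` flag and the `htdemucs` model name are assumptions based on the public demucs v4 releases and may differ between versions:

```python
# Illustrative sketch only: assumes the open-source demucs package (v4);
# flag names and the model identifier may differ across versions.
import demucs.separate

# Split a track into "vocals" and "no_vocals" stems with HT-Demucs,
# which is one way to remove vocals from a music dataset.
demucs.separate.main([
    "--two-stems", "vocals",  # separate vocals vs. accompaniment only
    "-n", "htdemucs",         # Hybrid Transformer Demucs model
    "track.mp3",
])
```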
diff --git a/spaces/Artrajz/vits-simple-api/utils/utils.py b/spaces/Artrajz/vits-simple-api/utils/utils.py
deleted file mode 100644
index fcca4711767b4c60f49932e00dcd11bbd9bfddea..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/utils/utils.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import logging
-import os
-from json import loads
-from torch import load, FloatTensor
-from numpy import float32
-import librosa
-
-
-class HParams():
-    def __init__(self, **kwargs):
-        for k, v in kwargs.items():
-            if type(v) == dict:
-                v = HParams(**v)
-            self[k] = v
-
-    def keys(self):
-        return self.__dict__.keys()
-
-    def items(self):
-        return self.__dict__.items()
-
-    def values(self):
-        return self.__dict__.values()
-
-    def __len__(self):
-        return len(self.__dict__)
-
-    def __getitem__(self, key):
-        return getattr(self, key)
-
-    def __setitem__(self, key, value):
-        return setattr(self, key, value)
-
-    def __contains__(self, key):
-        return key in self.__dict__
-
-    def __repr__(self):
-        return self.__dict__.__repr__()
-
-
-def load_checkpoint(checkpoint_path, model):
-    checkpoint_dict = load(checkpoint_path, map_location='cpu')
-    iteration = checkpoint_dict.get('iteration', None)
-    saved_state_dict = checkpoint_dict['model']
-    if hasattr(model, 'module'):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    new_state_dict = {}
-    for k, v in state_dict.items():
-        try:
-            new_state_dict[k] = saved_state_dict[k]
-        except KeyError:
-            logging.info(f"{k} is not in the checkpoint")
-            new_state_dict[k] = v
-    if hasattr(model, 'module'):
-        model.module.load_state_dict(new_state_dict)
-    else:
-        model.load_state_dict(new_state_dict)
-    if iteration:
-        logging.info(f"Loaded checkpoint '{checkpoint_path}' (iteration {iteration})")
-    else:
-        logging.info(f"Loaded checkpoint '{checkpoint_path}'")
-    return
-
-
-def get_hparams_from_file(config_path):
-    with open(config_path, 'r', encoding='utf-8') as f:
-        data = f.read()
-    config = loads(data)
-
-    hparams = HParams(**config)
-    return hparams
-
-
-def load_audio_to_torch(full_path, target_sampling_rate):
-    audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True)
-    return FloatTensor(audio.astype(float32))
-
-
-def clean_folder(folder_path):
-    for filename in os.listdir(folder_path):
-        file_path = os.path.join(folder_path, filename)
-        # If it is a file, delete it
-        if os.path.isfile(file_path):
-            os.remove(file_path)
-
-
-# is none -> True, is not none -> False
-def check_is_none(s):
-    return s is None or (isinstance(s, str) and str(s).isspace()) or str(s) == ""
-
-def save_audio(audio, path):
-    with open(path, "wb") as f:
-        f.write(audio)
diff --git a/spaces/ArturStepanenko/digitsSpace/README.md b/spaces/ArturStepanenko/digitsSpace/README.md
deleted file mode 100644
index 5b412acfd57dbf88e2592a5618e037031d155218..0000000000000000000000000000000000000000
--- a/spaces/ArturStepanenko/digitsSpace/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: digitsSpace
-emoji: 🐨
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/AsakuraMizu/moe-tts/README.md b/spaces/AsakuraMizu/moe-tts/README.md
deleted file mode 100644
index f32eafaa03c3ac48c1d4989d5429fb6febded0aa..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Moe TTS
-emoji: 😊🎙️
-colorFrom: red
-colorTo: pink -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: skytnt/moe-tts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py deleted file mode 100644 index 8d1d499376744954308bdf96f80e5b5a39a24195..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/git.py +++ /dev/null @@ -1,526 +0,0 @@ -import logging -import os.path -import pathlib -import re -import urllib.parse -import urllib.request -from typing import List, Optional, Tuple - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import HiddenText, display_path, hide_url -from pip._internal.utils.subprocess import make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RemoteNotValidError, - RevOptions, - VersionControl, - find_path_to_project_root_from_repo_root, - vcs, -) - -urlsplit = urllib.parse.urlsplit -urlunsplit = urllib.parse.urlunsplit - - -logger = logging.getLogger(__name__) - - -GIT_VERSION_REGEX = re.compile( - r"^git version " # Prefix. - r"(\d+)" # Major. - r"\.(\d+)" # Dot, minor. - r"(?:\.(\d+))?" # Optional dot, patch. - r".*$" # Suffix, including any pre- and post-release segments we don't care about. -) - -HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$") - -# SCP (Secure copy protocol) shorthand. e.g. 'git@example.com:foo/bar.git' -SCP_REGEX = re.compile( - r"""^ - # Optional user, e.g. 'git@' - (\w+@)? - # Server, e.g. 'github.com'. - ([^/:]+): - # The server-side path. e.g. 'user/project.git'. Must start with an - # alphanumeric character so as not to be confusable with a Windows paths - # like 'C:/foo/bar' or 'C:\foo\bar'. 
- (\w[^:]*) - $""", - re.VERBOSE, -) - - -def looks_like_hash(sha: str) -> bool: - return bool(HASH_REGEX.match(sha)) - - -class Git(VersionControl): - name = "git" - dirname = ".git" - repo_name = "clone" - schemes = ( - "git+http", - "git+https", - "git+ssh", - "git+git", - "git+file", - ) - # Prevent the user's environment variables from interfering with pip: - # https://github.com/pypa/pip/issues/1130 - unset_environ = ("GIT_DIR", "GIT_WORK_TREE") - default_arg_rev = "HEAD" - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return [rev] - - def is_immutable_rev_checkout(self, url: str, dest: str) -> bool: - _, rev_options = self.get_url_rev_options(hide_url(url)) - if not rev_options.rev: - return False - if not self.is_commit_id_equal(dest, rev_options.rev): - # the current commit is different from rev, - # which means rev was something else than a commit hash - return False - # return False in the rare case rev is both a commit hash - # and a tag or a branch; we don't want to cache in that case - # because that branch/tag could point to something else in the future - is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0]) - return not is_tag_or_branch - - def get_git_version(self) -> Tuple[int, ...]: - version = self.run_command( - ["version"], - command_desc="git version", - show_stdout=False, - stdout_only=True, - ) - match = GIT_VERSION_REGEX.match(version) - if not match: - logger.warning("Can't parse git version: %s", version) - return () - return tuple(int(c) for c in match.groups()) - - @classmethod - def get_current_branch(cls, location: str) -> Optional[str]: - """ - Return the current branch, or None if HEAD isn't at a branch - (e.g. detached HEAD). - """ - # git-symbolic-ref exits with empty stdout if "HEAD" is a detached - # HEAD rather than a symbolic ref. In addition, the -q causes the - # command to exit with status code 1 instead of 128 in this case - # and to suppress the message to stderr. - args = ["symbolic-ref", "-q", "HEAD"] - output = cls.run_command( - args, - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - ref = output.strip() - - if ref.startswith("refs/heads/"): - return ref[len("refs/heads/") :] - - return None - - @classmethod - def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]: - """ - Return (sha_or_none, is_branch), where sha_or_none is a commit hash - if the revision names a remote branch or tag, otherwise None. - - Args: - dest: the repository directory. - rev: the revision name. - """ - # Pass rev to pre-filter the list. - output = cls.run_command( - ["show-ref", rev], - cwd=dest, - show_stdout=False, - stdout_only=True, - on_returncode="ignore", - ) - refs = {} - # NOTE: We do not use splitlines here since that would split on other - # unicode separators, which can be maliciously used to install a - # different revision. - for line in output.strip().split("\n"): - line = line.rstrip("\r") - if not line: - continue - try: - ref_sha, ref_name = line.split(" ", maxsplit=2) - except ValueError: - # Include the offending line to simplify troubleshooting if - # this error ever occurs. 
- raise ValueError(f"unexpected show-ref line: {line!r}") - - refs[ref_name] = ref_sha - - branch_ref = f"refs/remotes/origin/{rev}" - tag_ref = f"refs/tags/{rev}" - - sha = refs.get(branch_ref) - if sha is not None: - return (sha, True) - - sha = refs.get(tag_ref) - - return (sha, False) - - @classmethod - def _should_fetch(cls, dest: str, rev: str) -> bool: - """ - Return true if rev is a ref or is a commit that we don't have locally. - - Branches and tags are not considered in this method because they are - assumed to be always available locally (which is a normal outcome of - ``git clone`` and ``git fetch --tags``). - """ - if rev.startswith("refs/"): - # Always fetch remote refs. - return True - - if not looks_like_hash(rev): - # Git fetch would fail with abbreviated commits. - return False - - if cls.has_commit(dest, rev): - # Don't fetch if we have the commit locally. - return False - - return True - - @classmethod - def resolve_revision( - cls, dest: str, url: HiddenText, rev_options: RevOptions - ) -> RevOptions: - """ - Resolve a revision to a new RevOptions object with the SHA1 of the - branch, tag, or ref if found. - - Args: - rev_options: a RevOptions object. - """ - rev = rev_options.arg_rev - # The arg_rev property's implementation for Git ensures that the - # rev return value is always non-None. - assert rev is not None - - sha, is_branch = cls.get_revision_sha(dest, rev) - - if sha is not None: - rev_options = rev_options.make_new(sha) - rev_options.branch_name = rev if is_branch else None - - return rev_options - - # Do not show a warning for the common case of something that has - # the form of a Git commit hash. - if not looks_like_hash(rev): - logger.warning( - "Did not find branch or tag '%s', assuming revision or ref.", - rev, - ) - - if not cls._should_fetch(dest, rev): - return rev_options - - # fetch the requested revision - cls.run_command( - make_command("fetch", "-q", url, rev_options.to_args()), - cwd=dest, - ) - # Change the revision to the SHA of the ref we fetched - sha = cls.get_revision(dest, rev="FETCH_HEAD") - rev_options = rev_options.make_new(sha) - - return rev_options - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """ - Return whether the current commit hash equals the given name. - - Args: - dest: the repository directory. - name: a string name. - """ - if not name: - # Then avoid an unnecessary subprocess call. - return False - - return cls.get_revision(dest) == name - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest)) - if verbosity <= 0: - flags: Tuple[str, ...] = ("--quiet",) - elif verbosity == 1: - flags = () - else: - flags = ("--verbose", "--progress") - if self.get_git_version() >= (2, 17): - # Git added support for partial clone in 2.17 - # https://git-scm.com/docs/partial-clone - # Speeds up cloning by functioning without a complete copy of repository - self.run_command( - make_command( - "clone", - "--filter=blob:none", - *flags, - url, - dest, - ) - ) - else: - self.run_command(make_command("clone", *flags, url, dest)) - - if rev_options.rev: - # Then a specific revision was requested. 
- rev_options = self.resolve_revision(dest, url, rev_options) - branch_name = getattr(rev_options, "branch_name", None) - logger.debug("Rev options %s, branch_name %s", rev_options, branch_name) - if branch_name is None: - # Only do a checkout if the current commit id doesn't match - # the requested revision. - if not self.is_commit_id_equal(dest, rev_options.rev): - cmd_args = make_command( - "checkout", - "-q", - rev_options.to_args(), - ) - self.run_command(cmd_args, cwd=dest) - elif self.get_current_branch(dest) != branch_name: - # Then a specific branch was requested, and that branch - # is not yet checked out. - track_branch = f"origin/{branch_name}" - cmd_args = [ - "checkout", - "-b", - branch_name, - "--track", - track_branch, - ] - self.run_command(cmd_args, cwd=dest) - else: - sha = self.get_revision(dest) - rev_options = rev_options.make_new(sha) - - logger.info("Resolved %s to commit %s", url, rev_options.rev) - - #: repo may contain submodules - self.update_submodules(dest) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command( - make_command("config", "remote.origin.url", url), - cwd=dest, - ) - cmd_args = make_command("checkout", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - self.update_submodules(dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - # First fetch changes from the default remote - if self.get_git_version() >= (1, 9): - # fetch tags in addition to everything else - self.run_command(["fetch", "-q", "--tags"], cwd=dest) - else: - self.run_command(["fetch", "-q"], cwd=dest) - # Then reset to wanted revision (maybe even origin/master) - rev_options = self.resolve_revision(dest, url, rev_options) - cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - #: update submodules - self.update_submodules(dest) - - @classmethod - def get_remote_url(cls, location: str) -> str: - """ - Return URL of the first remote encountered. - - Raises RemoteNotFoundError if the repository does not have a remote - url configured. - """ - # We need to pass 1 for extra_ok_returncodes since the command - # exits with return code 1 if there are no matching lines. - stdout = cls.run_command( - ["config", "--get-regexp", r"remote\..*\.url"], - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - remotes = stdout.splitlines() - try: - found_remote = remotes[0] - except IndexError: - raise RemoteNotFoundError - - for remote in remotes: - if remote.startswith("remote.origin.url "): - found_remote = remote - break - url = found_remote.split(" ")[1] - return cls._git_remote_to_pip_url(url.strip()) - - @staticmethod - def _git_remote_to_pip_url(url: str) -> str: - """ - Convert a remote url from what git uses to what pip accepts. - - There are 3 legal forms **url** may take: - - 1. A fully qualified url: ssh://git@example.com/foo/bar.git - 2. A local project.git folder: /path/to/bare/repository.git - 3. SCP shorthand for form 1: git@example.com:foo/bar.git - - Form 1 is output as-is. Form 2 must be converted to URI and form 3 must - be converted to form 1. - - See the corresponding test test_git_remote_url_to_pip() for examples of - sample inputs/outputs. - """ - if re.match(r"\w+://", url): - # This is already valid. Pass it though as-is. - return url - if os.path.exists(url): - # A local bare remote (git clone --mirror). - # Needs a file:// prefix. 
- return pathlib.PurePath(url).as_uri() - scp_match = SCP_REGEX.match(url) - if scp_match: - # Add an ssh:// prefix and replace the ':' with a '/'. - return scp_match.expand(r"ssh://\1\2/\3") - # Otherwise, bail out. - raise RemoteNotValidError(url) - - @classmethod - def has_commit(cls, location: str, rev: str) -> bool: - """ - Check if rev is a commit that is available in the local repository. - """ - try: - cls.run_command( - ["rev-parse", "-q", "--verify", "sha^" + rev], - cwd=location, - log_failed_cmd=False, - ) - except InstallationError: - return False - else: - return True - - @classmethod - def get_revision(cls, location: str, rev: Optional[str] = None) -> str: - if rev is None: - rev = "HEAD" - current_rev = cls.run_command( - ["rev-parse", rev], - show_stdout=False, - stdout_only=True, - cwd=location, - ) - return current_rev.strip() - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. - """ - # find the repo root - git_dir = cls.run_command( - ["rev-parse", "--git-dir"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - if not os.path.isabs(git_dir): - git_dir = os.path.join(location, git_dir) - repo_root = os.path.abspath(os.path.join(git_dir, "..")) - return find_path_to_project_root_from_repo_root(location, repo_root) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - """ - Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'. - That's required because although they use SSH they sometimes don't - work with a ssh:// scheme (e.g. GitHub). But we need a scheme for - parsing. Hence we remove it again afterwards and return it as a stub. 
- """ - # Works around an apparent Git bug - # (see https://article.gmane.org/gmane.comp.version-control.git/146500) - scheme, netloc, path, query, fragment = urlsplit(url) - if scheme.endswith("file"): - initial_slashes = path[: -len(path.lstrip("/"))] - newpath = initial_slashes + urllib.request.url2pathname(path).replace( - "\\", "/" - ).lstrip("/") - after_plus = scheme.find("+") + 1 - url = scheme[:after_plus] + urlunsplit( - (scheme[after_plus:], netloc, newpath, query, fragment), - ) - - if "://" not in url: - assert "file:" not in url - url = url.replace("git+", "git+ssh://") - url, rev, user_pass = super().get_url_rev_and_auth(url) - url = url.replace("ssh://", "") - else: - url, rev, user_pass = super().get_url_rev_and_auth(url) - - return url, rev, user_pass - - @classmethod - def update_submodules(cls, location: str) -> None: - if not os.path.exists(os.path.join(location, ".gitmodules")): - return - cls.run_command( - ["submodule", "update", "--init", "--recursive", "-q"], - cwd=location, - ) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - loc = super().get_repository_root(location) - if loc: - return loc - try: - r = cls.run_command( - ["rev-parse", "--show-toplevel"], - cwd=location, - show_stdout=False, - stdout_only=True, - on_returncode="raise", - log_failed_cmd=False, - ) - except BadCommand: - logger.debug( - "could not determine if %s is under git control " - "because git is not available", - location, - ) - return None - except InstallationError: - return None - return os.path.normpath(r.rstrip("\r\n")) - - @staticmethod - def should_add_vcs_url_prefix(repo_url: str) -> bool: - """In either https or ssh form, requirements must be prefixed with git+.""" - return True - - -vcs.register(Git) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py deleted file mode 100644 index 1989dfcd0855d833a75e403f6a5e88725d78022f..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import itertools -import logging -from collections import defaultdict -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union -import torch -from fvcore.common.param_scheduler import CosineParamScheduler, MultiStepParamScheduler - -from detectron2.config import CfgNode - -from .lr_scheduler import LRMultiplier, WarmupParamScheduler - -_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]] -_GradientClipper = Callable[[_GradientClipperInput], None] - - -class GradientClipType(Enum): - VALUE = "value" - NORM = "norm" - - -def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper: - """ - Creates gradient clipping closure to clip by value or by norm, - according to the provided config. 
- """ - cfg = copy.deepcopy(cfg) - - def clip_grad_norm(p: _GradientClipperInput): - torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE) - - def clip_grad_value(p: _GradientClipperInput): - torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE) - - _GRADIENT_CLIP_TYPE_TO_CLIPPER = { - GradientClipType.VALUE: clip_grad_value, - GradientClipType.NORM: clip_grad_norm, - } - return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)] - - -def _generate_optimizer_class_with_gradient_clipping( - optimizer: Type[torch.optim.Optimizer], - *, - per_param_clipper: Optional[_GradientClipper] = None, - global_clipper: Optional[_GradientClipper] = None, -) -> Type[torch.optim.Optimizer]: - """ - Dynamically creates a new type that inherits the type of a given instance - and overrides the `step` method to add gradient clipping - """ - assert ( - per_param_clipper is None or global_clipper is None - ), "Not allowed to use both per-parameter clipping and global clipping" - - def optimizer_wgc_step(self, closure=None): - if per_param_clipper is not None: - for group in self.param_groups: - for p in group["params"]: - per_param_clipper(p) - else: - # global clipper for future use with detr - # (https://github.com/facebookresearch/detr/pull/287) - all_params = itertools.chain(*[g["params"] for g in self.param_groups]) - global_clipper(all_params) - super(type(self), self).step(closure) - - OptimizerWithGradientClip = type( - optimizer.__name__ + "WithGradientClip", - (optimizer,), - {"step": optimizer_wgc_step}, - ) - return OptimizerWithGradientClip - - -def maybe_add_gradient_clipping( - cfg: CfgNode, optimizer: Type[torch.optim.Optimizer] -) -> Type[torch.optim.Optimizer]: - """ - If gradient clipping is enabled through config options, wraps the existing - optimizer type to become a new dynamically created class OptimizerWithGradientClip - that inherits the given optimizer and overrides the `step` method to - include gradient clipping. - - Args: - cfg: CfgNode, configuration options - optimizer: type. A subclass of torch.optim.Optimizer - - Return: - type: either the input `optimizer` (if gradient clipping is disabled), or - a subclass of it with gradient clipping included in the `step` method. - """ - if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED: - return optimizer - if isinstance(optimizer, torch.optim.Optimizer): - optimizer_type = type(optimizer) - else: - assert issubclass(optimizer, torch.optim.Optimizer), optimizer - optimizer_type = optimizer - - grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS) - OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping( - optimizer_type, per_param_clipper=grad_clipper - ) - if isinstance(optimizer, torch.optim.Optimizer): - optimizer.__class__ = OptimizerWithGradientClip # a bit hacky, not recommended - return optimizer - else: - return OptimizerWithGradientClip - - -def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - """ - Build an optimizer from config. 
- """ - params = get_default_optimizer_params( - model, - base_lr=cfg.SOLVER.BASE_LR, - weight_decay_norm=cfg.SOLVER.WEIGHT_DECAY_NORM, - bias_lr_factor=cfg.SOLVER.BIAS_LR_FACTOR, - weight_decay_bias=cfg.SOLVER.WEIGHT_DECAY_BIAS, - ) - return maybe_add_gradient_clipping(cfg, torch.optim.SGD)( - params, - lr=cfg.SOLVER.BASE_LR, - momentum=cfg.SOLVER.MOMENTUM, - nesterov=cfg.SOLVER.NESTEROV, - weight_decay=cfg.SOLVER.WEIGHT_DECAY, - ) - - -def get_default_optimizer_params( - model: torch.nn.Module, - base_lr: Optional[float] = None, - weight_decay: Optional[float] = None, - weight_decay_norm: Optional[float] = None, - bias_lr_factor: Optional[float] = 1.0, - weight_decay_bias: Optional[float] = None, - overrides: Optional[Dict[str, Dict[str, float]]] = None, -) -> List[Dict[str, Any]]: - """ - Get default param list for optimizer, with support for a few types of - overrides. If no overrides needed, this is equivalent to `model.parameters()`. - - Args: - base_lr: lr for every group by default. Can be omitted to use the one in optimizer. - weight_decay: weight decay for every group by default. Can be omitted to use the one - in optimizer. - weight_decay_norm: override weight decay for params in normalization layers - bias_lr_factor: multiplier of lr for bias parameters. - weight_decay_bias: override weight decay for bias parameters - overrides: if not `None`, provides values for optimizer hyperparameters - (LR, weight decay) for module parameters with a given name; e.g. - ``{"embedding": {"lr": 0.01, "weight_decay": 0.1}}`` will set the LR and - weight decay values for all module parameters named `embedding`. - - For common detection models, ``weight_decay_norm`` is the only option - needed to be set. ``bias_lr_factor,weight_decay_bias`` are legacy settings - from Detectron1 that are not found useful. - - Example: - :: - torch.optim.SGD(get_default_optimizer_params(model, weight_decay_norm=0), - lr=0.01, weight_decay=1e-4, momentum=0.9) - """ - if overrides is None: - overrides = {} - defaults = {} - if base_lr is not None: - defaults["lr"] = base_lr - if weight_decay is not None: - defaults["weight_decay"] = weight_decay - bias_overrides = {} - if bias_lr_factor is not None and bias_lr_factor != 1.0: - # NOTE: unlike Detectron v1, we now by default make bias hyperparameters - # exactly the same as regular weights. 
- if base_lr is None: - raise ValueError("bias_lr_factor requires base_lr") - bias_overrides["lr"] = base_lr * bias_lr_factor - if weight_decay_bias is not None: - bias_overrides["weight_decay"] = weight_decay_bias - if len(bias_overrides): - if "bias" in overrides: - raise ValueError("Conflicting overrides for 'bias'") - overrides["bias"] = bias_overrides - - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - for module in model.modules(): - for module_param_name, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - - hyperparams = copy.copy(defaults) - if isinstance(module, norm_module_types) and weight_decay_norm is not None: - hyperparams["weight_decay"] = weight_decay_norm - hyperparams.update(overrides.get(module_param_name, {})) - params.append({"params": [value], **hyperparams}) - return reduce_param_groups(params) - - -def _expand_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]: - # Transform parameter groups into per-parameter structure. - # Later items in `params` can overwrite parameters set in previous items. - ret = defaultdict(dict) - for item in params: - assert "params" in item - cur_params = {x: y for x, y in item.items() if x != "params"} - for param in item["params"]: - ret[param].update({"params": [param], **cur_params}) - return list(ret.values()) - - -def reduce_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]: - # Reorganize the parameter groups and merge duplicated groups. - # The number of parameter groups needs to be as small as possible in order - # to efficiently use the PyTorch multi-tensor optimizer. Therefore instead - # of using a parameter_group per single parameter, we reorganize the - # parameter groups and merge duplicated groups. This approach speeds - # up multi-tensor optimizer significantly. - params = _expand_param_groups(params) - groups = defaultdict(list) # re-group all parameter groups by their hyperparams - for item in params: - cur_params = tuple((x, y) for x, y in item.items() if x != "params") - groups[cur_params].extend(item["params"]) - ret = [] - for param_keys, param_values in groups.items(): - cur = {kv[0]: kv[1] for kv in param_keys} - cur["params"] = param_values - ret.append(cur) - return ret - - -def build_lr_scheduler( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.lr_scheduler._LRScheduler: - """ - Build a LR scheduler from config. - """ - name = cfg.SOLVER.LR_SCHEDULER_NAME - - if name == "WarmupMultiStepLR": - steps = [x for x in cfg.SOLVER.STEPS if x <= cfg.SOLVER.MAX_ITER] - if len(steps) != len(cfg.SOLVER.STEPS): - logger = logging.getLogger(__name__) - logger.warning( - "SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. " - "These values will be ignored." 
-            )
-        sched = MultiStepParamScheduler(
-            values=[cfg.SOLVER.GAMMA ** k for k in range(len(steps) + 1)],
-            milestones=steps,
-            num_updates=cfg.SOLVER.MAX_ITER,
-        )
-    elif name == "WarmupCosineLR":
-        sched = CosineParamScheduler(1, 0)
-    else:
-        raise ValueError("Unknown LR scheduler: {}".format(name))
-
-    sched = WarmupParamScheduler(
-        sched,
-        cfg.SOLVER.WARMUP_FACTOR,
-        min(cfg.SOLVER.WARMUP_ITERS / cfg.SOLVER.MAX_ITER, 1.0),
-        cfg.SOLVER.WARMUP_METHOD,
-    )
-    return LRMultiplier(optimizer, multiplier=sched, max_iter=cfg.SOLVER.MAX_ITER)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py
deleted file mode 100644
index c1b03e4d1a703a417a83c2805be1ca15a4e458ed..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_events.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import os
-import tempfile
-import unittest
-
-from detectron2.utils.events import CommonMetricPrinter, EventStorage, JSONWriter
-
-
-class TestEventWriter(unittest.TestCase):
-    def testScalar(self):
-        with tempfile.TemporaryDirectory(
-            prefix="detectron2_tests"
-        ) as dir, EventStorage() as storage:
-            json_file = os.path.join(dir, "test.json")
-            writer = JSONWriter(json_file)
-            for k in range(60):
-                storage.put_scalar("key", k, smoothing_hint=False)
-                if (k + 1) % 20 == 0:
-                    writer.write()
-                storage.step()
-            writer.close()
-            with open(json_file) as f:
-                data = [json.loads(l) for l in f]
-
-            self.assertTrue([int(k["key"]) for k in data] == [19, 39, 59])
-
-    def testScalarMismatchedPeriod(self):
-        with tempfile.TemporaryDirectory(
-            prefix="detectron2_tests"
-        ) as dir, EventStorage() as storage:
-            json_file = os.path.join(dir, "test.json")
-
-            writer = JSONWriter(json_file)
-            for k in range(60):
-                if k % 17 == 0:  # write in a different period
-                    storage.put_scalar("key2", k, smoothing_hint=False)
-                storage.put_scalar("key", k, smoothing_hint=False)
-                if (k + 1) % 20 == 0:
-                    writer.write()
-                storage.step()
-            writer.close()
-            with open(json_file) as f:
-                data = [json.loads(l) for l in f]
-
-            self.assertTrue([int(k.get("key2", 0)) for k in data] == [17, 0, 34, 0, 51, 0])
-            self.assertTrue([int(k.get("key", 0)) for k in data] == [0, 19, 0, 39, 0, 59])
-            self.assertTrue([int(k["iteration"]) for k in data] == [17, 19, 34, 39, 51, 59])
-
-    def testPrintETA(self):
-        with EventStorage() as s:
-            p1 = CommonMetricPrinter(10)
-            p2 = CommonMetricPrinter()
-
-            s.put_scalar("time", 1.0)
-            s.step()
-            s.put_scalar("time", 1.0)
-            s.step()
-
-            with self.assertLogs("detectron2.utils.events") as logs:
-                p1.write()
-            self.assertIn("eta", logs.output[0])
-
-            with self.assertLogs("detectron2.utils.events") as logs:
-                p2.write()
-            self.assertNotIn("eta", logs.output[0])
diff --git a/spaces/AzulaFire/SparkDebate/README.md b/spaces/AzulaFire/SparkDebate/README.md
deleted file mode 100644
index 45bb5a1ae0f781cdbefe4a3ec9b74c4b12e84db8..0000000000000000000000000000000000000000
--- a/spaces/AzulaFire/SparkDebate/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SparkDebate
-emoji: 🌖
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
---- 
-# Knowledge-base Q&A is temporarily unavailable because the deployment environment has no GPU (🥺
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Bart92/RVC_HF/demucs/parser.py
b/spaces/Bart92/RVC_HF/demucs/parser.py deleted file mode 100644 index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/parser.py +++ /dev/null @@ -1,244 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -from pathlib import Path - - -def get_parser(): - parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.") - default_raw = None - default_musdb = None - if 'DEMUCS_RAW' in os.environ: - default_raw = Path(os.environ['DEMUCS_RAW']) - if 'DEMUCS_MUSDB' in os.environ: - default_musdb = Path(os.environ['DEMUCS_MUSDB']) - parser.add_argument( - "--raw", - type=Path, - default=default_raw, - help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.") - parser.add_argument("--no_raw", action="store_const", const=None, dest="raw") - parser.add_argument("-m", - "--musdb", - type=Path, - default=default_musdb, - help="Path to musdb root") - parser.add_argument("--is_wav", action="store_true", - help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).") - parser.add_argument("--metadata", type=Path, default=Path("metadata/"), - help="Folder where metadata information is stored.") - parser.add_argument("--wav", type=Path, - help="Path to a wav dataset. This should contain a 'train' and a 'valid' " - "subfolder.") - parser.add_argument("--samplerate", type=int, default=44100) - parser.add_argument("--audio_channels", type=int, default=2) - parser.add_argument("--samples", - default=44100 * 10, - type=int, - help="number of samples to feed in") - parser.add_argument("--data_stride", - default=44100, - type=int, - help="Stride for chunks, shorter = longer epochs") - parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers") - parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers") - parser.add_argument("-d", - "--device", - help="Device to train on, default is cuda if available else cpu") - parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.") - parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file") - parser.add_argument("--test", help="Just run the test pipeline + one validation. " - "This should be a filename relative to the models/ folder.") - parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, " - "on a pretrained model. 
") - - parser.add_argument("--rank", default=0, type=int) - parser.add_argument("--world_size", default=1, type=int) - parser.add_argument("--master") - - parser.add_argument("--checkpoints", - type=Path, - default=Path("checkpoints"), - help="Folder where to store checkpoints etc") - parser.add_argument("--evals", - type=Path, - default=Path("evals"), - help="Folder where to store evals and waveforms") - parser.add_argument("--save", - action="store_true", - help="Save estimated for the test set waveforms") - parser.add_argument("--logs", - type=Path, - default=Path("logs"), - help="Folder where to store logs") - parser.add_argument("--models", - type=Path, - default=Path("models"), - help="Folder where to store trained models") - parser.add_argument("-R", - "--restart", - action='store_true', - help='Restart training, ignoring previous run') - - parser.add_argument("--seed", type=int, default=42) - parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs") - parser.add_argument("-r", - "--repeat", - type=int, - default=2, - help="Repeat the train set, longer epochs") - parser.add_argument("-b", "--batch_size", type=int, default=64) - parser.add_argument("--lr", type=float, default=3e-4) - parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1") - parser.add_argument("--init", help="Initialize from a pre-trained model.") - - # Augmentation options - parser.add_argument("--no_augment", - action="store_false", - dest="augment", - default=True, - help="No basic data augmentation.") - parser.add_argument("--repitch", type=float, default=0.2, - help="Probability to do tempo/pitch change") - parser.add_argument("--max_tempo", type=float, default=12, - help="Maximum relative tempo change in %% when using repitch.") - - parser.add_argument("--remix_group_size", - type=int, - default=4, - help="Shuffle sources using group of this size. 
Useful to somewhat " - "replicate multi-gpu training " - "on less GPUs.") - parser.add_argument("--shifts", - type=int, - default=10, - help="Number of random shifts used for the shift trick.") - parser.add_argument("--overlap", - type=float, - default=0.25, - help="Overlap when --split_valid is passed.") - - # See model.py for doc - parser.add_argument("--growth", - type=float, - default=2., - help="Number of channels between two layers will increase by this factor") - parser.add_argument("--depth", - type=int, - default=6, - help="Number of layers for the encoder and decoder") - parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM") - parser.add_argument("--channels", - type=int, - default=64, - help="Number of channels for the first encoder layer") - parser.add_argument("--kernel_size", - type=int, - default=8, - help="Kernel size for the (transposed) convolutions") - parser.add_argument("--conv_stride", - type=int, - default=4, - help="Stride for the (transposed) convolutions") - parser.add_argument("--context", - type=int, - default=3, - help="Context size for the decoder convolutions " - "before the transposed convolutions") - parser.add_argument("--rescale", - type=float, - default=0.1, - help="Initial weight rescale reference") - parser.add_argument("--no_resample", action="store_false", - default=True, dest="resample", - help="No Resampling of the input/output x2") - parser.add_argument("--no_glu", - action="store_false", - default=True, - dest="glu", - help="Replace all GLUs by ReLUs") - parser.add_argument("--no_rewrite", - action="store_false", - default=True, - dest="rewrite", - help="No 1x1 rewrite convolutions") - parser.add_argument("--normalize", action="store_true") - parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True) - - # Tasnet options - parser.add_argument("--tasnet", action="store_true") - parser.add_argument("--split_valid", - action="store_true", - help="Predict chunks by chunks for valid and test. Required for tasnet") - parser.add_argument("--X", type=int, default=8) - - # Other options - parser.add_argument("--show", - action="store_true", - help="Show model architecture, size and exit") - parser.add_argument("--save_model", action="store_true", - help="Skip traning, just save final model " - "for the current checkpoint value.") - parser.add_argument("--save_state", - help="Skip training, just save state " - "for the current checkpoint value. You should " - "provide a model name as argument.") - - # Quantization options - parser.add_argument("--q-min-size", type=float, default=1, - help="Only quantize layers over this size (in MB)") - parser.add_argument( - "--qat", type=int, help="If provided, use QAT training with that many bits.") - - parser.add_argument("--diffq", type=float, default=0) - parser.add_argument( - "--ms-target", type=float, default=162, - help="Model size target in MB, when using DiffQ. Best model will be kept " - "only if it is smaller than this target.") - - return parser - - -def get_name(parser, args): - """ - Return the name of an experiment given the args. Some parameters are ignored, - for instance --workers, as they do not impact the final result. 
- """ - ignore_args = set([ - "checkpoints", - "deterministic", - "eval", - "evals", - "eval_cpu", - "eval_workers", - "logs", - "master", - "rank", - "restart", - "save", - "save_model", - "save_state", - "show", - "workers", - "world_size", - ]) - parts = [] - name_args = dict(args.__dict__) - for name, value in name_args.items(): - if name in ignore_args: - continue - if value != parser.get_default(name): - if isinstance(value, Path): - parts.append(f"{name}={value.name}") - else: - parts.append(f"{name}={value}") - if parts: - name = " ".join(parts) - else: - name = "default" - return name diff --git a/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md b/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md deleted file mode 100644 index b2d5052820b0a79c844948232766446fa43d2496..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Hack Mod Apk An1.md +++ /dev/null @@ -1,87 +0,0 @@ - -

    Truckers of Europe 3 Hack Mod APK: How to Download and Install It

    -

    If you are a fan of realistic truck-driving games, you may have heard of Truckers of Europe 3, a popular simulator game that lets you experience the life of a trucker in Europe. But what if you want to enjoy the game without any limitations or restrictions? That is where a hack mod APK comes in handy. In this article, we will tell you everything you need to know about the Truckers of Europe 3 hack mod APK, including what it is, how to download and install it, and how to use it. Let's get started!

    -

    camioneros de europa 3 hack mod apk an1


    Download File ❤❤❤ https://bltlly.com/2v6JgL



    -

    What Is Truckers of Europe 3?

    -

    Truckers of Europe 3 is a realistic truck-driver simulator game developed by Wanda Software. The game features graphics close to desktop quality and lets you drive various trucks across different European countries. You can choose between different types of cargo, such as containers, steel pipes, food, or other goods, and deliver them to their destinations. You can also customize your truck with different parts, colors, and accessories. The game also has realistic physics, weather effects, traffic, and day-night cycles.

    -

    Features of Truckers of Europe 3

    -

    Some of the main features of Truckers of Europe 3 are:

    -
      -
    • Multiple trucks to choose from, each with its own characteristics and performance.
    • -
    • Different types of cargo to haul, each with its own weight and size.
    • -
    • Various routes and locations to explore, from highways to country roads.
    • -
    • Realistic driving physics and controls, including steering wheel, pedals, gearbox, and gauges.
    • -
    • Dynamic weather conditions and day-night cycles that affect gameplay.
    • -
    • Customizable truck appearance and accessories, such as paint, wheels, lights, horns, and more.
    • -
    • In-game radio with various music genres and news stations.
    • -
    • Achievements and leaderboards to compete with other players.
    • - -

      Why You Might Want to Use a Hack Mod APK

      -

      While Truckers of Europe 3 is a fun and immersive game, it also has some drawbacks that might make you want to use a hack mod APK. Some of these drawbacks are:

      -
        -
      • The game requires a lot of storage space and RAM to run smoothly.
      • -
      • The game has ads that can interrupt your gameplay or consume your data.
      • -
      • The game has in-app purchases that can give you an advantage over other players or unlock more features.
      • -
      • The game can be challenging and frustrating at times, especially when you have to deal with traffic, accidents, fines, or damaged cargo.
      • -
      -

      A hack mod APK is a modified version of the original game that can bypass these drawbacks and give you more freedom and enjoyment. A hack mod APK can also provide some extra features that are not available in the original game.

      -

      What Is the Truckers of Europe 3 Hack Mod APK?

      -

      The Truckers of Europe 3 hack mod APK is a modified version of the original game that can give you unlimited money, unlock all trucks and accessories, remove ads, and more. With this hack mod APK, you can play the game without any limitations or restrictions.

      However, before you decide to download and install the Truckers of Europe 3 hack mod APK, you should also be aware of the potential risks and downsides that come with it.

      -

      Benefits of the Truckers of Europe 3 Hack Mod APK

      -

      Some of the benefits of using the Truckers of Europe 3 hack mod APK are:

      -
        -
      • You can get unlimited money that you can use to buy and upgrade any truck or accessory you want.
      • -
      • You can unlock all the trucks and accessories that are locked or require real money to purchase.
      • -
      • You can remove the ads that can be annoying or distracting while you play the game.
      • -
      • You can enjoy the game without any difficulty or frustration, since you can avoid traffic, accidents, fines, or damaged cargo.
      • - -
      -

      Risks of the Truckers of Europe 3 Hack Mod APK

      -

      Some of the risks of using the Truckers of Europe 3 hack mod APK are:

      -

      -
        -
      • It can expose your device to malware or viruses that can damage your data or system.
      • -
      • It can violate the terms and conditions of the original game and get you banned or suspended from playing.
      • -
      • You can lose your progress or data if the hack mod APK is not compatible with the latest version of the game or with your device.
      • -
      • It can ruin the fun and challenge of the game, since you can easily complete all the missions and achievements without any effort.
      • -
      • You can miss out on the updates and new features that the developers regularly add to the original game.
      • -
      -

      How to Download and Install the Truckers of Europe 3 Hack Mod APK

      -

      If you still want to try the Truckers of Europe 3 hack mod APK, you need to follow a few steps to download and install it on your device. Here are the steps:

      -

      Step 1: Find a Reliable Source

      -

      The first step is to find a reliable source that provides the download link for the Truckers of Europe 3 hack mod APK. You can search online for websites or forums that offer this service, but be careful not to click on suspicious or fake links that could harm your device. You can also check the reviews and ratings of other users who have downloaded the hack mod APK from the same source. A good source should provide a safe, working download link, as well as a detailed description and instructions for using the hack mod APK.

Step 2: Enable unknown sources on your device

Because the APK does not come from Google Play, the second step is to allow your device to install apps from unknown sources. On most Android devices this option is under Settings > Security (or Settings > Apps > Install unknown apps on newer versions); enable it for the browser or file manager you will use to open the APK.

Step 3: Download the APK file

The third step is to download the APK file from the source you have chosen. You can use your browser or a download manager app to do this. Make sure you have enough storage space on your device before downloading the file; the file size can vary depending on the source and the version of the hack mod APK. Once the download is complete, you can find the file in your downloads folder or wherever you saved it.
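For illustration, here is a small Python sketch of this step: it checks free storage space first and then streams the file to disk. The URL and size threshold are hypothetical, and the third-party `requests` library is assumed to be installed.

```python
import os
import shutil

import requests  # third-party: pip install requests

# Hypothetical values: use the link and file name from the source you trust.
APK_URL = "https://example.com/truckers-of-europe-3-mod.apk"
DEST = "truckers-of-europe-3-mod.apk"
MIN_FREE_BYTES = 500 * 1024 * 1024  # refuse to download with < 500 MB free

# Check storage space before downloading, as the step above suggests.
free = shutil.disk_usage(".").free
if free < MIN_FREE_BYTES:
    raise SystemExit(f"Only {free // 2**20} MB free; clear some space first.")

# Stream the download so a large APK never has to fit in memory at once.
with requests.get(APK_URL, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    with open(DEST, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

print(f"Saved {DEST} ({os.path.getsize(DEST) // 2**20} MB)")
```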

Step 4: Install the APK file

The final step is to install the APK file on your device. To do this, tap the file and follow the installation prompts. You may see a warning that installing this app could harm your device, but you can ignore it if you trust the source you are using. You may also have to grant some permissions for the app to work properly. Once the installation is complete, you can launch the app and enjoy Truckers of Europe 3 hack mod APK.

How to Use Truckers of Europe 3 Hack Mod APK

Now that you have downloaded and installed Truckers of Europe 3 hack mod APK, you may be wondering how to use it. Here are some tips and tricks for playing Truckers of Europe 3 with the hack mod APK:

Tips and tricks for playing Truckers of Europe 3 with the hack mod APK

• Use the unlimited money feature to buy and upgrade any truck or accessory you want. You can also use it to pay any fines or repair any damage you suffer while driving.
• Use the unlock-all feature to try out different trucks and customize them to your liking. You can also switch between trucks without losing your progress or cargo.
• Use the ad-removal feature to play the game without interruptions or distractions. You also save data and battery by not loading any ads.
• Use the speed hack feature to drive faster than the normal speed limit. You can overtake other vehicles and reach your destination faster, but be careful not to crash or get caught by the police.
• Use the free map navigation feature to find the best route to your destination. You can also check traffic and road conditions and avoid delays or detours.

Conclusion

Truckers of Europe 3 is a realistic truck driver simulator game that lets you experience the life of a trucker in Europe. However, if you want to play the game without limitations or restrictions, you can use a hack mod APK that can give you unlimited money, unlock all trucks and accessories, remove ads, and more. In this article, we have explained what Truckers of Europe 3 hack mod APK is, how to download and install it, and how to use it. We have also provided some tips and tricks for playing Truckers of Europe 3 with the hack mod APK. At the same time, we have warned you about the potential risks and drawbacks of using a hack mod APK, such as malware, bans, data loss, or loss of fun. Therefore, we advise you to use a hack mod APK at your own risk and discretion. We hope you found this article helpful and informative. Happy trucking!

Frequently Asked Questions

Here are some frequently asked questions about Truckers of Europe 3 hack mod APK:

1. Is Truckers of Europe 3 hack mod APK safe to use?

Truckers of Europe 3 hack mod APK is not an official app from the developers of the original game, so its safety is not guaranteed. It may contain malware or viruses that could damage your device or data. It could also violate the terms and conditions of the original game and get you banned or suspended from playing. Therefore, you should use a hack mod APK at your own risk and discretion.

2. How do I update Truckers of Europe 3 hack mod APK?

A hack mod APK does not update through Google Play. When the original game gets a new version, you usually have to wait for an updated hack mod APK to be released and install it from the same source, ideally after backing up your data.
3. Can I play Truckers of Europe 3 hack mod APK online?

Truckers of Europe 3 hack mod APK might not work well with the online mode of the original game. You may run into errors or technical issues when playing online with other players, and you may be detected by the game's anti-cheat system and get banned or suspended from playing. Therefore, we recommend playing Truckers of Europe 3 hack mod APK offline or in single-player mode.

4. Can I use Truckers of Europe 3 hack mod APK on iOS devices?

Truckers of Europe 3 hack mod APK is an Android app that can only be installed on Android devices, so you cannot use it on iOS devices such as iPhones or iPads. You may find alternative ways to use a hack mod APK on iOS devices, such as an emulator or a jailbreak tool, but these methods are not recommended since they could harm your device or data.

5. Can I use Truckers of Europe 3 hack mod APK on PC?

Truckers of Europe 3 hack mod APK is an Android app that can only be installed on Android devices, so you cannot use it on a PC directly. However, there are ways to run a hack mod APK on a PC, such as an emulator or a virtual machine. These methods let you run Android apps on a PC by simulating an Android environment, but they are not guaranteed to work well or smoothly.


      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md b/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md deleted file mode 100644 index 511352b9c63af7857d3b4d3d912d62622471fc94..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Crear Y Construir Apk.md +++ /dev/null @@ -1,133 +0,0 @@ - -

Download Crafting and Building APK - A Free Building Game for Android

Do you like building games? Do you want to unleash your creativity and imagination? Do you want to have fun with your friends and family? If you answered yes to any of these questions, you should try Crafting and Building APK, a new free building game for Android devices. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, why you should play it, how to play it, and what some alternatives are. Let's get started!

What is Crafting and Building APK?

A brief introduction to the game and its features

Crafting and Building APK is a free game for Android devices that lets you build your own constructions, from houses and castles to mines and temples. You can also explore worlds created by other players, interact with villagers and animals, customize your character, and play online with your friends. The game has many features that make it fun and engaging, such as:

How to create and build apk

Download Zip: https://bltlly.com/2v6LAv



• Great graphics: enjoy the best pixel graphics with high fps.
• Many block types: choose from grass, stone, diamond, and more to build your empire.
• Multiplayer games: play online and help your friend build their house, or compete with them.
• Fun gameplay: play with pets, search for hidden caves, and have fun with your friends.
• Free to play: play the game for free, without limitations or ads.

How to download and install the game on your device

Downloading and installing Crafting and Building APK on your device is very easy. Just follow these simple steps:

1. Go to one of the download links (the article lists several mirror links) to download the game's APK file.
2. Tap the file and allow installation from unknown sources if prompted.
3. Wait for the installation process to finish, then launch the game from the app drawer or home screen.
4. Enjoy playing Crafting and Building APK on your device! (If you would rather sideload the APK from a computer, see the sketch below.)
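As referenced in the last step, here is an optional Python sketch for readers who prefer to sideload from a computer. It drives `adb install` through the standard `subprocess` module; it assumes the Android platform tools (adb) are installed, USB debugging is enabled on the device, and the APK file name is a placeholder.

```python
import subprocess

# Placeholder file name: point this at the APK you actually downloaded.
APK_PATH = "crafting_and_building.apk"

def adb(*args: str) -> str:
    """Run an adb command and return its stdout, raising on failure."""
    result = subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    print(adb("devices"))                  # confirm the phone is connected
    print(adb("install", "-r", APK_PATH))  # -r replaces an existing install
```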

Why should you play Crafting and Building APK?

The benefits of playing this game for different age groups

Crafting and Building APK is a game that can be enjoyed by anyone, regardless of age or background. Here are some of the benefits of playing this game for different age groups:

• For kids: Playing this game can help children develop their creativity, imagination, problem-solving skills, spatial awareness, hand-eye coordination, and fine motor skills. It can also encourage their curiosity, interest, and passion for learning new things.

The fun and creative aspects of the game

Crafting and Building APK is a game that can also provide plenty of fun and creative opportunities for players. Here are some of the fun and creative aspects of the game:

• You can build whatever you want, from simple houses and farms to complex cities and monuments. You can also decorate your buildings with furniture, paintings, carpets, and more.
• You can explore worlds created by other players, discover new places, and admire their creations. You can also interact with villagers and animals, trade with them, or fight them.
• You can customize your character with different skins, clothes, accessories, and hairstyles. You can also change the appearance of your pets and vehicles.
• You can play online with your friends and family, chat with them, collaborate with them, or compete with them. You can also join or create your own servers and communities.

Multiplayer mode and social interaction with other players

Crafting and Building APK also offers a rich multiplayer mode and plenty of social interaction with other players:

• You can play online with up to 10 players at the same time, in either cooperative or competitive mode. You can also invite your friends to join your game or join theirs.
• You can chat with other players using text or voice messages. You can also use emojis and stickers to express your emotions and reactions.
• You can share your creations with other players, and rate, comment on, or follow theirs. You can also get feedback and suggestions from other players on how to improve your skills and experience.
• You can join or create your own servers and communities, where you can meet new people, make friends, or find partners. You can also take part in events, contests, challenges, or games organized by other players or by yourself.

How do you play Crafting and Building APK?

Basic gameplay and controls

Crafting and Building APK is a game that is easy to play and control. Here are the basic gameplay mechanics and controls:

• To move, use the joystick on the left side of the screen. To look around, swipe on the right side of the screen.
• To jump, tap the jump button on the right side of the screen. To fly, double-tap the jump button and then use the joystick to control your direction.
• To select a block type, tap the inventory button on the right side of the screen and choose from the available blocks. To place a block, tap on the screen where you want to put it. To break a block, tap and hold on the screen where you want to break it.
• To interact with an object, such as a door, a chest, or an animal, tap on it. To use an item, such as a sword, a bow, or a potion, select it from your inventory and then tap on the screen where you want to use it.

The different modes and options available

Crafting and Building APK offers different modes and options for players to choose from. Here are some of them:

• Creative mode: In this mode, you have unlimited resources and can build whatever you want without restrictions. You can also fly and access all the blocks and items in the game.
• Survival mode: In this mode, you have to gather resources, craft tools and weapons, and survive the dangers of the world. You also have to manage your health, hunger, and thirst levels, and you can fight enemies such as zombies, spiders, and skeletons.
• Adventure mode: In this mode, you can explore worlds created by other players, complete missions, and collect rewards. You can also interact with villagers and animals, trade with them, or fight them.
• Multiplayer mode: In this mode, you can play online with other players, in either cooperative or competitive mode. You can also chat with them, share your creations, or join their servers and communities.
• Options: In this menu, you can change the game's settings, such as sound, graphics, language, and controls. You can also access the help section, where you can find tutorials, tips, and FAQs.

Some tips and tricks to improve your skills and experience

Crafting and Building APK can be challenging and rewarding at the same time. Here are some tips and tricks to improve your skills and experience:

• Use the crafting table to create new items such as tools, weapons, armor, and furniture. You can also use the furnace to smelt ores and cook food.
• Use the map to navigate the world and find your location. You can also use the compass to find your spawn point or your home.
• Use torches to light up your buildings and keep monsters from spawning. You can also use torches to mark your path or your territory.
• Use chests to store your items and keep them safe. You can also use signs to label your chests or your buildings.
• Use ladders to climb up or down walls and towers. You can also use stairs or slabs to create slopes or roofs.
• Use doors to secure your entrances and exits. You can also use trapdoors to create secret passages or hidden rooms.
• Use fences to enclose your farms or gardens. You can also use gates to access them.
• Use buckets to collect water or lava. You can also use buckets to create fountains or pools.
• Use seeds to grow crops or flowers. You can also use bone meal to speed up their growth.

What are some alternatives to Crafting and Building APK?

A comparison of similar games on the market

Crafting and Building APK is not the only building game on the market. There are many other similar games you can try if you are looking for more options. Here are some of them:

| Game | Description | Price |
| --- | --- | --- |
| Minecraft | The most popular building game in the world, where you can create whatever you want with blocks in a sandbox world. You can also play online with millions of players. | $6.99 |
| Roblox | A platform where you can play millions of games created by other users, or create your own games with Roblox Studio. You can also customize your avatar and chat with other players. | Free (with in-app purchases) |
| Terraria | A 2D adventure game where you can explore, build, craft, fight, and survive in a randomly generated world. You can also play online with up to 8 players. | $4.99 |
| … | …with animals and plants. You can also visit other islands and play with other players. | Free (with in-app purchases) |

The pros and cons of each alternative

Each of these games has its own pros and cons that you should consider before choosing one. Here are some of them:

| Game | Pros | Cons |
| --- | --- | --- |
| Minecraft | Huge and loyal fan base; lots of content and updates; lots of mods and plugins; lots of educational and creative potential | Requires a paid account to play online; can be slow or buggy on some devices; can be too complex or overwhelming for some players; can be addictive or harmful for some players |
| Roblox | Lots of variety and diversity; lots of user-generated content; many social features and options; many opportunities to learn and earn | Lots of inappropriate or unsafe content; lots of ads and microtransactions; lots of hackers and scammers; lots of technical issues and glitches |
| Terraria | Lots of depth and detail; lots of exploration and adventure; lots of customization and personalization; lots of challenge and replay value | Steep learning curve; limited world size; limited multiplayer mode; limited graphics quality |
| Sandbox 3D | Simple and intuitive interface; colorful and vibrant graphics; relaxing and casual gameplay; friendly and supportive community | Limited block types and items; limited game modes and options; limited online features and functions; limited development and support |

The best option for your preferences and needs

The best option for your preferences and needs depends on what you are looking for in a building game. Here are some questions you can ask yourself to help you decide:

• Do you want to play online or offline?
• Do you want to play alone or with others?
• Do you want to build or explore?
• Do you want to create or consume?
• Do you want to learn or just have fun?
• Do you want to pay or play for free?
• Do you want more control or more freedom?
• Do you want more realism or more fantasy?
• Do you want more simplicity or more complexity?
• Do you want more quality or more quantity?

Based on your answers, you can compare the different games and choose the one that suits you best. Of course, you can also try them all and see which one you like most. The choice is yours!

Conclusion

A summary of the main points and a call to action

Crafting and Building APK is a game that is easy to play and control, and it offers different modes and options for players to choose from. It is also a game with many fun and creative aspects, such as building whatever you want, exploring the world, customizing your character, and playing online with your friends. If you are looking for more alternatives, you can also try other similar games on the market, such as Minecraft, Roblox, Terraria, Sandbox 3D, and Crafty Lands. Each of these games has its own pros and cons that you should consider before choosing one; the best option depends on what you are looking for in a building game. We hope this article has helped you learn more about Crafting and Building APK and decide whether you want to download and install it on your device. If you do, we hope you have a lot of fun and get creative with this game. Thanks for reading!

Frequently Asked Questions

Here are some frequently asked questions about Crafting and Building APK:

1. Is Crafting and Building APK safe to download and install?

Yes, Crafting and Building APK is safe to download and install. It does not contain any viruses, malware, or spyware. However, you should always download it from a trusted source, such as one of the links given in the article or craftingandbuild@gmail.com. You can also visit its website at this link or its Facebook page at …

= self.data_length
-    eos = property(eos, eos.__doc__)
-
-    def check(self, pattern):
-        """
-        Apply `pattern` on the current position and return
-        the match object. (Doesn't touch pos). Use this for
-        lookahead.
-        """
-        if self.eos:
-            raise EndOfText()
-        if pattern not in self._re_cache:
-            self._re_cache[pattern] = re.compile(pattern, self.flags)
-        return self._re_cache[pattern].match(self.data, self.pos)
-
-    def test(self, pattern):
-        """Apply a pattern on the current position and check
-        if it matches. Doesn't touch pos.
-        """
-        return self.check(pattern) is not None
-
-    def scan(self, pattern):
-        """
-        Scan the text for the given pattern and update pos/match
-        and related fields. The return value is a boolean that
-        indicates if the pattern matched. The matched value is
-        stored on the instance as ``match``, the last value is
-        stored as ``last``. ``start_pos`` is the position of the
-        pointer before the pattern was matched, ``pos`` is the
-        end position.
-        """
-        if self.eos:
-            raise EndOfText()
-        if pattern not in self._re_cache:
-            self._re_cache[pattern] = re.compile(pattern, self.flags)
-        self.last = self.match
-        m = self._re_cache[pattern].match(self.data, self.pos)
-        if m is None:
-            return False
-        self.start_pos = m.start()
-        self.pos = m.end()
-        self.match = m.group()
-        return True
-
-    def get_char(self):
-        """Scan exactly one char."""
-        self.scan('.')
-
-    def __repr__(self):
-        return '<%s %d/%d>' % (
-            self.__class__.__name__,
-            self.pos,
-            self.data_length
-        )
diff --git a/spaces/BigChungux/Pet_Survey/info.md b/spaces/BigChungux/Pet_Survey/info.md
deleted file mode 100644
index 7ec3f3af6b44c914fa8189cff02cd9ab75900f98..0000000000000000000000000000000000000000
--- a/spaces/BigChungux/Pet_Survey/info.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# 😌 [Edit info.md - Your app's title here]
-
-### 🧐 Problem Statement and Research Summary
-[add info about your problem statement and your research here!]
-
-### 🎣 Data Collection Plan
-[Edit info.md - add info about what data you collected and why here!]
-
-### 💥 Ethical Considerations (Data Privacy and Bias)
-* Data privacy: [Edit info.md - add info about you considered users' privacy here!]
-* Bias: [Edit info.md - add info about you considered bias here!]
-
-### 👻 Our Team
-[Edit info.md - add info about your team members here!]
- -![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w) diff --git a/spaces/CALM/Dashboard/README.md b/spaces/CALM/Dashboard/README.md deleted file mode 100644 index e7ac9a99ca9d4052bbb6f923b352955d3c251dd6..0000000000000000000000000000000000000000 --- a/spaces/CALM/Dashboard/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- - -title: Dashboard -emoji: 🌐 -colorFrom: blue -colorTo: red -sdk: streamlit - - - - -app_file: app.py -pinned: true - ---- - - -# Training transformers together dashboard - -[![Generic badge](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/training-transformers-together/training-transformers-together-dashboard) - -A dashboard app for Hugging Face Spaces ---- - -Autogenerated using [this template](https://github.com/nateraw/spaces-template) - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/run_inference_tests.sh b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/run_inference_tests.sh deleted file mode 100644 index 17e422d576e5fe9efcd85790954c569c962657d6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/run_inference_tests.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -BIN="python tools/train_net.py" -OUTPUT="inference_test_output" -NUM_GPUS=2 - -CFG_LIST=( "${@:1}" ) - -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN \ - --eval-only \ - --num-gpus $NUM_GPUS \ - --config-file "$cfg" \ - OUTPUT_DIR $OUTPUT - rm -rf $OUTPUT -done - - -echo "========================================================================" -echo "Running demo.py ..." -echo "========================================================================" -DEMO_BIN="python demo/demo.py" -COCO_DIR=datasets/coco/val2014 -mkdir -pv $OUTPUT - -set -v - -$DEMO_BIN --config-file ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml \ - --input $COCO_DIR/COCO_val2014_0000001933* --output $OUTPUT -rm -rf $OUTPUT diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/universal_categories.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/universal_categories.h deleted file mode 100644 index 2389796b190ae04772882edd2a83c7642cdf8103..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/universal_categories.h +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -// XXX eliminate this file - -namespace thrust -{ - -// define these types without inheritance to avoid ambiguous conversion to base classes - -struct input_universal_iterator_tag -{ - operator input_host_iterator_tag () {return input_host_iterator_tag();} - - operator input_device_iterator_tag () {return input_device_iterator_tag();} -}; - -struct output_universal_iterator_tag -{ - operator output_host_iterator_tag () {return output_host_iterator_tag();} - - operator output_device_iterator_tag () {return output_device_iterator_tag();} -}; - -struct forward_universal_iterator_tag - : input_universal_iterator_tag -{ - operator forward_host_iterator_tag () {return forward_host_iterator_tag();}; - - operator forward_device_iterator_tag () {return forward_device_iterator_tag();}; -}; - -struct bidirectional_universal_iterator_tag - : forward_universal_iterator_tag -{ - operator bidirectional_host_iterator_tag () {return bidirectional_host_iterator_tag();}; - - operator bidirectional_device_iterator_tag () {return bidirectional_device_iterator_tag();}; -}; - - -namespace detail -{ - -// create this struct to control conversion precedence in random_access_universal_iterator_tag -template -struct one_degree_of_separation - : T -{ -}; - -} // end detail - - -struct random_access_universal_iterator_tag -{ - // these conversions are all P0 - operator random_access_host_iterator_tag () {return random_access_host_iterator_tag();}; - - operator random_access_device_iterator_tag () {return random_access_device_iterator_tag();}; - - // bidirectional_universal_iterator_tag is P1 - operator detail::one_degree_of_separation () {return detail::one_degree_of_separation();} - -}; - - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/binary_search.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/binary_search.h deleted file mode 100644 index a2c33f32aad3991ba000f095b20e635a0844f718..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/binary_search.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits the binary search algorithms -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/assign_value.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/assign_value.h deleted file mode 100644 index d38934affd1a0a51fb64e011106b9af83dca8cdb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/assign_value.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the assign_value.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch assign_value - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. -#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_ASSIGN_VALUE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/assign_value.h> -#include __THRUST_HOST_SYSTEM_ASSIGN_VALUE_HEADER -#undef __THRUST_HOST_SYSTEM_ASSIGN_VALUE_HEADER - -#define __THRUST_DEVICE_SYSTEM_ASSIGN_VALUE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/assign_value.h> -#include __THRUST_DEVICE_SYSTEM_ASSIGN_VALUE_HEADER -#undef __THRUST_DEVICE_SYSTEM_ASSIGN_VALUE_HEADER - diff --git a/spaces/CVPR/WALT/mmdet/core/visualization/__init__.py b/spaces/CVPR/WALT/mmdet/core/visualization/__init__.py deleted file mode 100644 index 4ff995c0861490941f8cfc19ebbd41a2ee7e2d65..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) - -__all__ = ['imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib'] diff --git a/spaces/CVPR/drawings-to-human/static/_app/immutable/pages/__layout.svelte-d07d8fed.js b/spaces/CVPR/drawings-to-human/static/_app/immutable/pages/__layout.svelte-d07d8fed.js deleted file mode 100644 index f19b8c122fad862801aa0b45ec63c7785ce137f6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/drawings-to-human/static/_app/immutable/pages/__layout.svelte-d07d8fed.js +++ /dev/null @@ -1 +0,0 @@ -import{S as i,i as n,s as p,F as l,G as w,H as c,I as d,q as h,o as g}from"../chunks/index-bcf2726a.js";function m(s){let r;const a=s[1].default,t=l(a,s,s[0],null);return{c(){t&&t.c()},l(e){t&&t.l(e)},m(e,o){t&&t.m(e,o),r=!0},p(e,[o]){t&&t.p&&(!r||o&1)&&w(t,a,e,e[0],r?d(a,e[0],o,null):c(e[0]),null)},i(e){r||(h(t,e),r=!0)},o(e){g(t,e),r=!1},d(e){t&&t.d(e)}}}function b(s,r,a){let{$$slots:t={},$$scope:e}=r;return s.$$set=o=>{"$$scope"in o&&a(0,e=o.$$scope)},[e,t]}class u extends i{constructor(r){super(),n(this,r,b,m,p,{})}}export{u as default}; diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/soft_nms.py b/spaces/CVPR/regionclip-demo/detectron2/layers/soft_nms.py deleted file mode 100644 index 576089db79278bbd62aa87b17fc1ed13ac2261c7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/layers/soft_nms.py +++ /dev/null @@ -1,261 +0,0 @@ -import torch - -from detectron2.structures import Boxes, RotatedBoxes, pairwise_iou, pairwise_iou_rotated - -""" Soft-NMS Pull request from: https://github.com/facebookresearch/detectron2/pull/1183/files -""" - -def soft_nms(boxes, scores, method, gaussian_sigma, linear_threshold, prune_threshold): - """ - Performs soft non-maximum suppression algorithm on axis aligned boxes - Args: - boxes (Tensor[N, 5]): - 
boxes where NMS will be performed. They - are expected to be in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept -""" - return _soft_nms( - Boxes, - pairwise_iou, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, - ) - - -def soft_nms_rotated(boxes, scores, method, gaussian_sigma, linear_threshold, prune_threshold): - """ - Performs soft non-maximum suppression algorithm on rotated boxes - Args: - boxes (Tensor[N, 5]): - boxes where NMS will be performed. They - are expected to be in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept """ - return _soft_nms( - RotatedBoxes, - pairwise_iou_rotated, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, - ) - - -def batched_soft_nms( - boxes, scores, idxs, method, gaussian_sigma, linear_threshold, prune_threshold -): - """ - Performs soft non-maximum suppression in a batched fashion. - Each index value correspond to a category, and NMS - will not be applied between elements of different categories. - Args: - boxes (Tensor[N, 4]): - boxes where NMS will be performed. They - are expected to be in (x1, y1, x2, y2) format - scores (Tensor[N]): - scores for each one of the boxes - idxs (Tensor[N]): - indices of the categories for each one of the boxes. - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. 
Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - if boxes.numel() == 0: - return ( - torch.empty((0,), dtype=torch.int64, device=boxes.device), - torch.empty((0,), dtype=torch.float32, device=scores.device), - ) - # strategy: in order to perform NMS independently per class. - # we add an offset to all the boxes. The offset is dependent - # only on the class idx, and is large enough so that boxes - # from different classes do not overlap - max_coordinate = boxes.max() - offsets = idxs.to(boxes) * (max_coordinate + 1) - boxes_for_nms = boxes + offsets[:, None] - return soft_nms( - boxes_for_nms, scores, method, gaussian_sigma, linear_threshold, prune_threshold - ) - - -def batched_soft_nms_rotated( - boxes, scores, idxs, method, gaussian_sigma, linear_threshold, prune_threshold -): - """ - Performs soft non-maximum suppression in a batched fashion on rotated bounding boxes. - Each index value correspond to a category, and NMS - will not be applied between elements of different categories. - Args: - boxes (Tensor[N, 5]): - boxes where NMS will be performed. They - are expected to be in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - idxs (Tensor[N]): - indices of the categories for each one of the boxes. - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - if boxes.numel() == 0: - return ( - torch.empty((0,), dtype=torch.int64, device=boxes.device), - torch.empty((0,), dtype=torch.float32, device=scores.device), - ) - # strategy: in order to perform NMS independently per class. - # we add an offset to all the boxes. The offset is dependent - # only on the class idx, and is large enough so that boxes - # from different classes do not overlap - max_coordinate = boxes[:, :2].max() + torch.norm(boxes[:, 2:4], 2, dim=1).max() - offsets = idxs.to(boxes) * (max_coordinate + 1) - boxes_for_nms = boxes.clone() - boxes_for_nms[:, :2] += offsets[:, None] - return soft_nms_rotated( - boxes_for_nms, scores, method, gaussian_sigma, linear_threshold, prune_threshold - ) - - -def _soft_nms( - box_class, - pairwise_iou_func, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, -): - """ - Soft non-max suppression algorithm. 
- Implementation of [Soft-NMS -- Improving Object Detection With One Line of Codec] - (https://arxiv.org/abs/1704.04503) - Args: - box_class (cls): one of Box, RotatedBoxes - pairwise_iou_func (func): one of pairwise_iou, pairwise_iou_rotated - boxes (Tensor[N, ?]): - boxes where NMS will be performed - if Boxes, in (x1, y1, x2, y2) format - if RotatedBoxes, in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - boxes = boxes.clone() - scores = scores.clone() - idxs = torch.arange(scores.size()[0]) - - idxs_out = [] - scores_out = [] - - while scores.numel() > 0: - top_idx = torch.argmax(scores) - idxs_out.append(idxs[top_idx].item()) - scores_out.append(scores[top_idx].item()) - - top_box = boxes[top_idx] - ious = pairwise_iou_func(box_class(top_box.unsqueeze(0)), box_class(boxes))[0] - - if method == "linear": - decay = torch.ones_like(ious) - decay_mask = ious > linear_threshold - decay[decay_mask] = 1 - ious[decay_mask] - elif method == "gaussian": - decay = torch.exp(-torch.pow(ious, 2) / gaussian_sigma) - elif method == "hard": # standard NMS - decay = (ious < linear_threshold).float() - else: - raise NotImplementedError("{} soft nms method not implemented.".format(method)) - - scores *= decay - keep = scores > prune_threshold - keep[top_idx] = False - - boxes = boxes[keep] - scores = scores[keep] - idxs = idxs[keep] - - return torch.tensor(idxs_out).to(boxes.device), torch.tensor(scores_out).to(scores.device) \ No newline at end of file diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/app.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/app.py deleted file mode 100644 index 5c9d153e84e187077e60acd8cb56ed6d00cb820c..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/Lovelive-VITS-JPZH/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import re -import gradio as gr -import torch -import unicodedata -import commons -import utils -from models import SynthesizerTrn -from text import text_to_sequence - -config_json = "muse_tricolor_b.json" -pth_path = "G=869.pth" - - -def get_text(text, hps, cleaned=False): - if cleaned: - text_norm = text_to_sequence(text, hps.symbols, []) - else: - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - -def get_label(text, label): - if f'[{label}]' in text: - return True, text.replace(f'[{label}]', '') - else: - return False, text - - -def clean_text(text): - print(text) - jap = re.compile(r'[\u3040-\u309F\u30A0-\u30FF]') # 匹配日文 - text = unicodedata.normalize('NFKC', text) - text = f"[JA]{text}[JA]" if jap.search(text) else f"[ZH]{text}[ZH]" - return text - - 
-def load_model(config_json, pth_path): - dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - hps_ms = utils.get_hparams_from_file(f"{config_json}") - n_speakers = hps_ms.data.n_speakers if 'n_speakers' in hps_ms.data.keys() else 0 - n_symbols = len(hps_ms.symbols) if 'symbols' in hps_ms.keys() else 0 - net_g_ms = SynthesizerTrn( - n_symbols, - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=n_speakers, - **hps_ms.model).to(dev) - _ = net_g_ms.eval() - _ = utils.load_checkpoint(pth_path, net_g_ms) - return net_g_ms - -net_g_ms = load_model(config_json, pth_path) - -def selection(speaker): - if speaker == "南小鸟": - spk = 0 - return spk - - elif speaker == "园田海未": - spk = 1 - return spk - - elif speaker == "小泉花阳": - spk = 2 - return spk - - elif speaker == "星空凛": - spk = 3 - return spk - - elif speaker == "东条希": - spk = 4 - return spk - - elif speaker == "矢泽妮可": - spk = 5 - return spk - - elif speaker == "绚濑绘里": - spk = 6 - return spk - - elif speaker == "西木野真姬": - spk = 7 - return spk - - elif speaker == "高坂穗乃果": - spk = 8 - return spk - -def infer(text,speaker_id, n_scale= 0.667,n_scale_w = 0.8, l_scale = 1 ): - text = clean_text(text) - speaker_id = int(selection(speaker_id)) - dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - hps_ms = utils.get_hparams_from_file(f"{config_json}") - with torch.no_grad(): - stn_tst = get_text(text, hps_ms, cleaned=False) - x_tst = stn_tst.unsqueeze(0).to(dev) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).to(dev) - sid = torch.LongTensor([speaker_id]).to(dev) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=n_scale, noise_scale_w=n_scale_w, length_scale=l_scale)[0][ - 0, 0].data.cpu().float().numpy() - return (hps_ms.data.sampling_rate, audio) - -idols = ["南小鸟","园田海未","小泉花阳","星空凛","东条希","矢泽妮可","绚濑绘里","西木野真姬","高坂穗乃果"] -app = gr.Blocks() -with app: - with gr.Tabs(): - - with gr.TabItem("Basic"): - - tts_input1 = gr.TextArea(label="请输入纯中文或纯日文", value="大家好") - para_input1 = gr.Slider(minimum= 0.01,maximum=1.0,label="更改噪声比例", value=0.667) - para_input2 = gr.Slider(minimum= 0.01,maximum=1.0,label="更改噪声偏差", value=0.8) - para_input3 = gr.Slider(minimum= 0.1,maximum=10,label="更改时间比例", value=1) - tts_submit = gr.Button("Generate", variant="primary") - speaker1 = gr.Dropdown(label="选择说话人",choices=idols, value="高坂穗乃果", interactive=True) - tts_output2 = gr.Audio(label="Output") - - tts_submit.click(infer, [tts_input1,speaker1,para_input1,para_input2,para_input3], [tts_output2]) - app.launch() diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/chase_train/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/chase_train/__init__.py deleted file mode 100644 index 3a3d2b69958eae4be6afbe627c713b2459db87b1..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/chase_train/__init__.py +++ /dev/null @@ -1,60 +0,0 @@ -from pathlib import Path -from typing import List - -from PIL.Image import Image as IMG -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.utils import save_gif - -img_dir = Path(__file__).parent / "images" - - -def chase_train(images: List[BuildImage], texts, args): - img = images[0].convert("RGBA").square().resize((42, 42)) - frames: List[IMG] = [] - # fmt: off - locs = [ - (35, 34, 128, 44), (35, 33, 132, 40), (33, 34, 133, 36), (33, 38, 135, 41), - (34, 34, 136, 38), (35, 35, 136, 33), (33, 34, 138, 38), (36, 35, 138, 34), - (38, 34, 139, 32), (40, 
35, 139, 37), (36, 35, 139, 33), (39, 36, 138, 28), - (40, 35, 138, 33), (37, 34, 138, 31), (43, 36, 135, 27), (36, 37, 136, 32), - (38, 40, 135, 26), (37, 35, 133, 26), (33, 36, 132, 30), (33, 39, 132, 25), - (32, 36, 131, 23), (33, 36, 130, 31), (35, 39, 128, 25), (33, 35, 127, 23), - (34, 36, 126, 29), (34, 40, 124, 25), (39, 36, 119, 23), (35, 36, 119, 32), - (35, 37, 116, 27), (36, 38, 113, 23), (34, 35, 113, 32), (39, 36, 113, 23), - (36, 35, 114, 17), (36, 38, 111, 13), (34, 37, 114, 15), (34, 39, 111, 10), - (33, 39, 109, 11), (36, 35, 104, 17), (34, 36, 102, 14), (34, 35, 99, 14), - (35, 38, 96, 16), (35, 35, 93, 14), (36, 35, 89, 15), (36, 36, 86, 18), - (36, 39, 83, 14), (34, 36, 81, 16), (40, 41, 74, 17), (38, 36, 74, 15), - (39, 35, 70, 16), (33, 35, 69, 20), (36, 35, 66, 17), (36, 35, 62, 17), - (37, 36, 57, 21), (35, 39, 57, 15), (35, 36, 53, 17), (35, 38, 51, 20), - (37, 36, 47, 19), (37, 35, 47, 18), (40, 36, 43, 19), (38, 35, 42, 22), - (40, 34, 38, 20), (38, 34, 37, 21), (39, 32, 35, 24), (39, 33, 33, 22), - (39, 36, 32, 22), (38, 35, 32, 25), (35, 37, 31, 22), (37, 37, 31, 23), - (36, 31, 31, 28), (37, 34, 32, 25), (36, 37, 32, 23), (36, 33, 33, 30), - (35, 34, 33, 27), (38, 33, 33, 28), (37, 34, 33, 29), (36, 35, 35, 28), - (36, 37, 36, 27), (43, 39, 33, 30), (35, 34, 38, 31), (37, 34, 39, 30), - (36, 34, 40, 30), (39, 35, 41, 30), (41, 36, 41, 29), (40, 37, 44, 32), - (40, 37, 45, 29), (39, 38, 48, 28), (38, 33, 50, 33), (35, 38, 53, 28), - (37, 34, 54, 31), (38, 34, 57, 32), (41, 35, 57, 29), (35, 34, 63, 29), - (41, 35, 62, 29), (38, 35, 66, 28), (35, 33, 70, 29), (40, 39, 70, 28), - (36, 36, 74, 28), (37, 35, 77, 26), (37, 35, 79, 28), (38, 35, 81, 27), - (36, 35, 85, 27), (37, 36, 88, 29), (36, 34, 91, 27), (38, 39, 94, 24), - (39, 34, 95, 27), (37, 34, 98, 26), (36, 35, 103, 24), (37, 36, 99, 28), - (34, 36, 97, 34), (34, 38, 102, 38), (37, 37, 99, 40), (39, 36, 101, 47), - (36, 36, 106, 43), (35, 35, 109, 40), (35, 39, 112, 43), (33, 36, 116, 41), - (36, 36, 116, 39), (34, 37, 121, 45), (35, 41, 123, 38), (34, 37, 126, 35), - ] - # fmt: on - for i in range(120): - frame = BuildImage.open(img_dir / f"{i}.png") - w, h, x, y = locs[i] - frame.paste(img.resize((w, h)), (x, y), below=True) - frames.append(frame.image) - return save_gif(frames, 0.05) - - -add_meme( - "chase_train", chase_train, min_images=1, max_images=1, keywords=["追列车", "追火车"] -) diff --git a/spaces/Curranj/FlowerDiffusion/README.md b/spaces/Curranj/FlowerDiffusion/README.md deleted file mode 100644 index fc5114ed4ede83224efa3427cea34fbf6f4738bb..0000000000000000000000000000000000000000 --- a/spaces/Curranj/FlowerDiffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FlowerDiffusion -emoji: 🏢 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DESUCLUB/BLLAMA/export_state_dict_checkpoint.py b/spaces/DESUCLUB/BLLAMA/export_state_dict_checkpoint.py deleted file mode 100644 index 78e9d1f655ff61c226fa16087c80e8d3336a162c..0000000000000000000000000000000000000000 --- a/spaces/DESUCLUB/BLLAMA/export_state_dict_checkpoint.py +++ /dev/null @@ -1,119 +0,0 @@ -import os -import json - -import torch -from peft import PeftModel, LoraConfig - -import transformers - -assert ( - "LlamaTokenizer" in transformers._import_structure["models.llama"] -), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall 
transformers && pip install git+https://github.com/huggingface/transformers.git" -from transformers import LlamaTokenizer, LlamaForCausalLM - -tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") - -base_model = LlamaForCausalLM.from_pretrained( - "decapoda-research/llama-7b-hf", - load_in_8bit=False, - torch_dtype=torch.float16, - device_map={"": "cpu"}, -) - -lora_model = PeftModel.from_pretrained( - base_model, - "tloen/alpaca-lora-7b", - device_map={"": "cpu"}, - torch_dtype=torch.float16, -) - -# merge weights -for layer in lora_model.base_model.model.model.layers: - layer.self_attn.q_proj.merge_weights = True - layer.self_attn.v_proj.merge_weights = True - -lora_model.train(False) - -lora_model_sd = lora_model.state_dict() - -params = { - "dim": 4096, - "multiple_of": 256, - "n_heads": 32, - "n_layers": 32, - "norm_eps": 1e-06, - "vocab_size": -1, -} -n_layers = params["n_layers"] -n_heads = params["n_heads"] -dim = params["dim"] -dims_per_head = dim // n_heads -base = 10000.0 -inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head)) - - -def permute(w): - return ( - w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim) - ) - - -def unpermute(w): - return ( - w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim) - ) - - -def translate_state_dict_key(k): - k = k.replace("base_model.model.", "") - if k == "model.embed_tokens.weight": - return "tok_embeddings.weight" - elif k == "model.norm.weight": - return "norm.weight" - elif k == "lm_head.weight": - return "output.weight" - elif k.startswith("model.layers."): - layer = k.split(".")[2] - if k.endswith(".self_attn.q_proj.weight"): - return f"layers.{layer}.attention.wq.weight" - elif k.endswith(".self_attn.k_proj.weight"): - return f"layers.{layer}.attention.wk.weight" - elif k.endswith(".self_attn.v_proj.weight"): - return f"layers.{layer}.attention.wv.weight" - elif k.endswith(".self_attn.o_proj.weight"): - return f"layers.{layer}.attention.wo.weight" - elif k.endswith(".mlp.gate_proj.weight"): - return f"layers.{layer}.feed_forward.w1.weight" - elif k.endswith(".mlp.down_proj.weight"): - return f"layers.{layer}.feed_forward.w2.weight" - elif k.endswith(".mlp.up_proj.weight"): - return f"layers.{layer}.feed_forward.w3.weight" - elif k.endswith(".input_layernorm.weight"): - return f"layers.{layer}.attention_norm.weight" - elif k.endswith(".post_attention_layernorm.weight"): - return f"layers.{layer}.ffn_norm.weight" - elif k.endswith("rotary_emb.inv_freq") or "lora" in k: - return None - else: - print(layer, k) - raise NotImplementedError - else: - print(k) - raise NotImplementedError - - -new_state_dict = {} -for k, v in lora_model_sd.items(): - new_k = translate_state_dict_key(k) - if new_k is not None: - if "wq" in new_k or "wk" in new_k: - new_state_dict[new_k] = unpermute(v) - else: - new_state_dict[new_k] = v - -os.makedirs("./ckpt", exist_ok=True) - -torch.save(new_state_dict, "./ckpt/consolidated.00.pth") - -with open("./ckpt/params.json", "w") as f: - json.dump(params, f) diff --git a/spaces/DJQmUKV/rvc-inference/util.py b/spaces/DJQmUKV/rvc-inference/util.py deleted file mode 100644 index 8d6bcff1135c2d97e4caad7922f03f05c98484da..0000000000000000000000000000000000000000 --- a/spaces/DJQmUKV/rvc-inference/util.py +++ /dev/null @@ -1,81 +0,0 @@ -import sys -import asyncio -from io import BytesIO - -from fairseq import checkpoint_utils - -import torch - -import edge_tts -import librosa - - -# 
https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI/blob/main/config.py#L43-L55 # noqa -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, 'has_mps', False): - return False - - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -def is_half(device: str) -> bool: - if not device.startswith('cuda'): - return False - else: - gpu_name = torch.cuda.get_device_name( - int(device.split(':')[-1]) - ).upper() - - # ...regex? - if ( - ('16' in gpu_name and 'V100' not in gpu_name) - or 'P40' in gpu_name - or '1060' in gpu_name - or '1070' in gpu_name - or '1080' in gpu_name - ): - return False - - return True - - -def load_hubert_model(device: str, model_path: str = 'hubert_base.pt'): - model = checkpoint_utils.load_model_ensemble_and_task( - [model_path] - )[0][0].to(device) - - if is_half(device): - return model.half() - else: - return model.float() - - -async def call_edge_tts(speaker_name: str, text: str): - tts_com = edge_tts.Communicate(text, speaker_name) - tts_raw = b'' - - # Stream TTS audio to bytes - async for chunk in tts_com.stream(): - if chunk['type'] == 'audio': - tts_raw += chunk['data'] - - # Convert mp3 stream to wav - ffmpeg_proc = await asyncio.create_subprocess_exec( - 'ffmpeg', - '-f', 'mp3', - '-i', '-', - '-f', 'wav', - '-', - stdin=asyncio.subprocess.PIPE, - stdout=asyncio.subprocess.PIPE - ) - (tts_wav, _) = await ffmpeg_proc.communicate(tts_raw) - - return librosa.load(BytesIO(tts_wav)) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/_magics.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/_magics.py deleted file mode 100644 index 7fe6131182952ff30bf63543de528657f7ba77a2..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/_magics.py +++ /dev/null @@ -1,109 +0,0 @@ -""" -Magic functions for rendering vega-lite specifications -""" -__all__ = ["vegalite"] - -import json -import warnings - -import IPython -from IPython.core import magic_arguments -import pandas as pd -from toolz import curried - -from altair.vegalite import v5 as vegalite_v5 - -try: - import yaml - - YAML_AVAILABLE = True -except ImportError: - YAML_AVAILABLE = False - - -RENDERERS = { - "vega-lite": { - "5": vegalite_v5.VegaLite, - }, -} - - -TRANSFORMERS = { - "vega-lite": { - "5": vegalite_v5.data_transformers, - }, -} - - -def _prepare_data(data, data_transformers): - """Convert input data to data for use within schema""" - if data is None or isinstance(data, dict): - return data - elif isinstance(data, pd.DataFrame): - return curried.pipe(data, data_transformers.get()) - elif isinstance(data, str): - return {"url": data} - else: - warnings.warn("data of type {} not recognized".format(type(data)), stacklevel=1) - return data - - -def _get_variable(name): - """Get a variable from the notebook namespace.""" - ip = IPython.get_ipython() - if ip is None: - raise ValueError( - "Magic command must be run within an IPython " - "environemnt, in which get_ipython() is defined." 
- ) - if name not in ip.user_ns: - raise NameError( - "argument '{}' does not match the " - "name of any defined variable".format(name) - ) - return ip.user_ns[name] - - -@magic_arguments.magic_arguments() -@magic_arguments.argument( - "data", - nargs="?", - help="local variablename of a pandas DataFrame to be used as the dataset", -) -@magic_arguments.argument("-v", "--version", dest="version", default="v5") -@magic_arguments.argument("-j", "--json", dest="json", action="store_true") -def vegalite(line, cell): - """Cell magic for displaying vega-lite visualizations in CoLab. - - %%vegalite [dataframe] [--json] [--version='v5'] - - Visualize the contents of the cell using Vega-Lite, optionally - specifying a pandas DataFrame object to be used as the dataset. - - if --json is passed, then input is parsed as json rather than yaml. - """ - args = magic_arguments.parse_argstring(vegalite, line) - existing_versions = {"v5": "5"} - version = existing_versions[args.version] - assert version in RENDERERS["vega-lite"] - VegaLite = RENDERERS["vega-lite"][version] - data_transformers = TRANSFORMERS["vega-lite"][version] - - if args.json: - spec = json.loads(cell) - elif not YAML_AVAILABLE: - try: - spec = json.loads(cell) - except json.JSONDecodeError as err: - raise ValueError( - "%%vegalite: spec is not valid JSON. " - "Install pyyaml to parse spec as yaml" - ) from err - else: - spec = yaml.load(cell, Loader=yaml.SafeLoader) - - if args.data is not None: - data = _get_variable(args.data) - spec["data"] = _prepare_data(data, data_transformers) - - return VegaLite(spec) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/shell_completion.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/shell_completion.py deleted file mode 100644 index 5de124702ec711c7fc7e8244d95812aee41747a0..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/shell_completion.py +++ /dev/null @@ -1,593 +0,0 @@ -import os -import re -import typing as t -from gettext import gettext as _ - -from .core import Argument -from .core import BaseCommand -from .core import Context -from .core import MultiCommand -from .core import Option -from .core import Parameter -from .core import ParameterSource -from .parser import split_arg_string -from .utils import echo - - -def shell_complete( - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - complete_var: str, - instruction: str, -) -> int: - """Perform shell completion for the given CLI program. - - :param cli: Command being called. - :param ctx_args: Extra arguments to pass to - ``cli.make_context``. - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. - :param instruction: Value of ``complete_var`` with the completion - instruction and shell, in the form ``instruction_shell``. - :return: Status code to exit with. - """ - shell, _, instruction = instruction.partition("_") - comp_cls = get_completion_class(shell) - - if comp_cls is None: - return 1 - - comp = comp_cls(cli, ctx_args, prog_name, complete_var) - - if instruction == "source": - echo(comp.source()) - return 0 - - if instruction == "complete": - echo(comp.complete()) - return 0 - - return 1 - - -class CompletionItem: - """Represents a completion value and metadata about the value. 
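-    For illustration, a parameter's ``shell_complete`` callback might
-    return items such as ``CompletionItem("src", type="dir")`` or
-    ``CompletionItem("--verbose", help="Enable verbose output.")``
-    (hypothetical values).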
The - default metadata is ``type`` to indicate special shell handling, - and ``help`` if a shell supports showing a help string next to the - value. - - Arbitrary parameters can be passed when creating the object, and - accessed using ``item.attr``. If an attribute wasn't passed, - accessing it returns ``None``. - - :param value: The completion suggestion. - :param type: Tells the shell script to provide special completion - support for the type. Click uses ``"dir"`` and ``"file"``. - :param help: String shown next to the value if supported. - :param kwargs: Arbitrary metadata. The built-in implementations - don't use this, but custom type completions paired with custom - shell support could use it. - """ - - __slots__ = ("value", "type", "help", "_info") - - def __init__( - self, - value: t.Any, - type: str = "plain", - help: t.Optional[str] = None, - **kwargs: t.Any, - ) -> None: - self.value: t.Any = value - self.type: str = type - self.help: t.Optional[str] = help - self._info = kwargs - - def __getattr__(self, name: str) -> t.Any: - return self._info.get(name) - - -# Only Bash >= 4.4 has the nosort option. -_SOURCE_BASH = """\ -%(complete_func)s() { - local IFS=$'\\n' - local response - - response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \ -%(complete_var)s=bash_complete $1) - - for completion in $response; do - IFS=',' read type value <<< "$completion" - - if [[ $type == 'dir' ]]; then - COMPREPLY=() - compopt -o dirnames - elif [[ $type == 'file' ]]; then - COMPREPLY=() - compopt -o default - elif [[ $type == 'plain' ]]; then - COMPREPLY+=($value) - fi - done - - return 0 -} - -%(complete_func)s_setup() { - complete -o nosort -F %(complete_func)s %(prog_name)s -} - -%(complete_func)s_setup; -""" - -_SOURCE_ZSH = """\ -#compdef %(prog_name)s - -%(complete_func)s() { - local -a completions - local -a completions_with_descriptions - local -a response - (( ! $+commands[%(prog_name)s] )) && return 1 - - response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \ -%(complete_var)s=zsh_complete %(prog_name)s)}") - - for type key descr in ${response}; do - if [[ "$type" == "plain" ]]; then - if [[ "$descr" == "_" ]]; then - completions+=("$key") - else - completions_with_descriptions+=("$key":"$descr") - fi - elif [[ "$type" == "dir" ]]; then - _path_files -/ - elif [[ "$type" == "file" ]]; then - _path_files -f - fi - done - - if [ -n "$completions_with_descriptions" ]; then - _describe -V unsorted completions_with_descriptions -U - fi - - if [ -n "$completions" ]; then - compadd -U -V unsorted -a completions - fi -} - -if [[ $zsh_eval_context[-1] == loadautofunc ]]; then - # autoload from fpath, call function directly - %(complete_func)s "$@" -else - # eval/source/. command, register function for later - compdef %(complete_func)s %(prog_name)s -fi -""" - -_SOURCE_FISH = """\ -function %(complete_func)s - set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \ -COMP_CWORD=(commandline -t) %(prog_name)s) - - for completion in $response - set -l metadata (string split "," $completion) - - if test $metadata[1] = "dir" - __fish_complete_directories $metadata[2] - else if test $metadata[1] = "file" - __fish_complete_path $metadata[2] - else if test $metadata[1] = "plain" - echo $metadata[2] - end - end -end - -complete --no-files --command %(prog_name)s --arguments \ -"(%(complete_func)s)" -""" - - -class ShellComplete: - """Base class for providing shell completion support. 
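-    For orientation, a hypothetical subclass (names invented) would follow
-    the same pattern as the built-in implementations further down:
-
-    .. code-block:: python
-
-        class MyShellComplete(ShellComplete):
-            name = "myshell"
-            source_template = "..."  # completion script for the shell
-
-            def get_completion_args(self):
-                ...  # read the env vars set by the completion script
-
-            def format_completion(self, item):
-                return f"{item.type},{item.value}"
-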
A subclass for - a given shell will override attributes and methods to implement the - completion instructions (``source`` and ``complete``). - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. - - .. versionadded:: 8.0 - """ - - name: t.ClassVar[str] - """Name to register the shell as with :func:`add_completion_class`. - This is used in completion instructions (``{name}_source`` and - ``{name}_complete``). - """ - - source_template: t.ClassVar[str] - """Completion script template formatted by :meth:`source`. This must - be provided by subclasses. - """ - - def __init__( - self, - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - complete_var: str, - ) -> None: - self.cli = cli - self.ctx_args = ctx_args - self.prog_name = prog_name - self.complete_var = complete_var - - @property - def func_name(self) -> str: - """The name of the shell function defined by the completion - script. - """ - safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), re.ASCII) - return f"_{safe_name}_completion" - - def source_vars(self) -> t.Dict[str, t.Any]: - """Vars for formatting :attr:`source_template`. - - By default this provides ``complete_func``, ``complete_var``, - and ``prog_name``. - """ - return { - "complete_func": self.func_name, - "complete_var": self.complete_var, - "prog_name": self.prog_name, - } - - def source(self) -> str: - """Produce the shell script that defines the completion - function. By default this ``%``-style formats - :attr:`source_template` with the dict returned by - :meth:`source_vars`. - """ - return self.source_template % self.source_vars() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - """Use the env vars defined by the shell script to return a - tuple of ``args, incomplete``. This must be implemented by - subclasses. - """ - raise NotImplementedError - - def get_completions( - self, args: t.List[str], incomplete: str - ) -> t.List[CompletionItem]: - """Determine the context and last complete command or parameter - from the complete args. Call that object's ``shell_complete`` - method to get the completions for the incomplete value. - - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. - """ - ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args) - obj, incomplete = _resolve_incomplete(ctx, args, incomplete) - return obj.shell_complete(ctx, incomplete) - - def format_completion(self, item: CompletionItem) -> str: - """Format a completion item into the form recognized by the - shell script. This must be implemented by subclasses. - - :param item: Completion item to format. - """ - raise NotImplementedError - - def complete(self) -> str: - """Produce the completion data to send back to the shell. - - By default this calls :meth:`get_completion_args`, gets the - completions, then calls :meth:`format_completion` for each - completion. 
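-        For Bash, for example, the returned text is newline-separated
-        ``type,value`` pairs such as ``plain,--help`` (illustrative), as
-        produced by :meth:`BashComplete.format_completion` below.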
- """ - args, incomplete = self.get_completion_args() - completions = self.get_completions(args, incomplete) - out = [self.format_completion(item) for item in completions] - return "\n".join(out) - - -class BashComplete(ShellComplete): - """Shell completion for Bash.""" - - name = "bash" - source_template = _SOURCE_BASH - - def _check_version(self) -> None: - import subprocess - - output = subprocess.run( - ["bash", "-c", 'echo "${BASH_VERSION}"'], stdout=subprocess.PIPE - ) - match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode()) - - if match is not None: - major, minor = match.groups() - - if major < "4" or major == "4" and minor < "4": - raise RuntimeError( - _( - "Shell completion is not supported for Bash" - " versions older than 4.4." - ) - ) - else: - raise RuntimeError( - _("Couldn't detect Bash version, shell completion is not supported.") - ) - - def source(self) -> str: - self._check_version() - return super().source() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type},{item.value}" - - -class ZshComplete(ShellComplete): - """Shell completion for Zsh.""" - - name = "zsh" - source_template = _SOURCE_ZSH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}" - - -class FishComplete(ShellComplete): - """Shell completion for Fish.""" - - name = "fish" - source_template = _SOURCE_FISH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - incomplete = os.environ["COMP_CWORD"] - args = cwords[1:] - - # Fish stores the partial word in both COMP_WORDS and - # COMP_CWORD, remove it from complete args. - if incomplete and args and args[-1] == incomplete: - args.pop() - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - if item.help: - return f"{item.type},{item.value}\t{item.help}" - - return f"{item.type},{item.value}" - - -ShellCompleteType = t.TypeVar("ShellCompleteType", bound=t.Type[ShellComplete]) - - -_available_shells: t.Dict[str, t.Type[ShellComplete]] = { - "bash": BashComplete, - "fish": FishComplete, - "zsh": ZshComplete, -} - - -def add_completion_class( - cls: ShellCompleteType, name: t.Optional[str] = None -) -> ShellCompleteType: - """Register a :class:`ShellComplete` subclass under the given name. - The name will be provided by the completion instruction environment - variable during completion. - - :param cls: The completion class that will handle completion for the - shell. - :param name: Name to register the class under. Defaults to the - class's ``name`` attribute. - """ - if name is None: - name = cls.name - - _available_shells[name] = cls - - return cls - - -def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]: - """Look up a registered :class:`ShellComplete` subclass by the name - provided by the completion instruction environment variable. 
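-    For example, ``get_completion_class("bash")`` returns the
-    :class:`BashComplete` class registered in ``_available_shells`` above.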
If the - name isn't registered, returns ``None``. - - :param shell: Name the class is registered under. - """ - return _available_shells.get(shell) - - -def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool: - """Determine if the given parameter is an argument that can still - accept values. - - :param ctx: Invocation context for the command represented by the - parsed complete args. - :param param: Argument object being checked. - """ - if not isinstance(param, Argument): - return False - - assert param.name is not None - # Will be None if expose_value is False. - value = ctx.params.get(param.name) - return ( - param.nargs == -1 - or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE - or ( - param.nargs > 1 - and isinstance(value, (tuple, list)) - and len(value) < param.nargs - ) - ) - - -def _start_of_option(ctx: Context, value: str) -> bool: - """Check if the value looks like the start of an option.""" - if not value: - return False - - c = value[0] - return c in ctx._opt_prefixes - - -def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool: - """Determine if the given parameter is an option that needs a value. - - :param args: List of complete args before the incomplete value. - :param param: Option object being checked. - """ - if not isinstance(param, Option): - return False - - if param.is_flag or param.count: - return False - - last_option = None - - for index, arg in enumerate(reversed(args)): - if index + 1 > param.nargs: - break - - if _start_of_option(ctx, arg): - last_option = arg - - return last_option is not None and last_option in param.opts - - -def _resolve_context( - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - args: t.List[str], -) -> Context: - """Produce the context hierarchy starting with the command and - traversing the complete arguments. This only follows the commands, - it doesn't trigger input prompts or callbacks. - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. - :param args: List of complete args before the incomplete value. - """ - ctx_args["resilient_parsing"] = True - ctx = cli.make_context(prog_name, args.copy(), **ctx_args) - args = ctx.protected_args + ctx.args - - while args: - command = ctx.command - - if isinstance(command, MultiCommand): - if not command.chain: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True) - args = ctx.protected_args + ctx.args - else: - sub_ctx = ctx - - while args: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - sub_ctx = cmd.make_context( - name, - args, - parent=ctx, - allow_extra_args=True, - allow_interspersed_args=False, - resilient_parsing=True, - ) - args = sub_ctx.args - - ctx = sub_ctx - args = [*sub_ctx.protected_args, *sub_ctx.args] - else: - break - - return ctx - - -def _resolve_incomplete( - ctx: Context, args: t.List[str], incomplete: str -) -> t.Tuple[t.Union[BaseCommand, Parameter], str]: - """Find the Click object that will handle the completion of the - incomplete value. Return the object and the incomplete value. - - :param ctx: Invocation context for the command represented by - the parsed complete args. - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. 
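-    For example, completing ``--color=au`` (an illustrative option) is
-    normalized below by splitting on ``=``: ``--color`` is appended to
-    ``args`` and, for an option that takes a value, that option is
-    returned with ``incomplete == "au"``.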
- """ - # Different shells treat an "=" between a long option name and - # value differently. Might keep the value joined, return the "=" - # as a separate item, or return the split name and value. Always - # split and discard the "=" to make completion easier. - if incomplete == "=": - incomplete = "" - elif "=" in incomplete and _start_of_option(ctx, incomplete): - name, _, incomplete = incomplete.partition("=") - args.append(name) - - # The "--" marker tells Click to stop treating values as options - # even if they start with the option character. If it hasn't been - # given and the incomplete arg looks like an option, the current - # command will provide option name completions. - if "--" not in args and _start_of_option(ctx, incomplete): - return ctx.command, incomplete - - params = ctx.command.get_params(ctx) - - # If the last complete arg is an option name with an incomplete - # value, the option will provide value completions. - for param in params: - if _is_incomplete_option(ctx, args, param): - return param, incomplete - - # It's not an option name or value. The first argument without a - # parsed value will provide value completions. - for param in params: - if _is_incomplete_argument(ctx, param): - return param, incomplete - - # There were no unparsed arguments, the command may be a group that - # will provide command name completions. - return ctx.command, incomplete diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/config/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/config/__init__.py deleted file mode 100644 index c106fe51fc0b8a6926fa67928d4de7af1b9ffe5e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/config/__init__.py +++ /dev/null @@ -1,74 +0,0 @@ -""" -Define all configuration options that can affect the working of fontTools -modules. E.g. optimization levels of varLib IUP, otlLib GPOS compression level, -etc. If this file gets too big, split it into smaller files per-module. - -An instance of the Config class can be attached to a TTFont object, so that -the various modules can access their configuration options from it. -""" -from textwrap import dedent - -from fontTools.misc.configTools import * - - -class Config(AbstractConfig): - options = Options() - - -OPTIONS = Config.options - - -Config.register_option( - name="fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL", - help=dedent( - """\ - GPOS Lookup type 2 (PairPos) compression level: - 0 = do not attempt to compact PairPos lookups; - 1 to 8 = create at most 1 to 8 new subtables for each existing - subtable, provided that it would yield a 50%% file size saving; - 9 = create as many new subtables as needed to yield a file size saving. - Default: 0. - - This compaction aims to save file size, by splitting large class - kerning subtables (Format 2) that contain many zero values into - smaller and denser subtables. It's a trade-off between the overhead - of several subtables versus the sparseness of one big subtable. - - See the pull request: https://github.com/fonttools/fonttools/pull/2326 - """ - ), - default=0, - parse=int, - validate=lambda v: v in range(10), -) - -Config.register_option( - name="fontTools.ttLib.tables.otBase:USE_HARFBUZZ_REPACKER", - help=dedent( - """\ - FontTools tries to use the HarfBuzz Repacker to serialize GPOS/GSUB tables - if the uharfbuzz python bindings are importable, otherwise falls back to its - slower, less efficient serializer. 
Set to False to always use the latter. - Set to True to explicitly request the HarfBuzz Repacker (will raise an - error if uharfbuzz cannot be imported). - """ - ), - default=None, - parse=Option.parse_optional_bool, - validate=Option.validate_optional_bool, -) - -Config.register_option( - name="fontTools.otlLib.builder:WRITE_GPOS7", - help=dedent( - """\ - macOS before 13.2 didn’t support GPOS LookupType 7 (non-chaining - ContextPos lookups), so FontTools.otlLib.builder disables a file size - optimisation that would use LookupType 7 instead of 8 when there is no - chaining (no prefix or suffix). Set to True to enable the optimization. - """ - ), - default=False, - parse=Option.parse_optional_bool, - validate=Option.validate_optional_bool, -) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_h_e_a.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_h_e_a.py deleted file mode 100644 index c293fafcb040ed5fe661c28278309bae3a419b2b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_h_h_e_a.py +++ /dev/null @@ -1,134 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from fontTools.misc.fixedTools import ( - ensureVersionIsLong as fi2ve, - versionToFixed as ve2fi, -) -from . import DefaultTable -import math - - -hheaFormat = """ - > # big endian - tableVersion: L - ascent: h - descent: h - lineGap: h - advanceWidthMax: H - minLeftSideBearing: h - minRightSideBearing: h - xMaxExtent: h - caretSlopeRise: h - caretSlopeRun: h - caretOffset: h - reserved0: h - reserved1: h - reserved2: h - reserved3: h - metricDataFormat: h - numberOfHMetrics: H -""" - - -class table__h_h_e_a(DefaultTable.DefaultTable): - - # Note: Keep in sync with table__v_h_e_a - - dependencies = ["hmtx", "glyf", "CFF ", "CFF2"] - - # OpenType spec renamed these, add aliases for compatibility - @property - def ascender(self): - return self.ascent - - @ascender.setter - def ascender(self, value): - self.ascent = value - - @property - def descender(self): - return self.descent - - @descender.setter - def descender(self, value): - self.descent = value - - def decompile(self, data, ttFont): - sstruct.unpack(hheaFormat, data, self) - - def compile(self, ttFont): - if ttFont.recalcBBoxes and ( - ttFont.isLoaded("glyf") - or ttFont.isLoaded("CFF ") - or ttFont.isLoaded("CFF2") - ): - self.recalc(ttFont) - self.tableVersion = fi2ve(self.tableVersion) - return sstruct.pack(hheaFormat, self) - - def recalc(self, ttFont): - if "hmtx" in ttFont: - hmtxTable = ttFont["hmtx"] - self.advanceWidthMax = max(adv for adv, _ in hmtxTable.metrics.values()) - - boundsWidthDict = {} - if "glyf" in ttFont: - glyfTable = ttFont["glyf"] - for name in ttFont.getGlyphOrder(): - g = glyfTable[name] - if g.numberOfContours == 0: - continue - if g.numberOfContours < 0 and not hasattr(g, "xMax"): - # Composite glyph without extents set. - # Calculate those. 
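-                    # recalcBounds resolves the component references and
-                    # stores xMin/yMin/xMax/yMax on the glyph, so the
-                    # xMax - xMin width below is well defined.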
- g.recalcBounds(glyfTable) - boundsWidthDict[name] = g.xMax - g.xMin - elif "CFF " in ttFont or "CFF2" in ttFont: - if "CFF " in ttFont: - topDict = ttFont["CFF "].cff.topDictIndex[0] - else: - topDict = ttFont["CFF2"].cff.topDictIndex[0] - charStrings = topDict.CharStrings - for name in ttFont.getGlyphOrder(): - cs = charStrings[name] - bounds = cs.calcBounds(charStrings) - if bounds is not None: - boundsWidthDict[name] = int( - math.ceil(bounds[2]) - math.floor(bounds[0]) - ) - - if boundsWidthDict: - minLeftSideBearing = float("inf") - minRightSideBearing = float("inf") - xMaxExtent = -float("inf") - for name, boundsWidth in boundsWidthDict.items(): - advanceWidth, lsb = hmtxTable[name] - rsb = advanceWidth - lsb - boundsWidth - extent = lsb + boundsWidth - minLeftSideBearing = min(minLeftSideBearing, lsb) - minRightSideBearing = min(minRightSideBearing, rsb) - xMaxExtent = max(xMaxExtent, extent) - self.minLeftSideBearing = minLeftSideBearing - self.minRightSideBearing = minRightSideBearing - self.xMaxExtent = xMaxExtent - - else: # No glyph has outlines. - self.minLeftSideBearing = 0 - self.minRightSideBearing = 0 - self.xMaxExtent = 0 - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(hheaFormat) - for name in names: - value = getattr(self, name) - if name == "tableVersion": - value = fi2ve(value) - value = "0x%08x" % value - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "tableVersion": - setattr(self, name, ve2fi(attrs["value"])) - return - setattr(self, name, safeEval(attrs["value"])) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/socks_proxy.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/socks_proxy.py deleted file mode 100644 index f12cb373f858b60c3b5987f75ca2795e67d81759..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_async/socks_proxy.py +++ /dev/null @@ -1,340 +0,0 @@ -import logging -import ssl -import typing - -from socksio import socks5 - -from .._backends.auto import AutoBackend -from .._backends.base import AsyncNetworkBackend, AsyncNetworkStream -from .._exceptions import ConnectionNotAvailable, ProxyError -from .._models import URL, Origin, Request, Response, enforce_bytes, enforce_url -from .._ssl import default_ssl_context -from .._synchronization import AsyncLock -from .._trace import Trace -from .connection_pool import AsyncConnectionPool -from .http11 import AsyncHTTP11Connection -from .interfaces import AsyncConnectionInterface - -logger = logging.getLogger("httpcore.socks") - - -AUTH_METHODS = { - b"\x00": "NO AUTHENTICATION REQUIRED", - b"\x01": "GSSAPI", - b"\x02": "USERNAME/PASSWORD", - b"\xff": "NO ACCEPTABLE METHODS", -} - -REPLY_CODES = { - b"\x00": "Succeeded", - b"\x01": "General SOCKS server failure", - b"\x02": "Connection not allowed by ruleset", - b"\x03": "Network unreachable", - b"\x04": "Host unreachable", - b"\x05": "Connection refused", - b"\x06": "TTL expired", - b"\x07": "Command not supported", - b"\x08": "Address type not supported", -} - - -async def _init_socks5_connection( - stream: AsyncNetworkStream, - *, - host: bytes, - port: int, - auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, -) -> None: - conn = socks5.SOCKS5Connection() - - # Auth method request - auth_method = ( - socks5.SOCKS5AuthMethod.NO_AUTH_REQUIRED - if auth is None - else socks5.SOCKS5AuthMethod.USERNAME_PASSWORD - ) - 
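-    # SOCKS5 negotiation, per RFC 1928/1929: advertise an auth method,
-    # read the server's choice, authenticate if username/password was
-    # selected, then issue a CONNECT for (host, port) and check the reply.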
conn.send(socks5.SOCKS5AuthMethodsRequest([auth_method])) - outgoing_bytes = conn.data_to_send() - await stream.write(outgoing_bytes) - - # Auth method response - incoming_bytes = await stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5AuthReply) - if response.method != auth_method: - requested = AUTH_METHODS.get(auth_method, "UNKNOWN") - responded = AUTH_METHODS.get(response.method, "UNKNOWN") - raise ProxyError( - f"Requested {requested} from proxy server, but got {responded}." - ) - - if response.method == socks5.SOCKS5AuthMethod.USERNAME_PASSWORD: - # Username/password request - assert auth is not None - username, password = auth - conn.send(socks5.SOCKS5UsernamePasswordRequest(username, password)) - outgoing_bytes = conn.data_to_send() - await stream.write(outgoing_bytes) - - # Username/password response - incoming_bytes = await stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5UsernamePasswordReply) - if not response.success: - raise ProxyError("Invalid username/password") - - # Connect request - conn.send( - socks5.SOCKS5CommandRequest.from_address( - socks5.SOCKS5Command.CONNECT, (host, port) - ) - ) - outgoing_bytes = conn.data_to_send() - await stream.write(outgoing_bytes) - - # Connect response - incoming_bytes = await stream.read(max_bytes=4096) - response = conn.receive_data(incoming_bytes) - assert isinstance(response, socks5.SOCKS5Reply) - if response.reply_code != socks5.SOCKS5ReplyCode.SUCCEEDED: - reply_code = REPLY_CODES.get(response.reply_code, "UNKOWN") - raise ProxyError(f"Proxy Server could not connect: {reply_code}.") - - -class AsyncSOCKSProxy(AsyncConnectionPool): - """ - A connection pool that sends requests via an HTTP proxy. - """ - - def __init__( - self, - proxy_url: typing.Union[URL, bytes, str], - proxy_auth: typing.Optional[ - typing.Tuple[typing.Union[bytes, str], typing.Union[bytes, str]] - ] = None, - ssl_context: typing.Optional[ssl.SSLContext] = None, - max_connections: typing.Optional[int] = 10, - max_keepalive_connections: typing.Optional[int] = None, - keepalive_expiry: typing.Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - network_backend: typing.Optional[AsyncNetworkBackend] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - proxy_url: The URL to use when connecting to the proxy server. - For example `"http://127.0.0.1:8080/"`. - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish - a connection. - local_address: Local address to connect from. 
Can also be used to - connect using a particular address family. Using - `local_address="0.0.0.0"` will connect using an `AF_INET` address - (IPv4), while using `local_address="::"` will connect using an - `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. - """ - super().__init__( - ssl_context=ssl_context, - max_connections=max_connections, - max_keepalive_connections=max_keepalive_connections, - keepalive_expiry=keepalive_expiry, - http1=http1, - http2=http2, - network_backend=network_backend, - retries=retries, - ) - self._ssl_context = ssl_context - self._proxy_url = enforce_url(proxy_url, name="proxy_url") - if proxy_auth is not None: - username, password = proxy_auth - username_bytes = enforce_bytes(username, name="proxy_auth") - password_bytes = enforce_bytes(password, name="proxy_auth") - self._proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = ( - username_bytes, - password_bytes, - ) - else: - self._proxy_auth = None - - def create_connection(self, origin: Origin) -> AsyncConnectionInterface: - return AsyncSocks5Connection( - proxy_origin=self._proxy_url.origin, - remote_origin=origin, - proxy_auth=self._proxy_auth, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - network_backend=self._network_backend, - ) - - -class AsyncSocks5Connection(AsyncConnectionInterface): - def __init__( - self, - proxy_origin: Origin, - remote_origin: Origin, - proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, - ssl_context: typing.Optional[ssl.SSLContext] = None, - keepalive_expiry: typing.Optional[float] = None, - http1: bool = True, - http2: bool = False, - network_backend: typing.Optional[AsyncNetworkBackend] = None, - ) -> None: - self._proxy_origin = proxy_origin - self._remote_origin = remote_origin - self._proxy_auth = proxy_auth - self._ssl_context = ssl_context - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - - self._network_backend: AsyncNetworkBackend = ( - AutoBackend() if network_backend is None else network_backend - ) - self._connect_lock = AsyncLock() - self._connection: typing.Optional[AsyncConnectionInterface] = None - self._connect_failed = False - - async def handle_async_request(self, request: Request) -> Response: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("connect", None) - - async with self._connect_lock: - if self._connection is None: - try: - # Connect to the proxy - kwargs = { - "host": self._proxy_origin.host.decode("ascii"), - "port": self._proxy_origin.port, - "timeout": timeout, - } - with Trace("connect_tcp", logger, request, kwargs) as trace: - stream = await self._network_backend.connect_tcp(**kwargs) - trace.return_value = stream - - # Connect to the remote host using socks5 - kwargs = { - "stream": stream, - "host": self._remote_origin.host.decode("ascii"), - "port": self._remote_origin.port, - "auth": self._proxy_auth, - } - with Trace( - "setup_socks5_connection", logger, request, kwargs - ) as trace: - await _init_socks5_connection(**kwargs) - trace.return_value = stream - - # Upgrade the stream to SSL - if self._remote_origin.scheme == b"https": - ssl_context = ( - default_ssl_context() - if self._ssl_context is None - else self._ssl_context - ) - alpn_protocols = ( - ["http/1.1", "h2"] if self._http2 else ["http/1.1"] - ) - ssl_context.set_alpn_protocols(alpn_protocols) - - kwargs = { 
- "ssl_context": ssl_context, - "server_hostname": self._remote_origin.host.decode("ascii"), - "timeout": timeout, - } - async with Trace("start_tls", logger, request, kwargs) as trace: - stream = await stream.start_tls(**kwargs) - trace.return_value = stream - - # Determine if we should be using HTTP/1.1 or HTTP/2 - ssl_object = stream.get_extra_info("ssl_object") - http2_negotiated = ( - ssl_object is not None - and ssl_object.selected_alpn_protocol() == "h2" - ) - - # Create the HTTP/1.1 or HTTP/2 connection - if http2_negotiated or ( - self._http2 and not self._http1 - ): # pragma: nocover - from .http2 import AsyncHTTP2Connection - - self._connection = AsyncHTTP2Connection( - origin=self._remote_origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - else: - self._connection = AsyncHTTP11Connection( - origin=self._remote_origin, - stream=stream, - keepalive_expiry=self._keepalive_expiry, - ) - except Exception as exc: - self._connect_failed = True - raise exc - elif not self._connection.is_available(): # pragma: nocover - raise ConnectionNotAvailable() - - return await self._connection.handle_async_request(request) - - def can_handle_request(self, origin: Origin) -> bool: - return origin == self._remote_origin - - async def aclose(self) -> None: - if self._connection is not None: - await self._connection.aclose() - - def is_available(self) -> bool: - if self._connection is None: # pragma: nocover - # If HTTP/2 support is enabled, and the resulting connection could - # end up as HTTP/2 then we should indicate the connection as being - # available to service multiple requests. - return ( - self._http2 - and (self._remote_origin.scheme == b"https" or not self._http1) - and not self._connect_failed - ) - return self._connection.is_available() - - def has_expired(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.has_expired() - - def is_idle(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.is_idle() - - def is_closed(self) -> bool: - if self._connection is None: # pragma: nocover - return self._connect_failed - return self._connection.is_closed() - - def info(self) -> str: - if self._connection is None: # pragma: nocover - return "CONNECTION FAILED" if self._connect_failed else "CONNECTING" - return self._connection.info() - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} [{self.info()}]>" diff --git a/spaces/Dalvo/Moxxie/README.md b/spaces/Dalvo/Moxxie/README.md deleted file mode 100644 index 9fe2e0be5e81ba2ef2f0980a685d7b3d324a2daa..0000000000000000000000000000000000000000 --- a/spaces/Dalvo/Moxxie/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Moxxie -emoji: 📉 -colorFrom: yellow -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DataScienceGuild/DataViz-Mermaid/style.css b/spaces/DataScienceGuild/DataViz-Mermaid/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/DataScienceGuild/DataViz-Mermaid/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; 
- margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/DelinteNicolas/SDG/app.py b/spaces/DelinteNicolas/SDG/app.py deleted file mode 100644 index e981f7ded68a0661cc890131d25985f61a7719f0..0000000000000000000000000000000000000000 --- a/spaces/DelinteNicolas/SDG/app.py +++ /dev/null @@ -1,80 +0,0 @@ -import gradio as gr -from spacy import displacy -from transformers import pipeline -from transformers import AutoTokenizer -from transformers import AutoModelForSequenceClassification - - -tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") -model_name_or_path = "DelinteNicolas/SDG_classifier_v0.0.4" -model = AutoModelForSequenceClassification.from_pretrained( - model_name_or_path) -classifier = pipeline("text-classification", - model=model, tokenizer=tokenizer) - - -def predict(text, thresh, model_name): - - colors = {"SDG1: No Poverty": "#e16f78", - "SDG10: Reduced Inequality": "#e26cac", - "SDG11: Sustainable Cities and Communities": "#e5b270", - "SDG12: Responsible Consumption and Production": "#e3b36b", - "SDG13: Climate Action": "#86df73", - "SDG14: Life Below Water": "#6fbce2", - "SDG15: Life on Land": "#70e47b", - "SDG16: Peace and Justice Strong Institutions": "#68aedc", - "SDG17: Partnerships to achieve the Goal": "#7099de", - "SDG2: Zero Hunger": "#e6c67b", - "SDG3: Good Health and Well-being": "#72dd91", - "SDG4: Quality Education": "#e1707d", - "SDG5: Gender Equality": "#e17669", - "SDG6: Clean Water and Sanitation": "#78cce1", - "SDG7: Affordable and Clean Energy": "#e2bf6e", - "SDG8: Decent Work and Economic Growth": "#e07391", - "SDG9: Industry, Innovation and Infrastructure": "#e49971"} - - doc = { - "text": text, - "ents": [], - "colors": colors, - "title": None - } - - txtList = text.split('.') - sdg = classifier(txtList) - - for i, txt in enumerate(txtList): - - if sdg[i]['score'] > thresh and len(txt) > 10 and sdg[i]['label'] != 'Not a SDG': - doc["ents"].append({"start": text.index(txt), "end": text.index( - txt)+len(txt)+1, "label": sdg[i]['label']}) - - html = displacy.render(doc, style="ent", page=True, - manual=True, minify=True, options={"colors": colors}) - html = ( - "

          " - ) - - return html - - -iface = gr.Interface( - fn=predict, - inputs=[ - gr.inputs.Textbox( - lines=10, - default="World hunger will be solved by 2030.", - label="Input Text"), - gr.Slider(0, 1, value=.9, interactive=True, label="Confidence"), - gr.Dropdown(["SDG_classifier_v0.0.1", - "SDG_classifier_v0.0.2", - "SDG_classifier_v0.0.3", - "SDG_classifier_v0.0.4", - "SDG_classifier_v0.0.5"], - value="SDG_classifier_v0.0.4", - label="Classification Model")], - outputs="html", -) -iface.launch() diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/id_lang_tab _final.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/id_lang_tab _final.py deleted file mode 100644 index 13acb4e32f482c585604c84c2f5201576cb8ae89..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/id_lang_tab _final.py +++ /dev/null @@ -1,197 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import os -import matplotlib.pyplot as plt -import tiktoken -import random -import joblib -import json -import csv -from transformers import pipeline -import keras -from tensorflow.keras.preprocessing.sequence import pad_sequences -from sklearn.preprocessing import LabelEncoder -from sklearn.feature_extraction.text import CountVectorizer -from tensorflow.keras.utils import plot_model -from filesplit.merge import Merge -from extra_streamlit_components import tab_bar, TabBarItemData - -title = "Identification de langue" -sidebar_name = "Identification de langue" - - -# CountVectorizer a une liste de phrase en entrée. -# Cette fonction met les données d'entrée dans le bon format -def format_to_vectorize(data): - X_tok = [] - if "DataFrame" in str(type(data)):sentences = data.tolist() - elif "str" in str(type(data)): - sentences =[data] - else: sentences = data - - for sentence in sentences: - X_tok.append(sentence) - return X_tok - -def create_BOW(data): - global vectorizer - - X_tok = format_to_vectorize(data) - X = vectorizer.transform(X_tok) - return X - -def load_vectorizer(tokenizer): - global dict_token, dict_ids, nb_token - - path = 'data/vectorizer_tiktoken_big.pkl' - vectorizer = joblib.load(path) - dict_token = {tokenizer.decode([cle]): cle for cle, valeur in vectorizer.vocabulary_.items()} - dict_ids = {cle: tokenizer.decode([cle]) for cle, valeur in vectorizer.vocabulary_.items()} #dict_ids.items()} - nb_token = len(vectorizer.vocabulary_) - return vectorizer - -def lang_id_nb(sentences): - global lan_to_language - - if "str" in str(type(sentences)): - return lan_to_language[clf_nb.predict(create_BOW(sentences))[0]] - else: return [lan_to_language[l] for l in clf_nb.predict(create_BOW(sentences))] - -@st.cache_resource -def init_nb_identifier(): - - tokenizer = tiktoken.get_encoding("cl100k_base") - - # Chargement du classificateur sauvegardé - clf_nb = joblib.load("data/id_lang_tiktoken_nb_sparse_big.pkl") - vectorizer = load_vectorizer(tokenizer) - - # Lisez le contenu du fichier JSON - with open('data/multilingue/lan_to_language.json', 'r') as fichier: - lan_to_language = json.load(fichier) - return tokenizer, dict_token, dict_ids, nb_token, lan_to_language, clf_nb, vectorizer - -def encode_text(textes): - global tokenizer - - max_length=250 - sequences = tokenizer.encode_batch(textes) - return pad_sequences(sequences, maxlen=max_length, padding='post') - -def read_list_lan(): - - with open('data/multilingue/lan_code.csv', 'r') as fichier_csv: - reader = csv.reader(fichier_csv) - lan_code = next(reader) - return lan_code - -@st.cache_resource 
-def init_dl_identifier(): - - label_encoder = LabelEncoder() - label_encoder.fit(read_list_lan()) - merge = Merge("data/dl_id_lang_split", "./data", "dl_tiktoken_id_language_model.h5").merge(cleanup=False) - dl_model = keras.models.load_model("data/dl_tiktoken_id_language_model.h5") - return dl_model, label_encoder - -def lang_id_dl(sentences): - global dl_model, label_encoder - - if "str" in str(type(sentences)): predictions = dl_model.predict(encode_text([sentences])) - else: predictions = dl_model.predict(encode_text(sentences)) - # Décodage des prédictions en langues - predicted_labels_encoded = np.argmax(predictions, axis=1) - predicted_languages = label_encoder.classes_[predicted_labels_encoded] - if "str" in str(type(sentences)): return lan_to_language[predicted_languages[0]] - else: return [l for l in predicted_languages] - -@st.cache_resource -def init_lang_id_external(): - lang_id_model_ext = pipeline('text-classification',model="papluca/xlm-roberta-base-language-detection") - dict_xlmr = {"ar":"ara", "bg":"bul", "de":"deu", "el": "ell", "en":"eng", "es":"spa", "fr":"fra", "hi": "hin","it":"ita","ja":"jpn", \ - "nl":"nld", "pl":"pol", "pt":"por", "ru":"rus", "sw":"swh", "th":"tha", "tr":"tur", "ur": "urd", "vi":"vie", "zh":"cmn"} - return lang_id_model_ext, dict_xlmr - -def run(): - global tokenizer, vectorizer, dict_token, dict_ids, nb_token, lan_to_language, clf_nb - global dl_model, label_encoder - - - tokenizer, dict_token, dict_ids, nb_token, lan_to_language, clf_nb, vectorizer = init_nb_identifier() - dl_model, label_encoder = init_dl_identifier() - lang_id_model_ext, dict_xlmr = init_lang_id_external() - - st.write("") - st.title(title) - st.write("## **Explications :**\n") - st.markdown( - """ - Afin de mettre en oeuvre cette fonctionnalité nous avons utilisé un jeu d'entrainement multilinge de 9.757.778 phrases dans 95 langues. - Puis, nous avons utilisé 2 méthodes pour identifier la langue d'un texte: - 1. un classificateur **Naïve Bayes** - 2. un modèle de **Deep Learning** - - Les 2 modèles ont un accuracy similaire sur le jeu de test: **:red[96% pour NB et 97,5% pour DL]** - """ - ) - - chosen_id = tab_bar(data=[ - TabBarItemData(id="tab1", title="Id. Naïve Bayes", description="avec le Bag Of Words"), - TabBarItemData(id="tab2", title="Id. Deep Learning", description=" avec Keras"), - TabBarItemData(id="tab3", title="Interpretabilité", description="du modèle Naïve Bayes ")], - default="tab1") - - if (chosen_id == "tab1") or (chosen_id == "tab2"): - st.write("## **Paramètres :**\n") - custom_sentence = st.text_area(label="Saisir le texte dont vous souhaitez identifier la langue:") - st.button(label="Valider", type="primary") - if custom_sentence!='': - st.write("## **Résultats :**\n") - st.markdown( - """ - |Identifieur |Langue détectée| - |-------------------------------------|---------------| - |classificateur Naïve Bayes |**:red["""+lang_id_nb(custom_sentence)+"""]**| - |le modèle de Deep Learning |**:red["""+lang_id_dl(custom_sentence)+"""]**| - |XLM-RoBERTa (Hugging Face) |**:red["""+lan_to_language[dict_xlmr[lang_id_model_ext(custom_sentence)[0]['label']]]+"""]**| - """ - ) - st.write("## **Details sur la méthode :**\n") - if (chosen_id == "tab1"): - st.markdown( - """ - Afin d'utiliser le classificateur Naïve Bayes, il nous a fallu: - - Créer un Bag of Words de token.. - - ..Tokeniser le texte d'entrainement avec CountVectorizer et un tokenizer 'custom', **Tiktoken** d'OpenAI. 
-            - Utiliser des matrices creuses (Sparse Matrix), car notre BOW contenait 10 M de lignes x 59122 tokens.
-            - Sauvegarder le vectorizer (non serialisable) et le classificateur entrainé.
-            - L'exécution de toutes ces étapes est assez rapide: une dizaine de minutes
-
          - Le résultat est très bon: Accuracy sur le jeu de test de - **:red[96%]** sur les 95 langues, et **:red[99,1%]** sur les 5 langues d'Europe de l'Ouest (en,fr,de,it,sp) -
            - **Note:** Le modèle *XLM-RoBERTa* de Hugging Face (qui identifie 20 langues seulement) a une accuracy sur notre jeu de test = **99,6%**
-            """
-            , unsafe_allow_html=True)
-        else:
-            st.markdown(
-            """
-            Nous avons mis en oeuvre un modèle Keras avec une couche d'embedding et 4 couches denses *(Voir architecture ci-dessous)*.
-            Nous avons utilisé le tokenizer **Tiktoken** d'OpenAI.
-            La couche d'embedding accepte 250 tokens, ce qui signifie que la détection de langue s'effectue sur approximativement les 200 premiers mots.
-
          - Les 2 modèles ont un accuracy similaire sur le jeu de test: **:red[96% pour NB et 97,5% pour DL]** - """ - , unsafe_allow_html=True) - st.write("
          Architecture du modèle utilisé:
          ", unsafe_allow_html=True) - plot_model(dl_model, show_shapes=True, show_layer_names=True, show_layer_activations=True,rankdir='TB',to_file='./assets/model_plot.png') - col1, col2, col3 = st.columns([0.15,0.7,0.15]) - with col2: - st.image('./assets/model_plot.png',use_column_width="auto") - elif (chosen_id == "tab3"): - st.write("## **Interpretabilité**\n") - st.write("****Waiting for code from Olivier****") - - diff --git a/spaces/Devic1/LinearRegression/app.py b/spaces/Devic1/LinearRegression/app.py deleted file mode 100644 index 35d8e30613440e6b1b7db206f75d8c1125c5531b..0000000000000000000000000000000000000000 --- a/spaces/Devic1/LinearRegression/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import gradio as gr -import torch -import torch.nn as nn -import time -import math - -class Lreg(nn.Module): - def __init__(self): - super(Lreg,self).__init__() - self.l = nn.Linear(1,1) - - def forward(self,x): - y = self.l(x) - return y - -model = Lreg() -crit = nn.MSELoss() -optim = torch.optim.SGD(model.parameters(),lr=0.01) - -def weightsbias(weight,bias,predicted,progress=gr.Progress()): - weight = float(weight) - bias = float(bias) - predicted = float(predicted) - x = torch.randn(1000) - y = (x*weight)+bias - progress(0,desc="Starting") - model.train() - for i in progress.tqdm(range(1,100)): - for idx,ele in enumerate(x): - ele = ele.unsqueeze(0) - out = model(ele) - optim.zero_grad() - loss = crit(out,y[idx].unsqueeze(0)) - loss.backward() - optim.step() - model.eval() - t = torch.tensor([predicted],dtype=torch.float32) - rout = model(t) - c = model.state_dict() - return [rout.item(),c['l.weight'].item(),c['l.bias'].item()] - - -with gr.Blocks() as iface: - weights = gr.Textbox(label="Weights") - bias = gr.Textbox(label="Bias") - predicted = gr.Textbox(label="Prediction Number") - output = gr.Number(label="Output Predicted") - w = gr.Number(label="Calculated Weight ") - b = gr.Number(label="Calculated Bias ") - train = gr.Button("Train") - train.click(fn=weightsbias, inputs=[weights,bias,predicted], outputs=[output,w,b]) - -iface.queue(concurrency_count=10).launch() diff --git a/spaces/Devika/Briefly/app.py b/spaces/Devika/Briefly/app.py deleted file mode 100644 index 25ef8fbf404bd942295e630ad976004c58f5a42a..0000000000000000000000000000000000000000 --- a/spaces/Devika/Briefly/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import streamlit as st #Web App -from gnewsclient import gnewsclient # for fetching google news -from newspaper import Article # to obtain text from news articles -from transformers import pipeline # to summarize text -import spacy # to obtain keyword -from annotated_text import annotated_text # to display keywords - - -# Load sshleifer/distilbart-cnn-12-6 model -@st.cache(allow_output_mutation=True) -def load_model(): - model = pipeline("summarization") - return model - -data = gnewsclient.NewsClient(max_results=0) - -#faster method - inference api - 30k characters/mo -#API_URL = "https://api-inference.huggingface.co/models/sshleifer/distilbart-cnn-12-6" -#API_KEY=os.getenv("API_KEY") -#headers = {"Authorization": f"Bearer {API_KEY}"} -#def query(payload): -# response = requests.post(API_URL, headers=headers, json=payload) -# return response.json() - - -# obtain urls and it's content -def getNews(topic,location): - count=0 - contents=[] - titles=[] - authors=[] - urls=[] - data = gnewsclient.NewsClient(language='english',location=location,topic=topic,max_results=10) - news = data.get_news() - for item in news: - url=item['link'] - article = Article(url) - try: - 
article.download() - article.parse() - temp=item['title'][::-1] - index=temp.find("-") - temp=temp[:index-1][::-1] - urls.append(url) - contents.append(article.text) - titles.append(item['title'][:-index-1]) - authors.append(temp) - count+=1 - if(count==5): - break - except: - continue - return contents,titles,authors,urls - - - # Summarizes the content- minimum word limit 30 and maximum 60 -def getNewsSummary(contents,summarizer): - summaries=[] - for content in contents: - minimum=len(content.split()) - summaries.append(summarizer(content,max_length=60,min_length=min(30,minimum),do_sample=False,truncation=True)[0]['summary_text']) - return summaries - - -# Obtain 4 keywords from content (person,organisation or geopolitical entity) -def generateKeyword(contents): - keywords=[] - words=[] - nlp = spacy.load("en_core_web_lg") - labels=["PERSON","ORG","GPE"] - for content in contents: - doc=nlp(content) - keys=[] - limit=0 - for ent in doc.ents: - key=ent.text.upper() - label=ent.label_ - if(key not in words and key not in keywords and label in labels): - keys.append(key) - limit+=1 - for element in key.split(): - words.append(element) - if(limit==4): - keywords.append(keys) - break - return keywords - - -# Display title,author and summary in streamlit -def DisplaySummary(titles,authors,summaries,keywords,urls): - for i in range(5): - if(i+1<=len(summaries) and i+1<=len(keywords)): - st.text("") - st.subheader(f'[{titles[i]}] ({urls[i]})') - st.markdown(f'{authors[i]}',unsafe_allow_html=True) - st.write(summaries[i]) - if(len(keywords[i])==4): - annotated_text("KEYWORDS :",(keywords[i][0],"","#faa")," ",(keywords[i][1],"","#faa")," ",(keywords[i][2],"","#faa")," ",(keywords[i][3],"","#faa")) - elif(len(keywords[i])==3): - annotated_text("KEYWORDS :",(keywords[i][0],"","#faa")," ",(keywords[i][1],"","#faa")," ",(keywords[i][2],"","#faa")) - elif(len(keywords[i])==2): - annotated_text("KEYWORDS :",(keywords[i][0],"","#faa")," ",(keywords[i][1],"","#faa")) - elif(len(keywords[i])==1): - annotated_text("KEYWORDS :",(keywords[i][0],"","#faa")) - st.text("") - st.text("") - - -def main(): - summarizer=load_model() - st.title('Briefly') - with st.expander('Read trending news in less than 60 words...', expanded=True): - with st.form(key='form1'): - topic=st.selectbox('Category:',data.topics[2:]+["World"]) - location=st.selectbox('Location:',data.locations) - submit_button=st.form_submit_button() - - if submit_button: - with st.spinner('Fetching news...'): - contents,titles,authors,urls=getNews(topic,location) - summaries=getNewsSummary(contents,summarizer) - keywords=generateKeyword(contents) - DisplaySummary(titles,authors,summaries,keywords,urls) - - -if __name__ == '__main__': - main() diff --git a/spaces/Dinoking/Guccio-AI-Designer/utils.py b/spaces/Dinoking/Guccio-AI-Designer/utils.py deleted file mode 100644 index 5498289425bb70e959c0194eb7c6fab63e0c045a..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/utils.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. 
See the License for the specific language -# governing permissions and limitations under the License. - -import string -import numpy as np -from pathlib import Path -import requests -import pickle -import sys -import re -import gdown - -def prettify_name(name): - valid = "-_%s%s" % (string.ascii_letters, string.digits) - return ''.join(map(lambda c : c if c in valid else '_', name)) - -# Add padding to sequence of images -# Used in conjunction with np.hstack/np.vstack -# By default: adds one 64th of the width of horizontal padding -def pad_frames(strip, pad_fract_horiz=64, pad_fract_vert=0, pad_value=None): - dtype = strip[0].dtype - if pad_value is None: - if dtype in [np.float32, np.float64]: - pad_value = 1.0 - else: - pad_value = np.iinfo(dtype).max - - frames = [strip[0]] - for frame in strip[1:]: - if pad_fract_horiz > 0: - frames.append(pad_value*np.ones((frame.shape[0], frame.shape[1]//pad_fract_horiz, 3), dtype=dtype)) - elif pad_fract_vert > 0: - frames.append(pad_value*np.ones((frame.shape[0]//pad_fract_vert, frame.shape[1], 3), dtype=dtype)) - frames.append(frame) - return frames - - -def download_google_drive(url, output_name): - print('Downloading', url) - gdown.download(url, str(output_name)) - # session = requests.Session() - # r = session.get(url, allow_redirects=True) - # r.raise_for_status() - - # # Google Drive virus check message - # if r.encoding is not None: - # tokens = re.search('(confirm=.+)&id', str(r.content)) - # assert tokens is not None, 'Could not extract token from response' - - # url = url.replace('id=', f'{tokens[1]}&id=') - # r = session.get(url, allow_redirects=True) - # r.raise_for_status() - - # assert r.encoding is None, f'Failed to download weight file from {url}' - - # with open(output_name, 'wb') as f: - # f.write(r.content) - -def download_generic(url, output_name): - print('Downloading', url) - session = requests.Session() - r = session.get(url, allow_redirects=True) - r.raise_for_status() - - # No encoding means raw data - if r.encoding is None: - with open(output_name, 'wb') as f: - f.write(r.content) - else: - download_manual(url, output_name) - -def download_manual(url, output_name): - outpath = Path(output_name).resolve() - while not outpath.is_file(): - print('Could not find checkpoint') - print(f'Please download the checkpoint from\n{url}\nand save it as\n{outpath}') - input('Press any key to continue...') - -def download_ckpt(url, output_name): - if 'drive.google' in url: - download_google_drive(url, output_name) - elif 'mega.nz' in url: - download_manual(url, output_name) - else: - download_generic(url, output_name) \ No newline at end of file diff --git a/spaces/EPFL-VILAB/MultiMAE/dpt/models.py b/spaces/EPFL-VILAB/MultiMAE/dpt/models.py deleted file mode 100644 index f0c142fd3d8a29f9588b964250225d77f7b56fc8..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/dpt/models.py +++ /dev/null @@ -1,153 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - enable_attention_hooks=False, - ): - - 
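-        # DPT pipeline: a ViT backbone, tapped at the four hook layers
-        # chosen below, feeds RefineNet-style fusion blocks whose output
-        # goes through a task-specific head for the dense prediction.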
super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to True if you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - enable_attention_hooks=enable_attention_hooks, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - def forward(self, x): - if self.channels_last == True: - x = x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__( - self, path=None, non_negative=True, scale=1.0, shift=0.0, invert=False, **kwargs - ): - features = kwargs["features"] if "features" in kwargs else 256 - - self.scale = scale - self.shift = shift - self.invert = invert - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - inv_depth = super().forward(x).squeeze(dim=1) - - if self.invert: - depth = self.scale * inv_depth + self.shift - depth[depth < 1e-8] = 1e-8 - depth = 1.0 / depth - return depth - else: - return inv_depth - - -class DPTSegmentationModel(DPT): - def __init__(self, num_classes, path=None, **kwargs): - - features = kwargs["features"] if "features" in kwargs else 256 - - kwargs["use_bn"] = True - - head = nn.Sequential( - nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(features), - nn.ReLU(True), - nn.Dropout(0.1, False), - nn.Conv2d(features, num_classes, kernel_size=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - ) - - super().__init__(head, **kwargs) - - self.auxlayer = nn.Sequential( - nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(features), - nn.ReLU(True), - nn.Dropout(0.1, False), - nn.Conv2d(features, num_classes, kernel_size=1), - ) - - if path is not None: - self.load(path) diff --git a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/app.py b/spaces/EcoCy/LoRA-DreamBooth-Training-UI/app.py deleted file mode 100644 index 1b47590d28504c5832a3fbb2fcd4f5ef121cf7d8..0000000000000000000000000000000000000000 --- a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/app.py +++ /dev/null @@ -1,76 +0,0 @@
-#!/usr/bin/env python - -from __future__ import annotations - -import os - -import gradio as gr -import torch - -from app_inference import create_inference_demo -from app_training import create_training_demo -from app_upload import create_upload_demo -from inference import InferencePipeline -from trainer import Trainer - -TITLE = '# LoRA DreamBooth Training UI' - -ORIGINAL_SPACE_ID = 'lora-library/LoRA-DreamBooth-Training-UI' -SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID) -SHARED_UI_WARNING = f'''# Attention - This Space doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU. - -
          Duplicate Space
          -''' - -if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID: - SETTINGS = f'Settings' -else: - SETTINGS = 'Settings' -CUDA_NOT_AVAILABLE_WARNING = f'''# Attention - Running on CPU. -
          -You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces. -"T4 small" is sufficient to run this demo. -
          -''' - -HF_TOKEN_NOT_SPECIFIED_WARNING = f'''# Attention - The environment variable `HF_TOKEN` is not specified. Please specify your Hugging Face token with write permission as the value of it. -
          -You can check and create your Hugging Face tokens here. -You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab. -
-''' - -HF_TOKEN = os.getenv('HF_TOKEN') - - -def show_warning(warning_text: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown(warning_text) - return demo - - -pipe = InferencePipeline(HF_TOKEN) -trainer = Trainer(HF_TOKEN) - -with gr.Blocks(css='style.css') as demo: - if os.getenv('IS_SHARED_UI'): - show_warning(SHARED_UI_WARNING) - if not torch.cuda.is_available(): - show_warning(CUDA_NOT_AVAILABLE_WARNING) - if not HF_TOKEN: - show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING) - - gr.Markdown(TITLE) - with gr.Tabs(): - with gr.TabItem('Train'): - create_training_demo(trainer, pipe) - with gr.TabItem('Test'): - create_inference_demo(pipe, HF_TOKEN) - with gr.TabItem('Upload'): - gr.Markdown(''' - - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed. - ''') - create_upload_demo(HF_TOKEN) - -demo.queue(max_size=1).launch(share=False) diff --git a/spaces/Eddycrack864/Applio-Inference/julius/bands.py b/spaces/Eddycrack864/Applio-Inference/julius/bands.py deleted file mode 100644 index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/julius/bands.py +++ /dev/null @@ -1,119 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Decomposition of a signal over frequency bands in the waveform domain. -""" -from typing import Optional, Sequence -import torch - -from .core import mel_frequencies -from .lowpass import LowPassFilters -from .utils import simple_repr - - -class SplitBands(torch.nn.Module): - """ - Decomposes a signal over the given frequency bands in the waveform domain using - a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`. - You can either specify explicitly the frequency cutoffs, or just the number of bands, - in which case the frequency cutoffs will be spread out evenly in mel scale. - - Args: - sample_rate (float): Sample rate of the input signal in Hz. - n_bands (int or None): number of bands, when not giving them explicitly with `cutoffs`. - In that case, the cutoff frequencies will be evenly spaced in mel-space. - cutoffs (list[float] or None): list of frequency cutoffs in Hz. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more information. - fft (bool or None): See `LowPassFilters` for more info. - - .. note:: - The sum of all the bands will always be the input signal. - - .. warning:: - Unlike `julius.lowpass.LowPassFilters`, the cutoff frequencies must be provided in Hz along - with the sample rate. - - Shape: - - - Input: `[*, T]` - - Output: `[B, *, T']`, with `T'=T` if `pad` is True.
- If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1` - - >>> bands = SplitBands(sample_rate=128, n_bands=10) - >>> x = torch.randn(6, 4, 1024) - >>> list(bands(x).shape) - [10, 6, 4, 1024] - """ - - def __init__(self, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - if (cutoffs is None) + (n_bands is None) != 1: - raise ValueError("You must provide either n_bands, or cutoffs, but not both.") - - self.sample_rate = sample_rate - self.n_bands = n_bands - self._cutoffs = list(cutoffs) if cutoffs is not None else None - self.pad = pad - self.zeros = zeros - self.fft = fft - - if cutoffs is None: - if n_bands is None: - raise ValueError("You must provide one of n_bands or cutoffs.") - if not n_bands >= 1: - raise ValueError(f"n_bands must be at least one (got {n_bands})") - cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1] - else: - if max(cutoffs) > 0.5 * sample_rate: - raise ValueError("A cutoff above sample_rate/2 does not make sense.") - if len(cutoffs) > 0: - self.lowpass = LowPassFilters( - [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft) - else: - # Here I cannot make both TorchScript and MyPy happy. - # I miss the good old times, before all this madness was created. - self.lowpass = None # type: ignore - - def forward(self, input): - if self.lowpass is None: - return input[None] - lows = self.lowpass(input) - low = lows[0] - bands = [low] - for low_and_band in lows[1:]: - # Get a bandpass filter by subtracting lowpasses - band = low_and_band - low - bands.append(band) - low = low_and_band - # Last band is whatever is left in the signal - bands.append(input - low) - return torch.stack(bands) - - @property - def cutoffs(self): - if self._cutoffs is not None: - return self._cutoffs - elif self.lowpass is not None: - return [c * self.sample_rate for c in self.lowpass.cutoffs] - else: - return [] - - def __repr__(self): - return simple_repr(self, overrides={"cutoffs": self._cutoffs}) - - -def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `SplitBands`, refer to this class for more information. - - >>> x = torch.randn(6, 4, 1024) - >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape) - [3, 6, 4, 1024] - """ - return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal) diff --git a/spaces/Egrt/GCycleGAN/utils/__init__.py b/spaces/Egrt/GCycleGAN/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py deleted file mode 100644 index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_33966KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from .
import layers_33966KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16, 32)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py deleted file mode 100644 index a3aff0d1325d3b9e25b5ed095cea28d313f611a0..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/psenet_r50_fpnf.py +++ /dev/null @@ -1,51 +0,0 @@ -model_poly = dict( - type='PSENet', - backbone=dict( - type='mmdet.ResNet', - depth=50, - 
num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=True, - style='caffe'), - neck=dict( - type='FPNF', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - fusion_type='concat'), - bbox_head=dict( - type='PSEHead', - in_channels=[256], - out_channels=7, - loss=dict(type='PSELoss'), - postprocessor=dict(type='PSEPostprocessor', text_repr_type='poly')), - train_cfg=None, - test_cfg=None) - -model_quad = dict( - type='PSENet', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=True, - style='caffe'), - neck=dict( - type='FPNF', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - fusion_type='concat'), - bbox_head=dict( - type='PSEHead', - in_channels=[256], - out_channels=7, - loss=dict(type='PSELoss'), - postprocessor=dict(type='PSEPostprocessor', text_repr_type='quad')), - train_cfg=None, - test_cfg=None) diff --git a/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md b/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md deleted file mode 100644 index fbae6272b776115559559e68e20919222cc53a5a..0000000000000000000000000000000000000000 --- a/spaces/FaceOnLive/ID-Document-Recognition-SDK/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: ID Document Recognition SDK -emoji: 🚀 -colorFrom: purple -colorTo: pink -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py deleted file mode 100644 index 55e60f7a136b4e312ae7a19e4b5206316b201cf3..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/data.py +++ /dev/null @@ -1,399 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
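- # The load_and_transform_* helpers below convert raw files for each modality (vision, text, audio, thermal, video) into batched tensors ready for ImageBind.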
- -import math - -import torch -import torch.nn as nn -import torchaudio -import logging - -from .models.multimodal_preprocessors import SimpleTokenizer -from PIL import Image -from pytorchvideo import transforms as pv_transforms -from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler -from pytorchvideo.data.encoded_video import EncodedVideo - -from torchvision import transforms -from torchvision.transforms._transforms_video import NormalizeVideo - -DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds - -BPE_PATH = "./model/ImageBind/bpe/bpe_simple_vocab_16e6.txt.gz" - - -def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length): - # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102 - waveform -= waveform.mean() - fbank = torchaudio.compliance.kaldi.fbank( - waveform, - htk_compat=True, - sample_frequency=sample_rate, - use_energy=False, - window_type="hanning", - num_mel_bins=num_mel_bins, - dither=0.0, - frame_length=25, - frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS, - ) - # Convert to [mel_bins, num_frames] shape - fbank = fbank.transpose(0, 1) - # Pad to target_length - n_frames = fbank.size(1) - p = target_length - n_frames - # if p is too large (say >20%), flash a warning - if abs(p) / n_frames > 0.2: - logging.warning( - "Large gap between audio n_frames(%d) and " - "target_length (%d). Is the audio_target_length " - "setting correct?", - n_frames, - target_length, - ) - # cut and pad - if p > 0: - fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0) - elif p < 0: - fbank = fbank[:, 0:target_length] - # Convert to [1, mel_bins, num_frames] shape, essentially like a 1 - # channel image - fbank = fbank.unsqueeze(0) - return fbank - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def load_and_transform_vision_data(image_paths, device): - if image_paths is None: - return None - - image_ouputs = [] - for image_path in image_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - 224, interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - with open(image_path, "rb") as fopen: - image = Image.open(fopen).convert("RGB") - - image = data_transform(image).to(device) - image_ouputs.append(image) - return torch.stack(image_ouputs, dim=0) - - -def load_and_transform_vision_data_for_web_demo(image_paths, device): - if image_paths is None: - return None - - image_ouputs = [] - for image_path in image_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - (224,224), interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - with open(image_path, "rb") as fopen: - image = Image.open(fopen).convert("RGB") - - image = data_transform(image).to(device) - image_ouputs.append(image) - return torch.stack(image_ouputs, dim=0) - - -def load_and_transform_thermal_data(thermal_paths, device): - if thermal_paths is None: - return 
None - - thermal_ouputs = [] - for thermal_path in thermal_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - 224, interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - ] - ) - with open(thermal_path, "rb") as fopen: - thermal = Image.open(fopen).convert("L") - thermal = data_transform(thermal).to(device) - thermal_ouputs.append(thermal) - return torch.stack(thermal_ouputs, dim=0) - - -def load_and_transform_text(text, device): - if text is None: - return None - tokenizer = SimpleTokenizer(bpe_path=BPE_PATH) - tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text] - tokens = torch.cat(tokens, dim=0) - return tokens - - -def load_and_transform_audio_data( - audio_paths, - device, - num_mel_bins=128, - target_length=204, - sample_rate=16000, - clip_duration=2, - clips_per_video=3, - mean=-4.268, - std=9.138, -): - if audio_paths is None: - return None - - audio_outputs = [] - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - - for audio_path in audio_paths: - waveform, sr = torchaudio.load(audio_path) - if sample_rate != sr: - waveform = torchaudio.functional.resample( - waveform, orig_freq=sr, new_freq=sample_rate - ) - all_clips_timepoints = get_clip_timepoints( - clip_sampler, waveform.size(1) / sample_rate - ) - all_clips = [] - for clip_timepoints in all_clips_timepoints: - waveform_clip = waveform[ - :, - int(clip_timepoints[0] * sample_rate) : int( - clip_timepoints[1] * sample_rate - ), - ] - waveform_melspec = waveform2melspec( - waveform_clip, sample_rate, num_mel_bins, target_length - ) - all_clips.append(waveform_melspec) - - normalize = transforms.Normalize(mean=mean, std=std) - all_clips = [normalize(ac).to(device) for ac in all_clips] - - all_clips = torch.stack(all_clips, dim=0) - audio_outputs.append(all_clips) - - return torch.stack(audio_outputs, dim=0) - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def crop_boxes(boxes, x_offset, y_offset): - """ - Perform crop on the bounding boxes given the offsets. - Args: - boxes (ndarray or None): bounding boxes to perform crop. The dimension - is `num boxes` x 4. - x_offset (int): cropping offset in the x axis. - y_offset (int): cropping offset in the y axis. - Returns: - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - cropped_boxes = boxes.copy() - cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset - cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset - - return cropped_boxes - - -def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None): - """ - Perform uniform spatial sampling on the images and corresponding boxes. - Args: - images (tensor): images to perform uniform crop. The dimension is - `num frames` x `channel` x `height` x `width`. - size (int): size of height and width to crop the images. - spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width - is larger than height. Or 0, 1, or 2 for top, center, and bottom - crop if height is larger than width. - boxes (ndarray or None): optional. Corresponding boxes to images. - Dimension is `num boxes` x 4. - scale_size (int): optional.
If not None, resize the images to scale_size before - performing any crop. - Returns: - cropped (tensor): images with dimension of - `num frames` x `channel` x `size` x `size`. - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - assert spatial_idx in [0, 1, 2] - ndim = len(images.shape) - if ndim == 3: - images = images.unsqueeze(0) - height = images.shape[2] - width = images.shape[3] - - if scale_size is not None: - if width <= height: - width, height = scale_size, int(height / width * scale_size) - else: - width, height = int(width / height * scale_size), scale_size - images = torch.nn.functional.interpolate( - images, - size=(height, width), - mode="bilinear", - align_corners=False, - ) - - y_offset = int(math.ceil((height - size) / 2)) - x_offset = int(math.ceil((width - size) / 2)) - - if height > width: - if spatial_idx == 0: - y_offset = 0 - elif spatial_idx == 2: - y_offset = height - size - else: - if spatial_idx == 0: - x_offset = 0 - elif spatial_idx == 2: - x_offset = width - size - cropped = images[:, :, y_offset : y_offset + size, x_offset : x_offset + size] - cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None - if ndim == 3: - cropped = cropped.squeeze(0) - return cropped, cropped_boxes - - -class SpatialCrop(nn.Module): - """ - Convert the video into 3 smaller clips spatially. Must be used after the - temporal crops to get spatial crops, and should be used with - -2 in the spatial crop at the slowfast augmentation stage (so full - frames are passed in here). Will return a larger list with the - 3x spatial crops as well. - """ - - def __init__(self, crop_size: int = 224, num_crops: int = 3): - super().__init__() - self.crop_size = crop_size - if num_crops == 3: - self.crops_to_ext = [0, 1, 2] - self.flipped_crops_to_ext = [] - elif num_crops == 1: - self.crops_to_ext = [1] - self.flipped_crops_to_ext = [] - else: - raise NotImplementedError("Nothing else supported yet") - - def forward(self, videos): - """ - Args: - videos: A list of C, T, H, W videos. - Returns: - videos: A list with 3x the number of elements. Each video converted - to C, T, H', W' by spatial cropping. 
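- Crops for each input are appended in spatial_idx order (0 = left or top, 1 = center, 2 = right or bottom).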
- """ - assert isinstance(videos, list), "Must be a list of videos after temporal crops" - assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)" - res = [] - for video in videos: - for spatial_idx in self.crops_to_ext: - res.append(uniform_crop(video, self.crop_size, spatial_idx)[0]) - if not self.flipped_crops_to_ext: - continue - flipped_video = transforms.functional.hflip(video) - for spatial_idx in self.flipped_crops_to_ext: - res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0]) - return res - - -def load_and_transform_video_data( - video_paths, - device, - clip_duration=2, - clips_per_video=5, - sample_rate=16000, -): - if video_paths is None: - return None - - video_outputs = [] - video_transform = transforms.Compose( - [ - pv_transforms.ShortSideScale(224), - NormalizeVideo( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration) - - for video_path in video_paths: - video = EncodedVideo.from_path( - video_path, - decoder="decord", - decode_audio=False, - **{"sample_rate": sample_rate}, - ) - - all_clips_timepoints = get_clip_timepoints(clip_sampler, video.duration) - - all_video = [] - for clip_timepoints in all_clips_timepoints: - # Read the clip, get frames - clip = video.get_clip(clip_timepoints[0], clip_timepoints[1]) - if clip is None: - raise ValueError("No clip found") - video_clip = frame_sampler(clip["video"]) - video_clip = video_clip / 255.0 # since this is float, need 0-1 - - all_video.append(video_clip) - - all_video = [video_transform(clip) for clip in all_video] - all_video = SpatialCrop(224, num_crops=3)(all_video) - - all_video = torch.stack(all_video, dim=0) - video_outputs.append(all_video) - - return torch.stack(video_outputs, dim=0).to(device) diff --git a/spaces/FantasticGNU/AnomalyGPT/utils/config.py b/spaces/FantasticGNU/AnomalyGPT/utils/config.py deleted file mode 100644 index b364ee774f8437a9962280f28748e2167a45e732..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/utils/config.py +++ /dev/null @@ -1,63 +0,0 @@ -import yaml -from easydict import EasyDict -import os -from .logger import print_log - -def log_args_to_file(args, pre='args', logger=None): - for key, val in args.__dict__.items(): - print_log(f'{pre}.{key} : {val}', logger = logger) - -def log_config_to_file(cfg, pre='cfg', logger=None): - for key, val in cfg.items(): - if isinstance(cfg[key], EasyDict): - print_log(f'{pre}.{key} = edict()', logger = logger) - log_config_to_file(cfg[key], pre=pre + '.' 
+ key, logger=logger) - continue - print_log(f'{pre}.{key} : {val}', logger = logger) - -def merge_new_config(config, new_config): - for key, val in new_config.items(): - if not isinstance(val, dict): - if key == '_base_': - with open(new_config['_base_'], 'r') as f: - try: - val = yaml.load(f, Loader=yaml.FullLoader) - except: - val = yaml.load(f) - config[key] = EasyDict() - merge_new_config(config[key], val) - else: - config[key] = val - continue - if key not in config: - config[key] = EasyDict() - merge_new_config(config[key], val) - return config - -def cfg_from_yaml_file(cfg_file): - config = EasyDict() - with open(cfg_file, 'r') as f: - try: - new_config = yaml.load(f, Loader=yaml.FullLoader) - except: - new_config = yaml.load(f) - merge_new_config(config=config, new_config=new_config) - return config - -def get_config(args, logger=None): - if args.resume: - cfg_path = os.path.join(args.experiment_path, 'config.yaml') - if not os.path.exists(cfg_path): - print_log("Failed to resume", logger = logger) - raise FileNotFoundError() - print_log(f'Resume yaml from {cfg_path}', logger = logger) - args.config = cfg_path - config = cfg_from_yaml_file(args.config) - if not args.resume and args.local_rank == 0: - save_experiment_config(args, config, logger) - return config - -def save_experiment_config(args, config, logger = None): - config_path = os.path.join(args.experiment_path, 'config.yaml') - os.system('cp %s %s' % (args.config, config_path)) - print_log(f'Copy the Config file from {args.config} to {config_path}',logger = logger ) \ No newline at end of file diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths 
- segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html b/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html deleted file mode 100644 index 39fc073472a2863d4984846ce9dddba635761205..0000000000000000000000000000000000000000 --- 
a/spaces/Giuliano/breast_cancer_prediction_tfjs/index.html +++ /dev/null @@ -1,322 +0,0 @@
-[Stripped HTML page: a TensorFlow.js breast-cancer-prediction form with about 30 numeric feature inputs, a "training the application ... wait" status line, a "Result:" output area, and the disclaimer "*** This software is EXPERIMENTAL and should only be used for research purposes. Please see a doctor for any diagnostic reasons."]
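-As a rough analogue of what the deleted page appears to do (an assumption for illustration only: a small binary classifier over 30 WDBC-style tumor features; none of this code comes from the original index.html), the same idea in Python with scikit-learn:
-```python
-# Hypothetical stand-in for the in-browser model: logistic regression on the
-# 30-feature Wisconsin breast cancer dataset bundled with scikit-learn.
-from sklearn.datasets import load_breast_cancer
-from sklearn.linear_model import LogisticRegression
-from sklearn.model_selection import train_test_split
-
-X, y = load_breast_cancer(return_X_y=True)  # 30 numeric features per sample
-X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
-clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
-print(clf.score(X_te, y_te))  # held-out accuracy
-```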
- - - \ No newline at end of file diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py deleted file mode 100644 index ed9ca9bd4d78341d622acb0bd469339be81530e2..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/utils/cls.py +++ /dev/null @@ -1,171 +0,0 @@ -# This code is copied from https://github.com/thomasjpfan/pytorch/blob/401ec389db2c9d2978917a6e4d1101b20340d7e7/torch/optim/lr_scheduler.py - - -# This code is under review at PyTorch and is to be merged eventually to make CLR available to all. -# Tested with pytorch 0.2.0 - -import numpy as np - - -class CyclicLR(object): - """Sets the learning rate of each parameter group according to - cyclical learning rate policy (CLR). The policy cycles the learning - rate between two boundaries with a constant frequency, as detailed in - the paper `Cyclical Learning Rates for Training Neural Networks`_. - The distance between the two boundaries can be scaled on a per-iteration - or per-cycle basis. - Cyclical learning rate policy changes the learning rate after every batch. - `batch_step` should be called after a batch has been used for training. - To resume training, save `last_batch_iteration` and use it to instantiate `CyclicLR`. - This class has three built-in policies, as put forth in the paper: - "triangular": - A basic triangular cycle w/ no amplitude scaling. - "triangular2": - A basic triangular cycle that scales initial amplitude by half each cycle. - "exp_range": - A cycle that scales initial amplitude by gamma**(cycle iterations) at each - cycle iteration. - This implementation was adapted from the github repo: `bckenstler/CLR`_ - Args: - optimizer (Optimizer): Wrapped optimizer. - base_lr (float or list): Initial learning rate which is the - lower boundary in the cycle for each param group. - Default: 0.001 - max_lr (float or list): Upper boundaries in the cycle for - each parameter group. Functionally, - it defines the cycle amplitude (max_lr - base_lr). - The lr at any cycle is the sum of base_lr - and some scaling of the amplitude; therefore - max_lr may not actually be reached depending on - scaling function. Default: 0.006 - step_size (int): Number of training iterations per - half cycle. Authors suggest setting step_size - 2-8 x training iterations in epoch. Default: 2000 - mode (str): One of {triangular, triangular2, exp_range}. - Values correspond to policies detailed above. - If scale_fn is not None, this argument is ignored. - Default: 'triangular' - gamma (float): Constant in 'exp_range' scaling function: - gamma**(cycle iterations) - Default: 1.0 - scale_fn (function): Custom scaling policy defined by a single - argument lambda function, where - 0 <= scale_fn(x) <= 1 for all x >= 0. - mode parameter is ignored - Default: None - scale_mode (str): {'cycle', 'iterations'}. - Defines whether scale_fn is evaluated on - cycle number or cycle iterations (training - iterations since start of cycle). - Default: 'cycle' - last_batch_iteration (int): The index of the last batch. Default: -1 - Example: - >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) - >>> scheduler = torch.optim.CyclicLR(optimizer) - >>> data_loader = torch.utils.data.DataLoader(...) - >>> for epoch in range(10): - >>> for batch in data_loader: - >>> scheduler.batch_step() - >>> train_batch(...) - ..
_Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - .. _bckenstler/CLR: https://github.com/bckenstler/CLR - """ - - def __init__( - self, - optimizer, - base_lr=1e-3, - max_lr=6e-3, - step_size=2000, - mode="triangular", - gamma=1.0, - scale_fn=None, - scale_mode="cycle", - last_batch_iteration=-1, - ): - - # if not isinstance(optimizer, Optimizer): - # raise TypeError('{} is not an Optimizer'.format( - # type(optimizer).__name__)) - self.optimizer = optimizer - - if isinstance(base_lr, list) or isinstance(base_lr, tuple): - if len(base_lr) != len(optimizer.param_groups): - raise ValueError( - "expected {} base_lr, got {}".format( - len(optimizer.param_groups), len(base_lr) - ) - ) - self.base_lrs = list(base_lr) - else: - self.base_lrs = [base_lr] * len(optimizer.param_groups) - - if isinstance(max_lr, list) or isinstance(max_lr, tuple): - if len(max_lr) != len(optimizer.param_groups): - raise ValueError( - "expected {} max_lr, got {}".format( - len(optimizer.param_groups), len(max_lr) - ) - ) - self.max_lrs = list(max_lr) - else: - self.max_lrs = [max_lr] * len(optimizer.param_groups) - - self.step_size = step_size - - if mode not in ["triangular", "triangular2", "exp_range"] and scale_fn is None: - raise ValueError("mode is invalid and scale_fn is None") - - self.mode = mode - self.gamma = gamma - self.current_lr = None - - if scale_fn is None: - if self.mode == "triangular": - self.scale_fn = self._triangular_scale_fn - self.scale_mode = "cycle" - elif self.mode == "triangular2": - self.scale_fn = self._triangular2_scale_fn - self.scale_mode = "cycle" - elif self.mode == "exp_range": - self.scale_fn = self._exp_range_scale_fn - self.scale_mode = "iterations" - else: - self.scale_fn = scale_fn - self.scale_mode = scale_mode - - self.batch_step(last_batch_iteration + 1) - self.last_batch_iteration = last_batch_iteration - - def batch_step(self, batch_iteration=None): - if batch_iteration is None: - batch_iteration = self.last_batch_iteration + 1 - self.last_batch_iteration = batch_iteration - for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): - param_group["lr"] = lr - self.current_lr = lr - - def _triangular_scale_fn(self, x): - return 1.0 - - def _triangular2_scale_fn(self, x): - return 1 / (2.0 ** (x - 1)) - - def _exp_range_scale_fn(self, x): - return self.gamma ** (x) - - def get_lr(self): - step_size = float(self.step_size) - cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size)) - x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1) - - lrs = [] - param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs) - for param_group, base_lr, max_lr in param_lrs: - base_height = (max_lr - base_lr) * np.maximum(0, (1 - x)) - if self.scale_mode == "cycle": - lr = base_lr + base_height * self.scale_fn(cycle) - else: - lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration) - lrs.append(lr) - return lrs diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py deleted file mode 100644 index 72d4db963ffd95851b945911b3db9941426583ab..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../htc/htc_r50_fpn_1x_coco.py' - -model = dict( - backbone=dict( - type='DetectoRS_ResNet', - conv_cfg=dict(type='ConvAWS'), - sac=dict(type='SAC', 
use_deform=True), - stage_with_sac=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md deleted file mode 100644 index 17eaa7d1dea393cbf9b8e3fd44c607b447812e6f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fp16/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Mixed Precision Training - -## Introduction - -[OTHERS] - -```latex -@article{micikevicius2017mixed, - title={Mixed precision training}, - author={Micikevicius, Paulius and Narang, Sharan and Alben, Jonah and Diamos, Gregory and Elsen, Erich and Garcia, David and Ginsburg, Boris and Houston, Michael and Kuchaiev, Oleksii and Venkatesh, Ganesh and others}, - journal={arXiv preprint arXiv:1710.03740}, - year={2017} -} -``` - -## Results and Models - -| Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -|:------------:|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:| -| Faster R-CNN | R-50 | pytorch | 1x | 3.4 | 28.8 | 37.5 | - |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/faster_rcnn_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/faster_rcnn_r50_fpn_fp16_1x_coco/faster_rcnn_r50_fpn_fp16_1x_coco_20200204-d4dc1471.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/faster_rcnn_r50_fpn_fp16_1x_coco/faster_rcnn_r50_fpn_fp16_1x_coco_20200204_143530.log.json) | -| Mask R-CNN | R-50 | pytorch | 1x | 3.6 | 24.1 | 38.1 | 34.7 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/mask_rcnn_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/mask_rcnn_r50_fpn_fp16_1x_coco/mask_rcnn_r50_fpn_fp16_1x_coco_20200205-59faf7e4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/mask_rcnn_r50_fpn_fp16_1x_coco/mask_rcnn_r50_fpn_fp16_1x_coco_20200205_130539.log.json) | -| Retinanet | R-50 | pytorch | 1x | 2.8 | 31.6 | 36.4 | |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fp16/retinanet_r50_fpn_fp16_1x_coco/retinanet_r50_fpn_fp16_1x_coco_20200702-0dbfb212.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fp16/retinanet_r50_fpn_fp16_1x_coco/retinanet_r50_fpn_fp16_1x_coco_20200702_020127.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py deleted file mode 100644 index 3995603a6cee82a7d7cff620cb8bffe14b15b6a1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict(stem_channels=128, depth=101)) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py deleted file mode 100644 index 
e4152fde6861488acc3595fa25c456bf60f134b9..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/laser/laser_src/laser_task.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from collections import OrderedDict, defaultdict -import json -import os -import logging -from argparse import ArgumentError - -from fairseq import options, models -from fairseq.data import ( - data_utils, - Dictionary, - LanguagePairDataset, - IndexedDataset, - FairseqDataset, -) -from .multitask_data_utils import ( - MultitaskDatasetWrapper, - MultidatasetEpochBatchIterator, -) - - -from fairseq.tasks import LegacyFairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@register_task("laser") -class LaserTask(LegacyFairseqTask): - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "configfile", metavar="PATH", help="dataset configuration file in json" - ) - parser.add_argument( - "--weighting-alpha", - type=float, - default=None, - help="alpha for automatic weighting", - ) - parser.add_argument( - "--raw-text", action="store_true", help="load raw text dataset" - ) - parser.add_argument( - "--left-pad-source", - default="True", - type=str, - metavar="BOOL", - help="pad the source on the left (default: True)", - ) - parser.add_argument( - "--left-pad-target", - default="False", - type=str, - metavar="BOOL", - help="pad the target on the left (default: False)", - ) - try: - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. 
- pass - - def __init__(self, args, config, src_dictionary, tgt_dictionary, num_tasks): - super().__init__(args) - self.config = config - self.src_dictionary = src_dictionary - self.tgt_dictionary = tgt_dictionary - self.num_tasks = num_tasks - - @classmethod - def setup_task(cls, args, **kwargs): - with open(args.configfile, "r") as f: - config = json.load(f) - num_tasks = max(dataset["id"] for dataset in config["train"]) + 1 - - args.left_pad_source = options.eval_bool(args.left_pad_source) - args.left_pad_target = options.eval_bool(args.left_pad_target) - - src_dictionary = Dictionary.load(config["src_vocab"]) - tgt_dictionary = Dictionary.load(config["tgt_vocab"]) - - logger.info( - "| src Dictionary {} : {} types".format( - config["src_vocab"], len(src_dictionary) - ) - ) - logger.info( - "| tgt Dictionary {} : {} types".format( - config["tgt_vocab"], len(tgt_dictionary) - ) - ) - - return cls(args, config, src_dictionary, tgt_dictionary, num_tasks) - - # Experimental overriding for backtranslation - def build_model(self, args): - model = models.build_model(args, self) - return model - - def dataset(self, split): - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - return self.datasets[split] - - def load_dataset(self, split, epoch=1, **kwargs): - """Load a dataset split.""" - - def indexed_dataset(path, dictionary): - if self.args.raw_text: - raise Exception("Unable to handle raw text.") - dataset = IndexedDataset(path, fix_lua_indexing=True) - - return dataset - - pair_datasets = OrderedDict() - - if split == "valid": - self.datasets[split] = pair_datasets - return - - if split not in self.config: - raise FileNotFoundError( - "Dataset not found in config file: {}".format(split) - ) - - size_by_corpus = defaultdict(int) - size_sum = 0 - size_sum_with_subsampling = 0 - init_pair_datasets = {} - - for dataset_config in self.config[split]: - src_path = os.path.dirname(dataset_config["src"]) - corpus_name = src_path.split("/")[-2] - language_pair_name = src_path.split("/")[-1] - pair_datasets_key = corpus_name + "-" + language_pair_name - - logger.info(f"loading... {pair_datasets_key}") - if "src" in dataset_config: - src_dataset = indexed_dataset( - dataset_config["src"], self.src_dictionary - ) - else: - src_dataset = None - - if "tgt" in dataset_config: - tgt_dataset = indexed_dataset( - dataset_config["tgt"], self.tgt_dictionary - ) - else: - tgt_dataset = None - - dataset = LanguagePairDataset( - src_dataset, - src_dataset.sizes, - self.src_dictionary, - tgt_dataset, - tgt_dataset.sizes, - self.tgt_dictionary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ) - - if pair_datasets_key in init_pair_datasets: - logger.warning( - f"Ignoring already added {pair_datasets_key}. " - f"Consider using `sample` key in order to upsample." 
- ) - else: - init_pair_datasets[pair_datasets_key] = { - "dataset": dataset, - "sample": dataset_config.get("sample", None), - "id": dataset_config.get("id", None), - "len": len(dataset), - } - - length_sum = 0 - weighted_freqs_sum = 0 - freq_per_dataset = {} - vmax = 0 - vmin = 1 - weighted_freq_per_dataset = {} - - if self.args.weighting_alpha: - for key in init_pair_datasets: - if init_pair_datasets[key]["sample"] is None: - length_sum += len(init_pair_datasets[key]["dataset"]) - - for key in init_pair_datasets: - if init_pair_datasets[key]["sample"] is None: - val = float(init_pair_datasets[key]["len"]) / length_sum - freq_per_dataset[key] = val - weighted_freqs_sum += val ** self.args.weighting_alpha - - for key in freq_per_dataset: - val = ( - freq_per_dataset[key] ** self.args.weighting_alpha - / weighted_freqs_sum - ) - vmin = min(vmin, val) - vmax = max(vmax, val) - weighted_freq_per_dataset[key] = val - - for pair_datasets_key in init_pair_datasets: - dataset_config = init_pair_datasets[pair_datasets_key] - dataset = dataset_config["dataset"] - sample = dataset_config["sample"] - if sample is None: - sample = 1.0 - - if pair_datasets_key in weighted_freq_per_dataset: - w = vmax / weighted_freq_per_dataset[pair_datasets_key] - sample = w - - sample = round(sample) - - initial_sample = sample - initial_pair_datasets_key = pair_datasets_key - - while sample >= 1.0: - assert ( - pair_datasets_key not in pair_datasets - ), f"{pair_datasets_key} already in" - size_sum_with_subsampling += len(dataset) - pair_datasets[pair_datasets_key] = MultitaskDatasetWrapper( - dataset, dataset_config.get("id", 0), 1.0, name=pair_datasets_key - ) - size_sum += len(dataset) - sample -= 1.0 - pair_datasets_key += "-up" - - assert sample < 1e-6, f"sample remains > 0 {pair_datasets_key}" - - logger.info( - f"added pair {initial_pair_datasets_key} length {len(dataset)} new_length = {len(dataset)*initial_sample}" - ) - size_by_corpus[corpus_name] += len(dataset) - - self.datasets[split] = pair_datasets - logger.info( - f"Datasets number = {len(self.datasets[split])} size = {size_sum} size_sum_with_subsampling = {size_sum_with_subsampling}" - ) - - @property - def source_dictionary(self): - return self.src_dictionary - - @property - def target_dictionary(self): - return self.tgt_dictionary - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - # initialize the dataset with the correct starting epoch - for _, dt in dataset.items(): - dt.set_epoch(epoch) - - indices = OrderedDict() - batch_sampler = OrderedDict() - - with data_utils.numpy_seed(seed + epoch): - for key, dt in dataset.items(): - logger.info(f"\t ordered_indices {key}") - indices[key] = dt.ordered_indices() - - # filter examples that are too large - if max_positions is not None: - for key, dt in dataset.items(): - logger.info(f"\t filter_by_size {key}") - indices[key], ignored = dt.filter_indices_by_size( - indices[key], max_positions - ) - - for key, dt in dataset.items(): - logger.info(f"\t batch_by_size {key}") - batch_sampler[key] = data_utils.batch_by_size( - indices[key], - dt.num_tokens, - max_tokens=max_tokens, - max_sentences=max_sentences, - 
required_batch_size_multiple=required_batch_size_multiple, - ) - - epoch_iter = MultidatasetEpochBatchIterator( - dataset=dataset, - batch_sampler=batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - ) - - return epoch_iter diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md deleted file mode 100644 index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/README.pretraining.md +++ /dev/null @@ -1,84 +0,0 @@ -# Pretraining RoBERTa using your own data - -This tutorial will walk you through pretraining RoBERTa over your own data. - -### 1) Preprocess the data - -Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training. - -We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/) -to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course -this dataset is quite small, so the resulting pretrained model will perform -poorly, but it gives the general idea. - -First download the dataset: -```bash -wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip -unzip wikitext-103-raw-v1.zip -``` - -Next encode it with the GPT-2 BPE: -```bash -mkdir -p gpt2_bpe -wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json -wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe -for SPLIT in train valid test; do \ - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json gpt2_bpe/encoder.json \ - --vocab-bpe gpt2_bpe/vocab.bpe \ - --inputs wikitext-103-raw/wiki.${SPLIT}.raw \ - --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \ - --keep-empty \ - --workers 60; \ -done -``` - -Finally preprocess/binarize the data using the GPT-2 fairseq dictionary: -```bash -wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt -fairseq-preprocess \ - --only-source \ - --srcdict gpt2_bpe/dict.txt \ - --trainpref wikitext-103-raw/wiki.train.bpe \ - --validpref wikitext-103-raw/wiki.valid.bpe \ - --testpref wikitext-103-raw/wiki.test.bpe \ - --destdir data-bin/wikitext-103 \ - --workers 60 -``` - -### 2) Train RoBERTa base -```bash -DATA_DIR=data-bin/wikitext-103 - -fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \ ---config-name base task.data=$DATA_DIR -``` - -**Note:** You can optionally resume training the released RoBERTa base model by -adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`. - -**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses -a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to -further increase the batch size by 16x (`optimization.update_freq`), for a total batch size -of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need -to reduce `dataset.batch_size` and increase dataset.update_freq to compensate. -Alternatively if you have more GPUs you can decrease `dataset.update_freq` accordingly -to increase training speed. 
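-As a quick sanity check on the note above, here is a minimal sketch (the `effective_batch_size` helper is illustrative, not a fairseq API): the 2048-sequence figure is just GPUs x `dataset.batch_size` x `optimization.update_freq`.
-```python
-# Illustrative helper (not part of fairseq): sequences consumed per optimizer step.
-def effective_batch_size(num_gpus: int, batch_size: int, update_freq: int) -> int:
-    return num_gpus * batch_size * update_freq
-
-assert effective_batch_size(num_gpus=8, batch_size=16, update_freq=16) == 2048  # the 8x V100 recipe
-assert effective_batch_size(num_gpus=2, batch_size=16, update_freq=64) == 2048  # same effective size on 2 GPUs
-```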
-
-**Note:** The learning rate and batch size are tightly connected and need to be
-adjusted together. We generally recommend increasing the learning rate as you
-increase the batch size according to the following table (although it's also
-dataset dependent, so don't rely on the following values too closely):
-
-batch size | peak learning rate
----|---
-256 | 0.0001
-2048 | 0.0005
-8192 | 0.0007
-
-### 3) Load your pretrained model
-```python
-import torch
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data')
-assert isinstance(roberta.model, torch.nn.Module)
-```
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py
deleted file mode 100644
index 41f50bbd70388ce723f2d316d4e9776bcd6be3c9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/ASG_loss.py
+++ /dev/null
@@ -1,170 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from examples.speech_recognition.data.replabels import pack_replabels
-from fairseq import utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-@register_criterion("asg_loss")
-class ASGCriterion(FairseqCriterion):
-    @staticmethod
-    def add_args(parser):
-        group = parser.add_argument_group("ASG Loss")
-        group.add_argument(
-            "--asg-transitions-init",
-            help="initial diagonal value of transition matrix",
-            type=float,
-            default=0.0,
-        )
-        group.add_argument(
-            "--max-replabel", help="maximum # of replabels", type=int, default=2
-        )
-        group.add_argument(
-            "--linseg-updates",
-            help="# of training updates to use LinSeg initialization",
-            type=int,
-            default=0,
-        )
-        group.add_argument(
-            "--hide-linseg-messages",
-            help="hide messages about LinSeg initialization",
-            action="store_true",
-        )
-
-    def __init__(
-        self,
-        task,
-        silence_token,
-        asg_transitions_init,
-        max_replabel,
-        linseg_updates,
-        hide_linseg_messages,
-    ):
-        from flashlight.lib.sequence.criterion import ASGLoss, CriterionScaleMode
-
-        super().__init__(task)
-        self.tgt_dict = task.target_dictionary
-        self.eos = self.tgt_dict.eos()
-        self.silence = (
-            self.tgt_dict.index(silence_token)
-            if silence_token in self.tgt_dict
-            else None
-        )
-        self.max_replabel = max_replabel
-
-        num_labels = len(self.tgt_dict)
-        self.asg = ASGLoss(num_labels, scale_mode=CriterionScaleMode.TARGET_SZ_SQRT)
-        self.asg.trans = torch.nn.Parameter(
-            asg_transitions_init * torch.eye(num_labels), requires_grad=True
-        )
-
-        self.linseg_progress = torch.nn.Parameter(
-            torch.tensor([0], dtype=torch.int), requires_grad=False
-        )
-        self.linseg_maximum = linseg_updates
-        self.linseg_message_state = "none" if hide_linseg_messages else "start"
-
-    @classmethod
-    def build_criterion(cls, args, task):
-        return cls(
-            task,
-            args.silence_token,
-            args.asg_transitions_init,
-            args.max_replabel,
-            args.linseg_updates,
-            args.hide_linseg_messages,
-        )
-
-    def linseg_step(self):
-        if not self.training:
-            return False
-        if self.linseg_progress.item() < self.linseg_maximum:
-            if self.linseg_message_state == "start":
-                print("| using LinSeg to initialize ASG")
-                self.linseg_message_state = "finish"
-            self.linseg_progress.add_(1)
- return True - elif self.linseg_message_state == "finish": - print("| finished LinSeg initialization") - self.linseg_message_state = "none" - return False - - def replace_eos_with_silence(self, tgt): - if tgt[-1] != self.eos: - return tgt - elif self.silence is None or (len(tgt) > 1 and tgt[-2] == self.silence): - return tgt[:-1] - else: - return tgt[:-1] + [self.silence] - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - net_output = model(**sample["net_input"]) - emissions = net_output["encoder_out"].transpose(0, 1).contiguous() - B = emissions.size(0) - T = emissions.size(1) - device = emissions.device - - target = torch.IntTensor(B, T) - target_size = torch.IntTensor(B) - using_linseg = self.linseg_step() - - for b in range(B): - initial_target_size = sample["target_lengths"][b].item() - if initial_target_size == 0: - raise ValueError("target size cannot be zero") - - tgt = sample["target"][b, :initial_target_size].tolist() - tgt = self.replace_eos_with_silence(tgt) - tgt = pack_replabels(tgt, self.tgt_dict, self.max_replabel) - tgt = tgt[:T] - - if using_linseg: - tgt = [tgt[t * len(tgt) // T] for t in range(T)] - - target[b][: len(tgt)] = torch.IntTensor(tgt) - target_size[b] = len(tgt) - - loss = self.asg.forward(emissions, target.to(device), target_size.to(device)) - - if reduce: - loss = torch.sum(loss) - - sample_size = ( - sample["target"].size(0) if self.args.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_output = { - "loss": loss_sum / nsentences, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return agg_output diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py deleted file mode 100644 index 625a29370e90f9d1d7274024afb902ed83a22325..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/concat_sentences_dataset.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . 
import FairseqDataset - - -class ConcatSentencesDataset(FairseqDataset): - def __init__(self, *datasets): - super().__init__() - self.datasets = datasets - assert all( - len(ds) == len(datasets[0]) for ds in datasets - ), "datasets must have the same length" - - def __getitem__(self, index): - return torch.cat([ds[index] for ds in self.datasets]) - - def __len__(self): - return len(self.datasets[0]) - - def collater(self, samples): - return self.datasets[0].collater(samples) - - @property - def sizes(self): - return sum(ds.sizes for ds in self.datasets) - - def num_tokens(self, index): - return sum(ds.num_tokens(index) for ds in self.datasets) - - def size(self, index): - return sum(ds.size(index) for ds in self.datasets) - - def ordered_indices(self): - return self.datasets[0].ordered_indices() - - @property - def supports_prefetch(self): - return any(getattr(ds, "supports_prefetch", False) for ds in self.datasets) - - def prefetch(self, indices): - for ds in self.datasets: - if getattr(ds, "supports_prefetch", False): - ds.prefetch(indices) - - def set_epoch(self, epoch): - super().set_epoch(epoch) - for ds in self.datasets: - if hasattr(ds, "set_epoch"): - ds.set_epoch(epoch) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh deleted file mode 100644 index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/test_fsdp.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash -rm -rf fsdp_dummy -mkdir -p fsdp_dummy -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 5 --log-format json --log-interval 1 \ - --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \ - --restore-file x.pt "$@" - -# Now we try to load the checkpoint -CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 2 --log-format json --log-interval 1 \ - --save-interval-updates 2 --save-dir fsdp_dummy diff --git a/spaces/Harsimran19/DepthGAN/README.md b/spaces/Harsimran19/DepthGAN/README.md deleted file mode 100644 index 39ec1ef3be979394ab586810ef9eb5727bb2b21d..0000000000000000000000000000000000000000 --- a/spaces/Harsimran19/DepthGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DepthGAN -emoji: 🐠 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py deleted file mode 100644 index 
062aecd4280925336ab1d36420d2cd47febf661c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/inference_e2e.py +++ /dev/null @@ -1,91 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals - -import glob -import os -import numpy as np -import argparse -import json -import torch -from scipy.io.wavfile import write -from env import AttrDict -from meldataset import MAX_WAV_VALUE -from models import Generator - -h = None -device = None - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + "*") - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return "" - return sorted(cp_list)[-1] - - -def inference(a): - generator = Generator(h).to(device) - - state_dict_g = load_checkpoint(a.checkpoint_file, device) - generator.load_state_dict(state_dict_g["generator"]) - - filelist = os.listdir(a.input_mels_dir) - - os.makedirs(a.output_dir, exist_ok=True) - - generator.eval() - generator.remove_weight_norm() - with torch.no_grad(): - for i, filname in enumerate(filelist): - x = np.load(os.path.join(a.input_mels_dir, filname)) - x = torch.FloatTensor(x).to(device) - y_g_hat = generator(x) - audio = y_g_hat.squeeze() - audio = audio * MAX_WAV_VALUE - audio = audio.cpu().numpy().astype("int16") - - output_file = os.path.join( - a.output_dir, os.path.splitext(filname)[0] + "_generated_e2e.wav" - ) - write(output_file, h.sampling_rate, audio) - print(output_file) - - -def main(): - print("Initializing Inference Process..") - - parser = argparse.ArgumentParser() - parser.add_argument("--input_mels_dir", default="test_mel_files") - parser.add_argument("--output_dir", default="generated_files_from_mel") - parser.add_argument("--checkpoint_file", required=True) - a = parser.parse_args() - - config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json") - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - torch.manual_seed(h.seed) - global device - if torch.cuda.is_available(): - torch.cuda.manual_seed(h.seed) - device = torch.device("cuda") - else: - device = torch.device("cpu") - - inference(a) - - -if __name__ == "__main__": - main() diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py deleted file mode 100644 index a3d331aef019cfd8cf45d6264db88d0fa26e5c0f..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/generate_mels.py +++ /dev/null @@ -1,70 +0,0 @@ -import numpy as np -import os -import torch -import commons - -import models -import utils -from argparse import ArgumentParser -from tqdm import tqdm -from text import text_to_sequence - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument("-m", "--model_dir", required=True, type=str) - parser.add_argument("-s", "--mels_dir", required=True, type=str) - args = parser.parse_args() - MODEL_DIR = args.model_dir # path to model dir - SAVE_MELS_DIR = args.mels_dir # path to save generated mels - - if not os.path.exists(SAVE_MELS_DIR): - os.makedirs(SAVE_MELS_DIR) - - hps = utils.get_hparams_from_dir(MODEL_DIR) - symbols = list(hps.data.punc) + 
list(hps.data.chars)
-    checkpoint_path = utils.latest_checkpoint_path(MODEL_DIR)
-    cleaner = hps.data.text_cleaners
-
-    model = models.FlowGenerator(
-        len(symbols) + getattr(hps.data, "add_blank", False),
-        out_channels=hps.data.n_mel_channels,
-        **hps.model
-    ).to("cuda")
-
-    utils.load_checkpoint(checkpoint_path, model)
-    model.decoder.store_inverse()  # do not calculate jacobians for fast decoding
-    _ = model.eval()
-
-    def get_mel(text, fpath):
-        if getattr(hps.data, "add_blank", False):
-            text_norm = text_to_sequence(text, symbols, cleaner)
-            text_norm = commons.intersperse(text_norm, len(symbols))
-        else:  # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality
-            text = " " + text.strip() + " "
-            text_norm = text_to_sequence(text, symbols, cleaner)
-
-        sequence = np.array(text_norm)[None, :]
-
-        x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long()
-        x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda()
-
-        with torch.no_grad():
-            noise_scale = 0.667
-            length_scale = 1.0
-            (y_gen_tst, *_), *_, (attn_gen, *_) = model(
-                x_tst,
-                x_tst_lengths,
-                gen=True,
-                noise_scale=noise_scale,
-                length_scale=length_scale,
-            )
-
-        np.save(os.path.join(SAVE_MELS_DIR, fpath), y_gen_tst.cpu().detach().numpy())
-
-    for f in [hps.data.training_files, hps.data.validation_files]:
-        file_lines = open(f).read().splitlines()
-
-        for line in tqdm(file_lines):
-            fname, text = line.split("|")
-            fname = os.path.basename(fname).replace(".wav", ".npy")
-            get_mel(text, fname)
diff --git a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py b/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py
deleted file mode 100644
index 1206244aa2a004d9f653782de798bfef9e5e726b..0000000000000000000000000000000000000000
--- a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/.ipynb_checkpoints/utils-checkpoint.py
+++ /dev/null
@@ -1,555 +0,0 @@
-# %BANNER_BEGIN%
-# ---------------------------------------------------------------------
-# %COPYRIGHT_BEGIN%
-#
-# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL
-#
-# Unpublished Copyright (c) 2020
-# Magic Leap, Inc., All Rights Reserved.
-#
-# NOTICE: All information contained herein is, and remains the property
-# of COMPANY. The intellectual and technical concepts contained herein
-# are proprietary to COMPANY and may be covered by U.S. and Foreign
-# Patents, patents in process, and are protected by trade secret or
-# copyright law. Dissemination of this information or reproduction of
-# this material is strictly forbidden unless prior written permission is
-# obtained from COMPANY. Access to the source code contained herein is
-# hereby forbidden to anyone except current COMPANY employees, managers
-# or contractors who have executed Confidentiality and Non-disclosure
-# agreements explicitly covering such access.
-#
-# The copyright notice above does not evidence any actual or intended
-# publication or disclosure of this source code, which includes
-# information that is confidential and/or proprietary, and is a trade
-# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION,
-# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS
-# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS
-# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND
-# INTERNATIONAL TREATIES.
THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# Daniel DeTone -# Tomasz Malisiewicz -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import time -from collections import OrderedDict -from threading import Thread -import numpy as np -import cv2 -import torch -import matplotlib.pyplot as plt -import matplotlib -matplotlib.use('Agg') - - -class AverageTimer: - """ Class to help manage printing simple timing of code execution. """ - - def __init__(self, smoothing=0.3, newline=False): - self.smoothing = smoothing - self.newline = newline - self.times = OrderedDict() - self.will_print = OrderedDict() - self.reset() - - def reset(self): - now = time.time() - self.start = now - self.last_time = now - for name in self.will_print: - self.will_print[name] = False - - def update(self, name='default'): - now = time.time() - dt = now - self.last_time - if name in self.times: - dt = self.smoothing * dt + (1 - self.smoothing) * self.times[name] - self.times[name] = dt - self.will_print[name] = True - self.last_time = now - - def print(self, text='Timer'): - total = 0. - print('[{}]'.format(text), end=' ') - for key in self.times: - val = self.times[key] - if self.will_print[key]: - print('%s=%.3f' % (key, val), end=' ') - total += val - print('total=%.3f sec {%.1f FPS}' % (total, 1./total), end=' ') - if self.newline: - print(flush=True) - else: - print(end='\r', flush=True) - self.reset() - - -class VideoStreamer: - """ Class to help process image streams. Four types of possible inputs:" - 1.) USB Webcam. - 2.) An IP camera - 3.) A directory of images (files in directory matching 'image_glob'). - 4.) A video file, such as an .mp4 or .avi file. 
- """ - def __init__(self, basedir, resize, skip, image_glob, max_length=1000000): - self._ip_grabbed = False - self._ip_running = False - self._ip_camera = False - self._ip_image = None - self._ip_index = 0 - self.cap = [] - self.camera = True - self.video_file = False - self.listing = [] - self.resize = resize - self.interp = cv2.INTER_AREA - self.i = 0 - self.skip = skip - self.max_length = max_length - if isinstance(basedir, int) or basedir.isdigit(): - print('==> Processing USB webcam input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(int(basedir)) - self.listing = range(0, self.max_length) - elif basedir.startswith(('http', 'rtsp')): - print('==> Processing IP camera input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(basedir) - self.start_ip_camera_thread() - self._ip_camera = True - self.listing = range(0, self.max_length) - elif Path(basedir).is_dir(): - print('==> Processing image directory input: {}'.format(basedir)) - self.listing = list(Path(basedir).glob(image_glob[0])) - for j in range(1, len(image_glob)): - image_path = list(Path(basedir).glob(image_glob[j])) - self.listing = self.listing + image_path - self.listing.sort() - self.listing = self.listing[::self.skip] - self.max_length = np.min([self.max_length, len(self.listing)]) - if self.max_length == 0: - raise IOError('No images found (maybe bad \'image_glob\' ?)') - self.listing = self.listing[:self.max_length] - self.camera = False - elif Path(basedir).exists(): - print('==> Processing video input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(basedir) - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1) - num_frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - self.listing = range(0, num_frames) - self.listing = self.listing[::self.skip] - self.video_file = True - self.max_length = np.min([self.max_length, len(self.listing)]) - self.listing = self.listing[:self.max_length] - else: - raise ValueError('VideoStreamer input \"{}\" not recognized.'.format(basedir)) - if self.camera and not self.cap.isOpened(): - raise IOError('Could not read camera') - - def load_image(self, impath): - """ Read image as grayscale and resize to img_size. - Inputs - impath: Path to input image. - Returns - grayim: uint8 numpy array sized H x W. - """ - grayim = cv2.imread(impath, 0) - if grayim is None: - raise Exception('Error reading image %s' % impath) - w, h = grayim.shape[1], grayim.shape[0] - w_new, h_new = process_resize(w, h, self.resize) - grayim = cv2.resize( - grayim, (w_new, h_new), interpolation=self.interp) - return grayim - - def next_frame(self): - """ Return the next frame, and increment internal counter. - Returns - image: Next H x W image. - status: True or False depending whether image was loaded. 
- """ - - if self.i == self.max_length: - return (None, False) - if self.camera: - - if self._ip_camera: - #Wait for first image, making sure we haven't exited - while self._ip_grabbed is False and self._ip_exited is False: - time.sleep(.001) - - ret, image = self._ip_grabbed, self._ip_image.copy() - if ret is False: - self._ip_running = False - else: - ret, image = self.cap.read() - if ret is False: - print('VideoStreamer: Cannot get image from camera') - return (None, False) - w, h = image.shape[1], image.shape[0] - if self.video_file: - self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.listing[self.i]) - - w_new, h_new = process_resize(w, h, self.resize) - image = cv2.resize(image, (w_new, h_new), - interpolation=self.interp) - image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - else: - image_file = str(self.listing[self.i]) - image = self.load_image(image_file) - self.i = self.i + 1 - return (image, True) - - def start_ip_camera_thread(self): - self._ip_thread = Thread(target=self.update_ip_camera, args=()) - self._ip_running = True - self._ip_thread.start() - self._ip_exited = False - return self - - def update_ip_camera(self): - while self._ip_running: - ret, img = self.cap.read() - if ret is False: - self._ip_running = False - self._ip_exited = True - self._ip_grabbed = False - return - - self._ip_image = img - self._ip_grabbed = ret - self._ip_index += 1 - #print('IPCAMERA THREAD got frame {}'.format(self._ip_index)) - - - def cleanup(self): - self._ip_running = False - -# --- PREPROCESSING --- - -def process_resize(w, h, resize): - assert(len(resize) > 0 and len(resize) <= 2) - if len(resize) == 1 and resize[0] > -1: - scale = resize[0] / max(h, w) - w_new, h_new = int(round(w*scale)), int(round(h*scale)) - elif len(resize) == 1 and resize[0] == -1: - w_new, h_new = w, h - else: # len(resize) == 2: - w_new, h_new = resize[0], resize[1] - - # Issue warning if resolution is too small or too large. 
- if max(w_new, h_new) < 160: - print('Warning: input resolution is very small, results may vary') - elif max(w_new, h_new) > 2000: - print('Warning: input resolution is very large, results may vary') - - return w_new, h_new - - -def frame2tensor(frame, device): - return torch.from_numpy(frame/255.).float()[None, None].to(device) - - -def read_image(path, device, resize, rotation, resize_float): - image = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE) - if image is None: - return None, None, None - w, h = image.shape[1], image.shape[0] - w_new, h_new = process_resize(w, h, resize) - scales = (float(w) / float(w_new), float(h) / float(h_new)) - - if resize_float: - image = cv2.resize(image.astype('float32'), (w_new, h_new)) - else: - image = cv2.resize(image, (w_new, h_new)).astype('float32') - - if rotation != 0: - image = np.rot90(image, k=rotation) - if rotation % 2: - scales = scales[::-1] - - inp = frame2tensor(image, device) - return image, inp, scales - - -# --- GEOMETRY --- - - -def estimate_pose(kpts0, kpts1, K0, K1, thresh, conf=0.99999): - if len(kpts0) < 5: - return None - - f_mean = np.mean([K0[0, 0], K1[1, 1], K0[0, 0], K1[1, 1]]) - norm_thresh = thresh / f_mean - - kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None] - kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None] - - E, mask = cv2.findEssentialMat( - kpts0, kpts1, np.eye(3), threshold=norm_thresh, prob=conf, - method=cv2.RANSAC) - - assert E is not None - - best_num_inliers = 0 - ret = None - for _E in np.split(E, len(E) / 3): - n, R, t, _ = cv2.recoverPose( - _E, kpts0, kpts1, np.eye(3), 1e9, mask=mask) - if n > best_num_inliers: - best_num_inliers = n - ret = (R, t[:, 0], mask.ravel() > 0) - return ret - - -def rotate_intrinsics(K, image_shape, rot): - """image_shape is the shape of the image after rotation""" - assert rot <= 3 - h, w = image_shape[:2][::-1 if (rot % 2) else 1] - fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2] - rot = rot % 4 - if rot == 1: - return np.array([[fy, 0., cy], - [0., fx, w-1-cx], - [0., 0., 1.]], dtype=K.dtype) - elif rot == 2: - return np.array([[fx, 0., w-1-cx], - [0., fy, h-1-cy], - [0., 0., 1.]], dtype=K.dtype) - else: # if rot == 3: - return np.array([[fy, 0., h-1-cy], - [0., fx, cx], - [0., 0., 1.]], dtype=K.dtype) - - -def rotate_pose_inplane(i_T_w, rot): - rotation_matrices = [ - np.array([[np.cos(r), -np.sin(r), 0., 0.], - [np.sin(r), np.cos(r), 0., 0.], - [0., 0., 1., 0.], - [0., 0., 0., 1.]], dtype=np.float32) - for r in [np.deg2rad(d) for d in (0, 270, 180, 90)] - ] - return np.dot(rotation_matrices[rot], i_T_w) - - -def scale_intrinsics(K, scales): - scales = np.diag([1./scales[0], 1./scales[1], 1.]) - return np.dot(scales, K) - - -def to_homogeneous(points): - return np.concatenate([points, np.ones_like(points[:, :1])], axis=-1) - - -def compute_epipolar_error(kpts0, kpts1, T_0to1, K0, K1): - kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None] - kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None] - kpts0 = to_homogeneous(kpts0) - kpts1 = to_homogeneous(kpts1) - - t0, t1, t2 = T_0to1[:3, 3] - t_skew = np.array([ - [0, -t2, t1], - [t2, 0, -t0], - [-t1, t0, 0] - ]) - E = t_skew @ T_0to1[:3, :3] - - Ep0 = kpts0 @ E.T # N x 3 - p1Ep0 = np.sum(kpts1 * Ep0, -1) # N - Etp1 = kpts1 @ E # N x 3 - d = p1Ep0**2 * (1.0 / (Ep0[:, 0]**2 + Ep0[:, 1]**2) - + 1.0 / (Etp1[:, 0]**2 + Etp1[:, 1]**2)) - return d - - -def angle_error_mat(R1, R2): - cos = (np.trace(np.dot(R1.T, R2)) - 1) / 2 - cos = np.clip(cos, -1., 1.) 
# numerical errors can make it out of bounds
-    return np.rad2deg(np.abs(np.arccos(cos)))
-
-
-def angle_error_vec(v1, v2):
-    n = np.linalg.norm(v1) * np.linalg.norm(v2)
-    return np.rad2deg(np.arccos(np.clip(np.dot(v1, v2) / n, -1.0, 1.0)))
-
-
-def compute_pose_error(T_0to1, R, t):
-    R_gt = T_0to1[:3, :3]
-    t_gt = T_0to1[:3, 3]
-    error_t = angle_error_vec(t, t_gt)
-    error_t = np.minimum(error_t, 180 - error_t)  # ambiguity of E estimation
-    error_R = angle_error_mat(R, R_gt)
-    return error_t, error_R
-
-
-def pose_auc(errors, thresholds):
-    sort_idx = np.argsort(errors)
-    errors = np.array(errors.copy())[sort_idx]
-    recall = (np.arange(len(errors)) + 1) / len(errors)
-    errors = np.r_[0., errors]
-    recall = np.r_[0., recall]
-    aucs = []
-    for t in thresholds:
-        last_index = np.searchsorted(errors, t)
-        r = np.r_[recall[:last_index], recall[last_index-1]]
-        e = np.r_[errors[:last_index], t]
-        aucs.append(np.trapz(r, x=e)/t)
-    return aucs
-
-
-# --- VISUALIZATION ---
-
-
-def plot_image_pair(imgs, dpi=100, size=6, pad=.5):
-    n = len(imgs)
-    assert n == 2, 'number of images must be two'
-    figsize = (size*n, size*3/4) if size is not None else None
-    _, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi)
-    for i in range(n):
-        ax[i].imshow(imgs[i], cmap=plt.get_cmap('gray'), vmin=0, vmax=255)
-        ax[i].get_yaxis().set_ticks([])
-        ax[i].get_xaxis().set_ticks([])
-        for spine in ax[i].spines.values():  # remove frame
-            spine.set_visible(False)
-    plt.tight_layout(pad=pad)
-
-
-def plot_keypoints(kpts0, kpts1, color='w', ps=2):
-    ax = plt.gcf().axes
-    ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
-    ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def plot_matches(kpts0, kpts1, color, lw=1.5, ps=4):
-    fig = plt.gcf()
-    ax = fig.axes
-    fig.canvas.draw()
-
-    transFigure = fig.transFigure.inverted()
-    fkpts0 = transFigure.transform(ax[0].transData.transform(kpts0))
-    fkpts1 = transFigure.transform(ax[1].transData.transform(kpts1))
-
-    fig.lines = [matplotlib.lines.Line2D(
-        (fkpts0[i, 0], fkpts1[i, 0]), (fkpts0[i, 1], fkpts1[i, 1]), zorder=1,
-        transform=fig.transFigure, c=color[i], linewidth=lw)
-        for i in range(len(kpts0))]
-    ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps)
-    ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps)
-
-
-def make_matching_plot(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
-                       color, text, path, show_keypoints=False,
-                       fast_viz=False, opencv_display=False,
-                       opencv_title='matches', small_text=[]):
-
-    if fast_viz:
-        make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0, mkpts1,
-                                color, text, path, show_keypoints, 10,
-                                opencv_display, opencv_title, small_text)
-        return
-
-    plot_image_pair([image0, image1])
-    if show_keypoints:
-        plot_keypoints(kpts0, kpts1, color='k', ps=4)
-        plot_keypoints(kpts0, kpts1, color='w', ps=2)
-    plot_matches(mkpts0, mkpts1, color)
-
-    fig = plt.gcf()
-    txt_color = 'k' if image0[:100, :150].mean() > 200 else 'w'
-    fig.text(
-        0.01, 0.99, '\n'.join(text), transform=fig.axes[0].transAxes,
-        fontsize=15, va='top', ha='left', color=txt_color)
-
-    txt_color = 'k' if image0[-100:, :150].mean() > 200 else 'w'
-    fig.text(
-        0.01, 0.01, '\n'.join(small_text), transform=fig.axes[0].transAxes,
-        fontsize=5, va='bottom', ha='left', color=txt_color)
-
-    plt.savefig(str(path), bbox_inches='tight', pad_inches=0)
-    plt.close()
-
-
-def make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0,
-                            mkpts1, color, text, path=None,
-                            show_keypoints=False, margin=10,
-                            opencv_display=False, opencv_title='',
-                            small_text=[]):
-    H0, W0 =
image0.shape - H1, W1 = image1.shape - H, W = max(H0, H1), W0 + W1 + margin - - out = 255*np.ones((H, W), np.uint8) - out[:H0, :W0] = image0 - out[:H1, W0+margin:] = image1 - out = np.stack([out]*3, -1) - - if show_keypoints: - kpts0, kpts1 = np.round(kpts0).astype(int), np.round(kpts1).astype(int) - white = (255, 255, 255) - black = (0, 0, 0) - for x, y in kpts0: - cv2.circle(out, (x, y), 2, black, -1, lineType=cv2.LINE_AA) - cv2.circle(out, (x, y), 1, white, -1, lineType=cv2.LINE_AA) - for x, y in kpts1: - cv2.circle(out, (x + margin + W0, y), 2, black, -1, - lineType=cv2.LINE_AA) - cv2.circle(out, (x + margin + W0, y), 1, white, -1, - lineType=cv2.LINE_AA) - - mkpts0, mkpts1 = np.round(mkpts0).astype(int), np.round(mkpts1).astype(int) - color = (np.array(color[:, :3])*255).astype(int)[:, ::-1] - for (x0, y0), (x1, y1), c in zip(mkpts0, mkpts1, color): - c = c.tolist() - cv2.line(out, (x0, y0), (x1 + margin + W0, y1), - color=c, thickness=1, lineType=cv2.LINE_AA) - # display line end-points as circles - cv2.circle(out, (x0, y0), 2, c, -1, lineType=cv2.LINE_AA) - cv2.circle(out, (x1 + margin + W0, y1), 2, c, -1, - lineType=cv2.LINE_AA) - - # Scale factor for consistent visualization across scales. - sc = min(H / 640., 2.0) - - # Big text. - Ht = int(30 * sc) # text height - txt_color_fg = (255, 255, 255) - txt_color_bg = (0, 0, 0) - for i, t in enumerate(text): - cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX, - 1.0*sc, txt_color_bg, 2, cv2.LINE_AA) - cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX, - 1.0*sc, txt_color_fg, 1, cv2.LINE_AA) - - # Small text. - Ht = int(18 * sc) # text height - for i, t in enumerate(reversed(small_text)): - cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX, - 0.5*sc, txt_color_bg, 2, cv2.LINE_AA) - cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX, - 0.5*sc, txt_color_fg, 1, cv2.LINE_AA) - - if path is not None: - cv2.imwrite(str(path), out) - - if opencv_display: - cv2.imshow(opencv_title, out) - cv2.waitKey(1) - - return out - - -def error_colormap(x): - return np.clip( - np.stack([2-x*2, x*2, np.zeros_like(x), np.ones_like(x)], -1), 0, 1) diff --git a/spaces/Hazem/roop/roop/utilities.py b/spaces/Hazem/roop/roop/utilities.py deleted file mode 100644 index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000 --- a/spaces/Hazem/roop/roop/utilities.py +++ /dev/null @@ -1,141 +0,0 @@ -import glob -import mimetypes -import os -import platform -import shutil -import ssl -import subprocess -import urllib -from pathlib import Path -from typing import List, Any -from tqdm import tqdm - -import roop.globals - -TEMP_FILE = 'temp.mp4' -TEMP_DIRECTORY = 'temp' - -# monkey patch ssl for mac -if platform.system().lower() == 'darwin': - ssl._create_default_https_context = ssl._create_unverified_context - - -def run_ffmpeg(args: List[str]) -> bool: - commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level] - commands.extend(args) - try: - subprocess.check_output(commands, stderr=subprocess.STDOUT) - return True - except Exception: - pass - return False - - -def detect_fps(target_path: str) -> float: - command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path] - output = subprocess.check_output(command).decode().strip().split('/') - try: - numerator, denominator = map(int, output) - return numerator / denominator - 
except Exception: - pass - return 30.0 - - -def extract_frames(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')]) - - -def create_video(target_path: str, fps: float = 30.0) -> None: - temp_output_path = get_temp_output_path(target_path) - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path]) - - -def restore_audio(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path]) - if not done: - move_temp(target_path, output_path) - - -def get_temp_frame_paths(target_path: str) -> List[str]: - temp_directory_path = get_temp_directory_path(target_path) - return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png'))) - - -def get_temp_directory_path(target_path: str) -> str: - target_name, _ = os.path.splitext(os.path.basename(target_path)) - target_directory_path = os.path.dirname(target_path) - return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name) - - -def get_temp_output_path(target_path: str) -> str: - temp_directory_path = get_temp_directory_path(target_path) - return os.path.join(temp_directory_path, TEMP_FILE) - - -def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any: - if source_path and target_path: - source_name, _ = os.path.splitext(os.path.basename(source_path)) - target_name, target_extension = os.path.splitext(os.path.basename(target_path)) - if os.path.isdir(output_path): - return os.path.join(output_path, source_name + '-' + target_name + target_extension) - return output_path - - -def create_temp(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - Path(temp_directory_path).mkdir(parents=True, exist_ok=True) - - -def move_temp(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - if os.path.isfile(temp_output_path): - if os.path.isfile(output_path): - os.remove(output_path) - shutil.move(temp_output_path, output_path) - - -def clean_temp(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - parent_directory_path = os.path.dirname(temp_directory_path) - if not roop.globals.keep_frames and os.path.isdir(temp_directory_path): - shutil.rmtree(temp_directory_path) - if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path): - os.rmdir(parent_directory_path) - - -def has_image_extension(image_path: str) -> bool: - return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp')) - - -def is_image(image_path: str) -> bool: - if image_path and os.path.isfile(image_path): - mimetype, _ = mimetypes.guess_type(image_path) - return bool(mimetype and mimetype.startswith('image/')) - return False - - -def is_video(video_path: str) -> bool: - if video_path and os.path.isfile(video_path): - mimetype, _ = mimetypes.guess_type(video_path) - return bool(mimetype and mimetype.startswith('video/')) - return False - - -def conditional_download(download_directory_path: str, urls: List[str]) -> None: - if not 
os.path.exists(download_directory_path):
-        os.makedirs(download_directory_path)
-    for url in urls:
-        download_file_path = os.path.join(download_directory_path, os.path.basename(url))
-        if not os.path.exists(download_file_path):
-            request = urllib.request.urlopen(url)  # type: ignore[attr-defined]
-            total = int(request.headers.get('Content-Length', 0))
-            with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
-                urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size))  # type: ignore[attr-defined]
-
-
-def resolve_relative_path(path: str) -> str:
-    return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
diff --git a/spaces/Hina4867/bingo/src/lib/utils.ts b/spaces/Hina4867/bingo/src/lib/utils.ts
deleted file mode 100644
index 07feedb34e356b1b3cf867872f32d47a96ae12fb..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,138 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-
-export function cn(...inputs: ClassValue[]) {
-  return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
-  '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
-  7
-) // 7-character random string
-
-export function createChunkDecoder() {
-  const decoder = new TextDecoder()
-  return function (chunk: Uint8Array | undefined): string {
-    if (!chunk) return ''
-    return decoder.decode(chunk, { stream: true })
-  }
-}
-
-export function random (start: number, end: number) {
-  return start + Math.ceil(Math.random() * (end - start))
-}
-
-export function randomIP() {
-  return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}`
-}
-
-export function parseHeadersFromCurl(content: string) {
-  const re = /-H '([^:]+):\s*([^']+)/mg
-  const headers: HeadersInit = {}
-  content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert cmd-style curl to bash-style curl
-  content.replace(re, (_: string, key: string, value: string) => {
-    headers[key] = value
-    return ''
-  })
-
-  return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
-  const base64Content = btoa(content)
-  const contentChunks = base64Content.match(/.{1,4000}/g) || []
-  return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
-  let base64Content = ''
-  ChunkKeys.forEach((key) => {
-    base64Content += (cookies[key] || '')
-  })
-  try {
-    return atob(base64Content)
-  } catch(e) {
-    return ''
-  }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
-  return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
-  const date = new Date(input)
-  return date.toLocaleDateString('en-US', {
-    month: 'long',
-    day: 'numeric',
-    year: 'numeric'
-  })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
-  const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
-  return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ?
cookie.trim() : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
-  const cookies: { [key: string]: string } = {}
-  cookieNames.forEach(cookieName => {
-    cookies[cookieName] = parseCookie(cookie, cookieName)
-  })
-  return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-export const DEFAULT_IP = process.env.BING_IP || randomIP()
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
-  return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>) {
-  let {
-    BING_COOKIE = process.env.BING_COOKIE,
-    BING_UA = process.env.BING_UA,
-    BING_IP = process.env.BING_IP,
-    BING_HEADER = process.env.BING_HEADER,
-  } = cookies
-
-  if (BING_HEADER) {
-    return extraHeadersFromCookie({
-      BING_HEADER,
-      ...cookies,
-    })
-  }
-
-  const ua = parseUA(BING_UA)
-
-  if (!BING_COOKIE) {
-    BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || 'xxx' // on HF this currently works fine even without a Cookie
-  }
-
-  const parsedCookie = parseCookie(BING_COOKIE, '_U')
-  if (!parsedCookie) {
-    throw new Error('Invalid Cookie')
-  }
-  return {
-    'x-forwarded-for': BING_IP || DEFAULT_IP,
-    'Accept-Encoding': 'gzip, deflate, br',
-    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
-    'User-Agent': ua!,
-    'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
-    cookie: `_U=${parsedCookie}` || '',
-  }
-}
-
-export class WatchDog {
-  private tid = 0
-  watch(fn: Function, timeout = 2000) {
-    clearTimeout(this.tid)
-    this.tid = setTimeout(fn, timeout + Math.random() * 1000)
-  }
-  reset() {
-    clearTimeout(this.tid)
-  }
-}
diff --git a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py b/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py
deleted file mode 100644
index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/ldm/modules/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
-    FeatureFusionBlock,
-    FeatureFusionBlock_custom,
-    Interpolate,
-    _make_encoder,
-    forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
-    return FeatureFusionBlock_custom(
-        features,
-        nn.ReLU(False),
-        deconv=False,
-        bn=use_bn,
-        expand=False,
-        align_corners=True,
-    )
-
-
-class DPT(BaseModel):
-    def __init__(
-        self,
-        head,
-        features=256,
-        backbone="vitb_rn50_384",
-        readout="project",
-        channels_last=False,
-        use_bn=False,
-    ):
-
-        super(DPT, self).__init__()
-
-        self.channels_last = channels_last
-
-        hooks = {
-            "vitb_rn50_384": [0, 1, 8, 11],
-            "vitb16_384": [2, 5, 8, 11],
-            "vitl16_384": [5, 11, 17, 23],
-        }
-
-        # Instantiate backbone and reassemble blocks
-        self.pretrained, self.scratch = _make_encoder(
-            backbone,
-            features,
-            False,  # Set to true if you want to train from scratch, uses ImageNet weights
-            groups=1,
-            expand=False,
-            exportable=False,
-            hooks=hooks[backbone],
-            use_readout=readout,
-        )
-
-        self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py deleted file mode 100644 index 831174de3c3a62f13fa3ff1f172b36c8d2a84c44..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/dataloader.py +++ /dev/null @@ -1,308 +0,0 @@ -import os -import numpy as np -from PIL import Image, ImageSequence -import json -import pandas as pd - -import torch -from torch.utils.data import Dataset -from torchvision import transforms -import torchvision.transforms.functional as TF - -from celle.utils import replace_outliers - -def simple_conversion(seq): - """Create 26-dim embedding""" - chars = [ - "-", - "M", - "R", - "H", - "K", - "D", - "E", - "S", - "T", - "N", - "Q", - "C", - "U", - "G", - "P", - "A", - "V", - "I", - "F", - "Y", - "W", - "L", - "O", - "X", - "Z", - "B", - "J", - ] - - nums = range(len(chars)) - - seqs_x = np.zeros(len(seq)) - - for idx, char in enumerate(seq): - - lui = chars.index(char) - - seqs_x[idx] = nums[lui] - - return torch.tensor([seqs_x]).long() - - -class CellLoader(Dataset): - """imports mined opencell images with protein sequence""" - - def __init__( - self, - data_csv=None, - dataset=None, - split_key=None, - resize=600, - crop_size=600, - crop_method="random", - sequence_mode="simple", - vocab="bert", - threshold="median", - text_seq_len=0, - pad_mode="random", - ): - self.data_csv = data_csv - self.dataset = dataset - self.image_folders = [] - self.crop_method = crop_method - self.resize = resize - self.crop_size = crop_size - self.sequence_mode = sequence_mode - self.threshold = threshold - self.text_seq_len = int(text_seq_len) - self.vocab = vocab - self.pad_mode = pad_mode - - if self.sequence_mode == "embedding" or self.sequence_mode == "onehot": - - - if self.vocab == "esm1b" or self.vocab == "esm2": - from esm import Alphabet - - self.tokenizer = Alphabet.from_architecture( - "ESM-1b" - ).get_batch_converter() - self.text_seq_len += 2 - - if data_csv: - - data = pd.read_csv(data_csv) - - self.parent_path = os.path.dirname(data_csv).split(data_csv)[0] - - if split_key == "train": - self.data = 
data[data["split"] == "train"] - elif split_key == "val": - self.data = data[data["split"] == "val"] - else: - self.data = data - - self.data = self.data.reset_index(drop=True) - - - - def __len__(self): - return len(self.data) - - def __getitem__( - self, - idx, - get_sequence=True, - get_images=True, - ): - if get_sequence and self.text_seq_len > 0: - - protein_vector = self.get_protein_vector(idx) - - else: - protein_vector = torch.zeros((1, 1)) - - if get_images: - - nucleus, target, threshold = self.get_images(idx, self.dataset) - else: - nucleus, target, threshold = torch.zeros((3, 1)) - - data_dict = { - "nucleus": nucleus.float(), - "target": target.float(), - "threshold": threshold.float(), - "sequence": protein_vector.long(), - } - - return data_dict - - def get_protein_vector(self, idx): - - if "protein_sequence" not in self.data.columns: - - metadata = self.retrieve_metadata(idx) - protein_sequence = metadata["sequence"] - else: - protein_sequence = self.data.iloc[idx]["protein_sequence"] - - protein_vector = self.tokenize_sequence(protein_sequence) - - return protein_vector - - def get_images(self, idx, dataset): - - if dataset == "HPA": - - nucleus = Image.open( - os.path.join( - self.parent_path, self.data.iloc[idx]["nucleus_image_path"] - ) - ) - - target = Image.open( - os.path.join(self.parent_path, self.data.iloc[idx]["target_image_path"]) - ) - - nucleus = TF.to_tensor(nucleus)[0] - target = TF.to_tensor(target)[0] - - image = torch.stack([nucleus, target], axis=0) - - normalize = (0.0655, 0.0650), (0.1732, 0.1208) - - elif dataset == "OpenCell": - image = Image.open( - os.path.join(self.parent_path, self.data.iloc[idx]["image_path"]) - ) - nucleus, target = [page.copy() for page in ImageSequence.Iterator(image)] - - nucleus = replace_outliers(torch.divide(TF.to_tensor(nucleus), 65536))[0] - target = replace_outliers(torch.divide(TF.to_tensor(target), 65536))[0] - - image = torch.stack([nucleus, target], axis=0) - - normalize = ( - (0.0272, 0.0244), - (0.0486, 0.0671), - ) - - # # from https://discuss.pytorch.org/t/how-to-apply-same-transform-on-a-pair-of-picture/14914 - - t_forms = [transforms.Resize(self.resize, antialias=None)] - - if self.crop_method == "random": - - t_forms.append(transforms.RandomCrop(self.crop_size)) - t_forms.append(transforms.RandomHorizontalFlip(p=0.5)) - t_forms.append(transforms.RandomVerticalFlip(p=0.5)) - - elif self.crop_method == "center": - - t_forms.append(transforms.CenterCrop(self.crop_size)) - - t_forms.append(transforms.Normalize(normalize[0], normalize[1])) - - image = transforms.Compose(t_forms)(image) - - nucleus, target = image - - nucleus /= torch.abs(nucleus).max() - target -= target.min() - target /= target.max() - - nucleus = nucleus.unsqueeze(0) - target = target.unsqueeze(0) - - threshold = target - - if self.threshold == "mean": - - threshold = 1.0 * (threshold > (torch.mean(threshold))) - - elif self.threshold == "median": - - threshold = 1.0 * (threshold > (torch.median(threshold))) - - elif self.threshold == "1090_IQR": - - p10 = torch.quantile(threshold, 0.1, None) - p90 = torch.quantile(threshold, 0.9, None) - threshold = torch.clip(threshold, p10, p90) - - nucleus = torch.nan_to_num(nucleus, 0.0, 1.0, 0.0) - target = torch.nan_to_num(target, 0.0, 1.0, 0.0) - threshold = torch.nan_to_num(threshold, 0.0, 1.0, 0.0) - - return nucleus, target, threshold - - def retrieve_metadata(self, idx): - with open( - os.path.join(self.parent_path, self.data.iloc[idx]["metadata_path"]) - ) as f: - metadata = json.load(f) - - return 
metadata
-
-    def tokenize_sequence(self, protein_sequence):
-
-        pad_token = 0
-
-        if self.sequence_mode == "simple":
-            protein_vector = simple_conversion(protein_sequence)
-
-        elif self.sequence_mode == "center":
-            protein_sequence = protein_sequence.center(self.text_seq_len, "-")
-            protein_vector = simple_conversion(protein_sequence)
-
-        elif self.sequence_mode == "alternating":
-            protein_sequence = protein_sequence.center(self.text_seq_len, "-")
-            protein_sequence = protein_sequence[::18]
-            protein_sequence = protein_sequence.center(
-                int(self.text_seq_len / 18) + 1, "-"
-            )
-            protein_vector = simple_conversion(protein_sequence)
-
-
-        elif self.sequence_mode == "embedding":
-
-            if self.vocab == "esm1b" or self.vocab == "esm2":
-                pad_token = 1
-                protein_vector = self.tokenizer([("", protein_sequence)])[-1]
-
-        if protein_vector.shape[-1] < self.text_seq_len:
-
-            diff = self.text_seq_len - protein_vector.shape[-1]
-
-            if self.pad_mode == "end":
-                protein_vector = torch.nn.functional.pad(
-                    protein_vector, (0, diff), "constant", pad_token
-                )
-            elif self.pad_mode == "random":
-                split = diff - np.random.randint(0, diff + 1)
-
-                protein_vector = torch.cat(
-                    [torch.ones(1, split) * 0, protein_vector], dim=1
-                )
-
-                protein_vector = torch.nn.functional.pad(
-                    protein_vector, (0, diff - split), "constant", pad_token
-                )
-
-        elif protein_vector.shape[-1] > self.text_seq_len:
-            start_int = np.random.randint(
-                0, protein_vector.shape[-1] - self.text_seq_len
-            )
-
-            protein_vector = protein_vector[
-                :, start_int : start_int + self.text_seq_len
-            ]
-
-        return protein_vector.long()
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py
deleted file mode 100644
index f88f8f6929f5b6bdb0db470be9ebedf8fe1f752d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/bytes.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from fairseq.data.encoders import register_bpe
-from fairseq.data.encoders.byte_utils import (
-    SPACE,
-    SPACE_ESCAPE,
-    byte_encode,
-    smart_byte_decode,
-)
-
-
-@register_bpe("bytes")
-class Bytes(object):
-    def __init__(self, *unused):
-        pass
-
-    @staticmethod
-    def add_args(parser):
-        pass
-
-    @staticmethod
-    def encode(x: str) -> str:
-        encoded = byte_encode(x)
-        escaped = encoded.replace(SPACE, SPACE_ESCAPE)
-        return SPACE.join(list(escaped))
-
-    @staticmethod
-    def decode(x: str) -> str:
-        unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
-        return smart_byte_decode(unescaped)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/registry.py b/spaces/ICML2022/OFA/fairseq/fairseq/registry.py
deleted file mode 100644
index f3b9406043d75a51d7bf4af5294f82b33a8f9a5e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/registry.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
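-#
-# setup_registry() implements fairseq's plugin pattern: it returns a
-# (build_x, register_x, REGISTRY, DATACLASS_REGISTRY) tuple, so components can
-# register themselves through the register_x decorator and later be built by
-# name from a DictConfig, a plain string choice, or an argparse Namespace.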
- -from argparse import Namespace - -from typing import Union -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore -from omegaconf import DictConfig - -REGISTRIES = {} - - -def setup_registry(registry_name: str, base_class=None, default=None, required=False): - assert registry_name.startswith("--") - registry_name = registry_name[2:].replace("-", "_") - - REGISTRY = {} - REGISTRY_CLASS_NAMES = set() - DATACLASS_REGISTRY = {} - - # maintain a registry of all registries - if registry_name in REGISTRIES: - return # registry already exists - REGISTRIES[registry_name] = { - "registry": REGISTRY, - "default": default, - "dataclass_registry": DATACLASS_REGISTRY, - } - - def build_x(cfg: Union[DictConfig, str, Namespace], *extra_args, **extra_kwargs): - if isinstance(cfg, DictConfig): - choice = cfg._name - - if choice and choice in DATACLASS_REGISTRY: - dc = DATACLASS_REGISTRY[choice] - cfg = merge_with_parent(dc(), cfg) - elif isinstance(cfg, str): - choice = cfg - if choice in DATACLASS_REGISTRY: - cfg = DATACLASS_REGISTRY[choice]() - else: - choice = getattr(cfg, registry_name, None) - if choice in DATACLASS_REGISTRY: - cfg = DATACLASS_REGISTRY[choice].from_namespace(cfg) - - if choice is None: - if required: - raise ValueError("{} is required!".format(registry_name)) - return None - - cls = REGISTRY[choice] - if hasattr(cls, "build_" + registry_name): - builder = getattr(cls, "build_" + registry_name) - else: - builder = cls - - return builder(cfg, *extra_args, **extra_kwargs) - - def register_x(name, dataclass=None): - def register_x_cls(cls): - if name in REGISTRY: - raise ValueError( - "Cannot register duplicate {} ({})".format(registry_name, name) - ) - if cls.__name__ in REGISTRY_CLASS_NAMES: - raise ValueError( - "Cannot register {} with duplicate class name ({})".format( - registry_name, cls.__name__ - ) - ) - if base_class is not None and not issubclass(cls, base_class): - raise ValueError( - "{} must extend {}".format(cls.__name__, base_class.__name__) - ) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if cls.__dataclass is not None: - DATACLASS_REGISTRY[name] = cls.__dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group=registry_name, node=node, provider="fairseq") - - REGISTRY[name] = cls - - return cls - - return register_x_cls - - return build_x, register_x, REGISTRY, DATACLASS_REGISTRY diff --git a/spaces/Ibtehaj10/cheating-detection/face_detections.py b/spaces/Ibtehaj10/cheating-detection/face_detections.py deleted file mode 100644 index e4f433997c263777b4e63b5db26a9188b98988b0..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection/face_detections.py +++ /dev/null @@ -1,60 +0,0 @@ -import cv2 -import datetime -import imutils -import numpy as np - -protopath = "deploy.prototxt" -modelpath = "res10_300x300_ssd_iter_140000.caffemodel" -detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath) -# Only enable it if you are using OpenVino environment -# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) -# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) - -def main(): - cap = cv2.VideoCapture('test_video.mp4') - - fps_start_time = datetime.datetime.now() - fps = 0 - total_frames = 0 - - while True: - ret, frame 
diff --git a/spaces/Ibtehaj10/cheating-detection/face_detections.py b/spaces/Ibtehaj10/cheating-detection/face_detections.py
deleted file mode 100644
index e4f433997c263777b4e63b5db26a9188b98988b0..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection/face_detections.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-
-protopath = "deploy.prototxt"
-modelpath = "res10_300x300_ssd_iter_140000.caffemodel"
-detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
-# Only enable these if you are using an OpenVINO environment
-# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
-# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-def main():
-    cap = cv2.VideoCapture('test_video.mp4')
-
-    fps_start_time = datetime.datetime.now()
-    fps = 0
-    total_frames = 0
-
-    while True:
-        ret, frame = cap.read()
-        if not ret:  # stop cleanly when the video ends or a frame cannot be read
-            break
-        frame = imutils.resize(frame, width=600)
-        total_frames = total_frames + 1
-
-        (H, W) = frame.shape[:2]
-
-        face_blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0), False, False)
-
-        detector.setInput(face_blob)
-        face_detections = detector.forward()
-
-        for i in np.arange(0, face_detections.shape[2]):
-            confidence = face_detections[0, 0, i, 2]
-            if confidence > 0.5:
-
-                face_box = face_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
-                (startX, startY, endX, endY) = face_box.astype("int")
-
-                cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2)
-
-        fps_end_time = datetime.datetime.now()
-        time_diff = fps_end_time - fps_start_time
-        if time_diff.seconds == 0:
-            fps = 0.0
-        else:
-            fps = (total_frames / time_diff.seconds)
-
-        fps_text = "FPS: {:.2f}".format(fps)
-
-        cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
-        cv2.imshow("Application", frame)
-        key = cv2.waitKey(1)
-        if key == ord('q'):
-            break
-
-    cap.release()  # release the capture handle before closing windows
-    cv2.destroyAllWindows()
-
-
-main()
diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
deleted file mode 100644
index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000
--- a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class HarvestF0Predictor(F0Predictor):
-    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
-        self.hop_length = hop_length
-        self.f0_min = f0_min
-        self.f0_max = f0_max
-        self.sampling_rate = sampling_rate
-
-    def interpolate_f0(self, f0):
-        """
-        Interpolate the unvoiced (zero-valued) frames of an F0 contour.
-        """
-
-        data = np.reshape(f0, (f0.size, 1))
-
-        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-        vuv_vector[data > 0.0] = 1.0
-        vuv_vector[data <= 0.0] = 0.0
-
-        ip_data = data
-
-        frame_number = data.size
-        last_value = 0.0
-        for i in range(frame_number):
-            if data[i] <= 0.0:
-                j = i + 1
-                for j in range(i + 1, frame_number):
-                    if data[j] > 0.0:
-                        break
-                if j < frame_number - 1:
-                    if last_value > 0.0:
-                        step = (data[j] - data[i - 1]) / float(j - i)
-                        for k in range(i, j):
-                            ip_data[k] = data[i - 1] + step * (k - i + 1)
-                    else:
-                        for k in range(i, j):
-                            ip_data[k] = data[j]
-                else:
-                    for k in range(i, frame_number):
-                        ip_data[k] = last_value
-            else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
-                last_value = data[i]
-
-        return ip_data[:, 0], vuv_vector[:, 0]
-
-    def resize_f0(self, x, target_len):
-        source = np.array(x)
-        source[source < 0.001] = np.nan
-        target = np.interp(
-            np.arange(0, len(source) * target_len, len(source)) / target_len,
-            np.arange(0, len(source)),
-            source,
-        )
-        res = np.nan_to_num(target)
-        return res
-
-    def compute_f0(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_ceil=self.f0_max,
-            f0_floor=self.f0_min,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
-    def compute_f0_uv(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_floor=self.f0_min,
-            f0_ceil=self.f0_max,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))
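For reference on the pyworld calls used above: `pyworld.harvest` takes the waveform sampling rate as `fs` and a `frame_period` in milliseconds, so `1000 * hop_length / sampling_rate` yields exactly one F0 estimate per hop, and `pyworld.stonemask` then refines the contour using the same rate. A small sanity check of that frame math, as a sketch assuming a 16 kHz mono signal:

```python
import numpy as np
import pyworld

sr, hop = 16000, 160  # a 160-sample hop at 16 kHz gives a 10 ms frame_period
wav = np.random.randn(sr).astype(np.double)  # 1 second of noise as stand-in input

f0, t = pyworld.harvest(wav, fs=sr, f0_floor=50.0, f0_ceil=1100.0,
                        frame_period=1000 * hop / sr)
f0 = pyworld.stonemask(wav, f0, t, sr)

# One F0 value per hop, plus a boundary frame: 101 vs. 100 here.
print(len(f0), len(wav) // hop)
```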
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx
deleted file mode 100644
index 59b80d06d6779c4681b9a89fec14d22c0c53872b..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/Interfaces.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-// Copyright (c) Meta Platforms, Inc. and affiliates.
-// All rights reserved.
-
-// This source code is licensed under the license found in the
-// LICENSE file in the root directory of this source tree.
-
-import { Tensor } from "onnxruntime-web";
-
-export interface modelScaleProps {
-  samScale: number;
-  height: number;
-  width: number;
-}
-
-export interface modelInputProps {
-  x: number;
-  y: number;
-  clickType: number;
-}
-
-export interface modeDataProps {
-  clicks?: Array<modelInputProps>;
-  tensor: Tensor;
-  modelScale: modelScaleProps;
-}
-
-export interface ToolProps {
-  handleMouseMove: (e: any) => void;
-}
diff --git a/spaces/Inthv/NER/README.md b/spaces/Inthv/NER/README.md
deleted file mode 100644
index 7ba4b8ee2e401fa4f56e8e15b2298062b6dfdfaf..0000000000000000000000000000000000000000
--- a/spaces/Inthv/NER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NER
-emoji: 👁
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IoMa/diffusers-gallery/README.md b/spaces/IoMa/diffusers-gallery/README.md
deleted file mode 100644
index ff1cbb6ee8e12c3a15d98730f50873db96260bad..0000000000000000000000000000000000000000
--- a/spaces/IoMa/diffusers-gallery/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Diffusers Gallery
-emoji: 🖼️
-colorFrom: red
-colorTo: green
-sdk: static
-app_port: 8080
-fullWidth: true
-pinned: false
-license: mit
-duplicated_from: huggingface-projects/diffusers-gallery
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/setup.py b/spaces/Jacks2003/3D_Photo_Inpainting/setup.py
deleted file mode 100644
index eddf6368ade3f8877d3eb6148157796c22066958..0000000000000000000000000000000000000000
--- a/spaces/Jacks2003/3D_Photo_Inpainting/setup.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from setuptools import setup
-
-setup(
-    name='cynetworkx_workaround',
-    version='1.0',
-    description='A useful module',
-    install_requires=['cynetworkx'],  # external packages as dependencies
-)
\ No newline at end of file
diff --git a/spaces/Jason1112/ML-GUI/README.md b/spaces/Jason1112/ML-GUI/README.md
deleted file mode 100644
index dc7ae8c89f0a34231e41f681ef9f7d890fc07cd5..0000000000000000000000000000000000000000
--- a/spaces/Jason1112/ML-GUI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ML GUI
-emoji: 🐠
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git
a/spaces/JayKen/YSF-External-Testing/app.py b/spaces/JayKen/YSF-External-Testing/app.py deleted file mode 100644 index 482562e452528720139353aea163fb6820596c95..0000000000000000000000000000000000000000 --- a/spaces/JayKen/YSF-External-Testing/app.py +++ /dev/null @@ -1,716 +0,0 @@ -import gradio as gr -import openai -from pathlib import Path -import requests -import json -from huggingface_hub import login -from datasets import load_dataset, Dataset -import os -from datetime import date -from PIL import Image -from io import BytesIO -import random -import html2text -import re -from markdown import markdown - -myopenaikey = os.getenv('openaiapi') -openai.api_key = myopenaikey -os.environ["OPENAI_API_KEY"] = myopenaikey -login(os.environ.get("HF_TOKEN")) - -API_URL_LLAMA2 = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf" -headers_llama2 = {"Authorization": "Bearer "+ os.environ.get("Vincenzo_key") } - - -def scrape_page_content(my_company, my_company_url, my_company_page): - - desc = get_company_info(my_company_url) - if len(desc) >= 500: - prompt = "Could you summarize the following Company description in 2-3 sentences: {}".format(desc) - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt=prompt, - max_tokens=100, - ) - - desc = output.choices[0].text - - try: - response = requests.get(my_company_page) - response.raise_for_status() - html_conv = html2text.HTML2Text() - html_conv.ignore_links = True - html_conv.escape_all = True - content = html_conv.handle(response.text) - - ratio1 = 0.1 - ratio2 = 0.1 - for i in range(0,10): - if len(content[round(len(content) * ratio1): len(content) - round(len(content) * ratio2)]) < 10000: - break - - if i%2 == 0: - ratio1 = ratio1 + 0.1 - else: - ratio2 = ratio2 + 0.1 - - content = content[round(len(content) * ratio1): len(content) - round(len(content) * ratio2)] - - prompt = "Could you summarize the following scraped content from the webpage and write the company value proposition?\n\n{}".format(content) - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are a helpful assistant. 
Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=250 - ) - - return desc, completion.choices[0].message['content'] - - except Exception as e: - return desc, f"Error while scraping content: {e}" - - -def get_unsplash_images(keywords): - - url = "https://api.unsplash.com/search/photos" - - params = { - "query": keywords, - "client_id": os.environ.get("unsplash"), - "page": 1, - "per_page": 6 - } - - response = requests.get(url, params=params) - - if response.status_code == 200: - data = response.json() - - image_urls = [photo["urls"]["regular"] for photo in data["results"]] - - - image_contents = [] - for x in image_urls: - response_2 = requests.get(x) - image_contents.append(Image.open(BytesIO(response_2.content))) - - return image_contents - else: - return None - - -def extract_linkedIn_profile(link): - api_endpoint = 'https://nubela.co/proxycurl/api/v2/linkedin' - api_key = os.environ.get("proxyapi") - header_dic = {'Authorization': 'Bearer ' + api_key} - params = { - 'url': link, - 'fallback_to_cache': 'on-error', - 'use_cache': 'if-present', - 'skills': 'include', - 'inferred_salary': 'include', - 'personal_email': 'include', - 'personal_contact_number': 'include', - 'twitter_profile_id': 'include', - 'facebook_profile_id': 'include', - 'github_profile_id': 'include', - 'extra': 'include', - } - response = requests.get(api_endpoint, params=params, headers=header_dic) - - return response - - -def get_linkedin_profile(user): - result = '''full_name: {}\noccupation: {}\nheadline: {}\nsummary: {}\ncountry: {}\ncity: {}\nlanguages: {}\ngender: {}\nbirth_date: {}\nindustry: {}\ninterests: {}\nskills: {}\n'''.format(user['full_name'], user['occupation'], user['headline'], user['summary'], user['country'], user['city'], ', '.join(user['languages']), user['gender'], user['birth_date'], user['industry'], ', '.join(user['interests']), ', '.join(user['skills'])) - - if user['experiences']: - result += '''experiences:\n''' - for i,x in enumerate(user['experiences']): - if i == 0: - mycompany = x['company'] - - if x["ends_at"] is None: - ends_at = 'None' - else: - ends_at = 'Date' - result += '''company: {}\ntitle: {}\ndescription: {}\nlocation: {}\n'''.format(x['company'], x['title'], x['description'], x['location'], x['company_linkedin_profile_url'], ends_at) - - if user['education']: - result += '''education:\n''' - for x in user['education']: - result += '''field_of_study: {}\ndegree_name: {}\nschool: {}\n'''.format(x['field_of_study'], x['degree_name'], x['school']) - - - companies = [] - for x in user['experiences']: - if x["ends_at"] is None: - companies.append((x['title'], x['company'], x['company_linkedin_profile_url'])) - - - return result, user['full_name'], companies - - -def get_linkedin_profile_summary(link): - - dataset = load_dataset("JayKen/YSF-linkedIn", split="train") - - if link in dataset['users']: - pos = dataset['users'].index(link) - user_data = json.loads(dataset['info'][pos]) - user, name, companies = get_linkedin_profile(user_data) - - else: - response = extract_linkedIn_profile(link) - user, name, companies = get_linkedin_profile(response.json()) - - json_string = json.dumps(response.json()) - - dataset = Dataset.from_dict({'users': dataset['users']+[link], - 'info': dataset['info']+[json_string]}) - - dataset.push_to_hub("JayKen/YSF-linkedIn") - - if len(user) >= 5000: - user = user.split('experiences')[0] - - prompt = 'The following is LinkedIn profile of a person: {}.\n\n\nCould you highlight most important 
personality traits of this person in points. List each trait and then further elaborate why you choose it?'.format(user) - - result = openai.Completion.create(engine="text-davinci-003", prompt=prompt, max_tokens=400) - - return name, companies, result.choices[0].text - - -def get_aud_profile(link): - - global AUD_PRO_SUM - global AUD_NAME - global AUD_companies - - AUD_NAME, AUD_companies, AUD_PRO_SUM = get_linkedin_profile_summary(link) - - job_titles = [] - for x in AUD_companies: - job_titles.append(x[0]+', '+x[1]) - - return '\n'.join(job_titles), AUD_PRO_SUM, '' - - -def set_audience_occ(occupation, company_url): - global AUD_OCC - global AUD_COM - - for x in AUD_companies: - if x[0]+', '+x[1] == occupation: - AUD_OCC = x[0] - AUD_COM = x[1] - url = x[2] - break - - if url: - company_url = url - - if company_url: - desc = get_company_info(company_url) - if len(desc) >= 500: - prompt = "Could you summarize the following Company description in 2-3 sentences: {}".format(desc) - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt=prompt, - max_tokens=100, - ) - - desc = output.choices[0].text - else: - desc = "Unable to fetch company description. Please enter the company linkedIn url." - company_url = '' - - return AUD_OCC + '\n' + AUD_COM + '\n' + company_url, desc - - -def get_company_info(link): - - dataset = load_dataset("JayKen/ysf-company-profile", split="train") - - if link in dataset['url']: - pos = dataset['url'].index(link) - company_info = dataset['aboutus'][pos] - else: - headers = {'Authorization': 'Bearer ' + os.environ.get("proxyapi")} - api_endpoint = 'https://nubela.co/proxycurl/api/linkedin/company' - params = { - 'url': link, - } - response = requests.get(api_endpoint, - params=params, - headers=headers) - - company_info = response.json()['description'] - - dataset = Dataset.from_dict({'url': dataset['url']+[link], - 'aboutus': dataset['aboutus']+[company_info]}) - - dataset.push_to_hub("JayKen/ysf-company-profile") - - return company_info - - - -def set_language(language): - global LANG - - if language == "English (default)": - LANG = '' - else: - LANG = language - - -def get_subject(brief, cta, relation, language): - - set_language(language) - - prompt = "Here is the brief about my meeting: {}".format(brief) - prompt += "\nAfter the meeting I want my {} to: {}".format(relation, cta) - prompt += "\nCombine this and rewrite it for me. \n\nI will combine and rewrite the brief for you: " - - global CTA - CTA = cta - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=225 - ) - - global SUBJECT - SUBJECT = completion.choices[0].message['content'] - - if len(LANG): - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt="Translate the following English text to " + LANG + ": \n\n" + SUBJECT, - max_tokens=225, - ) - - translation = output.choices[0].text - else: - translation = '' - - return prompt, SUBJECT, translation - - -def get_concern(my_company, my_company_desc, relation, company_desc): - - prompt = "I will have a high-stake conversation with {}. {}. \n\nI work for {}. 
{}.\n\nI will have a presentation with {} from {} who has the following personality: {}\n\nThe presentation subject is: {}.\n\nWhat could be my {}'s, {} and {} concerns regarding only to {}.\n\nGive me 5-10 points".format(AUD_COM, company_desc, my_company, my_company_desc, AUD_NAME, AUD_COM, AUD_PRO_SUM, SUBJECT, relation, AUD_COM, AUD_NAME, CTA) - - completion = openai.ChatCompletion.create( - model="gpt-4", - messages=[ - {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=500 - ) - - return prompt, completion.choices[0].message['content'] - - -def get_concern_step1(my_company, my_company_desc, relation, company_desc): - - prompt = "I will have a high-stake conversation with {}. {}. \n\nI work for {}. {}.\n\nThe presentation subject is: {}.\n\nWhat could be {}'s concerns regarding only to {}.\n\nGive me 5-10 points".format(AUD_COM, company_desc, my_company, my_company_desc, SUBJECT, AUD_COM, CTA) - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=500 - ) - - global CONCERNS1 - CONCERNS1 = prompt + completion.choices[0].message['content'] - - return prompt, completion.choices[0].message['content'] - - -def get_concern_step2(relation): - - prompt = "I will have a high-stake conversation with {}, {}, who has the following personality: {}.\n\nFrom the above list of concerns show me only the list of concerns that are relevant to {}. ".format(AUD_NAME, relation, AUD_PRO_SUM, AUD_NAME) - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": CONCERNS1}, - {"role": "user", "content": prompt} - ], - max_tokens=500 - ) - - if len(LANG): - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'], - max_tokens=500, - ) - - translation = output.choices[0].text - else: - translation = '' - - return prompt, completion.choices[0].message['content'], translation - - -def get_andrew(relation, select_concern): - - prompt = "The main presentation subject I want to deliver to my {} is about {}. My {} is {}, who has the following personality: {}\n\nGive me a list of emotional rewards for {} when addressing {}. Focus on the emotional side. Consider {}'s main concerns:\n\n{}\nSort them from the most relevant to the least one.".format(relation, SUBJECT, relation, AUD_OCC, AUD_PRO_SUM, AUD_NAME, CTA, AUD_NAME, select_concern) - - completion = openai.ChatCompletion.create( - model="gpt-4", - messages=[ - {"role": "system", "content": "You are a helpful assistant. 
Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=400 - ) - - if len(LANG): - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'], - max_tokens=400, - ) - - translation = output.choices[0].text - else: - translation = '' - - return prompt, completion.choices[0].message['content'], translation - - -def get_andrew_two(my_company, value_prop, relation, aud_com_desc): - - prompt = "I work for {} and its value proposition is: {}\n\nI am eager to explore the gains that my {}, {}, who holds the position of {} at {}: {}, can gain in the context of {}\n\nPlease outline the gains.".format(my_company, value_prop, relation, AUD_NAME, AUD_OCC, AUD_COM, aud_com_desc, CTA) - - completion = openai.ChatCompletion.create( - model="gpt-4", - messages=[ - {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=400 - ) - - if len(LANG): - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'], - max_tokens=400, - ) - - translation = output.choices[0].text - else: - translation = '' - - return prompt, completion.choices[0].message['content'], translation - - -def get_andrew_three(relation, s1, s2): - - prompt = "I am in the process of engaging my {} who holds the position of {}, to take concrete actions aligned with the presented topic: {}. I would greatly appreciate your expertise in crafting three distinct and compelling narratives that I can keep in mind when meeting my {}- {}, during a meeting. Each narrative must consider all the following points in the same narrative:\n\n1) {}.\n\n2) {}.\n\nKeep each narrative to 3-5 sentences. Don't address the audience. Use the following format Narrative 1: [write narrative]\nNarrative 2: [write narrative]\nNarrative 3: [write narrative].".format(relation, AUD_OCC, SUBJECT, relation, AUD_NAME, s1, s2) - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are a helpful assistant. 
Given the context, provide answers to the questions."}, - {"role": "user", "content": prompt} - ], - max_tokens=400 - ) - - if len(LANG): - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt="Translate the following English text to " + LANG + ": \n\n" + completion.choices[0].message['content'], - max_tokens=400, - ) - - translation = output.choices[0].text - else: - translation = '' - - return prompt, completion.choices[0].message['content'], translation - - -def create_slides2(narrative, concerns, wins, gains): - - combined_prompt = "Can you create a business PPT presentation following the Hero Journey steps:\n\nOrdinary World: Describe the customer's current situation or pain point.\nCall to Adventure: Introduce the possibility of a solution (your product/service).\nRefusal of the Call: Address common objections or misconceptions.\nMeeting with the Mentor: Position your company as the guide.\nCrossing the First Threshold: Highlight first stepsto engage with your solution.\nTests, Allies, Enemies: Discuss case studies, user stories, and support.\nApproach to the Inmost Cave: Dive into the features and benefits.\nOrdeal: Address main objections and how your solution tackles them.\nReward: Emphasize the value and benefits of your solution.\nThe Road Back: Guide on the next steps or integration.\nResurrection: Reiterate the transformation promised.\nReturn with the Elixir: Call to action.\n\nMy storyline is:{}\n\nSome background information is: {}\n\n{}'s concerns are: {}\n\n{}'s emotional rewards are: {}\n\n{}'s gains are: {}\n\nGenerate all 12 steps with 12 slides. Don't use presentation jargons. Keep the bullet points as succinct phrases. Use the following format when generating slides:\nTitle: \n<3-5 bullet points>\nExplanation: \n".format(narrative, SUBJECT, AUD_NAME, concerns, AUD_NAME, wins, AUD_NAME, gains) - - completion = openai.ChatCompletion.create( - model="gpt-4", - messages=[ - {"role": "system", "content": "You are a helpful assistant. Given the context, provide answers to the questions."}, - {"role": "user", "content": combined_prompt} - ], - max_tokens=2500 - ) - - return combined_prompt, completion.choices[0].message['content'] - - -def get_profile_element(relation): - - prompt = "I have a meeting with my {} to deliver my presenation on/about : {}.".format(relation, SUBJECT) - - prompt += "His/her personality is: {}.\n\nBased on my presentation subject, what are the two most important elements in his/her personality for me to know/consider? Give me one point in 1-2 sentences max.".format(AUD_LINKEDIN_PROFILE) - - output = openai.Completion.create( - model="gpt-3.5-turbo-instruct", - prompt=prompt, - max_tokens=150, - ) - - return prompt, output.choices[0].text - - - -demo = gr.Blocks() - -with demo: - gr.Markdown("# Your YSF 😄 Personal Coach! 
Version 1.3") - with gr.Tabs(): - with gr.TabItem("Presentation Subject"): - with gr.Row(): - with gr.Column(): - my_company = gr.Textbox(label="My Company Name") - my_company_url = gr.Textbox(label="My Company linkedIn url") - my_company_page = gr.Textbox(label="My company page url") - with gr.Row(): - with gr.Column(): - button_my_company = gr.Button("Submit", variant="primary") - with gr.Column(): - output_my_company_desc = gr.components.Textbox(label="My Company description summarised") - output_my_company_value = gr.components.Textbox(label="My Company Value proposition") - - gr.Markdown("## Examples") - gr.Examples( - [["Your Speech Factory", "https://www.linkedin.com/company/yourspeechfactory/", "https://www.yourspeechfactory.com/"], - ["Dell Technologies", "https://www.linkedin.com/company/delltechnologies/", "https://www.dell.com/en-us/dt/storage/powerflex.htm#tab0=0"]], - [my_company, my_company_url, my_company_page], - [output_my_company_desc, output_my_company_value], - scrape_page_content, - cache_examples=True, - ) - - with gr.Row(): - with gr.Column(): - language = gr.Dropdown(["English (default)", "Swedish", "Norwegian", "Italian"], label="Select Translation language") - input_relation = gr.Textbox(label="Your relationship with audience your presenting to") - input_brief = gr.Textbox(label="Brief me on the situation and/or meeting focus") - input_cta = gr.Textbox(label="Call to action") - with gr.Row(): - with gr.Column(): - button_subject = gr.Button("Submit", variant="primary") - with gr.Column(): - output_subject_prompt = gr.components.Textbox(label="Input prompt") - output_subject = gr.components.Textbox(label="Output presentation subject") - output_subject_translate = gr.components.Textbox(label="Translation") - - gr.Markdown("## Examples") - gr.Examples( - [["I'm presenting Your Speech Factory as a business case for a potential equity investment", "Make an equity investment of 200 000 EUR in our upcoming fundraising round", "potential investor", "English (default)"], - ["I will have sales meeting to discuss datacenter strategy with BNY Investment", "agree to a demo running Powerflex in AWS", "Client", "English (default)"]], - [input_brief, input_cta, input_relation, language], - [output_subject_prompt, output_subject, output_subject_translate], - get_subject, - cache_examples=True, - ) - - - with gr.TabItem("Set Audience"): - with gr.Row(): - with gr.Column(): - input_audience_linkedin_url = gr.Textbox(label="Audience LinkedIn URL") - with gr.Row(): - with gr.Column(): - button_audience_linkedin_url = gr.Button("Get Profile Summary", variant="primary") - with gr.Column(): - output_audience_role = gr.components.Textbox(label="Audience Job titles") - output_audience_summary = gr.components.Textbox(label="Audience Profile summary") - output_bullets_translate = gr.components.Textbox(label="Translation") - - gr.Markdown("## Examples") - gr.Examples( - [["https://www.linkedin.com/in/helge-onsum-47704919/"], ["https://www.linkedin.com/in/sean-traverse-0394a714/"]], - [input_audience_linkedin_url], - [output_audience_role, output_audience_summary, output_bullets_translate], - get_aud_profile, - cache_examples=True, - ) - - with gr.Row(): - with gr.Column(): - input_audience_occ = gr.Textbox(label="Select Audience Occupation") - aud_company_url = gr.Textbox(label="Enter Company LinkedIn url (optional)") - with gr.Row(): - with gr.Column(): - button_audience_occ = gr.Button("Set Audience Occupation", variant="primary") - with gr.Column(): - output_audience_occ = 
gr.components.Textbox(label="Job Title Set") - output_company_desc = gr.components.Textbox(label="Audience Company description summarised") - - - with gr.TabItem("Concerns"): - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - button_concern = gr.Button("Generate concerns", variant="primary") - with gr.Column(): - output_concern_prompt = gr.components.Textbox(label="Input Prompt") - output_concern = gr.components.Textbox(label="Output") - - - with gr.TabItem("2 Step Concerns"): - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - button_concern_step1 = gr.Button("Generate concerns step 1", variant="primary") - with gr.Column(): - output_concern_step1_prompt = gr.components.Textbox(label="Input Prompt") - output_concern_step1 = gr.components.Textbox(label="Output") - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - button_concern_step2 = gr.Button("Generate concerns step 2", variant="primary") - with gr.Column(): - output_concern_step2_prompt = gr.components.Textbox(label="Input Prompt") - output_concern_step2 = gr.components.Textbox(label="Output") - output_concern_step2_translate = gr.components.Textbox(label="Translation") - - - with gr.TabItem("Personal Win"): - with gr.Row(): - with gr.Column(): - input_concerns_two = gr.Textbox(label="Selected Concerns") - with gr.Row(): - with gr.Column(): - button_andrew = gr.Button("Generate rewards", variant="primary") - with gr.Column(): - output_andrew_prompt = gr.components.Textbox(label="Input Prompt") - output_andrew = gr.components.Textbox(label="Your Output") - output_wins_translate = gr.components.Textbox(label="Translation") - - - with gr.TabItem("Gains"): - with gr.Row(): - with gr.Column(): - input_value_prop = gr.Textbox(label="Enter your company value proposition") - with gr.Row(): - with gr.Column(): - button_andrew_two = gr.Button("Generate gains", variant="primary") - with gr.Column(): - output_andrew_prompt_two = gr.components.Textbox(label="Input Prompt") - output_andrew_two = gr.components.Textbox(label="Your Output") - output_gains_translate = gr.components.Textbox(label="Translation") - - with gr.TabItem("Narrative"): - with gr.Row(): - with gr.Column(): - input_one_three = gr.Textbox(label="Personal Wins") - input_two_three = gr.Textbox(label="Gains") - with gr.Row(): - with gr.Column(): - button_andrew_three = gr.Button("Generate Narratives", variant="primary") - with gr.Column(): - output_andrew_prompt_three = gr.components.Textbox(label="Input Prompt") - output_andrew_three = gr.components.Textbox(label="Output Narrative") - output_narrative_translate = gr.components.Textbox(label="Translation") - - - with gr.TabItem("Slides"): - with gr.Row(): - with gr.Column(): - input_narrative = gr.Textbox(label="Input Narrative") - with gr.Row(): - with gr.Column(): - button_slides_concerns = gr.Button("Generate slides", variant="primary") - with gr.Column(): - output_slides_prompt = gr.components.Textbox(label="Input prompt") - output_slides = gr.components.Textbox(label="Output presentation slides") - - with gr.Row(): - with gr.Column(): - input_keyword = gr.Textbox(lines=1, label="Input Keywords") - with gr.Row(): - with gr.Column(): - button_unsplash = gr.Button("Generate Slide Images", variant="primary") - with gr.Column(): - output_unsplash = gr.Gallery(label="Found relevant Images", show_label=False, - elem_id="gallery", columns=[2], rows=[2], object_fit="contain", height="auto") - - - with gr.TabItem("Personality element"): - with gr.Row(): - with 
gr.Column(): - with gr.Row(): - with gr.Column(): - button_element = gr.Button("Extract personality", variant="primary") - with gr.Column(): - output_element_prompt = gr.components.Textbox(label="Input Prompt") - output_element = gr.components.Textbox(label="Most Important personality highlight") - - - - button_my_company.click(scrape_page_content, inputs=[my_company, my_company_url, my_company_page], - outputs=[output_my_company_desc, output_my_company_value]) - - button_subject.click(get_subject, inputs=[input_brief, input_cta, input_relation, language], - outputs=[output_subject_prompt, output_subject, output_subject_translate]) - - button_audience_linkedin_url.click(get_aud_profile, inputs=[input_audience_linkedin_url], - outputs=[output_audience_role, output_audience_summary, output_bullets_translate]) - - button_audience_occ.click(set_audience_occ, inputs=[input_audience_occ, aud_company_url], - outputs=[output_audience_occ, output_company_desc]) - - button_concern.click(get_concern, inputs=[my_company, output_my_company_desc, input_relation, output_company_desc], - outputs=[output_concern_prompt, output_concern]) - - button_concern_step1.click(get_concern_step1, inputs=[my_company, output_my_company_desc, input_relation, output_company_desc], - outputs=[output_concern_step1_prompt, output_concern_step1]) - - button_concern_step2.click(get_concern_step2, inputs=[input_relation], - outputs=[output_concern_step2_prompt, output_concern_step2, output_concern_step2_translate]) - - button_andrew.click(get_andrew, inputs=[input_relation, input_concerns_two], - outputs=[output_andrew_prompt, output_andrew, output_wins_translate]) - - button_andrew_two.click(get_andrew_two, inputs=[my_company, input_value_prop, input_relation, output_company_desc], - outputs=[output_andrew_prompt_two, output_andrew_two, output_gains_translate]) - - button_andrew_three.click(get_andrew_three, inputs=[input_relation, input_one_three, input_two_three], - outputs=[output_andrew_prompt_three, output_andrew_three, output_narrative_translate]) - - button_slides_concerns.click(create_slides2, inputs=[input_narrative, input_concerns_two, input_one_three, input_two_three], - outputs=[output_slides_prompt, output_slides]) - - button_unsplash.click(get_unsplash_images, inputs=[input_keyword], outputs=[output_unsplash]) - - button_element.click(get_profile_element, inputs=[input_relation], outputs=[output_element_prompt, output_element]) - -demo.launch() \ No newline at end of file diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py deleted file mode 100644 index 96b79c9ec78d4fa3f7d32d1de2c6c5c1274f320e..0000000000000000000000000000000000000000 --- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Spider/con_spider_SVM.py +++ /dev/null @@ -1,36 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import pickle -import tensorflow as tf - -class spiderSVM: - def __init__(self,url) -> None: - self.image = url - - def predict_image(self): - # Load the model - load_extractor = tf.keras.models.load_model("././Model/Spider/resnetxSVM/resnet_EXTRACTOR.h5") - - modelpath = "././Model/Spider/resnetxSVM/dataSaved.pkl" - - with open(modelpath, 'rb') as file: - saved_data = pickle.load(file) - animal_breed = saved_data['class_name'] - model = saved_data['svm_model'] - - im = Image.open(self.image) - img = im.convert("RGB") - img= np.asarray(img) - image_resized= 
cv2.resize(img, (224,224))
-        features = load_extractor.predict(np.expand_dims(image_resized, axis=0))
-
-        reshaped_features = features.reshape(features.shape[0], -1)
-        predicted_class = model.predict(reshaped_features)
-        pred_prob = model.predict_proba(reshaped_features)
-        prediction_probability = pred_prob[0][predicted_class[0]]
-
-        output_class = animal_breed[predicted_class[0]]
-
-        return [output_class, prediction_probability]
diff --git a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py b/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py
deleted file mode 100644
index 5d3e4e78e97ab27a97c198dfee4df3d0051971f0..0000000000000000000000000000000000000000
--- a/spaces/KalbeDigitalLab/pathology_nuclei_segmentation_classification/utils/page_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from typing import Optional
-
-
-class ColorPalette:
-    """Color Palette Container."""
-    all = []
-
-    def __init__(
-        self,
-        c50: str,
-        c100: str,
-        c200: str,
-        c300: str,
-        c400: str,
-        c500: str,
-        c600: str,
-        c700: str,
-        c800: str,
-        c900: str,
-        c950: str,
-        name: Optional[str] = None,
-    ):
-        self.c50 = c50
-        self.c100 = c100
-        self.c200 = c200
-        self.c300 = c300
-        self.c400 = c400
-        self.c500 = c500
-        self.c600 = c600
-        self.c700 = c700
-        self.c800 = c800
-        self.c900 = c900
-        self.c950 = c950
-        self.name = name
-        ColorPalette.all.append(self)
-
-
-KALBE_THEME_COLOR = ColorPalette(
-    name='kalbe',
-    c50='#f2f9e8',
-    c100='#dff3c4',
-    c200='#c2e78d',
-    c300='#9fd862',
-    c400='#7fc93f',
-    c500='#3F831C',
-    c600='#31661a',
-    c700='#244c13',
-    c800='#18340c',
-    c900='#0c1b06',
-    c950='#050a02',
-)
\ No newline at end of file
diff --git a/spaces/KashiwaByte/SparkDebate-V2.0/demo.py b/spaces/KashiwaByte/SparkDebate-V2.0/demo.py
deleted file mode 100644
index 4e20e6b848c3a6ceae078d7e81f6d3206cd5651a..0000000000000000000000000000000000000000
--- a/spaces/KashiwaByte/SparkDebate-V2.0/demo.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from utils.API import SparkAPI
-app_id = input("app_id here :")
-api_key = input("api_key here :")
-api_secret = input("api_secret here :")
-bot = SparkAPI(app_id=app_id, api_key=api_key, api_secret=api_secret)
-bot.chat_stream()
\ No newline at end of file
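The next deleted file is the DeepMind-flavoured WaveRNN vocoder, which models 16-bit audio as two 8-bit streams: a coarse high byte and a fine low byte. The byte arithmetic it depends on mirrors the `split_signal`/`combine_signal` helpers it imports from `utils.dsp`; their exact upstream bodies are an assumption here, so treat this as a sketch of the idea:

```python
import numpy as np

def split_signal(x):
    # x: signed 16-bit samples; shift to 0..65535, then split into bytes.
    unsigned = x.astype(np.int64) + 2 ** 15
    coarse = unsigned // 256   # high byte, 0..255
    fine = unsigned % 256      # low byte, 0..255
    return coarse, fine

def combine_signal(coarse, fine):
    # Inverse: rebuild signed 16-bit samples from the two byte streams.
    return (coarse * 256 + fine - 2 ** 15).astype(np.int16)

x = np.array([-32768, -1, 0, 1, 32767], dtype=np.int16)
c, f = split_signal(x)
assert (combine_signal(c, f) == x).all()
```

This split is why the network conditions its fine prediction on the coarse sample it has just drawn: the low byte is only meaningful relative to the high byte.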
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py
deleted file mode 100644
index 17b33b271ec40cfc78db9e96bd54f44dd90ec844..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/models/deepmind_version.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from utils.display import *
-from utils.dsp import *
-
-
-class WaveRNN(nn.Module):
-    def __init__(self, hidden_size=896, quantisation=256):
-        super(WaveRNN, self).__init__()
-
-        self.hidden_size = hidden_size
-        self.split_size = hidden_size // 2
-
-        # The main matmul
-        self.R = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=False)
-
-        # Output fc layers
-        self.O1 = nn.Linear(self.split_size, self.split_size)
-        self.O2 = nn.Linear(self.split_size, quantisation)
-        self.O3 = nn.Linear(self.split_size, self.split_size)
-        self.O4 = nn.Linear(self.split_size, quantisation)
-
-        # Input fc layers
-        self.I_coarse = nn.Linear(2, 3 * self.split_size, bias=False)
-        self.I_fine = nn.Linear(3, 3 * self.split_size, bias=False)
-
-        # biases for the gates
-        self.bias_u = nn.Parameter(torch.zeros(self.hidden_size))
-        self.bias_r = nn.Parameter(torch.zeros(self.hidden_size))
-        self.bias_e = nn.Parameter(torch.zeros(self.hidden_size))
-
-        # display num params
-        self.num_params()
-
-    def forward(self, prev_y, prev_hidden, current_coarse):
-
-        # Main matmul - the projection is split 3 ways
-        R_hidden = self.R(prev_hidden)
-        R_u, R_r, R_e = torch.split(R_hidden, self.hidden_size, dim=1)
-
-        # Project the prev input
-        coarse_input_proj = self.I_coarse(prev_y)
-        I_coarse_u, I_coarse_r, I_coarse_e = \
-            torch.split(coarse_input_proj, self.split_size, dim=1)
-
-        # Project the prev input and current coarse sample
-        fine_input = torch.cat([prev_y, current_coarse], dim=1)
-        fine_input_proj = self.I_fine(fine_input)
-        I_fine_u, I_fine_r, I_fine_e = \
-            torch.split(fine_input_proj, self.split_size, dim=1)
-
-        # concatenate for the gates
-        I_u = torch.cat([I_coarse_u, I_fine_u], dim=1)
-        I_r = torch.cat([I_coarse_r, I_fine_r], dim=1)
-        I_e = torch.cat([I_coarse_e, I_fine_e], dim=1)
-
-        # Compute all gates for coarse and fine
-        u = F.sigmoid(R_u + I_u + self.bias_u)
-        r = F.sigmoid(R_r + I_r + self.bias_r)
-        e = torch.tanh(r * R_e + I_e + self.bias_e)
-        hidden = u * prev_hidden + (1. - u) * e
-
-        # Split the hidden state
-        hidden_coarse, hidden_fine = torch.split(hidden, self.split_size, dim=1)
-
-        # Compute outputs
-        out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
-        out_fine = self.O4(F.relu(self.O3(hidden_fine)))
-
-        return out_coarse, out_fine, hidden
-
-    def generate(self, seq_len):
-        with torch.no_grad():
-            # First split up the biases for the gates
-            b_coarse_u, b_fine_u = torch.split(self.bias_u, self.split_size)
-            b_coarse_r, b_fine_r = torch.split(self.bias_r, self.split_size)
-            b_coarse_e, b_fine_e = torch.split(self.bias_e, self.split_size)
-
-            # Lists for the two output seqs
-            c_outputs, f_outputs = [], []
-
-            # Some initial inputs
-            out_coarse = torch.LongTensor([0]).cuda()
-            out_fine = torch.LongTensor([0]).cuda()
-
-            # We'll need a hidden state
-            hidden = self.init_hidden()
-
-            # Need a clock for display
-            start = time.time()
-
-            # Loop for generation
-            for i in range(seq_len):
-
-                # Split into two hidden states
-                hidden_coarse, hidden_fine = \
-                    torch.split(hidden, self.split_size, dim=1)
-
-                # Scale and concat previous predictions
-                out_coarse = out_coarse.unsqueeze(0).float() / 127.5 - 1.
-                out_fine = out_fine.unsqueeze(0).float() / 127.5 - 1.
-                prev_outputs = torch.cat([out_coarse, out_fine], dim=1)
-
-                # Project input
-                coarse_input_proj = self.I_coarse(prev_outputs)
-                I_coarse_u, I_coarse_r, I_coarse_e = \
-                    torch.split(coarse_input_proj, self.split_size, dim=1)
-
-                # Project hidden state and split 6 ways
-                R_hidden = self.R(hidden)
-                R_coarse_u, R_fine_u, \
-                R_coarse_r, R_fine_r, \
-                R_coarse_e, R_fine_e = torch.split(R_hidden, self.split_size, dim=1)
-
-                # Compute the coarse gates
-                u = F.sigmoid(R_coarse_u + I_coarse_u + b_coarse_u)
-                r = F.sigmoid(R_coarse_r + I_coarse_r + b_coarse_r)
-                e = torch.tanh(r * R_coarse_e + I_coarse_e + b_coarse_e)
-                hidden_coarse = u * hidden_coarse + (1. - u) * e
-
-                # Compute the coarse output
-                out_coarse = self.O2(F.relu(self.O1(hidden_coarse)))
-                posterior = F.softmax(out_coarse, dim=1)
-                distrib = torch.distributions.Categorical(posterior)
-                out_coarse = distrib.sample()
-                c_outputs.append(out_coarse)
-
-                # Project the [prev outputs and predicted coarse sample]
-                coarse_pred = out_coarse.float() / 127.5 - 1.
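-                # The fine (low-byte) half is conditioned on the coarse sample just
-                # drawn: prev_outputs holds the previous (coarse, fine) pair scaled
-                # to [-1, 1], and coarse_pred appends the current coarse value,
-                # giving the 3 input features that self.I_fine above expects.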
- fine_input = torch.cat([prev_outputs, coarse_pred.unsqueeze(0)], dim=1) - fine_input_proj = self.I_fine(fine_input) - I_fine_u, I_fine_r, I_fine_e = \ - torch.split(fine_input_proj, self.split_size, dim=1) - - # Compute the fine gates - u = F.sigmoid(R_fine_u + I_fine_u + b_fine_u) - r = F.sigmoid(R_fine_r + I_fine_r + b_fine_r) - e = torch.tanh(r * R_fine_e + I_fine_e + b_fine_e) - hidden_fine = u * hidden_fine + (1. - u) * e - - # Compute the fine output - out_fine = self.O4(F.relu(self.O3(hidden_fine))) - posterior = F.softmax(out_fine, dim=1) - distrib = torch.distributions.Categorical(posterior) - out_fine = distrib.sample() - f_outputs.append(out_fine) - - # Put the hidden state back together - hidden = torch.cat([hidden_coarse, hidden_fine], dim=1) - - # Display progress - speed = (i + 1) / (time.time() - start) - stream('Gen: %i/%i -- Speed: %i', (i + 1, seq_len, speed)) - - coarse = torch.stack(c_outputs).squeeze(1).cpu().data.numpy() - fine = torch.stack(f_outputs).squeeze(1).cpu().data.numpy() - output = combine_signal(coarse, fine) - - return output, coarse, fine - - def init_hidden(self, batch_size=1) : - return torch.zeros(batch_size, self.hidden_size).cuda() - - def num_params(self) : - parameters = filter(lambda p: p.requires_grad, self.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - print('Trainable Parameters: %.3f million' % parameters) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py b/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py deleted file mode 100644 index 67bd8879641f8539f329e6ffb94f88d25e417244..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/necks/fpn.py +++ /dev/null @@ -1,221 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Tuple, Union - -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule -from mmengine.model import BaseModule -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, MultiConfig, OptConfigType - - -@MODELS.register_module() -class FPN(BaseModule): - r"""Feature Pyramid Network. - - This is an implementation of paper `Feature Pyramid Networks for Object - Detection `_. - - Args: - in_channels (list[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Defaults to 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Defaults to -1, which means the - last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Defaults to False. - If True, it is equivalent to `add_extra_convs='on_input'`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Defaults to False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Defaults to False. - conv_cfg (:obj:`ConfigDict` or dict, optional): Config dict for - convolution layer. Defaults to None. 
- norm_cfg (:obj:`ConfigDict` or dict, optional): Config dict for - normalization layer. Defaults to None. - act_cfg (:obj:`ConfigDict` or dict, optional): Config dict for - activation layer in ConvModule. Defaults to None. - upsample_cfg (:obj:`ConfigDict` or dict, optional): Config dict - for interpolate layer. Defaults to dict(mode='nearest'). - init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \ - dict]): Initialization config dict. - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__( - self, - in_channels: List[int], - out_channels: int, - num_outs: int, - start_level: int = 0, - end_level: int = -1, - add_extra_convs: Union[bool, str] = False, - relu_before_extra_convs: bool = False, - no_norm_on_lateral: bool = False, - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - act_cfg: OptConfigType = None, - upsample_cfg: ConfigType = dict(mode='nearest'), - init_cfg: MultiConfig = dict( - type='Xavier', layer='Conv2d', distribution='uniform') - ) -> None: - super().__init__(init_cfg=init_cfg) - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1 or end_level == self.num_ins - 1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level is not the last level, no extra level is allowed - self.backbone_end_level = end_level + 1 - assert end_level < self.num_ins - assert num_outs == end_level - start_level + 1 - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - self.add_extra_convs = 'on_input' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = 
out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - def forward(self, inputs: Tuple[Tensor]) -> tuple: - """Forward function. - - Args: - inputs (tuple[Tensor]): Features from the upstream network, each - is a 4D-tensor. - - Returns: - tuple: Feature maps, each is a 4D-tensor. - """ - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. - if 'scale_factor' in self.upsample_cfg: - # fix runtime error of "+=" inplace operation in PyTorch 1.10 - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] = laterals[i - 1] + F.interpolate( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py deleted file mode 100644 index 1adff10beb024940f9066a407cc76ddb06b27404..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dataset_wrappers.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -from mmengine.dataset import BaseDataset, force_full_init - -from mmpretrain.registry import DATASETS - - -@DATASETS.register_module() -class KFoldDataset: - """A wrapper of dataset for K-Fold cross-validation. - - K-Fold cross-validation divides all the samples in groups of samples, - called folds, of almost equal sizes. And we use k-1 of folds to do training - and use the fold left to do validation. - - Args: - dataset (:obj:`mmengine.dataset.BaseDataset` | dict): The dataset to be - divided - fold (int): The fold used to do validation. Defaults to 0. - num_splits (int): The number of all folds. Defaults to 5. - test_mode (bool): Use the training dataset or validation dataset. - Defaults to False. - seed (int, optional): The seed to shuffle the dataset before splitting. 
- If None, not shuffle the dataset. Defaults to None. - """ - - def __init__(self, - dataset, - fold=0, - num_splits=5, - test_mode=False, - seed=None): - if isinstance(dataset, dict): - self.dataset = DATASETS.build(dataset) - # Init the dataset wrapper lazily according to the dataset setting. - lazy_init = dataset.get('lazy_init', False) - elif isinstance(dataset, BaseDataset): - self.dataset = dataset - else: - raise TypeError(f'Unsupported dataset type {type(dataset)}.') - - self._metainfo = getattr(self.dataset, 'metainfo', {}) - self.fold = fold - self.num_splits = num_splits - self.test_mode = test_mode - self.seed = seed - - self._fully_initialized = False - if not lazy_init: - self.full_init() - - @property - def metainfo(self) -> dict: - """Get the meta information of ``self.dataset``. - - Returns: - dict: Meta information of the dataset. - """ - # Prevent `self._metainfo` from being modified by outside. - return copy.deepcopy(self._metainfo) - - def full_init(self): - """fully initialize the dataset.""" - if self._fully_initialized: - return - - self.dataset.full_init() - ori_len = len(self.dataset) - indices = list(range(ori_len)) - if self.seed is not None: - rng = np.random.default_rng(self.seed) - rng.shuffle(indices) - - test_start = ori_len * self.fold // self.num_splits - test_end = ori_len * (self.fold + 1) // self.num_splits - if self.test_mode: - indices = indices[test_start:test_end] - else: - indices = indices[:test_start] + indices[test_end:] - - self._ori_indices = indices - self.dataset = self.dataset.get_subset(indices) - - self._fully_initialized = True - - @force_full_init - def _get_ori_dataset_idx(self, idx: int) -> int: - """Convert global idx to local index. - - Args: - idx (int): Global index of ``KFoldDataset``. - - Returns: - int: The original index in the whole dataset. - """ - return self._ori_indices[idx] - - @force_full_init - def get_data_info(self, idx: int) -> dict: - """Get annotation by index. - - Args: - idx (int): Global index of ``KFoldDataset``. - - Returns: - dict: The idx-th annotation of the datasets. - """ - return self.dataset.get_data_info(idx) - - @force_full_init - def __len__(self): - return len(self.dataset) - - @force_full_init - def __getitem__(self, idx): - return self.dataset[idx] - - @force_full_init - def get_cat_ids(self, idx): - return self.dataset.get_cat_ids(idx) - - @force_full_init - def get_gt_labels(self): - return self.dataset.get_gt_labels() - - @property - def CLASSES(self): - """Return all categories names.""" - return self._metainfo.get('classes', None) - - @property - def class_to_idx(self): - """Map mapping class name to class index. - - Returns: - dict: mapping from class name to class index. - """ - - return {cat: i for i, cat in enumerate(self.CLASSES)} - - def __repr__(self): - """Print the basic information of the dataset. - - Returns: - str: Formatted string. 
- """ - head = 'Dataset ' + self.__class__.__name__ - body = [] - type_ = 'test' if self.test_mode else 'training' - body.append(f'Type: \t{type_}') - body.append(f'Seed: \t{self.seed}') - - def ordinal(n): - # Copy from https://codegolf.stackexchange.com/a/74047 - suffix = 'tsnrhtdd'[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4] - return f'{n}{suffix}' - - body.append( - f'Fold: \t{ordinal(self.fold+1)} of {self.num_splits}-fold') - if self._fully_initialized: - body.append(f'Number of samples: \t{self.__len__()}') - else: - body.append("Haven't been initialized") - - if self.CLASSES is not None: - body.append(f'Number of categories: \t{len(self.CLASSES)}') - else: - body.append('The `CLASSES` meta info is not set.') - - body.append( - f'Original dataset type:\t{self.dataset.__class__.__name__}') - - lines = [head] + [' ' * 4 + line for line in body] - return '\n'.join(lines) diff --git a/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py b/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py deleted file mode 100644 index 36ed1f09d3bf23514baf4859e462d28bc49dfd53..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/thermal/processing_thermal.py +++ /dev/null @@ -1,77 +0,0 @@ -import torch -from PIL import Image -from torchvision import transforms -from transformers import ProcessorMixin, BatchEncoding -from transformers.image_processing_utils import BatchFeature - -OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073) -OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711) - -def make_list_of_images(x): - if not isinstance(x, list): - return [x] - return x - -def get_thermal_transform(config): - config = config.vision_config - transform = transforms.Compose( - [ - transforms.ToTensor(), - transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC), - transforms.CenterCrop(224), - transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD) # assume image - ] - ) - return transform - - -def load_and_transform_thermal(thermal_path, transform): - thermal = Image.open(thermal_path) - thermal_outputs = transform(thermal) - return thermal_outputs - -class LanguageBindThermalProcessor(ProcessorMixin): - attributes = [] - tokenizer_class = ("LanguageBindThermalTokenizer") - - def __init__(self, config, tokenizer=None, **kwargs): - super().__init__(**kwargs) - self.config = config - self.transform = get_thermal_transform(config) - self.image_processor = load_and_transform_thermal - self.tokenizer = tokenizer - - def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs): - if text is None and images is None: - raise ValueError("You have to specify either text or images. Both cannot be none.") - - if text is not None: - encoding = self.tokenizer(text, max_length=context_length, padding='max_length', - truncation=True, return_tensors=return_tensors, **kwargs) - - if images is not None: - images = make_list_of_images(images) - image_features = [self.image_processor(image, self.transform) for image in images] - image_features = torch.stack(image_features) - - if text is not None and images is not None: - encoding["pixel_values"] = image_features - return encoding - elif text is not None: - return encoding - else: - return {"pixel_values": image_features} - - def batch_decode(self, skip_special_tokens=True, *args, **kwargs): - """ - This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. 
Please
-        refer to the docstring of this method for more information.
-        """
-        return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
-
-    def decode(self, skip_special_tokens=True, *args, **kwargs):
-        """
-        This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
-        the docstring of this method for more information.
-        """
-        return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py
deleted file mode 100644
index e663007dde9a56add1aa540be76cf2f5d81de82f..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/s2m/_deeplab.py
+++ /dev/null
@@ -1,180 +0,0 @@
-# Credit: https://github.com/VainF/DeepLabV3Plus-Pytorch
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .utils import _SimpleSegmentationModel
-
-
-__all__ = ["DeepLabV3"]
-
-
-class DeepLabV3(_SimpleSegmentationModel):
-    """
-    Implements the DeepLabV3 model from
-    `"Rethinking Atrous Convolution for Semantic Image Segmentation"
-    <https://arxiv.org/abs/1706.05587>`_.
-
-    Arguments:
-        backbone (nn.Module): the network used to compute the features for the model.
-            The backbone should return an OrderedDict[Tensor], with the key being
-            "out" for the last feature map used, and "aux" if an auxiliary classifier
-            is used.
-        classifier (nn.Module): module that takes the "out" element returned from
-            the backbone and returns a dense prediction.
- aux_classifier (nn.Module, optional): auxiliary classifier used during training - """ - pass - -class DeepLabHeadV3Plus(nn.Module): - def __init__(self, in_channels, low_level_channels, num_classes, aspp_dilate=[12, 24, 36]): - super(DeepLabHeadV3Plus, self).__init__() - self.project = nn.Sequential( - nn.Conv2d(low_level_channels, 48, 1, bias=False), - nn.BatchNorm2d(48), - nn.ReLU(inplace=True), - ) - - self.aspp = ASPP(in_channels, aspp_dilate) - - self.classifier = nn.Sequential( - nn.Conv2d(304, 256, 3, padding=1, bias=False), - nn.BatchNorm2d(256), - nn.ReLU(inplace=True), - nn.Conv2d(256, num_classes, 1) - ) - self._init_weight() - - def forward(self, feature): - low_level_feature = self.project( feature['low_level'] ) - output_feature = self.aspp(feature['out']) - output_feature = F.interpolate(output_feature, size=low_level_feature.shape[2:], mode='bilinear', align_corners=False) - return self.classifier( torch.cat( [ low_level_feature, output_feature ], dim=1 ) ) - - def _init_weight(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - -class DeepLabHead(nn.Module): - def __init__(self, in_channels, num_classes, aspp_dilate=[12, 24, 36]): - super(DeepLabHead, self).__init__() - - self.classifier = nn.Sequential( - ASPP(in_channels, aspp_dilate), - nn.Conv2d(256, 256, 3, padding=1, bias=False), - nn.BatchNorm2d(256), - nn.ReLU(inplace=True), - nn.Conv2d(256, num_classes, 1) - ) - self._init_weight() - - def forward(self, feature): - return self.classifier( feature['out'] ) - - def _init_weight(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - -class AtrousSeparableConvolution(nn.Module): - """ Atrous Separable Convolution - """ - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, bias=True): - super(AtrousSeparableConvolution, self).__init__() - self.body = nn.Sequential( - # Separable Conv - nn.Conv2d( in_channels, in_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=bias, groups=in_channels ), - # PointWise Conv - nn.Conv2d( in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias), - ) - - self._init_weight() - - def forward(self, x): - return self.body(x) - - def _init_weight(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - -class ASPPConv(nn.Sequential): - def __init__(self, in_channels, out_channels, dilation): - modules = [ - nn.Conv2d(in_channels, out_channels, 3, padding=dilation, dilation=dilation, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True) - ] - super(ASPPConv, self).__init__(*modules) - -class ASPPPooling(nn.Sequential): - def __init__(self, in_channels, out_channels): - super(ASPPPooling, self).__init__( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True)) - - def forward(self, x): - size = x.shape[-2:] - x = super(ASPPPooling, self).forward(x) - return F.interpolate(x, size=size, mode='bilinear', align_corners=False) - -class ASPP(nn.Module): - def 
__init__(self, in_channels, atrous_rates): - super(ASPP, self).__init__() - out_channels = 256 - modules = [] - modules.append(nn.Sequential( - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True))) - - rate1, rate2, rate3 = tuple(atrous_rates) - modules.append(ASPPConv(in_channels, out_channels, rate1)) - modules.append(ASPPConv(in_channels, out_channels, rate2)) - modules.append(ASPPConv(in_channels, out_channels, rate3)) - modules.append(ASPPPooling(in_channels, out_channels)) - - self.convs = nn.ModuleList(modules) - - self.project = nn.Sequential( - nn.Conv2d(5 * out_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True), - nn.Dropout(0.1),) - - def forward(self, x): - res = [] - for conv in self.convs: - res.append(conv(x)) - res = torch.cat(res, dim=1) - return self.project(res) - - - -def convert_to_separable_conv(module): - new_module = module - if isinstance(module, nn.Conv2d) and module.kernel_size[0]>1: - new_module = AtrousSeparableConvolution(module.in_channels, - module.out_channels, - module.kernel_size, - module.stride, - module.padding, - module.dilation, - module.bias) - for name, child in module.named_children(): - new_module.add_module(name, convert_to_separable_conv(child)) - return new_module \ No newline at end of file diff --git a/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx b/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
      {/* JSX markup was stripped during extraction. Recoverable structure: a wrapper element containing a help-toggle button (HelpIcon rendered via <Image>, onClick={toggleSuggestions}), followed by currentSuggestions.map(suggestion => ...) rendering one button per suggestion that passes suggestion.text to setInput. */}
          - ) : null -} diff --git a/spaces/MathysL/AutoGPT4/autogpt/speech/__init__.py b/spaces/MathysL/AutoGPT4/autogpt/speech/__init__.py deleted file mode 100644 index 2ff0d2bf48dc356bf810cb5a2063d6774e5fec6e..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/speech/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""This module contains the speech recognition and speech synthesis functions.""" -from autogpt.speech.say import say_text - -__all__ = ["say_text"] diff --git a/spaces/Meena/table-question-answering-space/app/tapas.py b/spaces/Meena/table-question-answering-space/app/tapas.py deleted file mode 100644 index 65fdf8c3ec633bd59c62a7f51cc2e6bcfb81c56f..0000000000000000000000000000000000000000 --- a/spaces/Meena/table-question-answering-space/app/tapas.py +++ /dev/null @@ -1,99 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering -import pandas as pd -import re - -p = re.compile('\d+(\.\d+)?') - -def load_model_and_tokenizer(): - """ - Load - """ - tokenizer = AutoTokenizer.from_pretrained("Meena/table-question-answering-tapas") - model = AutoModelForTableQuestionAnswering.from_pretrained("Meena/table-question-answering-tapas") - - # Return tokenizer and model - return tokenizer, model - - -def prepare_inputs(table, queries, tokenizer): - """ - Convert dictionary into data frame and tokenize inputs given queries. - """ - table = table.astype('str').head(100) - inputs = tokenizer(table=table, queries=queries, padding='max_length', return_tensors="pt") - return table, inputs - - -def generate_predictions(inputs, model, tokenizer): - """ - Generate predictions for some tokenized input. - """ - # Generate model results - outputs = model(**inputs) - - # Convert logit outputs into predictions for table cells and aggregation operators - predicted_table_cell_coords, predicted_aggregation_operators = tokenizer.convert_logits_to_predictions( - inputs, - outputs.logits.detach(), - outputs.logits_aggregation.detach() - ) - - # Return values - return predicted_table_cell_coords, predicted_aggregation_operators - -def postprocess_predictions(predicted_aggregation_operators, predicted_table_cell_coords, table): - """ - Compute the predicted operation and nicely structure the answers. - """ - # Process predicted aggregation operators - aggregation_operators = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3:"COUNT"} - aggregation_predictions_string = [aggregation_operators[x] for x in predicted_aggregation_operators] - # Process predicted table cell coordinates - answers = [] - for agg, coordinates in zip(predicted_aggregation_operators, predicted_table_cell_coords): - if len(coordinates) == 1: - # 1 cell - answers.append(table.iat[coordinates[0]]) - else: - # > 1 cell - cell_values = [] - for coordinate in coordinates: - cell_values.append(table.iat[coordinate]) - answers.append(", ".join(cell_values)) - - # Return values - return aggregation_predictions_string, answers - - -def show_answers(queries, answers, aggregation_predictions_string): - """ - Visualize the postprocessed answers. 
- """ - agg = {"NONE": lambda x: x, "SUM" : lambda x: sum(x), "AVERAGE": lambda x: (sum(x) / len(x)), "COUNT": lambda x: len(x)} - results = [] - for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): - print(query) - if predicted_agg == "NONE": - print("Predicted answer: " + answer) - else: - if all([not p.match(val) == None for val in answer.split(', ')]): - # print("Predicted answer: " + predicted_agg + "(" + answer + ") = " + str(agg[predicted_agg](list(map(float, answer.split(',')))))) - result = str(agg[predicted_agg](list(map(float, answer.split(','))))) - elif predicted_agg == "COUNT": - # print("Predicted answer: " + predicted_agg + "(" + answer + ") = " + str(agg[predicted_agg](answer.split(',')))) - result = str(agg[predicted_agg](answer.split(','))) - else: - result = predicted_agg + " > " + answer - results.append(result) - return results - -def execute_query(query, table): - """ - Invoke the TAPAS model. - """ - queries = [query] - tokenizer, model = load_model_and_tokenizer() - table, inputs = prepare_inputs(table, queries, tokenizer) - predicted_table_cell_coords, predicted_aggregation_operators = generate_predictions(inputs, model, tokenizer) - aggregation_predictions_string, answers = postprocess_predictions(predicted_aggregation_operators, predicted_table_cell_coords, table) - return show_answers(queries, answers, aggregation_predictions_string) diff --git a/spaces/MercurialAi/OncologyGPT_Probabilities/README.md b/spaces/MercurialAi/OncologyGPT_Probabilities/README.md deleted file mode 100644 index 127b8bcb2efaa86bb77da324c523c1cafaa5b484..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/OncologyGPT_Probabilities/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: OncologyGPT Probabilities -emoji: 🧮 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/modules/loss_wrapper.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/modules/loss_wrapper.py deleted file mode 100644 index d86f1e6f7df4a6bc112563294b8bf6bb4d999b98..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/modules/loss_wrapper.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch -from . 
import losses -from ..utils.rewards import init_scorer, get_self_critical_reward, get_self_critical_clipscore_reward -from ..utils.clipscore import CLIPScore -import numpy as np - -class LossWrapper(torch.nn.Module): - def __init__(self, model, opt): - super(LossWrapper, self).__init__() - self.opt = opt - self.model = model - if opt.label_smoothing > 0: - self.crit = losses.LabelSmoothing(smoothing=opt.label_smoothing) - else: - self.crit = losses.LanguageModelCriterion() - self.rl_crit = losses.RewardCriterion() - self.struc_crit = losses.StructureLosses(opt) - - self.clipscore_model = None - if self.opt.use_clipscore: - use_grammar = getattr(self.opt, 'use_grammar', False) - joint_out = getattr(self.opt, 'joint_out', False) - self.clipscore_model = CLIPScore( - mode=opt.clipscore_mode, - use_grammar=use_grammar, - joint_out=joint_out, - ) - for p in self.clipscore_model.parameters(): - p.requires_grad = False - - if use_grammar: - state_dict = torch.load(self.opt.clip_load_path, map_location='cpu') - self.clipscore_model.load_state_dict(state_dict['state_dict']) - - def forward(self, fc_feats, att_feats, labels, masks, att_masks, gts, gt_indices, - sc_flag, struc_flag, clip_vis_feats=None): - opt = self.opt - - out = {} - if struc_flag: - if opt.structure_loss_weight < 1: - lm_loss = self.crit(self.model(fc_feats, att_feats, labels[..., :-1], att_masks), labels[..., 1:], masks[..., 1:]) - else: - lm_loss = torch.tensor(0).type_as(fc_feats) - if opt.structure_loss_weight > 0: - gen_result, sample_logprobs = self.model(fc_feats, att_feats, att_masks, - opt={'sample_method':opt.train_sample_method, - 'beam_size':opt.train_beam_size, - 'output_logsoftmax': opt.struc_use_logsoftmax or opt.structure_loss_type == 'softmax_margin'\ - or not 'margin' in opt.structure_loss_type, - 'sample_n': opt.train_sample_n}, - mode='sample') - gts = [gts[_] for _ in gt_indices.tolist()] - struc_loss = self.struc_crit(sample_logprobs, gen_result, gts) - else: - struc_loss = {'loss': torch.tensor(0).type_as(fc_feats), - 'reward': torch.tensor(0).type_as(fc_feats)} - loss = (1-opt.structure_loss_weight) * lm_loss + opt.structure_loss_weight * struc_loss['loss'] - out['lm_loss'] = lm_loss - out['struc_loss'] = struc_loss['loss'] - out['reward'] = struc_loss['reward'] - elif not sc_flag: - loss = self.crit(self.model(fc_feats, att_feats, labels[..., :-1], att_masks), labels[..., 1:], masks[..., 1:]) - else: - self.model.eval() - with torch.no_grad(): - greedy_res, _ = self.model(fc_feats, att_feats, att_masks, - mode='sample', - opt={'sample_method': opt.sc_sample_method, - 'beam_size': opt.sc_beam_size}) - self.model.train() - gen_result, sample_logprobs = self.model(fc_feats, att_feats, att_masks, - opt={'sample_method':opt.train_sample_method, - 'beam_size':opt.train_beam_size, - 'sample_n': opt.train_sample_n}, - mode='sample') - gts = [gts[_] for _ in gt_indices.tolist()] - - if getattr(self.opt, 'use_multi_rewards', False): - assert self.opt.use_clipscore - clipscore_reward_normalized, clipscore_unnormalized_mean, grammar_rewards = get_self_critical_clipscore_reward( - greedy_res, gts, gen_result, self.opt, self.clipscore_model, clip_vis_feats, self.model.vocab) - - if self.opt.clipscore_mode == 'clip_s': - out['CLIP-S'] = clipscore_unnormalized_mean - elif self.opt.clipscore_mode == 'refclip_s': - out['RefCLIP-S'] = clipscore_unnormalized_mean - - if getattr(self.opt, 'use_grammar', False): - out['grammar_reward'] = grammar_rewards.mean() - - reward = clipscore_reward_normalized + grammar_rewards - - - 
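# When grammar rewards are disabled, the `else` branch below pairs the
-            # CLIPScore reward with a self-critical CIDEr reward instead.
-           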
else: - assert grammar_rewards is None - - cider_reward_normalized, cider_unnormalized_mean = get_self_critical_reward( - greedy_res, gts, gen_result, self.opt) - out['CIDEr'] = cider_unnormalized_mean - if isinstance(cider_reward_normalized, np.ndarray): - cider_reward_normalized = torch.from_numpy(cider_reward_normalized).to(clipscore_reward_normalized.device) - - reward = clipscore_reward_normalized + cider_reward_normalized - else: - if self.opt.use_clipscore: - clipscore_reward_normalized, clipscore_unnormalized_mean, _ = get_self_critical_clipscore_reward( - greedy_res, gts, gen_result, self.opt, self.clipscore_model, clip_vis_feats, self.model.vocab) - if self.opt.clipscore_mode == 'clip_s': - out['CLIP-S'] = clipscore_unnormalized_mean - elif self.opt.clipscore_mode == 'refclip_s': - out['RefCLIP-S'] = clipscore_unnormalized_mean - reward = clipscore_reward_normalized - else: - cider_reward_normalized, cider_unnormalized_mean = get_self_critical_reward( - greedy_res, gts, gen_result, self.opt) - out['CIDEr'] = cider_unnormalized_mean - reward = cider_reward_normalized - - if isinstance(reward, np.ndarray): - reward = torch.from_numpy(reward) - reward = reward.to(sample_logprobs) - loss = self.rl_crit(sample_logprobs, gen_result.data, reward) - out['reward'] = reward[:,0].mean() - out['loss'] = loss - return out - diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_benchmark.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_benchmark.py deleted file mode 100644 index 770674ac658f213d614f0a3704a0bbb200bb94aa..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_benchmark.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Executes Keras benchmarks and accuracy tests.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf -from official.benchmark.perfzero_benchmark import PerfZeroBenchmark -from official.utils.flags import core as flags_core - - -class KerasBenchmark(PerfZeroBenchmark): - """Base benchmark class with methods to simplify testing.""" - - def __init__(self, - output_dir=None, - default_flags=None, - flag_methods=None, - tpu=None): - super(KerasBenchmark, self).__init__( - output_dir=output_dir, - default_flags=default_flags, - flag_methods=flag_methods, - tpu=tpu) - - def _report_benchmark(self, - stats, - wall_time_sec, - top_1_max=None, - top_1_min=None, - log_steps=None, - total_batch_size=None, - warmup=1, - start_time_sec=None): - """Report benchmark results by writing to local protobuf file. - - Args: - stats: dict returned from keras models with known entries. - wall_time_sec: the during of the benchmark execution in seconds - top_1_max: highest passing level for top_1 accuracy. - top_1_min: lowest passing level for top_1 accuracy. 
- log_steps: How often the log was created for stats['step_timestamp_log']. - total_batch_size: Global batch-size. - warmup: number of entries in stats['step_timestamp_log'] to ignore. - start_time_sec: the start time of the program in seconds since epoch - """ - - metrics = [] - if 'accuracy_top_1' in stats: - metrics.append({'name': 'accuracy_top_1', - 'value': stats['accuracy_top_1'], - 'min_value': top_1_min, - 'max_value': top_1_max}) - metrics.append({'name': 'top_1_train_accuracy', - 'value': stats['training_accuracy_top_1']}) - - if (warmup and 'step_timestamp_log' in stats and - len(stats['step_timestamp_log']) > warmup): - # first entry in the time_log is start of step 1. The rest of the - # entries are the end of each step recorded - time_log = stats['step_timestamp_log'] - elapsed = time_log[-1].timestamp - time_log[warmup].timestamp - num_examples = ( - total_batch_size * log_steps * (len(time_log) - warmup - 1)) - examples_per_sec = num_examples / elapsed - metrics.append({'name': 'exp_per_second', - 'value': examples_per_sec}) - - if 'avg_exp_per_second' in stats: - metrics.append({'name': 'avg_exp_per_second', - 'value': stats['avg_exp_per_second']}) - - if start_time_sec and 'step_timestamp_log' in stats: - time_log = stats['step_timestamp_log'] - # time_log[0] is recorded at the beginning of the first step. - startup_time = time_log[0].timestamp - start_time_sec - metrics.append({'name': 'startup_time', 'value': startup_time}) - - flags_str = flags_core.get_nondefault_flags_as_str() - self.report_benchmark( - iters=-1, - wall_time=wall_time_sec, - metrics=metrics, - extras={'flags': flags_str}) diff --git a/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/models.py b/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = 
torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = 
self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, 
None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = 
SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - 
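# GeneratorNSF is the neural source-filter variant of the decoder: unlike
-        # plain `Generator`, its forward also takes the per-frame f0 curve and
-        # turns it into a sine excitation via SourceModuleHnNSF/SineGen before
-        # the HiFi-GAN-style upsampling stack.
-       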
self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - 
gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - 
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = 
self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = 
weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/NeonLion92/OpenChatKit-neon/style.css b/spaces/NeonLion92/OpenChatKit-neon/style.css deleted file mode 100644 index 00901d44b3146e9cfb8b309be8c76473bf1b3b33..0000000000000000000000000000000000000000 --- a/spaces/NeonLion92/OpenChatKit-neon/style.css +++ /dev/null @@ -1,8 +0,0 @@ -body { - padding: 0; - margin: 0; -} - -iframe { - width:100vw;height:100vh;border:0; -} diff --git a/spaces/NeuML/txtsql/app.py b/spaces/NeuML/txtsql/app.py deleted file mode 100644 index 9875b6f72659e4de2a43410f00405d692d55bd25..0000000000000000000000000000000000000000 --- a/spaces/NeuML/txtsql/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import streamlit as st - -from transformers import pipeline - -query = st.text_input("Query", value="feel good story since yesterday") - -nlp = pipeline("text2text-generation", model="NeuML/t5-small-txtsql") -text = nlp(f"translate English to SQL: {query}", max_length=512)[0]["generated_text"] -st.write(text.replace('$=', '<=')) diff --git a/spaces/Nightwing25/AICoverGen/src/trainset_preprocess_pipeline_print.py b/spaces/Nightwing25/AICoverGen/src/trainset_preprocess_pipeline_print.py deleted file mode 100644 index 7b19e3e9a5788552b6acb9cd6747bda7ae93146b..0000000000000000000000000000000000000000 --- a/spaces/Nightwing25/AICoverGen/src/trainset_preprocess_pipeline_print.py +++ /dev/null @@ -1,146 +0,0 @@ -import sys, os, multiprocessing -from scipy import signal - -now_dir = os.getcwd() -sys.path.append(now_dir) - -inp_root = sys.argv[1] -sr = int(sys.argv[2]) -n_p = int(sys.argv[3]) -exp_dir = sys.argv[4] -noparallel = sys.argv[5] == "True" -import numpy as np, os, traceback -from slicer2 import Slicer -import librosa, traceback -from scipy.io import wavfile -import multiprocessing -from my_utils import load_audio -import tqdm - -DoFormant = False -Quefrency = 1.0 -Timbre = 1.0 - -mutex = multiprocessing.Lock() -f = open("%s/preprocess.log" % exp_dir, "a+") - - -def println(strr): - mutex.acquire() - print(strr) - f.write("%s\n" % strr) - f.flush() - mutex.release() - - -class PreProcess: - def __init__(self, sr, exp_dir): - self.slicer = Slicer( - sr=sr, - threshold=-42, - min_length=1500, - min_interval=400, - hop_size=15, - max_sil_kept=500, - ) - self.sr = sr - self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr) - self.per = 3.0 - self.overlap = 0.3 
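-        # Slicing layout (derived from `pipeline` below): clips are `per` = 3.0 s
-        # long and the window advances by (per - overlap) = 2.7 s, so consecutive
-        # clips overlap by 0.3 s; once the remainder is at most `tail` seconds it
-        # is written out as the final, shorter clip.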
-        self.tail = self.per + self.overlap
-        self.max = 0.9
-        self.alpha = 0.75
-        self.exp_dir = exp_dir
-        self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir
-        self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir
-        os.makedirs(self.exp_dir, exist_ok=True)
-        os.makedirs(self.gt_wavs_dir, exist_ok=True)
-        os.makedirs(self.wavs16k_dir, exist_ok=True)
-
-    def norm_write(self, tmp_audio, idx0, idx1):
-        tmp_max = np.abs(tmp_audio).max()
-        if tmp_max > 2.5:
-            print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max))
-            return
-        tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + (
-            1 - self.alpha
-        ) * tmp_audio
-        wavfile.write(
-            "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
-            self.sr,
-            tmp_audio.astype(np.float32),
-        )
-        tmp_audio = librosa.resample(
-            tmp_audio, orig_sr=self.sr, target_sr=16000
-        )  # , res_type="soxr_vhq"
-        wavfile.write(
-            "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
-            16000,
-            tmp_audio.astype(np.float32),
-        )
-
-    def pipeline(self, path, idx0):
-        try:
-            audio = load_audio(path, self.sr, DoFormant, Quefrency, Timbre)
-            # a zero-phase digital filter causes pre-ringing noise, so lfilter is used
-            # audio = signal.filtfilt(self.bh, self.ah, audio)
-            audio = signal.lfilter(self.bh, self.ah, audio)
-
-            idx1 = 0
-            for audio in self.slicer.slice(audio):
-                i = 0
-                while True:
-                    start = int(self.sr * (self.per - self.overlap) * i)
-                    i += 1
-                    if len(audio[start:]) > self.tail * self.sr:
-                        tmp_audio = audio[start : start + int(self.per * self.sr)]
-                        self.norm_write(tmp_audio, idx0, idx1)
-                        idx1 += 1
-                    else:
-                        tmp_audio = audio[start:]
-                        idx1 += 1
-                        break
-                self.norm_write(tmp_audio, idx0, idx1)
-            # println("%s->Suc." % path)
-        except Exception:
-            println("%s->%s" % (path, traceback.format_exc()))
-
-    def pipeline_mp(self, infos, thread_n):
-        for path, idx0 in tqdm.tqdm(
-            infos, position=thread_n, leave=True, desc="thread:%s" % thread_n
-        ):
-            self.pipeline(path, idx0)
-
-    def pipeline_mp_inp_dir(self, inp_root, n_p):
-        try:
-            infos = [
-                ("%s/%s" % (inp_root, name), idx)
-                for idx, name in enumerate(sorted(os.listdir(inp_root)))
-            ]
-            if noparallel:
-                for i in range(n_p):
-                    self.pipeline_mp(infos[i::n_p], i)
-            else:
-                ps = []
-                for i in range(n_p):
-                    p = multiprocessing.Process(
-                        target=self.pipeline_mp, args=(infos[i::n_p], i)
-                    )
-                    ps.append(p)
-                    p.start()
-                for i in range(n_p):
-                    ps[i].join()
-        except Exception:
-            println("Fail. 
%s" % traceback.format_exc()) - - -def preprocess_trainset(inp_root, sr, n_p, exp_dir): - pp = PreProcess(sr, exp_dir) - println("start preprocess") - println(sys.argv) - pp.pipeline_mp_inp_dir(inp_root, n_p) - println("end preprocess") - - -if __name__ == "__main__": - preprocess_trainset(inp_root, sr, n_p, exp_dir) diff --git a/spaces/Noobian/DuaGenerator/README.md b/spaces/Noobian/DuaGenerator/README.md deleted file mode 100644 index 811b9e028553cd6ce7024e37455270a726139861..0000000000000000000000000000000000000000 --- a/spaces/Noobian/DuaGenerator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DuaGenerator -emoji: 📊 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OAOA/DifFace/basicsr/metrics/metric_util.py b/spaces/OAOA/DifFace/basicsr/metrics/metric_util.py deleted file mode 100644 index 2a27c70a043beeeb59cfaf533079492293065448..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/metrics/metric_util.py +++ /dev/null @@ -1,45 +0,0 @@ -import numpy as np - -from basicsr.utils import bgr2ycbcr - - -def reorder_image(img, input_order='HWC'): - """Reorder images to 'HWC' order. - - If the input_order is (h, w), return (h, w, 1); - If the input_order is (c, h, w), return (h, w, c); - If the input_order is (h, w, c), return as it is. - - Args: - img (ndarray): Input image. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - If the input image shape is (h, w), input_order will not have - effects. Default: 'HWC'. - - Returns: - ndarray: reordered image. - """ - - if input_order not in ['HWC', 'CHW']: - raise ValueError(f"Wrong input_order {input_order}. Supported input_orders are 'HWC' and 'CHW'") - if len(img.shape) == 2: - img = img[..., None] - if input_order == 'CHW': - img = img.transpose(1, 2, 0) - return img - - -def to_y_channel(img): - """Change to Y channel of YCbCr. - - Args: - img (ndarray): Images with range [0, 255]. - - Returns: - (ndarray): Images with range [0, 255] (float type) without round. - """ - img = img.astype(np.float32) / 255. - if img.ndim == 3 and img.shape[2] == 3: - img = bgr2ycbcr(img, y_only=True) - img = img[..., None] - return img * 255. diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/laser/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/laser/README.md deleted file mode 100644 index 66acada04f58fa235cd312753f144f6f1e5f4a33..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/laser/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# LASER Language-Agnostic SEntence Representations - -LASER is a library to calculate and use multilingual sentence embeddings. - -You can find more information about LASER and how to use it on the official [LASER repository](https://github.com/facebookresearch/LASER). - -This folder contains source code for training LASER embeddings. - - -## Prepare data and configuration file - -Binarize your data with fairseq, as described [here](https://fairseq.readthedocs.io/en/latest/getting_started.html#data-pre-processing). 
- -Create a json config file with this format: -``` -{ - "src_vocab": "/path/to/spm.src.cvocab", - "tgt_vocab": "/path/to/spm.tgt.cvocab", - "train": [ - { - "type": "translation", - "id": 0, - "src": "/path/to/srclang1-tgtlang0/train.srclang1", - "tgt": "/path/to/srclang1-tgtlang0/train.tgtlang0" - }, - { - "type": "translation", - "id": 1, - "src": "/path/to/srclang1-tgtlang1/train.srclang1", - "tgt": "/path/to/srclang1-tgtlang1/train.tgtlang1" - }, - { - "type": "translation", - "id": 0, - "src": "/path/to/srclang2-tgtlang0/train.srclang2", - "tgt": "/path/to/srclang2-tgtlang0/train.tgtlang0" - }, - { - "type": "translation", - "id": 1, - "src": "/path/to/srclang2-tgtlang1/train.srclang2", - "tgt": "/path/to/srclang2-tgtlang1/train.tgtlang1" - }, - ... - ], - "valid": [ - { - "type": "translation", - "id": 0, - "src": "/unused", - "tgt": "/unused" - } - ] -} -``` -where paths are paths to binarized indexed fairseq dataset files. -`id` represents the target language id. - - -## Training Command Line Example - -``` -fairseq-train \ - /path/to/configfile_described_above.json \ - --user-dir examples/laser/laser_src \ - --log-interval 100 --log-format simple \ - --task laser --arch laser_lstm \ - --save-dir . \ - --optimizer adam \ - --lr 0.001 \ - --lr-scheduler inverse_sqrt \ - --clip-norm 5 \ - --warmup-updates 90000 \ - --update-freq 2 \ - --dropout 0.0 \ - --encoder-dropout-out 0.1 \ - --max-tokens 2000 \ - --max-epoch 50 \ - --encoder-bidirectional \ - --encoder-layers 5 \ - --encoder-hidden-size 512 \ - --decoder-layers 1 \ - --decoder-hidden-size 2048 \ - --encoder-embed-dim 320 \ - --decoder-embed-dim 320 \ - --decoder-lang-embed-dim 32 \ - --warmup-init-lr 0.001 \ - --disable-validation -``` - - -## Applications - -We showcase several applications of multilingual sentence embeddings -with code to reproduce our results (in the directory "tasks"). - -* [**Cross-lingual document classification**](https://github.com/facebookresearch/LASER/tree/master/tasks/mldoc) using the - [*MLDoc*](https://github.com/facebookresearch/MLDoc) corpus [2,6] -* [**WikiMatrix**](https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix) - Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia [7] -* [**Bitext mining**](https://github.com/facebookresearch/LASER/tree/master/tasks/bucc) using the - [*BUCC*](https://comparable.limsi.fr/bucc2018/bucc2018-task.html) corpus [3,5] -* [**Cross-lingual NLI**](https://github.com/facebookresearch/LASER/tree/master/tasks/xnli) - using the [*XNLI*](https://www.nyu.edu/projects/bowman/xnli/) corpus [4,5,6] -* [**Multilingual similarity search**](https://github.com/facebookresearch/LASER/tree/master/tasks/similarity) [1,6] -* [**Sentence embedding of text files**](https://github.com/facebookresearch/LASER/tree/master/tasks/embed) - example how to calculate sentence embeddings for arbitrary text files in any of the supported language. - -**For all tasks, we use exactly the same multilingual encoder, without any task specific optimization or fine-tuning.** - - - -## References - -[1] Holger Schwenk and Matthijs Douze, - [*Learning Joint Multilingual Sentence Representations with Neural Machine Translation*](https://aclanthology.info/papers/W17-2619/w17-2619), - ACL workshop on Representation Learning for NLP, 2017 - -[2] Holger Schwenk and Xian Li, - [*A Corpus for Multilingual Document Classification in Eight Languages*](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), - LREC, pages 3548-3551, 2018. 
- -[3] Holger Schwenk, - [*Filtering and Mining Parallel Data in a Joint Multilingual Space*](http://aclweb.org/anthology/P18-2037) - ACL, July 2018 - -[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, - [*XNLI: Cross-lingual Sentence Understanding through Inference*](https://aclweb.org/anthology/D18-1269), - EMNLP, 2018. - -[5] Mikel Artetxe and Holger Schwenk, - [*Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings*](https://arxiv.org/abs/1811.01136) - arXiv, Nov 3 2018. - -[6] Mikel Artetxe and Holger Schwenk, - [*Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond*](https://arxiv.org/abs/1812.10464) - arXiv, Dec 26 2018. - -[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, - [*WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia*](https://arxiv.org/abs/1907.05791) - arXiv, July 11 2019. - -[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin - [*CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB*](https://arxiv.org/abs/1911.04944) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/dedup_data.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/dedup_data.py deleted file mode 100644 index 58d9ed1cd17b3ba70772a6d9adab709785495fd9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/process_data/dedup_data.py +++ /dev/null @@ -1,91 +0,0 @@ -import argparse -from collections import namedtuple -import os - -DATADIR = "/path/to/train_data" -DEDUP_FROM_DIR = "/path/to/eval/data" -OUTPUT_DIR = "/path/to/output/data" - - -def main(args): - languages = set() - for language_directory in os.listdir(DATADIR): - if "_" in language_directory: - src, tgt = language_directory.split("_") - languages.add(LanguagePair(src=src, tgt=tgt)) - - data = existing_data() - train_languages = sorted(languages) - for language_pair in train_languages[args.start_index:args.start_index + args.size]: - print(language_pair) - dedup(language_pair, data) - - -LanguagePair = namedtuple("LanguagePair", ["src", "tgt"]) - - -def existing_data(): - data = set() - for file in os.listdir(DEDUP_FROM_DIR): - with open(os.path.join(DEDUP_FROM_DIR, file)) as f: - data |= set(f.readlines()) - return data - -def dedup(language_pair, data, verbose=True, output=True): - train_filenames = LanguagePair( - src=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.src}", - tgt=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.tgt}", - ) - - output_filenames = LanguagePair( - src=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.src}", - tgt=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.tgt}" - ) - - # If output exists, skip this pair. It has already been done. - if (os.path.exists(output_filenames.src) and - os.path.exists(output_filenames.tgt)): - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} already done.") - return - - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} ready, will check dups.") - - # If there is no output, no need to actually do the loop. 
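-    # (when the loop does run, a pair is kept only if neither its source nor
-    #  its target line occurs anywhere in the collected eval sentences)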
- if not output: - return - - if os.path.exists(train_filenames.src) and os.path.exists(train_filenames.tgt): - with open(train_filenames.src) as f: - train_source = f.readlines() - - with open(train_filenames.tgt) as f: - train_target = f.readlines() - - # do dedup - new_train_source = [] - new_train_target = [] - for i, train_line in enumerate(train_source): - if train_line not in data and train_target[i] not in data: - new_train_source.append(train_line) - new_train_target.append(train_target[i]) - - assert len(train_source) == len(train_target) - assert len(new_train_source) == len(new_train_target) - assert len(new_train_source) <= len(train_source) - - with open(output_filenames.src, "w") as o: - for line in new_train_source: - o.write(line) - - with open(output_filenames.tgt, "w") as o: - for line in new_train_target: - o.write(line) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("-s", "--start-index", required=True, type=int) - parser.add_argument("-n", "--size", required=True, type=int) - main(parser.parse_args()) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/README.md deleted file mode 100644 index ed4d5df52ccea01216276054a1f253d0d16c0409..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/roberta/README.md +++ /dev/null @@ -1,296 +0,0 @@ -# RoBERTa: A Robustly Optimized BERT Pretraining Approach - -https://arxiv.org/abs/1907.11692 - -## Introduction - -RoBERTa iterates on BERT's pretraining procedure, including training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data. See the associated paper for more details. - -### What's New: - -- December 2020: German model (GottBERT) is available: [GottBERT](https://github.com/pytorch/fairseq/tree/main/examples/gottbert). -- January 2020: Italian model (UmBERTo) is available from Musixmatch Research: [UmBERTo](https://github.com/musixmatchresearch/umberto). -- November 2019: French model (CamemBERT) is available: [CamemBERT](https://github.com/pytorch/fairseq/tree/main/examples/camembert). -- November 2019: Multilingual encoder (XLM-RoBERTa) is available: [XLM-R](https://github.com/pytorch/fairseq/tree/main/examples/xlmr). -- September 2019: TensorFlow and TPU support via the [transformers library](https://github.com/huggingface/transformers). -- August 2019: RoBERTa is now supported in the [pytorch-transformers library](https://github.com/huggingface/pytorch-transformers). -- August 2019: Added [tutorial for finetuning on WinoGrande](https://github.com/pytorch/fairseq/tree/main/examples/roberta/wsc#roberta-training-on-winogrande-dataset). -- August 2019: Added [tutorial for pretraining RoBERTa using your own data](README.pretraining.md). 
- -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`roberta.base` | RoBERTa using the BERT-base architecture | 125M | [roberta.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz) -`roberta.large` | RoBERTa using the BERT-large architecture | 355M | [roberta.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz) -`roberta.large.mnli` | `roberta.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | 355M | [roberta.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz) -`roberta.large.wsc` | `roberta.large` finetuned on [WSC](wsc/README.md) | 355M | [roberta.large.wsc.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz) - -## Results - -**[GLUE (Wang et al., 2019)](https://gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`roberta.base` | 87.6 | 92.8 | 91.9 | 78.7 | 94.8 | 90.2 | 63.6 | 91.2 -`roberta.large` | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 -`roberta.large.mnli` | 90.2 | - | - | - | - | - | - | - - -**[SuperGLUE (Wang et al., 2019)](https://super.gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | BoolQ | CB | COPA | MultiRC | RTE | WiC | WSC ----|---|---|---|---|---|---|--- -`roberta.large` | 86.9 | 98.2 | 94.0 | 85.7 | 89.5 | 75.6 | - -`roberta.large.wsc` | - | - | - | - | - | - | 91.3 - -**[SQuAD (Rajpurkar et al., 2018)](https://rajpurkar.github.io/SQuAD-explorer/)** -_(dev set, no additional data used)_ - -Model | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 ----|---|--- -`roberta.large` | 88.9/94.6 | 86.5/89.4 - -**[RACE (Lai et al., 2017)](http://www.qizhexie.com/data/RACE_leaderboard.html)** -_(test set)_ - -Model | Accuracy | Middle | High ----|---|---|--- -`roberta.large` | 83.2 | 86.5 | 81.3 - -**[HellaSwag (Zellers et al., 2019)](https://rowanzellers.com/hellaswag/)** -_(test set)_ - -Model | Overall | In-domain | Zero-shot | ActivityNet | WikiHow ----|---|---|---|---|--- -`roberta.large` | 85.2 | 87.3 | 83.1 | 74.6 | 90.9 - -**[Commonsense QA (Talmor et al., 2019)](https://www.tau-nlp.org/commonsenseqa)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` (single model) | 72.1 -`roberta.large` (ensemble) | 72.5 - -**[Winogrande (Sakaguchi et al., 2019)](https://arxiv.org/abs/1907.10641)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` | 78.1 - -**[XNLI (Conneau et al., 2018)](https://arxiv.org/abs/1809.05053)** -_(TRANSLATE-TEST)_ - -Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur ----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- -`roberta.large.mnli` | 91.3 | 82.91 | 84.27 | 81.24 | 81.74 | 83.13 | 78.28 | 76.79 | 76.64 | 74.17 | 74.05 | 77.5 | 70.9 | 66.65 | 66.81 - -## Example usage - -##### Load RoBERTa from torch.hub (PyTorch >= 1.1): -```python -import torch -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large') -roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load RoBERTa (for PyTorch 1.0 or custom models): -```python -# Download roberta.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz -tar -xzvf roberta.large.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained('/path/to/roberta.large', checkpoint_file='model.pt') 
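-# (the first argument is the directory extracted above; checkpoint_file names
-# the weights file inside it)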
-roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply Byte-Pair Encoding (BPE) to input text: -```python -tokens = roberta.encode('Hello world!') -assert tokens.tolist() == [0, 31414, 232, 328, 2] -roberta.decode(tokens) # 'Hello world!' -``` - -##### Extract features from RoBERTa: -```python -# Extract the last layer's features -last_layer_features = roberta.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 5, 1024]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = roberta.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 25 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -##### Use RoBERTa for sentence-pair classification tasks: -```python -# Download RoBERTa already finetuned for MNLI -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() # disable dropout for evaluation - -# Encode a pair of sentences and make a prediction -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.') -roberta.predict('mnli', tokens).argmax() # 0: contradiction - -# Encode another pair of sentences -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.') -roberta.predict('mnli', tokens).argmax() # 2: entailment -``` - -##### Register a new (randomly initialized) classification head: -```python -roberta.register_classification_head('new_task', num_classes=3) -logprobs = roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], grad_fn=) -``` - -##### Batched prediction: -```python -import torch -from fairseq.data.data_utils import collate_tokens - -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() - -batch_of_pairs = [ - ['Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.'], - ['Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.'], - ['potatoes are awesome.', 'I like to run.'], - ['Mars is very far from earth.', 'Mars is very close.'], -] - -batch = collate_tokens( - [roberta.encode(pair[0], pair[1]) for pair in batch_of_pairs], pad_idx=1 -) - -logprobs = roberta.predict('mnli', batch) -print(logprobs.argmax(dim=1)) -# tensor([0, 2, 1, 0]) -``` - -##### Using the GPU: -```python -roberta.cuda() -roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], device='cuda:0', grad_fn=) -``` - -## Advanced usage - -#### Filling masks: - -RoBERTa can be used to fill `` tokens in the input. 
Some examples from the -[Natural Questions dataset](https://ai.google.com/research/NaturalQuestions/): -```python -roberta.fill_mask('The first Star wars movie came out in ', topk=3) -# [('The first Star wars movie came out in 1977', 0.9504708051681519, ' 1977'), ('The first Star wars movie came out in 1978', 0.009986862540245056, ' 1978'), ('The first Star wars movie came out in 1979', 0.009574787691235542, ' 1979')] - -roberta.fill_mask('Vikram samvat calender is official in ', topk=3) -# [('Vikram samvat calender is official in India', 0.21878819167613983, ' India'), ('Vikram samvat calender is official in Delhi', 0.08547237515449524, ' Delhi'), ('Vikram samvat calender is official in Gujarat', 0.07556215673685074, ' Gujarat')] - -roberta.fill_mask(' is the common currency of the European Union', topk=3) -# [('Euro is the common currency of the European Union', 0.9456493854522705, 'Euro'), ('euro is the common currency of the European Union', 0.025748178362846375, 'euro'), ('€ is the common currency of the European Union', 0.011183084920048714, '€')] -``` - -#### Pronoun disambiguation (Winograd Schema Challenge): - -RoBERTa can be used to disambiguate pronouns. First install spaCy and download the English-language model: -```bash -pip install spacy -python -m spacy download en_core_web_lg -``` - -Next load the `roberta.large.wsc` model and call the `disambiguate_pronoun` -function. The pronoun should be surrounded by square brackets (`[]`) and the -query referent surrounded by underscores (`_`), or left blank to return the -predicted candidate text directly: -```python -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.wsc', user_dir='examples/roberta/wsc') -roberta.cuda() # use the GPU (optional) - -roberta.disambiguate_pronoun('The _trophy_ would not fit in the brown suitcase because [it] was too big.') -# True -roberta.disambiguate_pronoun('The trophy would not fit in the brown _suitcase_ because [it] was too big.') -# False - -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] feared violence.') -# 'The city councilmen' -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] advocated violence.') -# 'demonstrators' -``` - -See the [RoBERTA Winograd Schema Challenge (WSC) README](wsc/README.md) for more details on how to train this model. - -#### Extract features aligned to words: - -By default RoBERTa outputs one feature vector per BPE token. You can instead -realign the features to match [spaCy's word-level tokenization](https://spacy.io/usage/linguistic-features#tokenization) -with the `extract_features_aligned_to_words` method. This will compute a -weighted average of the BPE-level features for each word and expose them in -spaCy's `Token.vector` attribute: -```python -doc = roberta.extract_features_aligned_to_words('I said, "hello RoBERTa."') -assert len(doc) == 10 -for tok in doc: - print('{:10}{} (...)'.format(str(tok), tok.vector[:5])) -# tensor([-0.1316, -0.0386, -0.0832, -0.0477, 0.1943], grad_fn=) (...) -# I tensor([ 0.0559, 0.1541, -0.4832, 0.0880, 0.0120], grad_fn=) (...) -# said tensor([-0.1565, -0.0069, -0.8915, 0.0501, -0.0647], grad_fn=) (...) -# , tensor([-0.1318, -0.0387, -0.0834, -0.0477, 0.1944], grad_fn=) (...) -# " tensor([-0.0486, 0.1818, -0.3946, -0.0553, 0.0981], grad_fn=) (...) -# hello tensor([ 0.0079, 0.1799, -0.6204, -0.0777, -0.0923], grad_fn=) (...) -# RoBERTa tensor([-0.2339, -0.1184, -0.7343, -0.0492, 0.5829], grad_fn=) (...) -# . 
tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=) (...) -# " tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=) (...) -# tensor([-0.0930, -0.0392, -0.0821, 0.0158, 0.0649], grad_fn=) (...) -``` - -#### Evaluating the `roberta.large.mnli` model: - -Example python code snippet to evaluate accuracy on the MNLI `dev_matched` set. -```python -label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'} -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('glue_data/MNLI/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('mnli', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# Expected output: 0.9060 -``` - -## Finetuning - -- [Finetuning on GLUE](README.glue.md) -- [Finetuning on custom classification tasks (e.g., IMDB)](README.custom_classification.md) -- [Finetuning on Winograd Schema Challenge (WSC)](wsc/README.md) -- [Finetuning on Commonsense QA (CQA)](commonsense_qa/README.md) - -## Pretraining using your own data - -See the [tutorial for pretraining RoBERTa using your own data](README.pretraining.md). - -## Citation - -```bibtex -@article{liu2019roberta, - title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach}, - author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and - Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and - Luke Zettlemoyer and Veselin Stoyanov}, - journal={arXiv preprint arXiv:1907.11692}, - year = {2019}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/prepare-wmt14en2fr.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/prepare-wmt14en2fr.sh deleted file mode 100644 index 2ac97a5b76fab255449493488ed8bd67350a7bac..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation/prepare-wmt14en2fr.sh +++ /dev/null @@ -1,136 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=40000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://statmt.org/wmt13/training-parallel-un.tgz" - "http://statmt.org/wmt14/training-parallel-nc-v9.tgz" - "http://statmt.org/wmt10/training-giga-fren.tar" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-un.tgz" - "training-parallel-nc-v9.tgz" - "training-giga-fren.tar" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.fr-en" - "commoncrawl.fr-en" - "un/undoc.2000.fr-en" - "training/news-commentary-v9.fr-en" - "giga-fren.release2.fixed" -) - -if [ ! 
-d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=en -tgt=fr -lang=en-fr -prep=wmt14_en_fr -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done - -gunzip giga-fren.release2.fixed.*.gz -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." -for l in $src $tgt; do - awk '{if (NR%1333 == 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l - awk '{if (NR%1333 != 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l -done - -TRAIN=$tmp/train.fr-en -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." - python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py deleted file mode 100644 index 6d7dd625e09451be671908578f93148f371f53cd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .unpaired_audio_text import UnpairedAudioText - - -__all__ = [ - "UnpairedAudioText", -] diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/install_dependecies.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/install_dependecies.sh deleted file mode 100644 index 82a1054745264a56fbec4a8eb593884f8a42bd08..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/install_dependecies.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -CWD=`pwd` -INSTALL_PATH=$CWD/tokenizers/thirdparty - -MOSES=$INSTALL_PATH/mosesdecoder -if [ ! 
-d $MOSES ]; then - echo 'Cloning Moses github repository (for tokenization scripts)...' - git clone https://github.com/moses-smt/mosesdecoder.git $MOSES - cd $MOSES - # To deal with differences in handling ' vs " - git checkout 03578921cc1a03402 - cd - -fi - -WMT16_SCRIPTS=$INSTALL_PATH/wmt16-scripts -if [ ! -d $WMT16_SCRIPTS ]; then - echo 'Cloning Romanian tokenization scripts' - git clone https://github.com/rsennrich/wmt16-scripts.git $WMT16_SCRIPTS -fi - -KYTEA=$INSTALL_PATH/kytea -if [ ! -f $KYTEA/bin/kytea ]; then - git clone https://github.com/neubig/kytea.git $KYTEA - cd $KYTEA - autoreconf -i - ./configure --prefix=`pwd` - make - make install - cd .. -fi - -export MECAB=$INSTALL_PATH/mecab-0.996-ko-0.9.2 -if [ ! -f $MECAB/bin/mecab ]; then - cd $INSTALL_PATH - curl -LO https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz - tar zxfv mecab-0.996-ko-0.9.2.tar.gz - cd mecab-0.996-ko-0.9.2/ - ./configure --prefix=`pwd` - make - make install - - cd .. - curl -LO https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz - tar zxfv mecab-ko-dic-2.1.1-20180720.tar.gz - cd mecab-ko-dic-2.1.1-20180720/ - ./autogen.sh - ./configure --prefix=`pwd` --with-dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic --with-mecab-config=$MECAB/bin/mecab-config - make - sh -c 'echo "dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic" > $MECAB/etc/mecabrc' - make install - cd $CWD -fi - -INDIC_RESOURCES_PATH=$INSTALL_PATH/indic_nlp_resources -if [ ! -d $INDIC_RESOURCES_PATH ]; then - echo 'Cloning indic_nlp_resources' - git clone https://github.com/anoopkunchukuttan/indic_nlp_resources.git $INDIC_RESOURCES_PATH -fi - - -if [ ! -f $INSTALL_PATH/seg_my.py ]; then - cd $INSTALL_PATH - wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip - unzip wat2020.my-en.zip - # switch to python3 - cat wat2020.my-en/myseg.py |sed 's/^sys.std/###sys.std/g' | sed 's/### sys/sys/g' | sed 's/unichr/chr/g' > seg_my.py - cd $CWD -fi - - -pip install pythainlp sacrebleu indic-nlp-library - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_dataclass_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_dataclass_utils.py deleted file mode 100644 index 45fc391a979feb198b0a4ecea69c31f1340e87d2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_dataclass_utils.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
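-
-# These tests exercise gen_parser_from_dataclass: nested FairseqDataclass
-# fields are flattened into prefixed argparse flags, e.g. the encoder's
-# arch.num_layers field below surfaces as --encoder-arch-num-layers.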
- -import unittest -from argparse import ArgumentParser -from dataclasses import dataclass, field - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import gen_parser_from_dataclass - - -@dataclass -class A(FairseqDataclass): - data: str = field(default="test", metadata={"help": "the data input"}) - num_layers: int = field(default=200, metadata={"help": "more layers is better?"}) - - -@dataclass -class B(FairseqDataclass): - bar: A = field(default=A()) - foo: int = field(default=0, metadata={"help": "not a bar"}) - - -@dataclass -class D(FairseqDataclass): - arch: A = field(default=A()) - foo: int = field(default=0, metadata={"help": "not a bar"}) - - -@dataclass -class C(FairseqDataclass): - data: str = field(default="test", metadata={"help": "root level data input"}) - encoder: D = field(default=D()) - decoder: A = field(default=A()) - lr: int = field(default=0, metadata={"help": "learning rate"}) - - -class TestDataclassUtils(unittest.TestCase): - def test_argparse_convert_basic(self): - parser = ArgumentParser() - gen_parser_from_dataclass(parser, A(), True) - args = parser.parse_args(["--num-layers", '10', "the/data/path"]) - self.assertEqual(args.num_layers, 10) - self.assertEqual(args.data, "the/data/path") - - def test_argparse_recursive(self): - parser = ArgumentParser() - gen_parser_from_dataclass(parser, B(), True) - args = parser.parse_args(["--num-layers", "10", "--foo", "10", "the/data/path"]) - self.assertEqual(args.num_layers, 10) - self.assertEqual(args.foo, 10) - self.assertEqual(args.data, "the/data/path") - - def test_argparse_recursive_prefixing(self): - self.maxDiff = None - parser = ArgumentParser() - gen_parser_from_dataclass(parser, C(), True, "") - args = parser.parse_args( - [ - "--encoder-arch-data", - "ENCODER_ARCH_DATA", - "--encoder-arch-num-layers", - "10", - "--encoder-foo", - "10", - "--decoder-data", - "DECODER_DATA", - "--decoder-num-layers", - "10", - "--lr", - "10", - "the/data/path", - ] - ) - self.assertEqual(args.encoder_arch_data, "ENCODER_ARCH_DATA") - self.assertEqual(args.encoder_arch_num_layers, 10) - self.assertEqual(args.encoder_foo, 10) - self.assertEqual(args.decoder_data, "DECODER_DATA") - self.assertEqual(args.decoder_num_layers, 10) - self.assertEqual(args.lr, 10) - self.assertEqual(args.data, "the/data/path") - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py deleted file mode 100644 index 40fa9aecdf9108e095feb3661236453c0f7ed7c4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import argparse -import pandas as pd -import sys - - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - - -def check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train={}): - # src, tgt = direction.split('-') - print(f'check training data for {direction} in {src_path} and {tgt_path}') - size = 0 - overlapped_size_counted_dup = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size, overlapped_size_counted_dup - - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - overlapped_size_counted_dup += 1 - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - overlapped_size_counted_dup += 1 - print(f'{direction}: size={size}, overlapped={overlapped_size_counted_dup}') - return mess_up_train, size, overlapped_size_counted_dup - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - # raw_data = '~chau/data-bin/MineBART/multilingual_mined_100M/en_XX/et_EE-en_XX/all.{en_XX, et_EE}' - print(f'checking training data againsts # {len(all_test_data)} sentences') - print(f'example test data: ', [s for i, s in enumerate(all_test_data.keys()) if i < 10]) - for direction in directions: - src, tgt = direction.split('-') - path = f'{raw_data}/en_XX/{direction}/all' - src_path = f'{path}.{src}' - tgt_path = f'{path}.{tgt}' - print(f'checking {src_path} {tgt_path}') - _, size, overlapped_size_counted_dup = check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train) - data_sizes[direction] = (size, overlapped_size_counted_dup) - return mess_up_train, data_sizes - - - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--folder", type=str, required=True, - help="the data folder ") - parser.add_argument("--test-data", type=str, required=True, - help="the test data folder ") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - directions = args.directions.split(',') - directions = sorted(set(directions)) - - results = [] - # print(f'checking where {args.split} split data are in training') - # print(f'direction\tcommon_count\tsrc common\ttgt common\tfrom_size\tto_size') - raw_data = args.folder - all_test_data, test_data = get_all_test_data(args.test_data, directions, split='test') - mess_up_train, data_sizes = check_train_all(raw_data, directions, 
all_test_data) - print(data_sizes) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/eval_f0.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/eval_f0.py deleted file mode 100644 index df721d683113b44957149cfc3cddaba36520a22c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/eval_f0.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Signal processing-based evaluation using waveforms -""" -import numpy as np -import os.path as op - -import torchaudio -import tqdm -from tabulate import tabulate - -from examples.speech_synthesis.utils import ( - gross_pitch_error, voicing_decision_error, f0_frame_error -) -from examples.speech_synthesis.evaluation.eval_sp import load_eval_spec - - -def difference_function(x, n, tau_max): - """ - Compute difference function of data x. This solution is implemented directly - with Numpy fft. - - - :param x: audio data - :param n: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - x = np.array(x, np.float64) - w = x.size - tau_max = min(tau_max, w) - x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size) - fc = np.fft.rfft(x, size_pad) - conv = np.fft.irfft(fc * fc.conjugate())[:tau_max] - return x_cumsum[w:w - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - \ - 2 * conv - - -def cumulative_mean_normalized_difference_function(df, n): - """ - Compute cumulative mean normalized difference function (CMND). - - :param df: Difference function - :param n: length of data - :return: cumulative mean normalized difference function - :rtype: list - """ - - # scipy method - cmn_df = df[1:] * range(1, n) / np.cumsum(df[1:]).astype(float) - return np.insert(cmn_df, 0, 1) - - -def get_pitch(cmdf, tau_min, tau_max, harmo_th=0.1): - """ - Return fundamental period of a frame based on CMND function. - - :param cmdf: Cumulative Mean Normalized Difference function - :param tau_min: minimum period for speech - :param tau_max: maximum period for speech - :param harmo_th: harmonicity threshold to determine if it is necessary to - compute pitch frequency - :return: fundamental period if there is values under threshold, 0 otherwise - :rtype: float - """ - tau = tau_min - while tau < tau_max: - if cmdf[tau] < harmo_th: - while tau + 1 < tau_max and cmdf[tau + 1] < cmdf[tau]: - tau += 1 - return tau - tau += 1 - - return 0 # if unvoiced - - -def compute_yin(sig, sr, w_len=512, w_step=256, f0_min=100, f0_max=500, - harmo_thresh=0.1): - """ - - Compute the Yin Algorithm. Return fundamental frequency and harmonic rate. - - https://github.com/NVIDIA/mellotron adaption of - https://github.com/patriceguyot/Yin - - :param sig: Audio signal (list of float) - :param sr: sampling rate (int) - :param w_len: size of the analysis window (samples) - :param w_step: size of the lag between two consecutives windows (samples) - :param f0_min: Minimum fundamental frequency that can be detected (hertz) - :param f0_max: Maximum fundamental frequency that can be detected (hertz) - :param harmo_thresh: Threshold of detection. 
The yalgorithmù return the - first minimum of the CMND function below this threshold. - - :returns: - - * pitches: list of fundamental frequencies, - * harmonic_rates: list of harmonic rate values for each fundamental - frequency value (= confidence value) - * argmins: minimums of the Cumulative Mean Normalized DifferenceFunction - * times: list of time of each estimation - :rtype: tuple - """ - - tau_min = int(sr / f0_max) - tau_max = int(sr / f0_min) - - # time values for each analysis window - time_scale = range(0, len(sig) - w_len, w_step) - times = [t/float(sr) for t in time_scale] - frames = [sig[t:t + w_len] for t in time_scale] - - pitches = [0.0] * len(time_scale) - harmonic_rates = [0.0] * len(time_scale) - argmins = [0.0] * len(time_scale) - - for i, frame in enumerate(frames): - # Compute YIN - df = difference_function(frame, w_len, tau_max) - cm_df = cumulative_mean_normalized_difference_function(df, tau_max) - p = get_pitch(cm_df, tau_min, tau_max, harmo_thresh) - - # Get results - if np.argmin(cm_df) > tau_min: - argmins[i] = float(sr / np.argmin(cm_df)) - if p != 0: # A pitch was found - pitches[i] = float(sr / p) - harmonic_rates[i] = cm_df[p] - else: # No pitch, but we compute a value of the harmonic rate - harmonic_rates[i] = min(cm_df) - - return pitches, harmonic_rates, argmins, times - - -def extract_f0(samples): - f0_samples = [] - for sample in tqdm.tqdm(samples): - if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]): - f0_samples.append(None) - continue - - # assume single channel - yref, sr = torchaudio.load(sample["ref"]) - ysyn, _sr = torchaudio.load(sample["syn"]) - yref, ysyn = yref[0], ysyn[0] - assert sr == _sr, f"{sr} != {_sr}" - - yref_f0 = compute_yin(yref, sr) - ysyn_f0 = compute_yin(ysyn, sr) - - f0_samples += [ - { - "ref": yref_f0, - "syn": ysyn_f0 - } - ] - - return f0_samples - - -def eval_f0_error(samples, distortion_fn): - results = [] - for sample in tqdm.tqdm(samples): - if sample is None: - results.append(None) - continue - # assume single channel - yref_f, _, _, yref_t = sample["ref"] - ysyn_f, _, _, ysyn_t = sample["syn"] - - yref_f = np.array(yref_f) - yref_t = np.array(yref_t) - ysyn_f = np.array(ysyn_f) - ysyn_t = np.array(ysyn_t) - - distortion = distortion_fn(yref_t, yref_f, ysyn_t, ysyn_f) - results.append((distortion.item(), - len(yref_f), - len(ysyn_f) - )) - return results - - -def eval_gross_pitch_error(samples): - return eval_f0_error(samples, gross_pitch_error) - - -def eval_voicing_decision_error(samples): - return eval_f0_error(samples, voicing_decision_error) - - -def eval_f0_frame_error(samples): - return eval_f0_error(samples, f0_frame_error) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - res = { - "nutt": len(results), - "error": results[:, 0].mean(), - "std": results[:, 0].std(), - "dur_ref": int(results[:, 1].sum()), - "dur_syn": int(results[:, 2].sum()), - } - print(tabulate([res.values()], res.keys(), floatfmt=".4f")) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_f0, gpe, vde, ffe, show_bin): - samples = load_eval_spec(eval_f0) - if gpe or vde 
or ffe: - f0_samples = extract_f0(samples) - - if gpe: - print("===== Evaluate Gross Pitch Error =====") - results = eval_gross_pitch_error(f0_samples) - print_results(results, show_bin) - if vde: - print("===== Evaluate Voicing Decision Error =====") - results = eval_voicing_decision_error(f0_samples) - print_results(results, show_bin) - if ffe: - print("===== Evaluate F0 Frame Error =====") - results = eval_f0_frame_error(f0_samples) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("eval_f0") - parser.add_argument("--gpe", action="store_true") - parser.add_argument("--vde", action="store_true") - parser.add_argument("--ffe", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_f0, args.gpe, args.vde, args.ffe, args.show_bin) diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/segments_test.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/segments_test.py deleted file mode 100644 index d829f1c77f74b3c96513fe4965d532cf2d1dceb4..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/segments_test.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys -import unittest - -sys.path.append('../whisper-webui') - -from src.segments import merge_timestamps - -class TestSegments(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestSegments, self).__init__(*args, **kwargs) - - def test_merge_segments(self): - segments = [ - {'start': 10.0, 'end': 20.0}, - {'start': 22.0, 'end': 27.0}, - {'start': 31.0, 'end': 35.0}, - {'start': 45.0, 'end': 60.0}, - {'start': 61.0, 'end': 65.0}, - {'start': 68.0, 'end': 98.0}, - {'start': 100.0, 'end': 102.0}, - {'start': 110.0, 'end': 112.0} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 9.0, 'end': 36.0}, - {'start': 44.0, 'end': 66.0}, - {'start': 67.0, 'end': 99.0}, - {'start': 99.0, 'end': 103.0}, - {'start': 109.0, 'end': 113.0} - ]) - - def test_overlap_next(self): - segments = [ - {'start': 5.0, 'end': 39.182}, - {'start': 39.986, 'end': 40.814} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 4.0, 'end': 39.584}, - {'start': 39.584, 'end': 41.814} - ]) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/Omnibus/game-test/js_app.py b/spaces/Omnibus/game-test/js_app.py deleted file mode 100644 index fe755be91578fe04cdf9de04fd365f04132b1b64..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/game-test/js_app.py +++ /dev/null @@ -1,225 +0,0 @@ -import gradio as gr - -load_js=""" - -async () => { - const script = document.createElement("script"); - script.onload = () => console.log("module loaded") ; - script.type="module"; - script.src = "https://cdn.jsdelivr.net/npm/phaser@3.11.0/dist/phaser.js"; - document.head.appendChild(script) -}""" -game_js=""" - -async () => { - // set testFn() function on globalThis, so you html onlclick can access it - globalThis.testFn = () => { - - var config = { - type: Phaser.AUTO, - width: 800, - height: 600, - physics: { - default: 'arcade', - arcade: { - gravity: { y: 300 }, - debug: false - } - }, - scene: { - preload: preload, - create: create, - update: update - } -}; - -var player; -var stars; -var bombs; -var 
platforms; -var cursors; -var score = 0; -var gameOver = false; -var scoreText; - -const game = new Phaser.Game(config); -//const menu = document.querySelector('#demo'); - -//menu.appendChild = game -const test_div = document.createElement("div"); -test_div.innerHTML= game; -document.body.appendChild(test_div); - -function preload () -{ - this.load.image('sky', 'https://huggingface.co/spaces/Omnibus/game-test/resolve/main/assets/sky.png'); - this.load.image('ground', 'https://huggingface.co/spaces/Omnibus/game-test/resolve/main/assets/platform.png'); - this.load.image('star', 'https://huggingface.co/spaces/Omnibus/game-test/resolve/main/assets/star.png'); - this.load.image('bomb', 'https://huggingface.co/spaces/Omnibus/game-test/resolve/main/assets/bomb.png'); - this.load.spritesheet('dude', 'https://huggingface.co/spaces/Omnibus/game-test/resolve/main/assets/dude.png', { frameWidth: 32, frameHeight: 48 }); -} - -function create () -{ - // A simple background for our game - this.add.image(400, 300, 'sky'); - - // The platforms group contains the ground and the 2 ledges we can jump on - platforms = this.physics.add.staticGroup(); - - // Here we create the ground. - // Scale it to fit the width of the game (the original sprite is 400x32 in size) - platforms.create(400, 568, 'ground').setScale(2).refreshBody(); - - // Now let's create some ledges - platforms.create(600, 400, 'ground'); - platforms.create(50, 250, 'ground'); - platforms.create(750, 220, 'ground'); - - // The player and its settings - player = this.physics.add.sprite(100, 450, 'dude'); - - // Player physics properties. Give the little guy a slight bounce. - player.setBounce(0.2); - player.setCollideWorldBounds(true); - - // Our player animations, turning, walking left and walking right. 
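-    // (frames 0-3 of the 'dude' spritesheet are the left-walk cycle, frame 4
-    // is the standing pose used by 'turn', and frames 5-8 walk right)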
- this.anims.create({ - key: 'left', - frames: this.anims.generateFrameNumbers('dude', { start: 0, end: 3 }), - frameRate: 10, - repeat: -1 - }); - - this.anims.create({ - key: 'turn', - frames: [ { key: 'dude', frame: 4 } ], - frameRate: 20 - }); - - this.anims.create({ - key: 'right', - frames: this.anims.generateFrameNumbers('dude', { start: 5, end: 8 }), - frameRate: 10, - repeat: -1 - }); - - // Input Events - cursors = this.input.keyboard.createCursorKeys(); - - // Some stars to collect, 12 in total, evenly spaced 70 pixels apart along the x axis - stars = this.physics.add.group({ - key: 'star', - repeat: 11, - setXY: { x: 12, y: 0, stepX: 70 } - }); - - stars.children.iterate(function (child) { - - // Give each star a slightly different bounce - child.setBounceY(Phaser.Math.FloatBetween(0.4, 0.8)); - - }); - - bombs = this.physics.add.group(); - - // The score - scoreText = this.add.text(16, 16, 'score: 0', { fontSize: '32px', fill: '#000' }); - - // Collide the player and the stars with the platforms - this.physics.add.collider(player, platforms); - this.physics.add.collider(stars, platforms); - this.physics.add.collider(bombs, platforms); - - // Checks to see if the player overlaps with any of the stars, if he does call the collectStar function - this.physics.add.overlap(player, stars, collectStar, null, this); - - this.physics.add.collider(player, bombs, hitBomb, null, this); -} - -function update () -{ - if (gameOver) - { - return; - } - - if (cursors.left.isDown) - { - player.setVelocityX(-160); - - player.anims.play('left', true); - } - else if (cursors.right.isDown) - { - player.setVelocityX(160); - - player.anims.play('right', true); - } - else - { - player.setVelocityX(0); - - player.anims.play('turn'); - } - - if (cursors.up.isDown && player.body.touching.down) - { - player.setVelocityY(-330); - } -} - -function collectStar (player, star) -{ - star.disableBody(true, true); - - // Add and update the score - score += 10; - scoreText.setText('Score: ' + score); - - if (stars.countActive(true) === 0) - { - // A new batch of stars to collect - stars.children.iterate(function (child) { - - child.enableBody(true, child.x, 0, true, true); - - }); - - var x = (player.x < 400) ? Phaser.Math.Between(400, 800) : Phaser.Math.Between(0, 400); - - var bomb = bombs.create(x, 16, 'bomb'); - bomb.setBounce(1); - bomb.setCollideWorldBounds(true); - bomb.setVelocity(Phaser.Math.Between(-200, 200), 20); - bomb.allowGravity = false; - - } -} - -function hitBomb (player, bomb) -{ - this.physics.pause(); - - player.setTint(0xff0000); - - player.anims.play('turn'); - - gameOver = true; -} - - } -} - -""" - -with gr.Blocks() as app: - gr.HTML(""" -
          - -
          -""") - - app.load(None,None,None,_js=load_js) - app.load(None,None,None,_js=game_js) -app.launch() \ No newline at end of file diff --git a/spaces/OpenGVLab/DragGAN/stylegan2/op/upfirdn2d.py b/spaces/OpenGVLab/DragGAN/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 2da48c831d48ce0a66fa3943e6e0123ec28ba428..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/DragGAN/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,232 +0,0 @@ -from collections import abc -import os - -import torch -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load -import warnings - -module_path = os.path.dirname(os.path.abspath(__file__)) - -try: - upfirdn2d_op = load( - "upfirdn2d", - sources=[ - os.path.join(module_path, "upfirdn2d.cpp"), - os.path.join(module_path, "upfirdn2d_kernel.cu"), - ], - ) -except: - warnings.warn( - f"(This is not error) Switch to native implementation" - ) - - upfirdn2d_op = None - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = 
out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = None - - if ctx.needs_input_grad[0]: - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - if input.device.type == "cpu": - out = _upfirdn2d_native(input, kernel, *up, *down, *pad) - - else: - out = UpFirDn2d.apply(input, kernel, up, down, pad) - - return out - - -def upfirdn2d_native(input, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - out = _upfirdn2d_native(input, kernel, *up, *down, *pad) - - return out - - -def _upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/eval_sampler.py b/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/eval_sampler.py deleted file mode 100644 index 7cffdbc969e3f5d5f18f589c29f70abd240f3986..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/eval_sampler.py +++ /dev/null @@ -1,20 +0,0 @@ -import os -import random - -val_files_path = os.path.abspath('.') + '/places_standard_dataset/original/val/' -list_of_random_val_files = os.path.abspath('.') + '/places_standard_dataset/original/eval_random_files.txt' -val_files = [val_files_path + image for image in os.listdir(val_files_path)] - -print(f'Sampling 30000 images out of {len(val_files)} images in {val_files_path}' + \ - f'and put their paths to {list_of_random_val_files}') - -print('In our paper we evaluate trained models on these 30k sampled (mask,image) pairs in our paper (check Sup. 
mat.)') - -random.shuffle(val_files) -val_files_random = val_files[0:30000] - -with open(list_of_random_val_files, 'w') as fw: - for filename in val_files_random: - fw.write(filename+'\n') -print('...done') - diff --git a/spaces/OpenMind-AI/starchat-playground/dialogues.py b/spaces/OpenMind-AI/starchat-playground/dialogues.py deleted file mode 100644 index 634c4a1d4f515f21b919cbf5d45440fb587d748f..0000000000000000000000000000000000000000 --- a/spaces/OpenMind-AI/starchat-playground/dialogues.py +++ /dev/null @@ -1,241 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json -import os -from dataclasses import asdict, dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Type, TypeVar, Union - -from huggingface_hub import ModelHubMixin, hf_hub_download - -# Generic variable that is either ModelHubMixin or a subclass thereof -T = TypeVar("T", bound="ModelHubMixin") - -TEMPLATE_FILENAME = "dialogue_template.json" -IGNORE_INDEX = -100 - - -@dataclass -class DialogueTemplate(ModelHubMixin): - """Converts all turns of a dialogue between a user and assistant to a standardized format. - - Adapted from OpenAI's ChatML (https://github.com/openai/openai-python/blob/main/chatml.md) and Vicuna (https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) - """ - - system: str - messages: List[Dict[str, str]] = None - system_token: str = "<|system|>" - user_token: str = "<|user|>" - assistant_token: str = "<|assistant|>" - end_token: str = "<|end|>" - - def get_training_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - return prompt - - def get_inference_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - prompt += self.assistant_token - return prompt - - def get_dialogue(self): - """Helper function to format the messages as an easy-to-read dialogue.""" - prompt = "" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += "\n\nHuman: " + message["content"] - else: - prompt += "\n\nAssistant: " + message["content"] - return prompt - - def get_special_tokens(self) -> List[str]: - return [self.system_token, 
self.user_token, self.assistant_token, self.end_token] - - def copy(self): - return DialogueTemplate( - system=self.system, - messages=self.messages, - system_token=self.system_token, - user_token=self.user_token, - assistant_token=self.assistant_token, - end_token=self.end_token, - ) - - def to_dict(self) -> Dict[str, Any]: - return {k: v for k, v in asdict(self).items()} - - @classmethod - def from_dict(cls, data): - return DialogueTemplate( - system=data["system"] if "system" in data else "", - messages=data["messages"] if "messages" in data else None, - system_token=data["system_token"] if "system_token" in data else "<|system|>", - user_token=data["user_token"] if "user_token" in data else "<|user|>", - assistant_token=data["assistant_token"] if "assistant_token" in data else "<|assistant|>", - end_token=data["end_token"] if "end_token" in data else "<|end|>", - ) - - def _save_pretrained(self, save_directory: Union[str, Path]) -> None: - save_directory = Path(save_directory) - save_directory.mkdir(exist_ok=True) - with open(save_directory / "dialogue_template.json", "w") as f: - json.dump(self.to_dict(), f, indent=2) - - @classmethod - def _from_pretrained( - cls: Type[T], - *, - model_id: str, - revision: Optional[str], - cache_dir: Optional[Union[str, Path]], - force_download: bool, - proxies: Optional[Dict], - resume_download: bool, - local_files_only: bool, - token: Optional[Union[str, bool]], - **model_kwargs, - ) -> T: - """Loads the dialogue template from a local directory or the Huggingface Hub. - - Args: - model_id (`str`): - ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`). - revision (`str`, *optional*): - Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the - latest commit on `main` branch. - force_download (`bool`, *optional*, defaults to `False`): - Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding - the existing cache. - resume_download (`bool`, *optional*, defaults to `False`): - Whether to delete incompletely received files. Will attempt to resume the download if such a file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`). - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. By default, it will use the token - cached when running `huggingface-cli login`. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the local cached file if it exists. - model_kwargs: - Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method. 
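        Example (illustrative only; the repo id below is hypothetical):

            template = DialogueTemplate.from_pretrained("some-org/some-chat-model")
            template.messages = [{"role": "user", "content": "Hello!"}]
            prompt = template.get_inference_prompt()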
- """ - if os.path.isdir(model_id): # Can either be a local directory - print("Loading dialogue template from local directory") - template_file = os.path.join(model_id, TEMPLATE_FILENAME) - else: # Or a template on the Hub - template_file = hf_hub_download( # Download from the hub, passing same input args - repo_id=model_id, - filename=TEMPLATE_FILENAME, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - ) - - # Load template - with open(template_file, "r") as f: - data = json.load(f) - return cls.from_dict(data=data) - - -# A shortened version of the system message in Anthropic's HHH prompt: https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt -default_template = DialogueTemplate( - system="Below is a dialogue between a human user and an AI assistant. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed.", -) - -# OpenAI and OpenAssistant train on few to no system messages. -# TODO: consider defining this as the `default` template -no_system_template = DialogueTemplate( - system="", -) - -alpaca_template = DialogueTemplate( - system="Below is an instruction that describes a task. Write a response that appropriately completes the request.", - user_token="### Instruction:", - assistant_token="### Response:", -) - -SUPPORTED_DIALOGUE_TEMPLATES = { - "default": default_template, - "no_system": no_system_template, - "alpaca": alpaca_template, -} - - -def get_dialogue_template(template: str) -> DialogueTemplate: - if template not in SUPPORTED_DIALOGUE_TEMPLATES.keys(): - raise ValueError(f"Template {template} is not supported!") - return SUPPORTED_DIALOGUE_TEMPLATES[template].copy() - - -def prepare_dialogue(example, dialogue_template, is_train=True): - """Format example to single- or multi-turn dialogue.""" - # TODO: make this simpler by just ensuring every dataset has a messages column - if "messages" in example.keys() and example["messages"] is not None: - dialogue_template.messages = example["messages"] - elif all(k in example.keys() for k in ("prompt", "completion")): - # Construct single-turn dialogue from prompt and completion - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - {"role": "assistant", "content": example["completion"]}, - ] - elif "prompt" in example.keys(): - # Construct single-turn dialogue from prompt (inference only) - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - ] - else: - raise ValueError( - f"Could not format example as dialogue! 
Require either `messages` or `[prompt, completion]` or `[prompt]` keys but found {list(example.keys())}" - ) - if is_train: - example["text"] = dialogue_template.get_training_prompt() - else: - example["text"] = dialogue_template.get_inference_prompt() - return example - - -def mask_user_labels(tokenizer, dialogue_template, labels): - """Masks the user turns of a dialogue from the loss""" - user_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.user_token) - assistant_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.assistant_token) - for idx, label_id in enumerate(labels): - if label_id == user_token_id: - current_idx = idx - while labels[current_idx] != assistant_token_id and current_idx < len(labels): - labels[current_idx] = IGNORE_INDEX - current_idx += 1 diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/acquisitions.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/acquisitions.py deleted file mode 100644 index bfb966e6bcbc37a914a4ae22f20a6dabc28ac36d..0000000000000000000000000000000000000000 --- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/acquisitions.py +++ /dev/null @@ -1,36 +0,0 @@ -from sklearn.gaussian_process import GaussianProcessRegressor -from scipy.stats import norm -from typing import * - -def PI_acquisition(X: List, X_prime: List, model: GaussianProcessRegressor, maximize: bool = True): - """Acquisition function for bayesian optimization using probability of improvement - - Args: - X (List): A list containing the input data - X_prime (List): A list containing the generate samples - model (GaussianProcessRegressor): The gaussian model to use - maximize (bool, optional): A boolean value indicating the optimization objective. Defaults to True. 
- - Returns: - List: A list containing the probabilities - """ - - # let us predict the means for the input data - mu = model.predict(X) - - # let us calculate the means and standard deviation for the random samples - mu_e, std_e = model.predict(X_prime, return_std=True) - - if not maximize: - - mu = -mu - - mu_e = -mu_e - - # let us take the best mean - mu_best = max(mu) - - # let us calculate the probability of improvement - probs = norm.cdf((mu_e - mu_best) / std_e) - - return probs diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py deleted file mode 100644 index c60f62a7cdf3f5c5096a7a7e725e8268fddcb057..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/ocrnet_hr18.py +++ /dev/null @@ -1,68 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=[ - dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - channels=sum([18, 36, 72, 144]), - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - channels=512, - ocr_channels=256, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/bsrgan.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale 
factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - 
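    # NB: the block below builds SIGMA = Q @ diag(lambda_1, lambda_2) @ Q.T, the 2x2
    # covariance of an axis-aligned Gaussian with variances lambda_1 and lambda_2
    # rotated by theta, so the sampled blur kernel is a randomly oriented, randomly
    # elongated anisotropic Gaussian; `noise` then perturbs the kernel
    # multiplicatively before the final normalization.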
noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = 
bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
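    # NB: below, U is a random 3x3 orthonormal matrix (scipy.linalg.orth) and D a
    # random diagonal, so conv = U.T @ D @ U is a random positive semi-definite
    # channel covariance; sampling from N(0, L**2 * conv) yields Gaussian noise that
    # is correlated across the RGB channels, unlike the independent per-channel
    # noise produced by the two branches above.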
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
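    # Caveat: the slice above indexes the first (height) axis with w1 and the second
    # (width) axis with h1, so for non-square inputs the mod-crop extents are
    # swapped; the upstream BSRGAN reference code appears to share this quirk.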
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - - return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_sr_ldm_laion.sh b/spaces/PSLD/PSLD/stable-diffusion/run/inverse_sr_ldm_laion.sh deleted file mode 100644 index 3e247f443679407a77b5868028a60a4a7e5b0816..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_sr_ldm_laion.sh +++ /dev/null @@ -1,13 +0,0 @@ -export CUDA_VISIBLE_DEVICES='1' -python scripts/inverse.py \ - --file_id='00478.png' \ - --task_config='configs/super_resolution_config.yaml' \ - --inpainting=0 \ - --general_inverse=1 \ - --gamma=1e-1 \ - --omega=1e-1 \ - --W=256 \ - --H=256 \ - 
--scale=5.0 \ - --laion400m \ - --outdir="outputs/psld-ldm-laion400m-sr" diff --git a/spaces/Pauitbid/meta-llama-Llama-2-7b-hfx/README.md b/spaces/Pauitbid/meta-llama-Llama-2-7b-hfx/README.md deleted file mode 100644 index 5be4d81f8fc4443e6c0b2bc49066d16cfb2f43a7..0000000000000000000000000000000000000000 --- a/spaces/Pauitbid/meta-llama-Llama-2-7b-hfx/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Meta Llama Llama 2 7b Hfx -emoji: 🔥 -colorFrom: purple -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/utils.py b/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/utils.py deleted file mode 100644 index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000 --- a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/lvis.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/lvis.py deleted file mode 100644 index bdb7fa7ed321525932fde41ebfa5b8a17477ac83..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/lvis.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import json -import os -import time -from collections import defaultdict - -import pycocotools.mask as mask_utils -import torchvision -from PIL import Image - -# from .coco import ConvertCocoPolysToMask, make_coco_transforms -from .modulated_coco import ConvertCocoPolysToMask - - -def _isArrayLike(obj): - return hasattr(obj, "__iter__") and hasattr(obj, "__len__") - - -class LVIS: - def __init__(self, annotation_path=None): - """Class for reading and visualizing annotations. 
- Args: - annotation_path (str): location of annotation file - """ - self.anns = {} - self.cats = {} - self.imgs = {} - self.img_ann_map = defaultdict(list) - self.cat_img_map = defaultdict(list) - self.dataset = {} - - if annotation_path is not None: - print("Loading annotations.") - - tic = time.time() - self.dataset = self._load_json(annotation_path) - print("Done (t={:0.2f}s)".format(time.time() - tic)) - - assert type(self.dataset) == dict, "Annotation file format {} not supported.".format(type(self.dataset)) - self._create_index() - - def _load_json(self, path): - with open(path, "r") as f: - return json.load(f) - - def _create_index(self): - print("Creating index.") - - self.img_ann_map = defaultdict(list) - self.cat_img_map = defaultdict(list) - - self.anns = {} - self.cats = {} - self.imgs = {} - - for ann in self.dataset["annotations"]: - self.img_ann_map[ann["image_id"]].append(ann) - self.anns[ann["id"]] = ann - - for img in self.dataset["images"]: - self.imgs[img["id"]] = img - - for cat in self.dataset["categories"]: - self.cats[cat["id"]] = cat - - for ann in self.dataset["annotations"]: - self.cat_img_map[ann["category_id"]].append(ann["image_id"]) - - print("Index created.") - - def get_ann_ids(self, img_ids=None, cat_ids=None, area_rng=None): - """Get ann ids that satisfy given filter conditions. - Args: - img_ids (int array): get anns for given imgs - cat_ids (int array): get anns for given cats - area_rng (float array): get anns for a given area range. e.g [0, inf] - Returns: - ids (int array): integer array of ann ids - """ - if img_ids is not None: - img_ids = img_ids if _isArrayLike(img_ids) else [img_ids] - if cat_ids is not None: - cat_ids = cat_ids if _isArrayLike(cat_ids) else [cat_ids] - anns = [] - if img_ids is not None: - for img_id in img_ids: - anns.extend(self.img_ann_map[img_id]) - else: - anns = self.dataset["annotations"] - - # return early if no more filtering required - if cat_ids is None and area_rng is None: - return [_ann["id"] for _ann in anns] - - cat_ids = set(cat_ids) - - if area_rng is None: - area_rng = [0, float("inf")] - - ann_ids = [ - _ann["id"] - for _ann in anns - if _ann["category_id"] in cat_ids and _ann["area"] > area_rng[0] and _ann["area"] < area_rng[1] - ] - return ann_ids - - def get_cat_ids(self): - """Get all category ids. - Returns: - ids (int array): integer array of category ids - """ - return list(self.cats.keys()) - - def get_img_ids(self): - """Get all img ids. - Returns: - ids (int array): integer array of image ids - """ - return list(self.imgs.keys()) - - def _load_helper(self, _dict, ids): - if ids is None: - return list(_dict.values()) - elif _isArrayLike(ids): - return [_dict[id] for id in ids] - else: - return [_dict[ids]] - - def load_anns(self, ids=None): - """Load anns with the specified ids. If ids=None load all anns. - Args: - ids (int array): integer array of annotation ids - Returns: - anns (dict array) : loaded annotation objects - """ - return self._load_helper(self.anns, ids) - - def load_cats(self, ids): - """Load categories with the specified ids. If ids=None load all - categories. - Args: - ids (int array): integer array of category ids - Returns: - cats (dict array) : loaded category dicts - """ - return self._load_helper(self.cats, ids) - - def load_imgs(self, ids): - """Load categories with the specified ids. If ids=None load all images. 
- Args: - ids (int array): integer array of image ids - Returns: - imgs (dict array) : loaded image dicts - """ - return self._load_helper(self.imgs, ids) - - def download(self, save_dir, img_ids=None): - """Download images from mscoco.org server. - Args: - save_dir (str): dir to save downloaded images - img_ids (int array): img ids of images to download - """ - imgs = self.load_imgs(img_ids) - - if not os.path.exists(save_dir): - os.makedirs(save_dir) - - for img in imgs: - file_name = os.path.join(save_dir, img["file_name"]) - if not os.path.exists(file_name): - from urllib.request import urlretrieve - - urlretrieve(img["coco_url"], file_name) - - def ann_to_rle(self, ann): - """Convert annotation which can be polygons, uncompressed RLE to RLE. - Args: - ann (dict) : annotation object - Returns: - ann (rle) - """ - img_data = self.imgs[ann["image_id"]] - h, w = img_data["height"], img_data["width"] - segm = ann["segmentation"] - if isinstance(segm, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = mask_utils.frPyObjects(segm, h, w) - rle = mask_utils.merge(rles) - elif isinstance(segm["counts"], list): - # uncompressed RLE - rle = mask_utils.frPyObjects(segm, h, w) - else: - # rle - rle = ann["segmentation"] - return rle - - def ann_to_mask(self, ann): - """Convert annotation which can be polygons, uncompressed RLE, or RLE - to binary mask. - Args: - ann (dict) : annotation object - Returns: - binary mask (numpy 2D array) - """ - rle = self.ann_to_rle(ann) - return mask_utils.decode(rle) - - -class LvisDetectionBase(torchvision.datasets.VisionDataset): - def __init__(self, root, annFile, transform=None, target_transform=None, transforms=None): - super(LvisDetectionBase, self).__init__(root, transforms, transform, target_transform) - self.lvis = LVIS(annFile) - self.ids = list(sorted(self.lvis.imgs.keys())) - - def __getitem__(self, index): - """ - Args: - index (int): Index - Returns: - tuple: Tuple (image, target). target is the object returned by ``coco.loadAnns``. 
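                (for this LVIS dataset the annotations are produced by ``lvis.load_anns``)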
- """ - lvis = self.lvis - img_id = self.ids[index] - ann_ids = lvis.get_ann_ids(img_ids=img_id) - target = lvis.load_anns(ann_ids) - - path = "/".join(self.lvis.load_imgs(img_id)[0]["coco_url"].split("/")[-2:]) - - img = Image.open(os.path.join(self.root, path)).convert("RGB") - if self.transforms is not None: - img, target = self.transforms(img, target) - - return img, target - - - def __len__(self): - return len(self.ids) - - -class LvisDetection(LvisDetectionBase): - def __init__(self, img_folder, ann_file, transforms, return_masks=False, **kwargs): - super(LvisDetection, self).__init__(img_folder, ann_file) - self.ann_file = ann_file - self._transforms = transforms - self.prepare = ConvertCocoPolysToMask(return_masks) - - def __getitem__(self, idx): - img, target = super(LvisDetection, self).__getitem__(idx) - image_id = self.ids[idx] - target = {"image_id": image_id, "annotations": target} - img, target = self.prepare(img, target) - if self._transforms is not None: - img = self._transforms(img) - return img, target, idx - - def get_raw_image(self, idx): - img, target = super(LvisDetection, self).__getitem__(idx) - return img - - def categories(self): - id2cat = {c["id"]: c for c in self.lvis.dataset["categories"]} - all_cats = sorted(list(id2cat.keys())) - categories = {} - for l in list(all_cats): - categories[l] = id2cat[l]['name'] - return categories \ No newline at end of file diff --git a/spaces/Plurigrid/LifeSim/src/lib/utils.ts b/spaces/Plurigrid/LifeSim/src/lib/utils.ts deleted file mode 100644 index ec79801fe9cdd7711f6dbef26678a134c634a8be..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/lib/utils.ts +++ /dev/null @@ -1,6 +0,0 @@ -import { type ClassValue, clsx } from "clsx" -import { twMerge } from "tailwind-merge" - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3_score_only.sh b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3_score_only.sh deleted file mode 100644 index e1eec48598220f07fb1198ba9499f172dbc96dd3..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_3_score_only.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/bash -#SBATCH -p gpu -#SBATCH --mem=32g -#SBATCH --gres=gpu:rtx2080:1 -#SBATCH -c 3 -#SBATCH --output=example_3.out - -source activate mlfold - -path_to_PDB="../inputs/PDB_complexes/pdbs/3HTN.pdb" - -output_dir="../outputs/example_3_score_only_outputs" -if [ ! -d $output_dir ] -then - mkdir -p $output_dir -fi - -chains_to_design="A B" - -python ../protein_mpnn_run.py \ - --pdb_path $path_to_PDB \ - --pdb_path_chains "$chains_to_design" \ - --out_folder $output_dir \ - --num_seq_per_target 10 \ - --sampling_temp "0.1" \ - --score_only 1 \ - --seed 37 \ - --batch_size 1 diff --git a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/losses/contperceptual.py b/spaces/RamAnanth1/T2I-Adapter/ldm/modules/losses/contperceptual.py deleted file mode 100644 index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/losses/contperceptual.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import torch.nn as nn - -from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no? 
- - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): 
torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - diff --git a/spaces/RamV/ChatRobo_II/app.py b/spaces/RamV/ChatRobo_II/app.py deleted file mode 100644 index 14c938c7c8e641a3741df6de5b081aad78cb71c0..0000000000000000000000000000000000000000 --- a/spaces/RamV/ChatRobo_II/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import os -import openai -import gradio as gr - -# Set up OpenAI API credentials -api_key = os.environ.get("OPENAI_API_KEY") -openai.api_key = api_key - -# Define a function to generate a response to user input -def generate_response(user_input): - if "who created you" in user_input.lower(): - chat_response = "I was created by Ram.V" - elif "who is superstar" in user_input.lower(): - chat_response = "The one and only * Superstar Rajinikanth * Thalaiva!" - elif "what's on 10th aug" in user_input.lower(): - chat_response = "Jailer Movie releasing! Alappara Kelappurom Thalaivaru Nerandharam!" - elif "what is the weather like" in user_input.lower(): - chat_response = "I'm sorry, but I don't have the ability to check the weather. Is there something else I can help you with?" - elif "how are you" in user_input.lower(): - chat_response = "I'm a chatbot created by Ram.V. I don't have feelings, but thanks for asking. Hope you're well! 
:-)" - elif "what's your name" in user_input.lower(): - chat_response = "My name is ChatRobo :-)" - else: - prompt = f"You said: {user_input}" - - - # Generate a response using the OpenAI API - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.9, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=["Human:", "AI:"] - ) - - # Extract the response from the API output - chat_response = response.choices[0].text.strip() - - return chat_response - -# Define the function to handle the chat history -def openai_chat_history(input, history): - history = history or [] - if input.strip() != "": - s = list(sum(history, ())) - s.append(input) - inp = ' '.join(s) - output = generate_response(inp) - history.append((input, output)) - return history[-1][1] - else: - return "" - -# Define the conversation prompt -conversation_prompt = "Welcome to ChatRobo, kindly type in your enquiries: " - -# Set up the Gradio interface -block = gr.Interface( - fn=openai_chat_history, - inputs=[gr.inputs.Textbox(placeholder=conversation_prompt)], - outputs=[gr.outputs.Textbox(label="ChatRobo Output")] -) - -# Launch the Gradio interface -block.launch() - - diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/hash.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/hash.py deleted file mode 100644 index 042dac813e74b8187c3754cb9a937c7f7183e331..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/hash.py +++ /dev/null @@ -1,59 +0,0 @@ -import hashlib -import logging -import sys -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.utils.hashes import FAVORITE_HASH, STRONG_HASHES -from pip._internal.utils.misc import read_chunks, write_output - -logger = logging.getLogger(__name__) - - -class HashCommand(Command): - """ - Compute a hash of a local package archive. - - These can be used with --hash in a requirements file to do repeatable - installs. - """ - - usage = "%prog [options] ..." 
- ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-a", - "--algorithm", - dest="algorithm", - choices=STRONG_HASHES, - action="store", - default=FAVORITE_HASH, - help="The hash algorithm to use: one of {}".format( - ", ".join(STRONG_HASHES) - ), - ) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - self.parser.print_usage(sys.stderr) - return ERROR - - algorithm = options.algorithm - for path in args: - write_output( - "%s:\n--hash=%s:%s", path, algorithm, _hash_of_file(path, algorithm) - ) - return SUCCESS - - -def _hash_of_file(path: str, algorithm: str) -> str: - """Return the hash digest of a file.""" - with open(path, "rb") as archive: - hash = hashlib.new(algorithm) - for chunk in read_chunks(archive): - hash.update(chunk) - return hash.hexdigest() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pyparsing/testing.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pyparsing/testing.py deleted file mode 100644 index 84a0ef17078c99e5917db41e3dbaf035fe206d7c..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pyparsing/testing.py +++ /dev/null @@ -1,331 +0,0 @@ -# testing.py - -from contextlib import contextmanager -import typing - -from .core import ( - ParserElement, - ParseException, - Keyword, - __diag__, - __compat__, -) - - -class pyparsing_test: - """ - namespace class for classes useful in writing unit tests - """ - - class reset_pyparsing_context: - """ - Context manager to be used when writing unit tests that modify pyparsing config values: - - packrat parsing - - bounded recursion parsing - - default whitespace characters. - - default keyword characters - - literal string auto-conversion class - - __diag__ settings - - Example:: - - with reset_pyparsing_context(): - # test that literals used to construct a grammar are automatically suppressed - ParserElement.inlineLiteralsUsing(Suppress) - - term = Word(alphas) | Word(nums) - group = Group('(' + term[...] 
+ ')') - - # assert that the '()' characters are not included in the parsed tokens - self.assertParseAndCheckList(group, "(abc 123 def)", ['abc', '123', 'def']) - - # after exiting context manager, literals are converted to Literal expressions again - """ - - def __init__(self): - self._save_context = {} - - def save(self): - self._save_context["default_whitespace"] = ParserElement.DEFAULT_WHITE_CHARS - self._save_context["default_keyword_chars"] = Keyword.DEFAULT_KEYWORD_CHARS - - self._save_context[ - "literal_string_class" - ] = ParserElement._literalStringClass - - self._save_context["verbose_stacktrace"] = ParserElement.verbose_stacktrace - - self._save_context["packrat_enabled"] = ParserElement._packratEnabled - if ParserElement._packratEnabled: - self._save_context[ - "packrat_cache_size" - ] = ParserElement.packrat_cache.size - else: - self._save_context["packrat_cache_size"] = None - self._save_context["packrat_parse"] = ParserElement._parse - self._save_context[ - "recursion_enabled" - ] = ParserElement._left_recursion_enabled - - self._save_context["__diag__"] = { - name: getattr(__diag__, name) for name in __diag__._all_names - } - - self._save_context["__compat__"] = { - "collect_all_And_tokens": __compat__.collect_all_And_tokens - } - - return self - - def restore(self): - # reset pyparsing global state - if ( - ParserElement.DEFAULT_WHITE_CHARS - != self._save_context["default_whitespace"] - ): - ParserElement.set_default_whitespace_chars( - self._save_context["default_whitespace"] - ) - - ParserElement.verbose_stacktrace = self._save_context["verbose_stacktrace"] - - Keyword.DEFAULT_KEYWORD_CHARS = self._save_context["default_keyword_chars"] - ParserElement.inlineLiteralsUsing( - self._save_context["literal_string_class"] - ) - - for name, value in self._save_context["__diag__"].items(): - (__diag__.enable if value else __diag__.disable)(name) - - ParserElement._packratEnabled = False - if self._save_context["packrat_enabled"]: - ParserElement.enable_packrat(self._save_context["packrat_cache_size"]) - else: - ParserElement._parse = self._save_context["packrat_parse"] - ParserElement._left_recursion_enabled = self._save_context[ - "recursion_enabled" - ] - - __compat__.collect_all_And_tokens = self._save_context["__compat__"] - - return self - - def copy(self): - ret = type(self)() - ret._save_context.update(self._save_context) - return ret - - def __enter__(self): - return self.save() - - def __exit__(self, *args): - self.restore() - - class TestParseResultsAsserts: - """ - A mixin class to add parse results assertion methods to normal unittest.TestCase classes. - """ - - def assertParseResultsEquals( - self, result, expected_list=None, expected_dict=None, msg=None - ): - """ - Unit test assertion to compare a :class:`ParseResults` object with an optional ``expected_list``, - and compare any defined results names with an optional ``expected_dict``. - """ - if expected_list is not None: - self.assertEqual(expected_list, result.as_list(), msg=msg) - if expected_dict is not None: - self.assertEqual(expected_dict, result.as_dict(), msg=msg) - - def assertParseAndCheckList( - self, expr, test_string, expected_list, msg=None, verbose=True - ): - """ - Convenience wrapper assert to test a parser element and input string, and assert that - the resulting ``ParseResults.asList()`` is equal to the ``expected_list``. 
- """ - result = expr.parse_string(test_string, parse_all=True) - if verbose: - print(result.dump()) - else: - print(result.as_list()) - self.assertParseResultsEquals(result, expected_list=expected_list, msg=msg) - - def assertParseAndCheckDict( - self, expr, test_string, expected_dict, msg=None, verbose=True - ): - """ - Convenience wrapper assert to test a parser element and input string, and assert that - the resulting ``ParseResults.asDict()`` is equal to the ``expected_dict``. - """ - result = expr.parse_string(test_string, parseAll=True) - if verbose: - print(result.dump()) - else: - print(result.as_list()) - self.assertParseResultsEquals(result, expected_dict=expected_dict, msg=msg) - - def assertRunTestResults( - self, run_tests_report, expected_parse_results=None, msg=None - ): - """ - Unit test assertion to evaluate output of ``ParserElement.runTests()``. If a list of - list-dict tuples is given as the ``expected_parse_results`` argument, then these are zipped - with the report tuples returned by ``runTests`` and evaluated using ``assertParseResultsEquals``. - Finally, asserts that the overall ``runTests()`` success value is ``True``. - - :param run_tests_report: tuple(bool, [tuple(str, ParseResults or Exception)]) returned from runTests - :param expected_parse_results (optional): [tuple(str, list, dict, Exception)] - """ - run_test_success, run_test_results = run_tests_report - - if expected_parse_results is not None: - merged = [ - (*rpt, expected) - for rpt, expected in zip(run_test_results, expected_parse_results) - ] - for test_string, result, expected in merged: - # expected should be a tuple containing a list and/or a dict or an exception, - # and optional failure message string - # an empty tuple will skip any result validation - fail_msg = next( - (exp for exp in expected if isinstance(exp, str)), None - ) - expected_exception = next( - ( - exp - for exp in expected - if isinstance(exp, type) and issubclass(exp, Exception) - ), - None, - ) - if expected_exception is not None: - with self.assertRaises( - expected_exception=expected_exception, msg=fail_msg or msg - ): - if isinstance(result, Exception): - raise result - else: - expected_list = next( - (exp for exp in expected if isinstance(exp, list)), None - ) - expected_dict = next( - (exp for exp in expected if isinstance(exp, dict)), None - ) - if (expected_list, expected_dict) != (None, None): - self.assertParseResultsEquals( - result, - expected_list=expected_list, - expected_dict=expected_dict, - msg=fail_msg or msg, - ) - else: - # warning here maybe? - print("no validation for {!r}".format(test_string)) - - # do this last, in case some specific test results can be reported instead - self.assertTrue( - run_test_success, msg=msg if msg is not None else "failed runTests" - ) - - @contextmanager - def assertRaisesParseException(self, exc_type=ParseException, msg=None): - with self.assertRaises(exc_type, msg=msg): - yield - - @staticmethod - def with_line_numbers( - s: str, - start_line: typing.Optional[int] = None, - end_line: typing.Optional[int] = None, - expand_tabs: bool = True, - eol_mark: str = "|", - mark_spaces: typing.Optional[str] = None, - mark_control: typing.Optional[str] = None, - ) -> str: - """ - Helpful method for debugging a parser - prints a string with line and column numbers. - (Line and column numbers are 1-based.) 
- - :param s: tuple(bool, str - string to be printed with line and column numbers - :param start_line: int - (optional) starting line number in s to print (default=1) - :param end_line: int - (optional) ending line number in s to print (default=len(s)) - :param expand_tabs: bool - (optional) expand tabs to spaces, to match the pyparsing default - :param eol_mark: str - (optional) string to mark the end of lines, helps visualize trailing spaces (default="|") - :param mark_spaces: str - (optional) special character to display in place of spaces - :param mark_control: str - (optional) convert non-printing control characters to a placeholding - character; valid values: - - "unicode" - replaces control chars with Unicode symbols, such as "␍" and "␊" - - any single character string - replace control characters with given string - - None (default) - string is displayed as-is - - :return: str - input string with leading line numbers and column number headers - """ - if expand_tabs: - s = s.expandtabs() - if mark_control is not None: - if mark_control == "unicode": - tbl = str.maketrans( - {c: u for c, u in zip(range(0, 33), range(0x2400, 0x2433))} - | {127: 0x2421} - ) - eol_mark = "" - else: - tbl = str.maketrans( - {c: mark_control for c in list(range(0, 32)) + [127]} - ) - s = s.translate(tbl) - if mark_spaces is not None and mark_spaces != " ": - if mark_spaces == "unicode": - tbl = str.maketrans({9: 0x2409, 32: 0x2423}) - s = s.translate(tbl) - else: - s = s.replace(" ", mark_spaces) - if start_line is None: - start_line = 1 - if end_line is None: - end_line = len(s) - end_line = min(end_line, len(s)) - start_line = min(max(1, start_line), end_line) - - if mark_control != "unicode": - s_lines = s.splitlines()[start_line - 1 : end_line] - else: - s_lines = [line + "␊" for line in s.split("␊")[start_line - 1 : end_line]] - if not s_lines: - return "" - - lineno_width = len(str(end_line)) - max_line_len = max(len(line) for line in s_lines) - lead = " " * (lineno_width + 1) - if max_line_len >= 99: - header0 = ( - lead - + "".join( - "{}{}".format(" " * 99, (i + 1) % 100) - for i in range(max(max_line_len // 100, 1)) - ) - + "\n" - ) - else: - header0 = "" - header1 = ( - header0 - + lead - + "".join( - " {}".format((i + 1) % 10) - for i in range(-(-max_line_len // 10)) - ) - + "\n" - ) - header2 = lead + "1234567890" * (-(-max_line_len // 10)) + "\n" - return ( - header1 - + header2 - + "\n".join( - "{:{}d}:{}{}".format(i, lineno_width, line, eol_mark) - for i, line in enumerate(s_lines, start=start_line) - ) - + "\n" - ) diff --git a/spaces/ReThGe/Linet/rethge_components.py b/spaces/ReThGe/Linet/rethge_components.py deleted file mode 100644 index eca272ee851d72ad831d8169e5d80b83a9eb0032..0000000000000000000000000000000000000000 --- a/spaces/ReThGe/Linet/rethge_components.py +++ /dev/null @@ -1,365 +0,0 @@ -## this file contains self-coded components classes, for build deepleanring model in pytorch - -# author: rethge -# created data: 2023/07/20 - -import torch -from torch import nn - -import torch.nn.functional as F - - -# Depthwise separeble conv——————————————————————————————————————————————— - -class RTG_depthwise_separable_conv(nn.Module): - def __init__(self, input_size, output_size, kernel_size, - stride, padding, bias: bool = True): - super().__init__() - - self.dsc = nn.Sequential( - nn.Conv2d(in_channels=input_size, - out_channels=input_size, - kernel_size=kernel_size, - stride=stride, - padding=padding, - groups=input_size, - bias=bias), - - nn.Conv2d(in_channels=input_size, - 
out_channels=output_size, - kernel_size=1, - stride=1, - padding=0, - bias=bias) - ) - - - def forward(self, x): - return self.dsc(x) - - -class RTG_res_block(nn.Module): - def __init__(self, nin): - super().__init__() - - self.inp = nin - self.conv1 = RTG_depthwise_separable_conv(nin, nin, kernel_size=3, - stride=1,padding=1) - self.conv2 = RTG_depthwise_separable_conv(nin, nin, kernel_size=3, - stride=1,padding=1) - - def forward(self, x): - return F.selu(self.conv2(F.selu(self.conv1(x))) + x) # if using concat, we are adding channles - - -class RTG_res_block_expand(nn.Module): - def __init__(self, nin, nout): - super().__init__() - - self.inp = nin - self.oup = nout - self.conv1 = RTG_depthwise_separable_conv(nin, nout, kernel_size=3, - stride=1, padding=1) - self.conv2 = RTG_depthwise_separable_conv(nout, nout, kernel_size=3, - stride=1, padding=1) - - self.identity = nn.Conv2d(nin, nout, kernel_size=1, - stride=1, padding=0) - - def forward(self, x): - a = self.conv2(F.selu(self.conv1(x))) # (1, 64, 18, 18) - identity = self.identity(x) - - return F.selu(a + identity) - - - - -# ViT———————————————————————————————————————————————————————————————————— - -# a torch class for patch layer -- vision transformer -class RTG_PatchEmbedding(nn.Module): - """Turns a 2D input image into a 1D sequence learnable embedding vector. - - Args: - in_channels (int): Number of color channels for the input images. Defaults to 3. - patch_size (int): Size of patches to convert input image into. Defaults to 16. - embedding_dim (int): Size of embedding to turn image into. Defaults to 768. - """ - def __init__(self, - input_channels: int = 3, - patch_size: int = 16, - embedding_size: int = 768): - super().__init__() - - self.patch_size = patch_size - - self.patcher = nn.Conv2d(in_channels=input_channels, - out_channels=embedding_size, - stride=patch_size, - kernel_size=patch_size, - padding=0) - - self.flatten = nn.Flatten(start_dim=2, end_dim=3) - - def forward(self, x): - img_size = x.shape[-1] - assert img_size % self.patch_size == 0, f"Input image size must be divisble by patch size, image shape: {img_size}, patch size: {self.patch_size}" - - return self.flatten(self.patcher(x)).permute(0,2,1) - - -class RTG_MultiheadSelf_attention_block(nn.Module): - def __init__(self, - embedding_dim: int = 768, - num_head: int = 12, - attention_dropout: float = 0): - super().__init__() - - self.LayerNorm = nn.LayerNorm(normalized_shape=embedding_dim) - - self.multihead_attention = nn.MultiheadAttention(embed_dim=embedding_dim, - num_heads=num_head, - dropout=attention_dropout, - batch_first=True) - - def forward(self, x): - x = self.LayerNorm(x) # x with shape [1, 197, 768] - attention_out, _ = self.multihead_attention(query=x, - key=x, - value=x, - need_weights=False) - - return attention_out - - -class RTG_MLPBlock(nn.Module): - def __init__(self, - embedding_dim: int = 768, - mlp_size: int = 3072, - dropout: float = 0.1): - super().__init__() - - self.LayerNorm = nn.LayerNorm(normalized_shape=embedding_dim) - - self.mlp = nn.Sequential( - nn.Linear(in_features=embedding_dim, - out_features=mlp_size), - nn.GELU(), - nn.Dropout(p=dropout), - nn.Linear(in_features=mlp_size, - out_features=embedding_dim), - nn.Dropout(p=dropout) - ) - - - def forward(self, x): - return self.mlp(self.LayerNorm(x)) - - -class RTG_TransformerEncoderBlock(nn.Module): - def __init__(self, - embedding_dim: int = 768, - num_head: int = 12, - mlp_size: int = 3072, - mlp_dropout: float = 0.1, - attention_dropout: float = 0): - super().__init__() - 
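-        # pre-norm encoder block: each sub-block applies LayerNorm to its
-        # input first; the residual additions happen in forward() below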
- self.msa_block = RTG_MultiheadSelf_attention_block(embedding_dim=embedding_dim, - num_head=num_head, - attention_dropout=attention_dropout) - - self.mlp_block = RTG_MLPBlock(embedding_dim=embedding_dim, - mlp_size=mlp_size, - dropout=mlp_dropout) - - - def forward(self, x): - x = self.msa_block(x) + x - x = self.mlp_block(x) + x - - return x - - -class RTG_ViT(nn.Module): - def __init__(self, - img_size: int = 224, - input_channels: int = 3, - patch_size: int = 16, - num_transformer_layers: int = 12, - # embedding_dim: int = 768, - num_head: int = 12, - mlp_size: int = 3072, - embedding_dropout: float = 0.1, # Dropout for patch and position embeddings - mlp_dropout: float = 0.1, # Dropout for dense/MLP layers - attention_dropout: float = 0, # Dropout for attention projection - num_classes: int = 1000): # Default for ImageNet but can customize this - super().__init__() - - assert img_size % patch_size == 0, f"Image size must be divisible by patch size, image size: {img_size}, patch size: {patch_size}." - - # self.patch_size = patch_size - self.num_patches = (img_size//patch_size)**2 - self.embedding_dim = input_channels*patch_size**2 - - self.class_embedding = nn.Parameter(data=torch.randn(1, 1, self.embedding_dim), - requires_grad=True) # 1x1x768 - - self.position_embedding = nn.Parameter(data=torch.randn(1, self.num_patches+1, self.embedding_dim), - requires_grad=True) # 1x197x768 - - self.embedding_dropout = nn.Dropout(p=embedding_dropout) - - self.img_patch_embedding = RTG_PatchEmbedding(input_channels=input_channels, - patch_size=patch_size, - embedding_size=self.embedding_dim) - - # Note: The "*" means "all" - self.transformer_encoder = nn.Sequential( # stack 12 times of encoder block - *[RTG_TransformerEncoderBlock(embedding_dim=self.embedding_dim, - num_head=num_head, - mlp_size=mlp_size, - mlp_dropout=mlp_dropout) for _ in range(num_transformer_layers)]) - - - self.classifier = nn.Sequential( - nn.LayerNorm(normalized_shape=self.embedding_dim), - nn.Linear(in_features=self.embedding_dim, - out_features=num_classes) - ) - - - def forward(self, x): - - # Get batch size - # batch_size = x.shape[0] - - # Create class token embedding and expand it to match the batch size (equation 1) - class_token = self.class_embedding.expand(x.shape[0], -1, -1) # "-1" means to infer the dimension (try this line on its own) - - # Create patch embedding (equation 1) - # x = self.img_patch_embedding(x) - - # Concat class embedding and patch embedding (equation 1) - x = torch.cat((class_token, self.img_patch_embedding(x)), dim=1) + self.position_embedding - - # Add position embedding to patch embedding (equation 1) - # x = self.position_embedding + x - - # Run embedding dropout (Appendix B.1) - # x = self.embedding_dropout(x) - - # Pass patch, position and class embedding through transformer encoder layers (equations 2 & 3) - x = self.transformer_encoder(self.embedding_dropout(x)) - - # Put 0 index logit through classifier (equation 4) - x = self.classifier(x[:, 0]) # run on each sample in a batch at 0 index -> class embeddings at the very top of tensor - - return x - - - -# Resnet—————————————————————————————————————————————————————————————————————————————————————————————————————— -class Block(nn.Module): - def __init__(self, - in_c, - out_c, - identity_downsample=None, # conv - stride=1): - super().__init__() - - self.expansion = 4 - - self.conv1 = nn.Conv2d(in_c, out_c, kernel_size=1, stride=1, padding=0) - self.bn1 = nn.BatchNorm2d(out_c) - - self.conv2 = nn.Conv2d(out_c, out_c, kernel_size=3, 
stride=stride, padding=1) - self.bn2 = nn.BatchNorm2d(out_c) - - self.conv3 = nn.Conv2d(out_c, out_c*self.expansion, kernel_size=1, stride=1, padding=0) - self.bn3 = nn.BatchNorm2d(out_c*self.expansion) - - self.relu = nn.ReLU() - self.identity_downsample = identity_downsample - - def forward(self, x): - identity = x - - x = self.relu(self.bn1(self.conv1(x))) - - x = self.relu(self.bn2(self.conv2(x))) - - x = self.bn3(self.conv3(x)) - - if self.identity_downsample is not None: - identity = self.identity_downsample(identity) - - x += identity - x = self.relu(x) - - return x - - -class RTG_Resnet(nn.Module): - def __init__(self, block, layers, img_channels, num_classes): - super().__init__() - - self.in_c = 64 - self.conv1 = nn.Conv2d(img_channels, 64, kernel_size=7, stride=2, padding=3) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU() - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - # res layers - self.layer1 = self._make_layer(block, layers[0], - out_channels=64, - stride=1) - - self.layer2 = self._make_layer(block, layers[1], - out_channels=128, - stride=2) - - self.layer3 = self._make_layer(block, layers[2], - out_channels=256, - stride=2) - - self.layer4 = self._make_layer(block, layers[3], - out_channels=512, - stride=2) - - self.avgpool = nn.AdaptiveAvgPool2d((1,1)) # GAP - self.fc = nn.Linear(512*4, num_classes) - - - def forward(self, x): - x = self.maxpool(self.relu(self.bn1(self.conv1(x)))) - - x = self.avgpool(self.layer4(self.layer3(self.layer2(self.layer1(x))))) - x = x.reshape(x.shape[0], -1) - x = self.fc(x) - - return x - - def _make_layer(self, block, num_res_blocks, out_channels, stride): - - identity_dowansample = None - layers = [] - - if stride != 1 or self.in_c != out_channels*4: - identity_dowansample = nn.Sequential( - nn.Conv2d(self.in_c, out_channels*4, kernel_size=1, stride=stride), - nn.BatchNorm2d(out_channels*4) - ) - - layers.append(block(self.in_c, out_channels, identity_dowansample, stride)) - - self.in_c = out_channels*4 - - for _ in range(num_res_blocks - 1): - layers.append(block(self.in_c, out_channels)) - - return nn.Sequential(*layers) - -def RTG_Resnet50(img_c=3, num_class=3): - return RTG_Resnet(Block, [3,4,6,3], img_c, num_class) \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/profiler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/profiler.py deleted file mode 100644 index b70236997eec59c2209ef351ae38863b4112d0ec..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/profiler.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from typing import Callable, List, Optional, Union - -import torch - -from ..dist_utils import master_only -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class ProfilerHook(Hook): - """Profiler to analyze performance during training. - - PyTorch Profiler is a tool that allows the collection of the performance - metrics during the training. More details on Profiler can be found at - https://pytorch.org/docs/1.8.1/profiler.html#torch.profiler.profile - - Args: - by_epoch (bool): Profile performance by epoch or by iteration. - Default: True. - profile_iters (int): Number of iterations for profiling. 
- If ``by_epoch=True``, profile_iters indicates that they are the - first profile_iters epochs at the beginning of the - training, otherwise it indicates the first profile_iters - iterations. Default: 1. - activities (list[str]): List of activity groups (CPU, CUDA) to use in - profiling. Default: ['cpu', 'cuda']. - schedule (dict, optional): Config of generating the callable schedule. - if schedule is None, profiler will not add step markers into the - trace and table view. Default: None. - on_trace_ready (callable, dict): Either a handler or a dict of generate - handler. Default: None. - record_shapes (bool): Save information about operator's input shapes. - Default: False. - profile_memory (bool): Track tensor memory allocation/deallocation. - Default: False. - with_stack (bool): Record source information (file and line number) - for the ops. Default: False. - with_flops (bool): Use formula to estimate the FLOPS of specific - operators (matrix multiplication and 2D convolution). - Default: False. - json_trace_path (str, optional): Exports the collected trace in Chrome - JSON format. Default: None. - - Example: - >>> runner = ... # instantiate a Runner - >>> # tensorboard trace - >>> trace_config = dict(type='tb_trace', dir_name='work_dir') - >>> profiler_config = dict(on_trace_ready=trace_config) - >>> runner.register_profiler_hook(profiler_config) - >>> runner.run(data_loaders=[trainloader], workflow=[('train', 1)]) - """ - - def __init__(self, - by_epoch: bool = True, - profile_iters: int = 1, - activities: List[str] = ['cpu', 'cuda'], - schedule: Optional[dict] = None, - on_trace_ready: Optional[Union[Callable, dict]] = None, - record_shapes: bool = False, - profile_memory: bool = False, - with_stack: bool = False, - with_flops: bool = False, - json_trace_path: Optional[str] = None) -> None: - try: - from torch import profiler # torch version >= 1.8.1 - except ImportError: - raise ImportError('profiler is the new feature of torch1.8.1, ' - f'but your version is {torch.__version__}') - - assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean.' 
- self.by_epoch = by_epoch - - if profile_iters < 1: - raise ValueError('profile_iters should be greater than 0, but got ' - f'{profile_iters}') - self.profile_iters = profile_iters - - if not isinstance(activities, list): - raise ValueError( - f'activities should be list, but got {type(activities)}') - self.activities = [] - for activity in activities: - activity = activity.lower() - if activity == 'cpu': - self.activities.append(profiler.ProfilerActivity.CPU) - elif activity == 'cuda': - self.activities.append(profiler.ProfilerActivity.CUDA) - else: - raise ValueError( - f'activity should be "cpu" or "cuda", but got {activity}') - - if schedule is not None: - self.schedule = profiler.schedule(**schedule) - else: - self.schedule = None - - self.on_trace_ready = on_trace_ready - self.record_shapes = record_shapes - self.profile_memory = profile_memory - self.with_stack = with_stack - self.with_flops = with_flops - self.json_trace_path = json_trace_path - - @master_only - def before_run(self, runner): - if self.by_epoch and runner.max_epochs < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_epochs}') - - if not self.by_epoch and runner.max_iters < self.profile_iters: - raise ValueError('self.profile_iters should not be greater than ' - f'{runner.max_iters}') - - if callable(self.on_trace_ready): # handler - _on_trace_ready = self.on_trace_ready - elif isinstance(self.on_trace_ready, dict): # config of handler - trace_cfg = self.on_trace_ready.copy() - trace_type = trace_cfg.pop('type') # log_trace handler - if trace_type == 'log_trace': - - def _log_handler(prof): - print(prof.key_averages().table(**trace_cfg)) - - _on_trace_ready = _log_handler - elif trace_type == 'tb_trace': # tensorboard_trace handler - try: - import torch_tb_profiler # noqa: F401 - except ImportError: - raise ImportError('please run "pip install ' - 'torch-tb-profiler" to install ' - 'torch_tb_profiler') - _on_trace_ready = torch.profiler.tensorboard_trace_handler( - **trace_cfg) - else: - raise ValueError('trace_type should be "log_trace" or ' - f'"tb_trace", but got {trace_type}') - elif self.on_trace_ready is None: - _on_trace_ready = None # type: ignore - else: - raise ValueError('on_trace_ready should be handler, dict or None, ' - f'but got {type(self.on_trace_ready)}') - - if runner.max_epochs > 1: - warnings.warn(f'profiler will profile {runner.max_epochs} epochs ' - 'instead of 1 epoch. Since profiler will slow down ' - 'the training, it is recommended to train 1 epoch ' - 'with ProfilerHook and adjust your setting according' - ' to the profiler summary. 
During normal training ' - '(epoch > 1), you may disable the ProfilerHook.') - - self.profiler = torch.profiler.profile( - activities=self.activities, - schedule=self.schedule, - on_trace_ready=_on_trace_ready, - record_shapes=self.record_shapes, - profile_memory=self.profile_memory, - with_stack=self.with_stack, - with_flops=self.with_flops) - - self.profiler.__enter__() - runner.logger.info('profiler is profiling...') - - @master_only - def after_train_epoch(self, runner): - if self.by_epoch and runner.epoch == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) - - @master_only - def after_train_iter(self, runner): - self.profiler.step() - if not self.by_epoch and runner.iter == self.profile_iters - 1: - runner.logger.info('profiler may take a few minutes...') - self.profiler.__exit__(None, None, None) - if self.json_trace_path is not None: - self.profiler.export_chrome_trace(self.json_trace_path) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/coder/base_bbox_coder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index cf0b34c7cc2fe561718b0c884990beb40a993643..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_rpn_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_rpn_head.py deleted file mode 100644 index 2ec0d4fdd3475bfbd2e541a6e8130b1df9ad861a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_rpn_head.py +++ /dev/null @@ -1,171 +0,0 @@ -import copy -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv import ConfigDict -from mmcv.cnn import normal_init -from mmcv.ops import nms - -from ..builder import HEADS -from .guided_anchor_head import GuidedAnchorHead -from .rpn_test_mixin import RPNTestMixin - - -@HEADS.register_module() -class GARPNHead(RPNTestMixin, GuidedAnchorHead): - """Guided-Anchor-based RPN head.""" - - def __init__(self, in_channels, **kwargs): - super(GARPNHead, self).__init__(1, in_channels, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - super(GARPNHead, self)._init_layers() - - def init_weights(self): - """Initialize weights of the head.""" - normal_init(self.rpn_conv, std=0.01) - super(GARPNHead, self).init_weights() - - def forward_single(self, x): - """Forward feature of a single scale level.""" - - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - (cls_score, bbox_pred, shape_pred, - loc_pred) = super(GARPNHead, self).forward_single(x) - return cls_score, bbox_pred, shape_pred, loc_pred 
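-
-    # NOTE: loss() below wraps GuidedAnchorHead.loss and renames the returned
-    # keys with RPN-specific names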
- - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - losses = super(GARPNHead, self).loss( - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], - loss_rpn_bbox=losses['loss_bbox'], - loss_anchor_shape=losses['loss_shape'], - loss_anchor_loc=losses['loss_loc']) - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and max_per_img at the same time, ' \ - f'but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the ' \ - f'nms_thr which will be deprecated.' - - assert cfg.nms.get('type', 'nms') == 'nms', 'GARPNHead only support ' \ - 'naive nms.' - - mlvl_proposals = [] - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - anchors = mlvl_anchors[idx] - mask = mlvl_masks[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = rpn_cls_score.softmax(dim=1)[:, :-1] - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, - 4)[mask, :] - if scores.dim() == 0: - rpn_bbox_pred = rpn_bbox_pred.unsqueeze(0) - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. scores - if cfg.nms_pre > 0 and scores.shape[0] > cfg.nms_pre: - _, topk_inds = scores.topk(cfg.nms_pre) - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - scores = scores[topk_inds] - # get proposals w.r.t. 
anchors and rpn_bbox_pred - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - # filter out too small bboxes - if cfg.min_bbox_size > 0: - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_inds = torch.nonzero( - (w >= cfg.min_bbox_size) & (h >= cfg.min_bbox_size), - as_tuple=False).squeeze() - proposals = proposals[valid_inds, :] - scores = scores[valid_inds] - # NMS in current level - proposals, _ = nms(proposals, scores, cfg.nms.iou_threshold) - proposals = proposals[:cfg.nms_post, :] - mlvl_proposals.append(proposals) - proposals = torch.cat(mlvl_proposals, 0) - if cfg.get('nms_across_levels', False): - # NMS across multi levels - proposals, _ = nms(proposals[:, :4], proposals[:, -1], - cfg.nms.iou_threshold) - proposals = proposals[:cfg.max_per_img, :] - else: - scores = proposals[:, 4] - num = min(cfg.max_per_img, proposals.shape[0]) - _, topk_inds = scores.topk(num) - proposals = proposals[topk_inds, :] - return proposals diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py deleted file mode 100644 index dd7d4326b31f0b637018159a31a68c0303afd06b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py +++ /dev/null @@ -1,221 +0,0 @@ -import torch - -from annotator.uniformer.mmdet.core import anchor_inside_flags -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def calc_region(bbox, ratio, stride, featmap_size=None): - """Calculate region of the box defined by the ratio, the ratio is from the - center of the box to every edge.""" - # project bbox on the feature - f_bbox = bbox / stride - x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2]) - y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3]) - x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2]) - y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3]) - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) - - -def anchor_ctr_inside_region_flags(anchors, stride, region): - """Get the flag indicate whether anchor centers are inside regions.""" - x1, y1, x2, y2 = region - f_anchors = anchors / stride - x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5 - y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5 - flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2) - return flags - - -@BBOX_ASSIGNERS.register_module() -class RegionAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - center_ratio: ratio of the region in the center of the bbox to - define positive sample. - ignore_ratio: ratio of the region to define ignore samples. 
- """ - - def __init__(self, center_ratio=0.2, ignore_ratio=0.5): - self.center_ratio = center_ratio - self.ignore_ratio = ignore_ratio - - def assign(self, - mlvl_anchors, - mlvl_valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - anchor_scale, - anchor_strides, - gt_bboxes_ignore=None, - gt_labels=None, - allowed_border=0): - """Assign gt to anchors. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. Assign every anchor to 0 (negative) - For each gt_bboxes: - 2. Compute ignore flags based on ignore_region then - assign -1 to anchors w.r.t. ignore flags - 3. Compute pos flags based on center_region then - assign gt_bboxes to anchors w.r.t. pos flags - 4. Compute ignore flags based on adjacent anchor lvl then - assign -1 to anchors w.r.t. ignore flags - 5. Assign anchor outside of image to -1 - - Args: - mlvl_anchors (list[Tensor]): Multi level anchors. - mlvl_valid_flags (list[Tensor]): Multi level valid flags. - gt_bboxes (Tensor): Ground truth bboxes of image - img_meta (dict): Meta info of image. - featmap_sizes (list[Tensor]): Feature mapsize each level - anchor_scale (int): Scale of the anchor. - anchor_strides (list[int]): Stride of the anchor. - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - allowed_border (int, optional): The border to allow the valid - anchor. Defaults to 0. - - Returns: - :obj:`AssignResult`: The assign result. - """ - if gt_bboxes_ignore is not None: - raise NotImplementedError - - num_gts = gt_bboxes.shape[0] - num_bboxes = sum(x.shape[0] for x in mlvl_anchors) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = gt_bboxes.new_zeros((num_bboxes, )) - assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ), - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = gt_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - num_lvls = len(mlvl_anchors) - r1 = (1 - self.center_ratio) / 2 - r2 = (1 - self.ignore_ratio) / 2 - - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - - # 1. 
assign 0 (negative) by default
-        mlvl_assigned_gt_inds = []
-        mlvl_ignore_flags = []
-        for lvl in range(num_lvls):
-            h, w = featmap_sizes[lvl]
-            assert h * w == mlvl_anchors[lvl].shape[0]
-            assigned_gt_inds = gt_bboxes.new_full((h * w, ),
-                                                  0,
-                                                  dtype=torch.long)
-            ignore_flags = torch.zeros_like(assigned_gt_inds)
-            mlvl_assigned_gt_inds.append(assigned_gt_inds)
-            mlvl_ignore_flags.append(ignore_flags)
-
-        for gt_id in range(num_gts):
-            lvl = target_lvls[gt_id].item()
-            featmap_size = featmap_sizes[lvl]
-            stride = anchor_strides[lvl]
-            anchors = mlvl_anchors[lvl]
-            gt_bbox = gt_bboxes[gt_id, :4]
-
-            # Compute regions
-            ignore_region = calc_region(gt_bbox, r2, stride, featmap_size)
-            ctr_region = calc_region(gt_bbox, r1, stride, featmap_size)
-
-            # 2. Assign -1 to ignore flags
-            ignore_flags = anchor_ctr_inside_region_flags(
-                anchors, stride, ignore_region)
-            mlvl_assigned_gt_inds[lvl][ignore_flags] = -1
-
-            # 3. Assign gt_bboxes to pos flags
-            pos_flags = anchor_ctr_inside_region_flags(anchors, stride,
-                                                       ctr_region)
-            mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1
-
-            # 4. Assign -1 to ignore adjacent lvl
-            if lvl > 0:
-                d_lvl = lvl - 1
-                d_anchors = mlvl_anchors[d_lvl]
-                d_featmap_size = featmap_sizes[d_lvl]
-                d_stride = anchor_strides[d_lvl]
-                d_ignore_region = calc_region(gt_bbox, r2, d_stride,
-                                              d_featmap_size)
-                ignore_flags = anchor_ctr_inside_region_flags(
-                    d_anchors, d_stride, d_ignore_region)
-                mlvl_ignore_flags[d_lvl][ignore_flags] = 1
-            if lvl < num_lvls - 1:
-                u_lvl = lvl + 1
-                u_anchors = mlvl_anchors[u_lvl]
-                u_featmap_size = featmap_sizes[u_lvl]
-                u_stride = anchor_strides[u_lvl]
-                u_ignore_region = calc_region(gt_bbox, r2, u_stride,
-                                              u_featmap_size)
-                ignore_flags = anchor_ctr_inside_region_flags(
-                    u_anchors, u_stride, u_ignore_region)
-                mlvl_ignore_flags[u_lvl][ignore_flags] = 1
-
-        # 4. (cont.) Assign -1 to ignore adjacent lvl
-        for lvl in range(num_lvls):
-            ignore_flags = mlvl_ignore_flags[lvl]
-            mlvl_assigned_gt_inds[lvl][ignore_flags] = -1
-
-        # 5. Assign -1 to anchor outside of image
-        flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds)
-        flat_anchors = torch.cat(mlvl_anchors)
-        flat_valid_flags = torch.cat(mlvl_valid_flags)
-        assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] ==
-                flat_valid_flags.shape[0])
-        inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags,
-                                           img_meta['img_shape'],
-                                           allowed_border)
-        outside_flags = ~inside_flags
-        flat_assigned_gt_inds[outside_flags] = -1
-
-        if gt_labels is not None:
-            assigned_labels = torch.zeros_like(flat_assigned_gt_inds)
-            pos_flags = flat_assigned_gt_inds > 0  # positives in the flattened assignment
-            assigned_labels[pos_flags] = gt_labels[
-                flat_assigned_gt_inds[pos_flags] - 1]
-        else:
-            assigned_labels = None
-
-        return AssignResult(
-            num_gts, flat_assigned_gt_inds, None, labels=assigned_labels)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/hourglass.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/hourglass.py
deleted file mode 100644
index 3422acee35e3c6f8731cdb310f188e671b5be12f..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/hourglass.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-
-from ..builder import BACKBONES
-from ..utils import ResLayer
-from .resnet import BasicBlock
-
-
-class HourglassModule(nn.Module):
-    """Hourglass Module for HourglassNet backbone.
- - Generate module recursively and use BasicBlock as the base unit. - - Args: - depth (int): Depth of current HourglassModule. - stage_channels (list[int]): Feature channels of sub-modules in current - and follow-up HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in current and - follow-up HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - depth, - stage_channels, - stage_blocks, - norm_cfg=dict(type='BN', requires_grad=True)): - super(HourglassModule, self).__init__() - - self.depth = depth - - cur_block = stage_blocks[0] - next_block = stage_blocks[1] - - cur_channel = stage_channels[0] - next_channel = stage_channels[1] - - self.up1 = ResLayer( - BasicBlock, cur_channel, cur_channel, cur_block, norm_cfg=norm_cfg) - - self.low1 = ResLayer( - BasicBlock, - cur_channel, - next_channel, - cur_block, - stride=2, - norm_cfg=norm_cfg) - - if self.depth > 1: - self.low2 = HourglassModule(depth - 1, stage_channels[1:], - stage_blocks[1:]) - else: - self.low2 = ResLayer( - BasicBlock, - next_channel, - next_channel, - next_block, - norm_cfg=norm_cfg) - - self.low3 = ResLayer( - BasicBlock, - next_channel, - cur_channel, - cur_block, - norm_cfg=norm_cfg, - downsample_first=False) - - self.up2 = nn.Upsample(scale_factor=2) - - def forward(self, x): - """Forward function.""" - up1 = self.up1(x) - low1 = self.low1(x) - low2 = self.low2(low1) - low3 = self.low3(low2) - up2 = self.up2(low3) - return up1 + up2 - - -@BACKBONES.register_module() -class HourglassNet(nn.Module): - """HourglassNet backbone. - - Stacked Hourglass Networks for Human Pose Estimation. - More details can be found in the `paper - `_ . - - Args: - downsample_times (int): Downsample times in a HourglassModule. - num_stacks (int): Number of HourglassModule modules stacked, - 1 for Hourglass-52, 2 for Hourglass-104. - stage_channels (list[int]): Feature channel of each sub-module in a - HourglassModule. - stage_blocks (list[int]): Number of sub-modules stacked in a - HourglassModule. - feat_channel (int): Feature channel of conv after a HourglassModule. - norm_cfg (dict): Dictionary to construct and config norm layer. - - Example: - >>> from mmdet.models import HourglassNet - >>> import torch - >>> self = HourglassNet() - >>> self.eval() - >>> inputs = torch.rand(1, 3, 511, 511) - >>> level_outputs = self.forward(inputs) - >>> for level_output in level_outputs: - ... 
print(tuple(level_output.shape)) - (1, 256, 128, 128) - (1, 256, 128, 128) - """ - - def __init__(self, - downsample_times=5, - num_stacks=2, - stage_channels=(256, 256, 384, 384, 384, 512), - stage_blocks=(2, 2, 2, 2, 2, 4), - feat_channel=256, - norm_cfg=dict(type='BN', requires_grad=True)): - super(HourglassNet, self).__init__() - - self.num_stacks = num_stacks - assert self.num_stacks >= 1 - assert len(stage_channels) == len(stage_blocks) - assert len(stage_channels) > downsample_times - - cur_channel = stage_channels[0] - - self.stem = nn.Sequential( - ConvModule(3, 128, 7, padding=3, stride=2, norm_cfg=norm_cfg), - ResLayer(BasicBlock, 128, 256, 1, stride=2, norm_cfg=norm_cfg)) - - self.hourglass_modules = nn.ModuleList([ - HourglassModule(downsample_times, stage_channels, stage_blocks) - for _ in range(num_stacks) - ]) - - self.inters = ResLayer( - BasicBlock, - cur_channel, - cur_channel, - num_stacks - 1, - norm_cfg=norm_cfg) - - self.conv1x1s = nn.ModuleList([ - ConvModule( - cur_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.out_convs = nn.ModuleList([ - ConvModule( - cur_channel, feat_channel, 3, padding=1, norm_cfg=norm_cfg) - for _ in range(num_stacks) - ]) - - self.remap_convs = nn.ModuleList([ - ConvModule( - feat_channel, cur_channel, 1, norm_cfg=norm_cfg, act_cfg=None) - for _ in range(num_stacks - 1) - ]) - - self.relu = nn.ReLU(inplace=True) - - def init_weights(self, pretrained=None): - """Init module weights. - - We do nothing in this function because all modules we used - (ConvModule, BasicBlock and etc.) have default initialization, and - currently we don't provide pretrained model of HourglassNet. - - Detector's __init__() will call backbone's init_weights() with - pretrained as input, so we keep this function. 
- """ - # Training Centripetal Model needs to reset parameters for Conv2d - for m in self.modules(): - if isinstance(m, nn.Conv2d): - m.reset_parameters() - - def forward(self, x): - """Forward function.""" - inter_feat = self.stem(x) - out_feats = [] - - for ind in range(self.num_stacks): - single_hourglass = self.hourglass_modules[ind] - out_conv = self.out_convs[ind] - - hourglass_feat = single_hourglass(inter_feat) - out_feat = out_conv(hourglass_feat) - out_feats.append(out_feat) - - if ind < self.num_stacks - 1: - inter_feat = self.conv1x1s[ind]( - inter_feat) + self.remap_convs[ind]( - out_feat) - inter_feat = self.inters[ind](self.relu(inter_feat)) - - return out_feats diff --git a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/src/readers/longformer_reader.py b/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/src/readers/longformer_reader.py deleted file mode 100644 index 1e8bd7c13afb2de28b0eb2141f5b26cdbb2e417a..0000000000000000000000000000000000000000 --- a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/src/readers/longformer_reader.py +++ /dev/null @@ -1,44 +0,0 @@ -import torch -from transformers import ( - LongformerTokenizer, - LongformerForQuestionAnswering -) -from typing import List, Dict, Tuple -from dotenv import load_dotenv - -from src.readers.base_reader import Reader - - -load_dotenv() - - -class LongformerReader(Reader): - def __init__(self) -> None: - checkpoint = "valhalla/longformer-base-4096-finetuned-squadv1" - self.tokenizer = LongformerTokenizer.from_pretrained(checkpoint) - self.model = LongformerForQuestionAnswering.from_pretrained(checkpoint) - - def read(self, - query: str, - context: Dict[str, List[str]], - num_answers=5) -> List[Tuple]: - answers = [] - - for text in context['texts'][:num_answers]: - encoding = self.tokenizer(query, text, return_tensors="pt") - input_ids = encoding["input_ids"] - attention_mask = encoding["attention_mask"] - outputs = self.model(input_ids, attention_mask=attention_mask) - - start_logits = outputs.start_logits - end_logits = outputs.end_logits - all_tokens = self.tokenizer.convert_ids_to_tokens( - input_ids[0].tolist()) - answer_tokens = all_tokens[ - torch.argmax(start_logits):torch.argmax(end_logits) + 1] - answer = self.tokenizer.decode( - self.tokenizer.convert_tokens_to_ids(answer_tokens) - ) - answers.append([answer, [], []]) - - return answers diff --git a/spaces/Salesforce/EDICT/my_diffusers/training_utils.py b/spaces/Salesforce/EDICT/my_diffusers/training_utils.py deleted file mode 100644 index fa1694161fc54c7fd097abf3bcbf44c498daad4b..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/training_utils.py +++ /dev/null @@ -1,125 +0,0 @@ -import copy -import os -import random - -import numpy as np -import torch - - -def enable_full_determinism(seed: int): - """ - Helper function for reproducible behavior during distributed training. See - - https://pytorch.org/docs/stable/notes/randomness.html for pytorch - """ - # set seed first - set_seed(seed) - - # Enable PyTorch deterministic mode. 
This potentially requires either the environment
-    # variable 'CUDA_LAUNCH_BLOCKING' or 'CUBLAS_WORKSPACE_CONFIG' to be set,
-    # depending on the CUDA version, so we set them both here
-    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
-    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
-    torch.use_deterministic_algorithms(True)
-
-    # Enable CUDNN deterministic mode
-    torch.backends.cudnn.deterministic = True
-    torch.backends.cudnn.benchmark = False
-
-
-def set_seed(seed: int):
-    """
-    Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch`.
-
-    Args:
-        seed (`int`): The seed to set.
-    """
-    random.seed(seed)
-    np.random.seed(seed)
-    torch.manual_seed(seed)
-    torch.cuda.manual_seed_all(seed)
-    # ^^ safe to call this function even if cuda is not available
-
-
-class EMAModel:
-    """
-    Exponential Moving Average of model weights
-    """
-
-    def __init__(
-        self,
-        model,
-        update_after_step=0,
-        inv_gamma=1.0,
-        power=2 / 3,
-        min_value=0.0,
-        max_value=0.9999,
-        device=None,
-    ):
-        """
-        @crowsonkb's notes on EMA Warmup:
-            If gamma=1 and power=1, implements a simple average. gamma=1, power=2/3 are good values for models you plan
-            to train for a million or more steps (reaches decay factor 0.999 at 31.6K steps, 0.9999 at 1M steps),
-            gamma=1, power=3/4 for models you plan to train for less (reaches decay factor 0.999 at 10K steps, 0.9999
-            at 215.4k steps).
-        Args:
-            inv_gamma (float): Inverse multiplicative factor of EMA warmup. Default: 1.
-            power (float): Exponential factor of EMA warmup. Default: 2/3.
-            min_value (float): The minimum EMA decay rate. Default: 0.
-            max_value (float): The maximum EMA decay rate. Default: 0.9999.
-        """
-
-        self.averaged_model = copy.deepcopy(model).eval()
-        self.averaged_model.requires_grad_(False)
-
-        self.update_after_step = update_after_step
-        self.inv_gamma = inv_gamma
-        self.power = power
-        self.min_value = min_value
-        self.max_value = max_value
-
-        if device is not None:
-            self.averaged_model = self.averaged_model.to(device=device)
-
-        self.decay = 0.0
-        self.optimization_step = 0
-
-    def get_decay(self, optimization_step):
-        """
-        Compute the decay factor for the exponential moving average.
- """ - step = max(0, optimization_step - self.update_after_step - 1) - value = 1 - (1 + step / self.inv_gamma) ** -self.power - - if step <= 0: - return 0.0 - - return max(self.min_value, min(value, self.max_value)) - - @torch.no_grad() - def step(self, new_model): - ema_state_dict = {} - ema_params = self.averaged_model.state_dict() - - self.decay = self.get_decay(self.optimization_step) - - for key, param in new_model.named_parameters(): - if isinstance(param, dict): - continue - try: - ema_param = ema_params[key] - except KeyError: - ema_param = param.float().clone() if param.ndim == 1 else copy.deepcopy(param) - ema_params[key] = ema_param - - if not param.requires_grad: - ema_params[key].copy_(param.to(dtype=ema_param.dtype).data) - ema_param = ema_params[key] - else: - ema_param.mul_(self.decay) - ema_param.add_(param.data.to(dtype=ema_param.dtype), alpha=1 - self.decay) - - ema_state_dict[key] = ema_param - - for key, param in new_model.named_buffers(): - ema_state_dict[key] = param - - self.averaged_model.load_state_dict(ema_state_dict, strict=False) - self.optimization_step += 1 diff --git a/spaces/SeViLA/SeViLA/lavis/common/gradcam.py b/spaces/SeViLA/SeViLA/lavis/common/gradcam.py deleted file mode 100644 index d53a5254d4b319eaf2cbfbd081b0ca8e38c5c7a0..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/common/gradcam.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np -from matplotlib import pyplot as plt -from scipy.ndimage import filters -from skimage import transform as skimage_transform - - -def getAttMap(img, attMap, blur=True, overlap=True): - attMap -= attMap.min() - if attMap.max() > 0: - attMap /= attMap.max() - attMap = skimage_transform.resize(attMap, (img.shape[:2]), order=3, mode="constant") - if blur: - attMap = filters.gaussian_filter(attMap, 0.02 * max(img.shape[:2])) - attMap -= attMap.min() - attMap /= attMap.max() - cmap = plt.get_cmap("jet") - attMapV = cmap(attMap) - attMapV = np.delete(attMapV, 3, 2) - if overlap: - attMap = ( - 1 * (1 - attMap**0.7).reshape(attMap.shape + (1,)) * img - + (attMap**0.7).reshape(attMap.shape + (1,)) * attMapV - ) - return attMap diff --git a/spaces/ServerX/PorcoDiaz/app.py b/spaces/ServerX/PorcoDiaz/app.py deleted file mode 100644 index eec14d07438487500d64e550997e6cb90c133917..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/app.py +++ /dev/null @@ -1,3150 +0,0 @@ -import os, sys -os.system("pip install pyworld") # ==0.3.3 - -now_dir = os.getcwd() -sys.path.append(now_dir) -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' -os.environ["OPENBLAS_NUM_THREADS"] = "1" -os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1" - -# Download models -shell_script = './tools/dlmodels.sh' -os.system(f'chmod +x {shell_script}') -os.system('apt install git-lfs') -os.system('git lfs install') -os.system('apt-get -y install aria2') -os.system('aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d . 
-try:
-    return_code = os.system(shell_script)
-    if return_code == 0:
-        print("Shell script executed successfully.")
-    else:
-        print(f"Shell script failed with return code {return_code}")
-except Exception as e:
-    print(f"An error occurred: {e}")
-
-
-import logging
-import shutil
-import threading
-import lib.globals.globals as rvc_globals
-from LazyImport import lazyload
-import mdx
-from mdx_processing_script import get_model_list, id_to_ptm, prepare_mdx, run_mdx
-math = lazyload('math')
-import traceback
-import warnings
-tensorlowest = lazyload('tensorlowest')
-from random import shuffle
-from subprocess import Popen
-from time import sleep
-import json
-import pathlib
-
-import fairseq
-logging.getLogger("faiss").setLevel(logging.WARNING)
-import faiss
-gr = lazyload("gradio")
-np = lazyload("numpy")
-torch = lazyload('torch')
-re = lazyload('regex')
-SF = lazyload("soundfile")
-SFWrite = SF.write
-from dotenv import load_dotenv
-from sklearn.cluster import MiniBatchKMeans
-import datetime
-
-from glob import glob1
-import signal
-from signal import SIGTERM
-import librosa
-
-from configs.config import Config
-from i18n import I18nAuto
-from infer.lib.train.process_ckpt import (
-    change_info,
-    extract_small_model,
-    merge,
-    show_info,
-)
-#from infer.modules.uvr5.modules import uvr
-from infer.modules.vc.modules import VC
-from infer.modules.vc.utils import *
-from infer.modules.vc.pipeline import Pipeline
-ffmpeg = lazyload('ffmpeg')
-import nltk
-nltk.download('punkt', quiet=True)
-from nltk.tokenize import sent_tokenize
-from bark import SAMPLE_RATE
-
-import easy_infer
-import audioEffects
-from infer.lib.csvutil import CSVutil
-
-from lib.infer_pack.models import (
-    SynthesizerTrnMs256NSFsid,
-    SynthesizerTrnMs256NSFsid_nono,
-    SynthesizerTrnMs768NSFsid,
-    SynthesizerTrnMs768NSFsid_nono,
-)
-from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-from infer_uvr5 import _audio_pre_, _audio_pre_new
-from MDXNet import MDXNetDereverb
-from infer.lib.audio import load_audio
-
-import time
-import csv
-
-from shlex import quote as SQuote
-
-
-RQuote = lambda val: SQuote(str(val))
-
-tmp = os.path.join(now_dir, "TEMP")
-runtime_dir = os.path.join(now_dir, "runtime/Lib/site-packages")
-directories = ['logs', 'audios', 'datasets', 'weights', 'audio-others', 'audio-outputs']
-
-shutil.rmtree(tmp, ignore_errors=True)
-shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True)
-shutil.rmtree("%s/runtime/Lib/site-packages/uvr5_pack" % (now_dir), ignore_errors=True)
-
-os.makedirs(tmp, exist_ok=True)
-for folder in directories:
-    os.makedirs(os.path.join(now_dir, folder), exist_ok=True)
-
-os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True)
-os.makedirs(os.path.join(now_dir, "assets/weights"), exist_ok=True)
-os.environ["TEMP"] = tmp
-warnings.filterwarnings("ignore")
-torch.manual_seed(114514)
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-logger = logging.getLogger(__name__)
-
-
-if not os.path.isdir("csvdb/"):
-    os.makedirs("csvdb")
-    frmnt, stp = open("csvdb/formanting.csv", "w"), open("csvdb/stop.csv", "w")
-    frmnt.close()
-    stp.close()
-
-global DoFormant, Quefrency, Timbre
-
-try:
-    DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting")
-    DoFormant = (
-        lambda DoFormant: True
-        if DoFormant.lower() == "true"
-        else (False if DoFormant.lower() == "false" else DoFormant)
-    )(DoFormant)
-except (ValueError, TypeError, IndexError):
-    DoFormant, Quefrency, Timbre = False, 1.0, 1.0
-    CSVutil("csvdb/formanting.csv", "w+", "formanting", DoFormant, Quefrency, Timbre)
-
-load_dotenv()
-config = Config()
-vc = VC(config)
-
-if config.dml == True:
-
-    def forward_dml(ctx, x, scale):
-        ctx.scale = scale
-        res = x.clone().detach()
-        return res
-
-    fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml
-
-i18n = I18nAuto()
-i18n.print()
-# Check whether an NVIDIA GPU is available for training and accelerated inference
-ngpu = torch.cuda.device_count()
-gpu_infos = []
-mem = []
-if_gpu_ok = False
-
-isinterrupted = 0
-
-
-if torch.cuda.is_available() or ngpu != 0:
-    for i in range(ngpu):
-        gpu_name = torch.cuda.get_device_name(i)
-        if any(
-            value in gpu_name.upper()
-            for value in [
-                "10",
-                "16",
-                "20",
-                "30",
-                "40",
-                "A2",
-                "A3",
-                "A4",
-                "P4",
-                "A50",
-                "500",
-                "A60",
-                "70",
-                "80",
-                "90",
-                "M4",
-                "T4",
-                "TITAN",
-            ]
-        ):
-            # A10#A100#V100#A40#P40#M40#K80#A4500
-            if_gpu_ok = True  # At least one usable NVIDIA GPU
-            gpu_infos.append("%s\t%s" % (i, gpu_name))
-            mem.append(
-                int(
-                    torch.cuda.get_device_properties(i).total_memory
-                    / 1024
-                    / 1024
-                    / 1024
-                    + 0.4
-                )
-            )
-if if_gpu_ok and len(gpu_infos) > 0:
-    gpu_info = "\n".join(gpu_infos)
-    default_batch_size = min(mem) // 2
-else:
-    gpu_info = "Unfortunately, there is no compatible GPU available to support your training."
-    default_batch_size = 1
-gpus = "-".join([i[0] for i in gpu_infos])
-
-class ToolButton(gr.Button, gr.components.FormComponent):
-    """Small button with a single emoji as text, fits inside gradio forms"""
-
-    def __init__(self, **kwargs):
-        super().__init__(variant="tool", **kwargs)
-
-    def get_block_name(self):
-        return "button"
-
-
-hubert_model = None
-weight_root = os.getenv("weight_root")
-weight_uvr5_root = os.getenv("weight_uvr5_root")
-index_root = os.getenv("index_root")
-datasets_root = "datasets"
-fshift_root = "formantshiftcfg"
-audio_root = "audios"
-audio_others_root = "audio-others"
-
-sup_audioext = {'wav', 'mp3', 'flac', 'ogg', 'opus',
-                'm4a', 'mp4', 'aac', 'alac', 'wma',
-                'aiff', 'webm', 'ac3'}
-
-names = [os.path.join(root, file)
-         for root, _, files in os.walk(weight_root)
-         for file in files
-         if file.endswith((".pth", ".onnx"))]
-
-indexes_list = [os.path.join(root, name)
-                for root, _, files in os.walk(index_root, topdown=False)
-                for name in files
-                if name.endswith(".index") and "trained" not in name]
-
-audio_paths = [os.path.join(root, name)
-               for root, _, files in os.walk(audio_root, topdown=False)
-               for name in files
-               if name.endswith(tuple(sup_audioext))]
-
-audio_others_paths = [os.path.join(root, name)
-                      for root, _, files in os.walk(audio_others_root, topdown=False)
-                      for name in files
-                      if name.endswith(tuple(sup_audioext))]
-
-uvr5_names = [name.replace(".pth", "")
-              for name in os.listdir(weight_uvr5_root)
-              if name.endswith(".pth") or "onnx" in name]
-
-
-check_for_name = lambda: sorted(names)[0] if names else ''
-
-datasets = []
-for foldername in os.listdir(os.path.join(now_dir, datasets_root)):
-    if "." not in foldername:
-        datasets.append(os.path.join(easy_infer.find_folder_parent(".", "pretrained"), "datasets", foldername))
-
-def get_dataset():
-    if len(datasets) > 0:
-        return sorted(datasets)[0]
-    else:
-        return ''
-
-def update_model_choices(select_value):
-    model_ids = get_model_list()
-    model_ids_list = list(model_ids)
-    if select_value == "VR":
-        return {"choices": uvr5_names, "__type__": "update"}
-    elif select_value == "MDX":
-        return {"choices": model_ids_list, "__type__": "update"}
-
-set_bark_voice = easy_infer.get_bark_voice()
-set_edge_voice = easy_infer.get_edge_voice()
-
-def update_tts_methods_voice(select_value):
-    #["Edge-tts", "RVG-tts", "Bark-tts"]
-    if select_value == "Edge-tts":
-        return {"choices": set_edge_voice, "value": "", "__type__": "update"}
-    elif select_value == "Bark-tts":
-        return {"choices": set_bark_voice, "value": "", "__type__": "update"}
-
-
-def update_dataset_list(name):
-    new_datasets = []
-    for foldername in os.listdir(os.path.join(now_dir, datasets_root)):
-        if "." not in foldername:
-            new_datasets.append(os.path.join(easy_infer.find_folder_parent(".", "pretrained"), "datasets", foldername))
-    return gr.Dropdown.update(choices=new_datasets)
-
-def get_indexes():
-    indexes_list = [
-        os.path.join(dirpath, filename)
-        for dirpath, _, filenames in os.walk(index_root)
-        for filename in filenames
-        if filename.endswith(".index") and "trained" not in filename
-    ]
-
-    return indexes_list if indexes_list else ''
-
-def get_fshift_presets():
-    fshift_presets_list = [
-        os.path.join(dirpath, filename)
-        for dirpath, _, filenames in os.walk(fshift_root)
-        for filename in filenames
-        if filename.endswith(".txt")
-    ]
-
-    return fshift_presets_list if fshift_presets_list else ''
-
-import soundfile as sf
-
-def generate_output_path(output_folder, base_name, extension):
-    # Generate a unique name for the output file
-    index = 1
-    while True:
-        output_path = os.path.join(output_folder, f"{base_name}_{index}.{extension}")
-        if not os.path.exists(output_path):
-            return output_path
-        index += 1
-
-def combine_and_save_audios(audio1_path, audio2_path, output_path, volume_factor_audio1, volume_factor_audio2):
-    audio1, sr1 = librosa.load(audio1_path, sr=None)
-    audio2, sr2 = librosa.load(audio2_path, sr=None)
-
-    # Align the sample rates (resample the lower-rate audio up to the higher rate)
-    target_sr = max(sr1, sr2)
-    if sr1 != sr2:
-        if sr1 > sr2:
-            audio2 = librosa.resample(audio2, orig_sr=sr2, target_sr=sr1)
-        else:
-            audio1 = librosa.resample(audio1, orig_sr=sr1, target_sr=sr2)
-
-    # Trim both audios to the same length
-    target_length = min(len(audio1), len(audio2))
-    audio1 = librosa.util.fix_length(audio1, size=target_length)
-    audio2 = librosa.util.fix_length(audio2, size=target_length)
-
-    # Scale each audio by its gain factor
-    if volume_factor_audio1 != 1.0:
-        audio1 *= volume_factor_audio1
-    if volume_factor_audio2 != 1.0:
-        audio2 *= volume_factor_audio2
-
-    # Mix the two audios by summing them, and write at the common sample rate
-    combined_audio = audio1 + audio2
-
-    sf.write(output_path, combined_audio, target_sr)
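-# Illustrative usage of the two helpers above (an aside, not part of the
-# original file; the input file names here are hypothetical):
-#
-#     out = generate_output_path("audio-outputs", "combined_audio", "wav")
-#     combine_and_save_audios("vocals.wav", "instrumental.wav", out, 1.0, 0.8)
-#
-# generate_output_path probes combined_audio_1.wav, combined_audio_2.wav, ...
-# until it finds an unused name, so repeated runs never overwrite older mixes.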
-
-# Conversion function wired to the button
-def audio_combined(audio1_path, audio2_path, volume_factor_audio1=1.0, volume_factor_audio2=1.0, reverb_enabled=False, compressor_enabled=False, noise_gate_enabled=False):
-    output_folder = os.path.join(now_dir, "audio-outputs")
-    os.makedirs(output_folder, exist_ok=True)
-
-    # Generate unique names for the output files
-    base_name = "combined_audio"
-    extension = "wav"
-    output_path = generate_output_path(output_folder, base_name, extension)
-    print(reverb_enabled)
-    print(compressor_enabled)
-    print(noise_gate_enabled)
-
-    if reverb_enabled or compressor_enabled or noise_gate_enabled:
-        # Process the second audio with the enabled effects
-        base_name = "effect_audio"
-        output_path = generate_output_path(output_folder, base_name, extension)
-        processed_audio_path = audioEffects.process_audio(audio2_path, output_path, reverb_enabled, compressor_enabled, noise_gate_enabled)
-        base_name = "combined_audio"
-        output_path = generate_output_path(output_folder, base_name, extension)
-        # Combine the processed audio with the first audio
-        combine_and_save_audios(audio1_path, processed_audio_path, output_path, volume_factor_audio1, volume_factor_audio2)
-
-        return i18n("Conversion complete!"), output_path
-    else:
-        base_name = "combined_audio"
-        output_path = generate_output_path(output_folder, base_name, extension)
-        # No effects enabled, combine the raw audios directly
-        combine_and_save_audios(audio1_path, audio2_path, output_path, volume_factor_audio1, volume_factor_audio2)
-
-        return i18n("Conversion complete!"), output_path
-
-
-def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0, architecture):
-    infos = []
-    if architecture == "VR":
-        try:
-            inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]]
-            usable_files = [os.path.join(inp_root, file)
-                            for file in os.listdir(inp_root)
-                            if file.endswith(tuple(sup_audioext))]
-
-            pre_fun = MDXNetDereverb(15) if model_name == "onnx_dereverb_By_FoxJoy" else (_audio_pre_ if "DeEcho" not in model_name else _audio_pre_new)(
-                agg=int(agg),
-                model_path=os.path.join(weight_uvr5_root, model_name + ".pth"),
-                device=config.device,
-                is_half=config.is_half,
-            )
-
-            try:
-                if paths is not None:
-                    paths = [path.name for path in paths]
-                else:
-                    paths = usable_files
-            except:
-                traceback.print_exc()
-                paths = usable_files
-            print(paths)
-            for path in paths:
-                inp_path = os.path.join(inp_root, path)
-                need_reformat, done = 1, 0
-
-                try:
-                    info = ffmpeg.probe(inp_path, cmd="ffprobe")
-                    if info["streams"][0]["channels"] == 2 and info["streams"][0]["sample_rate"] == "44100":
-                        need_reformat = 0
-                        pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0)
-                        done = 1
-                except:
-                    traceback.print_exc()
-
-                if need_reformat:
-                    tmp_path = f"{tmp}/{os.path.basename(RQuote(inp_path))}.reformatted.wav"
-                    os.system(f"ffmpeg -i {RQuote(inp_path)} -vn -acodec pcm_s16le -ac 2 -ar 44100 {RQuote(tmp_path)} -y")
-                    inp_path = tmp_path
-
-                try:
-                    if not done:
-                        pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0)
-                    infos.append(f"{os.path.basename(inp_path)}->Success")
-                    yield "\n".join(infos)
-                except:
-                    infos.append(f"{os.path.basename(inp_path)}->{traceback.format_exc()}")
-                    yield "\n".join(infos)
-        except:
-            infos.append(traceback.format_exc())
-            yield "\n".join(infos)
-        finally:
-            try:
-                if model_name == 
"onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - - del pre_fun - except: traceback.print_exc() - - print("clean_empty_cache") - - if torch.cuda.is_available(): torch.cuda.empty_cache() - - yield "\n".join(infos) - elif architecture == "MDX": - try: - infos.append(i18n("Starting audio conversion... (This might take a moment)")) - yield "\n".join(infos) - inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]] - - usable_files = [os.path.join(inp_root, file) - for file in os.listdir(inp_root) - if file.endswith(tuple(sup_audioext))] - try: - if paths != None: - paths = [path.name for path in paths] - else: - paths = usable_files - - except: - traceback.print_exc() - paths = usable_files - print(paths) - invert=True - denoise=True - use_custom_parameter=True - dim_f=3072 - dim_t=256 - n_fft=7680 - use_custom_compensation=True - compensation=1.025 - suffix = "Vocals_custom" #@param ["Vocals", "Drums", "Bass", "Other"]{allow-input: true} - suffix_invert = "Instrumental_custom" #@param ["Instrumental", "Drumless", "Bassless", "Instruments"]{allow-input: true} - print_settings = True # @param{type:"boolean"} - onnx = id_to_ptm(model_name) - compensation = compensation if use_custom_compensation or use_custom_parameter else None - mdx_model = prepare_mdx(onnx,use_custom_parameter, dim_f, dim_t, n_fft, compensation=compensation) - - - for path in paths: - #inp_path = os.path.join(inp_root, path) - suffix_naming = suffix if use_custom_parameter else None - diff_suffix_naming = suffix_invert if use_custom_parameter else None - run_mdx(onnx, mdx_model, path, format0, diff=invert,suffix=suffix_naming,diff_suffix=diff_suffix_naming,denoise=denoise) - - if print_settings: - print() - print('[MDX-Net_Colab settings used]') - print(f'Model used: {onnx}') - print(f'Model MD5: {mdx.MDX.get_hash(onnx)}') - print(f'Model parameters:') - print(f' -dim_f: {mdx_model.dim_f}') - print(f' -dim_t: {mdx_model.dim_t}') - print(f' -n_fft: {mdx_model.n_fft}') - print(f' -compensation: {mdx_model.compensation}') - print() - print('[Input file]') - print('filename(s): ') - for filename in paths: - print(f' -{filename}') - infos.append(f"{os.path.basename(filename)}->Success") - yield "\n".join(infos) - except: - infos.append(traceback.format_exc()) - yield "\n".join(infos) - finally: - try: - del mdx_model - except: traceback.print_exc() - - print("clean_empty_cache") - - if torch.cuda.is_available(): torch.cuda.empty_cache() - - - - - -def change_choices(): - names = [os.path.join(root, file) - for root, _, files in os.walk(weight_root) - for file in files - if file.endswith((".pth", ".onnx"))] - indexes_list = [os.path.join(root, name) for root, _, files in os.walk(index_root, topdown=False) for name in files if name.endswith(".index") and "trained" not in name] - audio_paths = [os.path.join(audio_root, file) for file in os.listdir(os.path.join(now_dir, "audios"))] - - - return ( - {"choices": sorted(names), "__type__": "update"}, - {"choices": sorted(indexes_list), "__type__": "update"}, - {"choices": sorted(audio_paths), "__type__": "update"} - ) -def change_choices2(): - names = [os.path.join(root, file) - for root, _, files in os.walk(weight_root) - for file in files - if file.endswith((".pth", ".onnx"))] - indexes_list = [os.path.join(root, name) for root, _, files in os.walk(index_root, topdown=False) for name in files if name.endswith(".index") and "trained" 
not in name] - - - return ( - {"choices": sorted(names), "__type__": "update"}, - {"choices": sorted(indexes_list), "__type__": "update"}, - ) -def change_choices3(): - - audio_paths = [os.path.join(audio_root, file) for file in os.listdir(os.path.join(now_dir, "audios"))] - audio_others_paths = [os.path.join(audio_others_root, file) for file in os.listdir(os.path.join(now_dir, "audio-others"))] - - - return ( - {"choices": sorted(audio_others_paths), "__type__": "update"}, - {"choices": sorted(audio_paths), "__type__": "update"} - ) - -def clean(): - return {"value": "", "__type__": "update"} -def export_onnx(): - from infer.modules.onnx.export import export_onnx as eo - - eo() - -sr_dict = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -def if_done(done, p): - while 1: - if p.poll() is None: - sleep(0.5) - else: - break - done[0] = True - - -def if_done_multi(done, ps): - while 1: - # poll==None代表进程未结束 - # 只要有一个进程未结束都不停 - flag = 1 - for p in ps: - if p.poll() is None: - flag = 0 - sleep(0.5) - break - if flag == 1: - break - done[0] = True - -def formant_enabled( - cbox, qfrency, tmbre, frmntapply, formantpreset, formant_refresh_button -): - if cbox: - DoFormant = True - CSVutil("csvdb/formanting.csv", "w+", "formanting", DoFormant, qfrency, tmbre) - - # print(f"is checked? - {cbox}\ngot {DoFormant}") - - return ( - {"value": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - ) - - else: - DoFormant = False - CSVutil("csvdb/formanting.csv", "w+", "formanting", DoFormant, qfrency, tmbre) - - # print(f"is checked? - {cbox}\ngot {DoFormant}") - return ( - {"value": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - - -def formant_apply(qfrency, tmbre): - Quefrency = qfrency - Timbre = tmbre - DoFormant = True - CSVutil("csvdb/formanting.csv", "w+", "formanting", DoFormant, qfrency, tmbre) - - return ( - {"value": Quefrency, "__type__": "update"}, - {"value": Timbre, "__type__": "update"}, - ) - -def update_fshift_presets(preset, qfrency, tmbre): - - if preset: - with open(preset, 'r') as p: - content = p.readlines() - qfrency, tmbre = content[0].strip(), content[1] - - formant_apply(qfrency, tmbre) - else: - qfrency, tmbre = preset_apply(preset, qfrency, tmbre) - - return ( - {"choices": get_fshift_presets(), "__type__": "update"}, - {"value": qfrency, "__type__": "update"}, - {"value": tmbre, "__type__": "update"}, - ) - -def preprocess_dataset(trainset_dir, exp_dir, sr, n_p): - sr = sr_dict[sr] - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w") - f.close() - per = 3.0 if config.is_half else 3.7 - cmd = '"%s" infer/modules/train/preprocess.py "%s" %s %s "%s/logs/%s" %s %.1f' % ( - config.python_cmd, - trainset_dir, - sr, - n_p, - now_dir, - exp_dir, - config.noparallel, - per, - ) - logger.info(cmd) - p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with 
open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - logger.info(log) - yield log - - -def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19, echl, gpus_rmvpe): - gpus = gpus.split("-") - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w") - f.close() - if if_f0: - if f0method != "rmvpe_gpu": - cmd = ( - '"%s" infer/modules/train/extract/extract_f0_print.py "%s/logs/%s" %s %s' - % ( - config.python_cmd, - now_dir, - exp_dir, - n_p, - f0method, - echl, - ) - ) - logger.info(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , stdin=PIPE, stdout=PIPE,stderr=PIPE - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - else: - if gpus_rmvpe != "-": - gpus_rmvpe = gpus_rmvpe.split("-") - leng = len(gpus_rmvpe) - ps = [] - for idx, n_g in enumerate(gpus_rmvpe): - cmd = ( - '"%s" infer/modules/train/extract/extract_f0_rmvpe.py %s %s %s "%s/logs/%s" %s ' - % ( - config.python_cmd, - leng, - idx, - n_g, - now_dir, - exp_dir, - config.is_half, - ) - ) - logger.info(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done_multi, # - args=( - done, - ps, - ), - ).start() - else: - cmd = ( - config.python_cmd - + ' infer/modules/train/extract/extract_f0_rmvpe_dml.py "%s/logs/%s" ' - % ( - now_dir, - exp_dir, - ) - ) - logger.info(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - p.wait() - done = [True] - while 1: - with open( - "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r" - ) as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - logger.info(log) - yield log - ####对不同part分别开多进程 - """ - n_part=int(sys.argv[1]) - i_part=int(sys.argv[2]) - i_gpu=sys.argv[3] - exp_dir=sys.argv[4] - os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu) - """ - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = ( - '"%s" infer/modules/train/extract_feature_print.py %s %s %s %s "%s/logs/%s" %s' - % ( - config.python_cmd, - config.device, - leng, - idx, - n_g, - now_dir, - exp_dir, - version19, - ) - ) - logger.info(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done_multi, - args=( - done, - ps, - ), - ).start() - while 1: - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0]: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - logger.info(log) - yield log - -def get_pretrained_models(path_str, f0_str, sr2): - if_pretrained_generator_exist = os.access( - "assets/pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - if_pretrained_discriminator_exist = os.access( - "assets/pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK - ) - if not 
if_pretrained_generator_exist: - logger.warn( - "assets/pretrained%s/%sG%s.pth not exist, will not use pretrained model", - path_str, - f0_str, - sr2, - ) - if not if_pretrained_discriminator_exist: - logger.warn( - "assets/pretrained%s/%sD%s.pth not exist, will not use pretrained model", - path_str, - f0_str, - sr2, - ) - return ( - "assets/pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_generator_exist - else "", - "assets/pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2) - if if_pretrained_discriminator_exist - else "", - ) - -def change_sr2(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - f0_str = "f0" if if_f0_3 else "" - return get_pretrained_models(path_str, f0_str, sr2) - - -def change_version19(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - if sr2 == "32k" and version19 == "v1": - sr2 = "40k" - to_return_sr2 = ( - {"choices": ["40k", "48k"], "__type__": "update", "value": sr2} - if version19 == "v1" - else {"choices": ["40k", "48k", "32k"], "__type__": "update", "value": sr2} - ) - f0_str = "f0" if if_f0_3 else "" - return ( - *get_pretrained_models(path_str, f0_str, sr2), - to_return_sr2, - ) - - -def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15 - path_str = "" if version19 == "v1" else "_v2" - return ( - {"visible": if_f0_3, "__type__": "update"}, - *get_pretrained_models(path_str, "f0", sr2), - ) - - -global log_interval - -def set_log_interval(exp_dir, batch_size12): - log_interval = 1 - folder_path = os.path.join(exp_dir, "1_16k_wavs") - - if os.path.isdir(folder_path): - wav_files_num = len(glob1(folder_path,"*.wav")) - - if wav_files_num > 0: - log_interval = math.ceil(wav_files_num / batch_size12) - if log_interval > 1: - log_interval += 1 - - return log_interval - -global PID, PROCESS - -def click_train( - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - CSVutil("csvdb/stop.csv", "w+", "formanting", False) - # 生成filelist - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if if_f0_3: - f0_dir = "%s/2a_f0" % (exp_dir) - f0nsf_dir = "%s/2b-f0nsf" % (exp_dir) - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - 
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % exp_dir, "w") as f: - f.write("\n".join(opt)) - logger.debug("Write filelist done") - # 生成config#无需生成config - # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0" - logger.info("Use gpus: %s", str(gpus16)) - if pretrained_G14 == "": - logger.info("No pretrained Generator") - if pretrained_D15 == "": - logger.info("No pretrained Discriminator") - if version19 == "v1" or sr2 == "40k": - config_path = "v1/%s.json" % sr2 - else: - config_path = "v2/%s.json" % sr2 - config_save_path = os.path.join(exp_dir, "config.json") - if not pathlib.Path(config_save_path).exists(): - with open(config_save_path, "w", encoding="utf-8") as f: - json.dump( - config.json_config[config_path], - f, - ensure_ascii=False, - indent=4, - sort_keys=True, - ) - f.write("\n") - if gpus16: - cmd = ( - '"%s" infer/modules/train/train.py -e "%s" -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s' - % ( - config.python_cmd, - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - ) - ) - else: - cmd = ( - '"%s" infer/modules/train/train.py -e "%s" -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s' - % ( - config.python_cmd, - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "", - "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "", - 1 if if_save_latest13 == True else 0, - 1 if if_cache_gpu17 == True else 0, - 1 if if_save_every_weights18 == True else 0, - version19, - ) - ) - logger.info(cmd) - global p - p = Popen(cmd, shell=True, cwd=now_dir) - global PID - PID = p.pid - - p.wait() - - return i18n("Training is done, check train.log"), {"visible": False, "__type__": "update"}, {"visible": True, "__type__": "update"} - - -def train_index(exp_dir1, version19): - # exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - exp_dir = "logs/%s" % (exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if not os.path.exists(feature_dir): - return "请先进行特征提取!" - listdir_res = list(os.listdir(feature_dir)) - if len(listdir_res) == 0: - return "请先进行特征提取!" - infos = [] - npys = [] - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - if big_npy.shape[0] > 2e5: - infos.append("Trying doing kmeans %s shape to 10k centers." 
% big_npy.shape[0]) - yield "\n".join(infos) - try: - big_npy = ( - MiniBatchKMeans( - n_clusters=10000, - verbose=True, - batch_size=256 * config.n_cpu, - compute_labels=False, - init="random", - ) - .fit(big_npy) - .cluster_centers_ - ) - except: - info = traceback.format_exc() - logger.info(info) - infos.append(info) - yield "\n".join(infos) - - np.save("%s/total_fea.npy" % exp_dir, big_npy) - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - infos.append("%s,%s" % (big_npy.shape, n_ivf)) - yield "\n".join(infos) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf) - infos.append("training") - yield "\n".join(infos) - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - - infos.append("adding") - yield "\n".join(infos) - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - infos.append( - "Successful Index Construction,added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19)) - yield "\n".join(infos) - -def change_info_(ckpt_path): - if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")): - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - try: - with open( - ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r" - ) as f: - info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1]) - sr, f0 = info["sample_rate"], info["if_f0"] - version = "v2" if ("version" in info and info["version"] == "v2") else "v1" - return sr, str(f0), version - except: - traceback.print_exc() - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - -F0GPUVisible = config.dml == False - - -def change_f0_method(f0method8): - if f0method8 == "rmvpe_gpu": - visible = F0GPUVisible - else: - visible = False - return {"visible": visible, "__type__": "update"} - - - -def export_onnx(model_path, exported_path): - device = torch.device("cpu") - checkpoint = torch.load(model_path, map_location=device) - vec_channels = 256 if checkpoint.get("version", "v1") == "v1" else 768 - - test_inputs = { - "phone": torch.rand(1, 200, vec_channels), - "phone_lengths": torch.LongTensor([200]), - "pitch": torch.randint(5, 255, (1, 200)), - "pitchf": torch.rand(1, 200), - "ds": torch.zeros(1).long(), - "rnd": torch.rand(1, 192, 200) - } - - checkpoint["config"][-3] = checkpoint["weight"]["emb_g.weight"].shape[0] - net_g = SynthesizerTrnMsNSFsidM(*checkpoint["config"], is_half=False, version=checkpoint.get("version", "v1")) - - net_g.load_state_dict(checkpoint["weight"], strict=False) - net_g = net_g.to(device) - - dynamic_axes = {"phone": [1], "pitch": [1], "pitchf": [1], "rnd": [2]} - - torch.onnx.export( - net_g, - tuple(value.to(device) for value in test_inputs.values()), - exported_path, - dynamic_axes=dynamic_axes, - do_constant_folding=False, - opset_version=13, - verbose=False, - 
input_names=list(test_inputs.keys()), - output_names=["audio"], - ) - return "Finished" - - - -import re as regex -import scipy.io.wavfile as wavfile - -cli_current_page = "HOME" - - -def cli_split_command(com): - exp = r'(?:(?<=\s)|^)"(.*?)"(?=\s|$)|(\S+)' - split_array = regex.findall(exp, com) - split_array = [group[0] if group[0] else group[1] for group in split_array] - return split_array - - -def execute_generator_function(genObject): - for _ in genObject: - pass - - -def cli_infer(com): - # get VC first - com = cli_split_command(com) - model_name = com[0] - source_audio_path = com[1] - output_file_name = com[2] - feature_index_path = com[3] - f0_file = None # Not Implemented Yet - - # Get parameters for inference - speaker_id = int(com[4]) - transposition = float(com[5]) - f0_method = com[6] - crepe_hop_length = int(com[7]) - harvest_median_filter = int(com[8]) - resample = int(com[9]) - mix = float(com[10]) - feature_ratio = float(com[11]) - protection_amnt = float(com[12]) - protect1 = 0.5 - - if com[14] == "False" or com[14] == "false": - DoFormant = False - Quefrency = 0.0 - Timbre = 0.0 - CSVutil( - "csvdb/formanting.csv", "w+", "formanting", DoFormant, Quefrency, Timbre - ) - - else: - DoFormant = True - Quefrency = float(com[15]) - Timbre = float(com[16]) - CSVutil( - "csvdb/formanting.csv", "w+", "formanting", DoFormant, Quefrency, Timbre - ) - - print("Mangio-RVC-Fork Infer-CLI: Starting the inference...") - vc_data = vc.get_vc(model_name, protection_amnt, protect1) - print(vc_data) - print("Mangio-RVC-Fork Infer-CLI: Performing inference...") - conversion_data = vc.vc_single( - speaker_id, - source_audio_path, - source_audio_path, - transposition, - f0_file, - f0_method, - feature_index_path, - feature_index_path, - feature_ratio, - harvest_median_filter, - resample, - mix, - protection_amnt, - crepe_hop_length, - ) - if "Success." in conversion_data[0]: - print( - "Mangio-RVC-Fork Infer-CLI: Inference succeeded. Writing to %s/%s..." - % ("audio-outputs", output_file_name) - ) - wavfile.write( - "%s/%s" % ("audio-outputs", output_file_name), - conversion_data[1][0], - conversion_data[1][1], - ) - print( - "Mangio-RVC-Fork Infer-CLI: Finished! Saved output to %s/%s" - % ("audio-outputs", output_file_name) - ) - else: - print("Mangio-RVC-Fork Infer-CLI: Inference failed. 
Here's the traceback: ") - print(conversion_data[0]) - - -def cli_pre_process(com): - com = cli_split_command(com) - model_name = com[0] - trainset_directory = com[1] - sample_rate = com[2] - num_processes = int(com[3]) - - print("Mangio-RVC-Fork Pre-process: Starting...") - generator = preprocess_dataset( - trainset_directory, model_name, sample_rate, num_processes - ) - execute_generator_function(generator) - print("Mangio-RVC-Fork Pre-process: Finished") - - -def cli_extract_feature(com): - com = cli_split_command(com) - model_name = com[0] - gpus = com[1] - num_processes = int(com[2]) - has_pitch_guidance = True if (int(com[3]) == 1) else False - f0_method = com[4] - crepe_hop_length = int(com[5]) - version = com[6] # v1 or v2 - - print("Mangio-RVC-CLI: Extract Feature Has Pitch: " + str(has_pitch_guidance)) - print("Mangio-RVC-CLI: Extract Feature Version: " + str(version)) - print("Mangio-RVC-Fork Feature Extraction: Starting...") - generator = extract_f0_feature( - gpus, - num_processes, - f0_method, - has_pitch_guidance, - model_name, - version, - crepe_hop_length, - ) - execute_generator_function(generator) - print("Mangio-RVC-Fork Feature Extraction: Finished") - - -def cli_train(com): - com = cli_split_command(com) - model_name = com[0] - sample_rate = com[1] - has_pitch_guidance = True if (int(com[2]) == 1) else False - speaker_id = int(com[3]) - save_epoch_iteration = int(com[4]) - total_epoch = int(com[5]) # 10000 - batch_size = int(com[6]) - gpu_card_slot_numbers = com[7] - if_save_latest = True if (int(com[8]) == 1) else False - if_cache_gpu = True if (int(com[9]) == 1) else False - if_save_every_weight = True if (int(com[10]) == 1) else False - version = com[11] - - pretrained_base = "pretrained/" if version == "v1" else "pretrained_v2/" - - g_pretrained_path = "%sf0G%s.pth" % (pretrained_base, sample_rate) - d_pretrained_path = "%sf0D%s.pth" % (pretrained_base, sample_rate) - - print("Mangio-RVC-Fork Train-CLI: Training...") - click_train( - model_name, - sample_rate, - has_pitch_guidance, - speaker_id, - save_epoch_iteration, - total_epoch, - batch_size, - if_save_latest, - g_pretrained_path, - d_pretrained_path, - gpu_card_slot_numbers, - if_cache_gpu, - if_save_every_weight, - version, - ) - - -def cli_train_feature(com): - com = cli_split_command(com) - model_name = com[0] - version = com[1] - print("Mangio-RVC-Fork Train Feature Index-CLI: Training... 
Please wait") - generator = train_index(model_name, version) - execute_generator_function(generator) - print("Mangio-RVC-Fork Train Feature Index-CLI: Done!") - - -def cli_extract_model(com): - com = cli_split_command(com) - model_path = com[0] - save_name = com[1] - sample_rate = com[2] - has_pitch_guidance = com[3] - info = com[4] - version = com[5] - extract_small_model_process = extract_small_model( - model_path, save_name, sample_rate, has_pitch_guidance, info, version - ) - if extract_small_model_process == "Success.": - print("Mangio-RVC-Fork Extract Small Model: Success!") - else: - print(str(extract_small_model_process)) - print("Mangio-RVC-Fork Extract Small Model: Failed!") - - -def preset_apply(preset, qfer, tmbr): - if str(preset) != "": - with open(str(preset), "r") as p: - content = p.readlines() - qfer, tmbr = content[0].split("\n")[0], content[1] - formant_apply(qfer, tmbr) - else: - pass - return ( - {"value": qfer, "__type__": "update"}, - {"value": tmbr, "__type__": "update"}, - ) - - -def print_page_details(): - if cli_current_page == "HOME": - print( - "\n go home : Takes you back to home with a navigation list." - "\n go infer : Takes you to inference command execution." - "\n go pre-process : Takes you to training step.1) pre-process command execution." - "\n go extract-feature : Takes you to training step.2) extract-feature command execution." - "\n go train : Takes you to training step.3) being or continue training command execution." - "\n go train-feature : Takes you to the train feature index command execution." - "\n go extract-model : Takes you to the extract small model command execution." - ) - elif cli_current_page == "INFER": - print( - "\n arg 1) model name with .pth in ./weights: mi-test.pth" - "\n arg 2) source audio path: myFolder\\MySource.wav" - "\n arg 3) output file name to be placed in './audio-outputs': MyTest.wav" - "\n arg 4) feature index file path: logs/mi-test/added_IVF3042_Flat_nprobe_1.index" - "\n arg 5) speaker id: 0" - "\n arg 6) transposition: 0" - "\n arg 7) f0 method: harvest (pm, harvest, crepe, crepe-tiny, hybrid[x,x,x,x], mangio-crepe, mangio-crepe-tiny, rmvpe)" - "\n arg 8) crepe hop length: 160" - "\n arg 9) harvest median filter radius: 3 (0-7)" - "\n arg 10) post resample rate: 0" - "\n arg 11) mix volume envelope: 1" - "\n arg 12) feature index ratio: 0.78 (0-1)" - "\n arg 13) Voiceless Consonant Protection (Less Artifact): 0.33 (Smaller number = more protection. 
0.50 means Dont Use.)" - "\n arg 14) Whether to formant shift the inference audio before conversion: False (if set to false, you can ignore setting the quefrency and timbre values for formanting)" - "\n arg 15)* Quefrency for formanting: 8.0 (no need to set if arg14 is False/false)" - "\n arg 16)* Timbre for formanting: 1.2 (no need to set if arg14 is False/false) \n" - "\nExample: mi-test.pth saudio/Sidney.wav myTest.wav logs/mi-test/added_index.index 0 -2 harvest 160 3 0 1 0.95 0.33 0.45 True 8.0 1.2" - ) - elif cli_current_page == "PRE-PROCESS": - print( - "\n arg 1) Model folder name in ./logs: mi-test" - "\n arg 2) Trainset directory: mydataset (or) E:\\my-data-set" - "\n arg 3) Sample rate: 40k (32k, 40k, 48k)" - "\n arg 4) Number of CPU threads to use: 8 \n" - "\nExample: mi-test mydataset 40k 24" - ) - elif cli_current_page == "EXTRACT-FEATURE": - print( - "\n arg 1) Model folder name in ./logs: mi-test" - "\n arg 2) Gpu card slot: 0 (0-1-2 if using 3 GPUs)" - "\n arg 3) Number of CPU threads to use: 8" - "\n arg 4) Has Pitch Guidance?: 1 (0 for no, 1 for yes)" - "\n arg 5) f0 Method: harvest (pm, harvest, dio, crepe)" - "\n arg 6) Crepe hop length: 128" - "\n arg 7) Version for pre-trained models: v2 (use either v1 or v2)\n" - "\nExample: mi-test 0 24 1 harvest 128 v2" - ) - elif cli_current_page == "TRAIN": - print( - "\n arg 1) Model folder name in ./logs: mi-test" - "\n arg 2) Sample rate: 40k (32k, 40k, 48k)" - "\n arg 3) Has Pitch Guidance?: 1 (0 for no, 1 for yes)" - "\n arg 4) speaker id: 0" - "\n arg 5) Save epoch iteration: 50" - "\n arg 6) Total epochs: 10000" - "\n arg 7) Batch size: 8" - "\n arg 8) Gpu card slot: 0 (0-1-2 if using 3 GPUs)" - "\n arg 9) Save only the latest checkpoint: 0 (0 for no, 1 for yes)" - "\n arg 10) Whether to cache training set to vram: 0 (0 for no, 1 for yes)" - "\n arg 11) Save extracted small model every generation?: 0 (0 for no, 1 for yes)" - "\n arg 12) Model architecture version: v2 (use either v1 or v2)\n" - "\nExample: mi-test 40k 1 0 50 10000 8 0 0 0 0 v2" - ) - elif cli_current_page == "TRAIN-FEATURE": - print( - "\n arg 1) Model folder name in ./logs: mi-test" - "\n arg 2) Model architecture version: v2 (use either v1 or v2)\n" - "\nExample: mi-test v2" - ) - elif cli_current_page == "EXTRACT-MODEL": - print( - "\n arg 1) Model Path: logs/mi-test/G_168000.pth" - "\n arg 2) Model save name: MyModel" - "\n arg 3) Sample rate: 40k (32k, 40k, 48k)" - "\n arg 4) Has Pitch Guidance?: 1 (0 for no, 1 for yes)" - '\n arg 5) Model information: "My Model"' - "\n arg 6) Model architecture version: v2 (use either v1 or v2)\n" - '\nExample: logs/mi-test/G_168000.pth MyModel 40k 1 "Created by Cole Mangio" v2' - ) - -def change_page(page): - global cli_current_page - cli_current_page = page - return 0 - -def execute_command(com): - if com == "go home": - return change_page("HOME") - elif com == "go infer": - return change_page("INFER") - elif com == "go pre-process": - return change_page("PRE-PROCESS") - elif com == "go extract-feature": - return change_page("EXTRACT-FEATURE") - elif com == "go train": - return change_page("TRAIN") - elif com == "go train-feature": - return change_page("TRAIN-FEATURE") - elif com == "go extract-model": - return change_page("EXTRACT-MODEL") - else: - if com[:3] == "go ": - print("page '%s' does not exist!" 
% com[3:]) - return 0 - - if cli_current_page == "INFER": - cli_infer(com) - elif cli_current_page == "PRE-PROCESS": - cli_pre_process(com) - elif cli_current_page == "EXTRACT-FEATURE": - cli_extract_feature(com) - elif cli_current_page == "TRAIN": - cli_train(com) - elif cli_current_page == "TRAIN-FEATURE": - cli_train_feature(com) - elif cli_current_page == "EXTRACT-MODEL": - cli_extract_model(com) - -def cli_navigation_loop(): - while True: - print("\nYou are currently in '%s':" % cli_current_page) - print_page_details() - command = input("%s: " % cli_current_page) - try: - execute_command(command) - except: - print(traceback.format_exc()) - - -if config.is_cli: - print("\n\nMangio-RVC-Fork v2 CLI App!\n") - print( - "Welcome to the CLI version of RVC. Please read the documentation on https://github.com/Mangio621/Mangio-RVC-Fork (README.MD) to understand how to use this app.\n" - ) - cli_navigation_loop() - - - - - -def switch_pitch_controls(f0method0): - is_visible = f0method0 != 'rmvpe' - - if rvc_globals.NotesOrHertz: - return ( - {"visible": False, "__type__": "update"}, - {"visible": is_visible, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": is_visible, "__type__": "update"} - ) - else: - return ( - {"visible": is_visible, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": is_visible, "__type__": "update"}, - {"visible": False, "__type__": "update"} - ) - -def match_index(sid0): - picked = False - # folder = sid0.split('.')[0] - - # folder = re.split(r'. |_', sid0)[0] - folder = sid0.split(".")[0].split("_")[0] - # folder_test = sid0.split('.')[0].split('_')[0].split('-')[0] - parent_dir = "./logs/" + folder - # print(parent_dir) - if os.path.exists(parent_dir): - # print('path exists') - for filename in os.listdir(parent_dir.replace("\\", "/")): - if filename.endswith(".index"): - for i in range(len(indexes_list)): - if indexes_list[i] == ( - os.path.join(("./logs/" + folder), filename).replace("\\", "/") - ): - # print('regular index found') - break - else: - if indexes_list[i] == ( - os.path.join( - ("./logs/" + folder.lower()), filename - ).replace("\\", "/") - ): - # print('lowered index found') - parent_dir = "./logs/" + folder.lower() - break - # elif (indexes_list[i]).casefold() == ((os.path.join(("./logs/" + folder), filename).replace('\\','/')).casefold()): - # print('8') - # parent_dir = "./logs/" + folder.casefold() - # break - # elif (indexes_list[i]) == ((os.path.join(("./logs/" + folder_test), filename).replace('\\','/'))): - # parent_dir = "./logs/" + folder_test - # print(parent_dir) - # break - # elif (indexes_list[i]) == (os.path.join(("./logs/" + folder_test.lower()), filename).replace('\\','/')): - # parent_dir = "./logs/" + folder_test - # print(parent_dir) - # break - # else: - # #print('couldnt find index') - # continue - - # print('all done') - index_path = os.path.join( - parent_dir.replace("\\", "/"), filename.replace("\\", "/") - ).replace("\\", "/") - # print(index_path) - return (index_path, index_path) - - else: - # print('nothing found') - return ("", "") - -def stoptraining(mim): - if int(mim) == 1: - CSVutil("csvdb/stop.csv", "w+", "stop", "True") - # p.terminate() - # p.kill() - try: - os.kill(PID, signal.SIGTERM) - except Exception as e: - print(f"Couldn't click due to {e}") - pass - else: - pass - - return ( - {"visible": False, "__type__": "update"}, - {"visible": True, "__type__": "update"}, - ) - -weights_dir = 'weights/' - -def note_to_hz(note_name): - SEMITONES = {'C': -9, 
'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2} - pitch_class, octave = note_name[:-1], int(note_name[-1]) - semitone = SEMITONES[pitch_class] - note_number = 12 * (octave - 4) + semitone - frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number - return frequency - -def save_to_wav(record_button): - if record_button is None: - pass - else: - path_to_file=record_button - new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav' - new_path='./audios/'+new_name - shutil.move(path_to_file,new_path) - return new_name -def save_to_wav2_edited(dropbox): - if dropbox is None: - pass - else: - file_path = dropbox.name - target_path = os.path.join('audios', os.path.basename(file_path)) - - if os.path.exists(target_path): - os.remove(target_path) - print('Replacing old dropdown file...') - - shutil.move(file_path, target_path) - return -def save_to_wav2(dropbox): - file_path = dropbox.name - target_path = os.path.join('audios', os.path.basename(file_path)) - - if os.path.exists(target_path): - os.remove(target_path) - print('Replacing old dropdown file...') - - shutil.move(file_path, target_path) - return target_path - -from gtts import gTTS -import edge_tts -import asyncio - - - - -def custom_voice( - _values, # filter indices - audio_files, # all audio files - model_voice_path='', - transpose=0, - f0method='pm', - index_rate_=float(0.66), - crepe_hop_length_=float(64), - f0_autotune=False, - file_index='', - file_index2='', - ): - - vc.get_vc(model_voice_path) - - - for _value_item in _values: - filename = "audio2/"+audio_files[_value_item] if _value_item != "converted_tts" else audio_files[0] - #filename = "audio2/"+audio_files[_value_item] - try: - print(audio_files[_value_item], model_voice_path) - except: - pass - info_, (sample_, audio_output_) = vc.vc_single_dont_save( - sid=0, - input_audio_path0=filename, #f"audio2/{filename}", - input_audio_path1=filename, #f"audio2/{filename}", - f0_up_key=transpose, # transpose for m to f and reverse 0 12 - f0_file=None, - f0_method= f0method, - file_index= file_index, # dir pwd? 
- file_index2= file_index2, - # file_big_npy1, - index_rate= index_rate_, - filter_radius= int(3), - resample_sr= int(0), - rms_mix_rate= float(0.25), - protect= float(0.33), - crepe_hop_length= crepe_hop_length_, - f0_autotune=f0_autotune, - f0_min=50, - note_min=50, - f0_max=1100, - note_max=1100 - ) - - sf.write( - file= filename, #f"audio2/{filename}", - samplerate=sample_, - data=audio_output_ - ) -def cast_to_device(tensor, device): - try: - return tensor.to(device) - except Exception as e: - print(e) - return tensor - - -def __bark__(text, voice_preset): - os.makedirs(os.path.join(now_dir,"tts"), exist_ok=True) - from transformers import AutoProcessor, BarkModel - device = "cuda:0" if torch.cuda.is_available() else "cpu" - dtype = torch.float32 if "cpu" in device else torch.float16 - bark_processor = AutoProcessor.from_pretrained( - "suno/bark", - cache_dir=os.path.join(now_dir,"tts","suno/bark"), - torch_dtype=dtype) - bark_model = BarkModel.from_pretrained( - "suno/bark", - cache_dir=os.path.join(now_dir,"tts","suno/bark"), - torch_dtype=dtype).to(device) - # bark_model.enable_cpu_offload() - inputs = bark_processor( - text=[text], - return_tensors="pt", - voice_preset=voice_preset - ) - tensor_dict = {k: cast_to_device(v,device) if hasattr(v,"to") else v for k, v in inputs.items()} - speech_values = bark_model.generate(**tensor_dict, do_sample=True) - sampling_rate = bark_model.generation_config.sample_rate - speech = speech_values.cpu().numpy().squeeze() - return speech, sampling_rate - - - -def make_test( - tts_text, - tts_voice, - model_path, - index_path, - transpose, - f0_method, - index_rate, - crepe_hop_length, - f0_autotune, - tts_method - ): - - if tts_voice == None: - return - - filename = os.path.join(now_dir, "audio-outputs", "converted_tts.wav") - if "SET_LIMIT" == os.getenv("DEMO"): - if len(tts_text) > 60: - tts_text = tts_text[:60] - print("DEMO; limit to 60 characters") - - language = tts_voice[:2] - if tts_method == "Edge-tts": - try: - #nest_asyncio.apply() # gradio;not - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save(filename)) - except: - try: - tts = gTTS(tts_text, lang=language) - tts.save(filename) - tts.save - print(f'No audio was received. Please change the tts voice for {tts_voice}. 
USING gTTS.') - except: - tts = gTTS('a', lang=language) - tts.save(filename) - print('Error: Audio will be replaced.') - - os.system("cp audio-outputs/converted_tts.wav audio-outputs/real_tts.wav") - - custom_voice( - ["converted_tts"], # filter indices - ["audio-outputs/converted_tts.wav"], # all audio files - model_voice_path=model_path, - transpose=transpose, - f0method=f0_method, - index_rate_=index_rate, - crepe_hop_length_=crepe_hop_length, - f0_autotune=f0_autotune, - file_index='', - file_index2=index_path, - ) - return os.path.join(now_dir, "audio-outputs", "converted_tts.wav"), os.path.join(now_dir, "audio-outputs", "real_tts.wav") - elif tts_method == "Bark-tts": - try: - - script = tts_text.replace("\n", " ").strip() - sentences = sent_tokenize(script) - print(sentences) - silence = np.zeros(int(0.25 * SAMPLE_RATE)) - pieces = [] - nombre_archivo = os.path.join(now_dir, "audio-outputs", "bark_out.wav") - for sentence in sentences: - audio_array , _ = __bark__(sentence, tts_voice.split("-")[0]) - pieces += [audio_array, silence.copy()] - - sf.write( - file= nombre_archivo, - samplerate=SAMPLE_RATE, - data=np.concatenate(pieces) - ) - vc.get_vc(model_path) - info_, (sample_, audio_output_) = vc.vc_single_dont_save( - sid=0, - input_audio_path0=os.path.join(now_dir, "audio-outputs", "bark_out.wav"), #f"audio2/{filename}", - input_audio_path1=os.path.join(now_dir, "audio-outputs", "bark_out.wav"), #f"audio2/{filename}", - f0_up_key=transpose, # transpose for m to f and reverse 0 12 - f0_file=None, - f0_method=f0_method, - file_index= '', # dir pwd? - file_index2= index_path, - # file_big_npy1, - index_rate= index_rate, - filter_radius= int(3), - resample_sr= int(0), - rms_mix_rate= float(0.25), - protect= float(0.33), - crepe_hop_length= crepe_hop_length, - f0_autotune=f0_autotune, - f0_min=50, - note_min=50, - f0_max=1100, - note_max=1100 - ) - wavfile.write(os.path.join(now_dir, "audio-outputs", "converted_bark.wav"), rate=sample_, data=audio_output_) - return os.path.join(now_dir, "audio-outputs", "converted_bark.wav"), nombre_archivo - - except Exception as e: - print(f"{e}") - return None, None - - - - - - -def GradioSetup(UTheme=gr.themes.Soft()): - - default_weight = names[0] if names else '' - - with gr.Blocks(theme='JohnSmith9982/small_and_pretty', title="Applio") as app: - gr.HTML("

🍏 Applio (Mangio-RVC-Fork HF)
-        ")
-        gr.HTML("
-        The current space only uses CPU, so it's only for inference. If you have issues with the queue, I recommend duplicating the space.
          ") - gr.Markdown( - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/r3gm/RVC_HF?duplicate=true)\n\n" - ) - with gr.Tabs(): - with gr.TabItem(i18n("Model Inference")): - with gr.Row(): - sid0 = gr.Dropdown(label=i18n("Inferencing voice:"), choices=sorted(names), value=default_weight) - refresh_button = gr.Button(i18n("Refresh"), variant="primary") - clean_button = gr.Button(i18n("Unload voice to save GPU memory"), variant="primary") - clean_button.click(fn=lambda: ({"value": "", "__type__": "update"}), inputs=[], outputs=[sid0]) - - - with gr.TabItem(i18n("Single")): - with gr.Row(): - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("Select Speaker/Singer ID:"), - value=0, - visible=False, - interactive=True, - ) - - - with gr.Group(): - with gr.Row(): - with gr.Column(): # First column for audio-related inputs - dropbox = gr.File(label=i18n("Drag your audio here:")) - record_button=gr.Audio(source="microphone", label=i18n("Or record an audio:"), type="filepath") - input_audio0 = gr.Textbox( - label=i18n("Manual path to the audio file to be processed"), - value=os.path.join(now_dir, "audios", "someguy.mp3"), - visible=False - ) - input_audio1 = gr.Dropdown( - label=i18n("Auto detect audio path and select from the dropdown:"), - choices=sorted(audio_paths), - value='', - interactive=True, - ) - - input_audio1.select(fn=lambda:'',inputs=[],outputs=[input_audio0]) - input_audio0.input(fn=lambda:'',inputs=[],outputs=[input_audio1]) - - dropbox.upload(fn=save_to_wav2, inputs=[dropbox], outputs=[input_audio0]) - dropbox.upload(fn=easy_infer.change_choices2, inputs=[], outputs=[input_audio1]) - record_button.change(fn=save_to_wav, inputs=[record_button], outputs=[input_audio0]) - record_button.change(fn=easy_infer.change_choices2, inputs=[], outputs=[input_audio1]) - - best_match_index_path1 = match_index(sid0.value) # Get initial index from default sid0 (first voice model in list) - - with gr.Column(): # Second column for pitch shift and other options - file_index2 = gr.Dropdown( - label=i18n("Auto-detect index path and select from the dropdown:"), - choices=get_indexes(), - value=best_match_index_path1, - interactive=True, - allow_custom_value=True, - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Search feature ratio:"), - value=0.75, - interactive=True, - ) - refresh_button.click( - fn=change_choices, inputs=[], outputs=[sid0, file_index2, input_audio1] - ) - with gr.Column(): - vc_transform0 = gr.Number( - label=i18n("Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):"), value=0 - ) - - # Create a checkbox for advanced settings - advanced_settings_checkbox = gr.Checkbox( - value=False, - label=i18n("Advanced Settings"), - interactive=True, - ) - - # Advanced settings container - with gr.Column(visible=False) as advanced_settings: # Initially hidden - with gr.Row(label = i18n("Advanced Settings"), open = False): - with gr.Column(): - f0method0 = gr.Radio( - label=i18n( - "Select the pitch extraction algorithm:" - ), - choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny", "rmvpe", "rmvpe+"], - value="rmvpe+", - interactive=True, - ) - f0_autotune = gr.Checkbox( - label="Enable autotune", - interactive=True - ) - crepe_hop_length = gr.Slider( - minimum=1, - maximum=512, - step=1, - label=i18n("Mangio-Crepe Hop Length (Only applies to mangio-crepe): Hop length 
refers to the time it takes for the speaker to jump to a dramatic pitch. Lower hop lengths take more time to infer but are more pitch accurate."), - value=120, - interactive=True, - visible=False, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n("If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness."), - value=3, - step=1, - interactive=True, - ) - - minpitch_slider = gr.Slider( - label = i18n("Min pitch:"), - info = i18n("Specify minimal pitch for inference [HZ]"), - step = 0.1, - minimum = 1, - scale = 0, - value = 50, - maximum = 16000, - interactive = True, - visible = (not rvc_globals.NotesOrHertz) and (f0method0.value != 'rmvpe'), - ) - minpitch_txtbox = gr.Textbox( - label = i18n("Min pitch:"), - info = i18n("Specify minimal pitch for inference [NOTE][OCTAVE]"), - placeholder = "C5", - visible = (rvc_globals.NotesOrHertz) and (f0method0.value != 'rmvpe'), - interactive = True, - ) - - maxpitch_slider = gr.Slider( - label = i18n("Max pitch:"), - info = i18n("Specify max pitch for inference [HZ]"), - step = 0.1, - minimum = 1, - scale = 0, - value = 1100, - maximum = 16000, - interactive = True, - visible = (not rvc_globals.NotesOrHertz) and (f0method0.value != 'rmvpe'), - ) - maxpitch_txtbox = gr.Textbox( - label = i18n("Max pitch:"), - info = i18n("Specify max pitch for inference [NOTE][OCTAVE]"), - placeholder = "C6", - visible = (rvc_globals.NotesOrHertz) and (f0method0.value != 'rmvpe'), - interactive = True, - ) - - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("Feature search database file path:"), - value="", - interactive=True, - ) - - with gr.Accordion(label = i18n("Custom f0 [Root pitch] File"), open = False): - f0_file = gr.File(label=i18n("F0 curve file (optional). One pitch per line. Replaces the default F0 and pitch modulation:")) - - f0method0.change( - fn=lambda radio: ( - { - "visible": radio in ['mangio-crepe', 'mangio-crepe-tiny'], - "__type__": "update" - } - ), - inputs=[f0method0], - outputs=[crepe_hop_length] - ) - - f0method0.change( - fn=switch_pitch_controls, - inputs=[f0method0], - outputs=[minpitch_slider, minpitch_txtbox, - maxpitch_slider, maxpitch_txtbox] - ) - - with gr.Column(): - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used:"), - value=0.25, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy:" - ), - value=0.33, - step=0.01, - interactive=True, - ) - formanting = gr.Checkbox( - value=bool(DoFormant), - label=i18n("Formant shift inference audio"), - info=i18n("Used for male to female and vice-versa conversions"), - interactive=True, - visible=True, - ) - - formant_preset = gr.Dropdown( - value='', - choices=get_fshift_presets(), - label=i18n("Browse presets for formanting"), - info=i18n("Presets are located in formantshiftcfg/ folder"), - visible=bool(DoFormant), - ) - - formant_refresh_button = gr.Button( - value='\U0001f504', - visible=bool(DoFormant), - variant='primary', - ) - - qfrency = gr.Slider( - value=Quefrency, - info=i18n("Default value is 1.0"), - label=i18n("Quefrency for formant shifting"), - minimum=0.0, - maximum=16.0, - step=0.1, - visible=bool(DoFormant), - interactive=True, - ) - - tmbre = gr.Slider( - value=Timbre, - info=i18n("Default value is 1.0"), - label=i18n("Timbre for formant shifting"), - minimum=0.0, - maximum=16.0, - step=0.1, - visible=bool(DoFormant), - interactive=True, - ) - frmntbut = gr.Button( - "Apply", variant="primary", visible=bool(DoFormant) - ) - - formant_preset.change( - fn=preset_apply, - inputs=[formant_preset, qfrency, tmbre], - outputs=[qfrency, tmbre], - ) - formanting.change( - fn=formant_enabled, - inputs=[ - formanting, - qfrency, - tmbre, - frmntbut, - formant_preset, - formant_refresh_button, - ], - outputs=[ - formanting, - qfrency, - tmbre, - frmntbut, - formant_preset, - formant_refresh_button, - ], - ) - frmntbut.click( - fn=formant_apply, - inputs=[qfrency, tmbre], - outputs=[qfrency, tmbre], - ) - formant_refresh_button.click( - fn=update_fshift_presets, - inputs=[formant_preset, qfrency, tmbre], - outputs=[formant_preset, qfrency, tmbre], - ) - - # Function to toggle advanced settings - def toggle_advanced_settings(checkbox): - return {"visible": checkbox, "__type__": "update"} - - # Attach the change event - advanced_settings_checkbox.change( - fn=toggle_advanced_settings, - inputs=[advanced_settings_checkbox], - outputs=[advanced_settings] - ) - - - but0 = gr.Button(i18n("Convert"), variant="primary").style(full_width=True) - - with gr.Row(): # Defines output info + output audio download after conversion - vc_output1 = gr.Textbox(label=i18n("Output information:")) - vc_output2 = gr.Audio(label=i18n("Export audio (click on the three dots in the lower right corner to download)")) - - with gr.Group(): # I think this defines the big convert button - with gr.Row(): - but0.click( - vc.vc_single, - [ - spk_item, - input_audio0, - input_audio1, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - crepe_hop_length, - minpitch_slider, minpitch_txtbox, - maxpitch_slider, maxpitch_txtbox, - f0_autotune - ], - [vc_output1, vc_output2], - ) - - - with gr.TabItem(i18n("Batch")): # Dont Change - with gr.Group(): # Markdown explanation of batch inference - gr.Markdown( - value=i18n("Batch conversion. Enter the folder containing the audio files to be converted or upload multiple audio files. 
The converted audio will be output in the specified folder (default: 'opt').") - ) - with gr.Row(): - with gr.Column(): - vc_transform1 = gr.Number( - label=i18n("Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):"), value=0 - ) - opt_input = gr.Textbox(label=i18n("Specify output folder:"), value="opt") - with gr.Column(): - file_index4 = gr.Dropdown( - label=i18n("Auto-detect index path and select from the dropdown:"), - choices=get_indexes(), - value=best_match_index_path1, - interactive=True, - ) - sid0.select(fn=match_index, inputs=[sid0], outputs=[file_index2, file_index4]) - - refresh_button.click( - fn=lambda: change_choices()[1], - inputs=[], - outputs=file_index4, - ) - index_rate2 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Search feature ratio:"), - value=0.75, - interactive=True, - ) - with gr.Row(): - dir_input = gr.Textbox( - label=i18n("Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):"), - value=os.path.join(now_dir, "audios"), - ) - inputs = gr.File( - file_count="multiple", label=i18n("You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder.") - ) - - with gr.Row(): - with gr.Column(): - # Create a checkbox for advanced batch settings - advanced_settings_batch_checkbox = gr.Checkbox( - value=False, - label=i18n("Advanced Settings"), - interactive=True, - ) - - # Advanced batch settings container - with gr.Row(visible=False) as advanced_settings_batch: # Initially hidden - with gr.Row(label = i18n("Advanced Settings"), open = False): - with gr.Column(): - file_index3 = gr.Textbox( - label=i18n("Feature search database file path:"), - value="", - interactive=True, - ) - - f0method1 = gr.Radio( - label=i18n( - "Select the pitch extraction algorithm:" - ), - choices=["pm", "harvest", "crepe", "rmvpe"], - value="rmvpe", - interactive=True, - ) - f0_autotune = gr.Checkbox( - label="Enable autotune", - interactive=True - ) - filter_radius1 = gr.Slider( - minimum=0, - maximum=7, - label=i18n("If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness."), - value=3, - step=1, - interactive=True, - ) - - with gr.Row(): - format1 = gr.Radio( - label=i18n("Export file format"), - choices=["wav", "flac", "mp3", "m4a"], - value="wav", - interactive=True, - ) - - - with gr.Column(): - resample_sr1 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used:"), - value=1, - interactive=True, - ) - protect1 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy:" - ), - value=0.33, - step=0.01, - interactive=True, - ) - vc_output3 = gr.Textbox(label=i18n("Output information:")) - but1 = gr.Button(i18n("Convert"), variant="primary") - but1.click( - vc.vc_multi, - [ - spk_item, - dir_input, - opt_input, - inputs, - vc_transform1, - f0method1, - file_index3, - file_index4, - index_rate2, - filter_radius1, - resample_sr1, - rms_mix_rate1, - protect1, - format1, - crepe_hop_length, - minpitch_slider if (not rvc_globals.NotesOrHertz) else minpitch_txtbox, - maxpitch_slider if (not rvc_globals.NotesOrHertz) else maxpitch_txtbox, - f0_autotune - ], - [vc_output3], - ) - - sid0.change( - fn=vc.get_vc, - inputs=[sid0, protect0, protect1], - outputs=[spk_item, protect0, protect1], - ) - if not sid0.value == '': - spk_item, protect0, protect1 = vc.get_vc(sid0.value, protect0, protect1) - - #spk_item, protect0, protect1 = vc.get_vc(sid0.value, protect0, protect1) - - # Function to toggle advanced settings - def toggle_advanced_settings_batch(checkbox): - return {"visible": checkbox, "__type__": "update"} - - # Attach the change event - advanced_settings_batch_checkbox.change( - fn=toggle_advanced_settings_batch, - inputs=[advanced_settings_batch_checkbox], - outputs=[advanced_settings_batch] - ) - - - with gr.TabItem(i18n("Train")): - with gr.Accordion(label=i18n("Step 1: Processing data")): - with gr.Row(): - exp_dir1 = gr.Textbox(label=i18n("Enter the model name:"), value=i18n("Model_Name")) - sr2 = gr.Radio( - label=i18n("Target sample rate:"), - choices=["40k", "48k", "32k"], - value="40k", - interactive=True, - ) - if_f0_3 = gr.Checkbox( - label=i18n("Whether the model has pitch guidance."), - value=True, - interactive=True, - ) - version19 = gr.Radio( - label=i18n("Version:"), - choices=["v1", "v2"], - value="v2", - interactive=True, - visible=True, - ) - np7 = gr.Slider( - minimum=0, - maximum=config.n_cpu, - step=1, - label=i18n("Number of CPU processes:"), - value=int(np.ceil(config.n_cpu / 1.5)), - interactive=True, - ) - with gr.Group(): - with gr.Accordion(label=i18n("Step 2: Skipping pitch extraction")): - - with gr.Row(): - # trainset_dir4 = gr.Textbox( - # label=i18n("Enter the path of the training folder:"), value=os.path.join(now_dir, datasets_root) - # ) - with gr.Column(): - trainset_dir4 = gr.Dropdown(choices=sorted(datasets), label=i18n("Select your dataset:"), value=get_dataset()) - btn_update_dataset_list = gr.Button(i18n("Update list"), variant="primary") - spk_id5 = gr.Slider( - minimum=0, - maximum=4, - step=1, - label=i18n("Specify the model ID:"), - value=0, - interactive=True, - ) - btn_update_dataset_list.click( - easy_infer.update_dataset_list, [spk_id5], trainset_dir4 - ) - but1 = gr.Button(i18n("Process data"), variant="primary") - info1 = gr.Textbox(label=i18n("Output information:"), value="") - but1.click( - preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1] - ) - with gr.Group(): - with gr.Accordion(label=i18n("Step 3: Extracting features")): - with gr.Row(): - with gr.Column(): - gpus6 = gr.Textbox( - label=i18n("Provide the GPU index(es) separated by '-', like 0-1-2 for using GPUs 0, 1, and 2:"), - value=gpus, - interactive=True, - ) - gpu_info9 = gr.Textbox( - label=i18n("GPU Information:"), value=gpu_info, visible=F0GPUVisible - ) - with gr.Column(): - f0method8 = gr.Radio( - label=i18n( - "Select the pitch extraction algorithm:" - ), - choices=["pm", "harvest", "dio", "crepe", "mangio-crepe", "rmvpe", "rmvpe_gpu"], - # [ MANGIO 
]: Fork feature: Crepe on f0 extraction for training.
-                            value="rmvpe",
-                            interactive=True,
-                        )
-                        gpus_rmvpe = gr.Textbox(
-                            label=i18n(
-                                "rmvpe GPU configuration: separate the GPU indexes used by each process with '-', e.g. 0-0-1 runs two processes on GPU 0 and one process on GPU 1"
-                            ),
-                            value="%s-%s" % (gpus, gpus),
-                            interactive=True,
-                            visible=F0GPUVisible,
-                        )
-
-                        extraction_crepe_hop_length = gr.Slider(
-                            minimum=1,
-                            maximum=512,
-                            step=1,
-                            label=i18n("Mangio-Crepe Hop Length (Only applies to mangio-crepe): Hop length refers to the time it takes for the speaker to jump to a dramatic pitch. Lower hop lengths take more time to infer but are more pitch accurate."),
-                            value=64,
-                            interactive=True,
-                            visible=False,
-                        )
-
-                        f0method8.change(
-                            fn=lambda radio: (
-                                {
-                                    "visible": radio in ['mangio-crepe', 'mangio-crepe-tiny'],
-                                    "__type__": "update"
-                                }
-                            ),
-                            inputs=[f0method8],
-                            outputs=[extraction_crepe_hop_length]
-                        )
-                        f0method8.change(
-                            fn=change_f0_method,
-                            inputs=[f0method8],
-                            outputs=[gpus_rmvpe],
-                        )
-                but2 = gr.Button(i18n("Feature extraction"), variant="primary")
-                info2 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=8, interactive=False)
-                but2.click(
-                    extract_f0_feature,
-                    [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19, extraction_crepe_hop_length, gpus_rmvpe,],
-                    [info2],
-                )
-        with gr.Group():
-            with gr.Row():
-                with gr.Accordion(label=i18n("Step 4: Model training started")):
-                    with gr.Row():
-                        save_epoch10 = gr.Slider(
-                            minimum=1,
-                            maximum=100,
-                            step=1,
-                            label=i18n("Save frequency:"),
-                            value=10,
-                            interactive=True,
-                            visible=True,
-                        )
-                        total_epoch11 = gr.Slider(
-                            minimum=1,
-                            maximum=10000,
-                            step=1,
-                            label=i18n("Training epochs:"),
-                            value=750,
-                            interactive=True,
-                        )
-                        batch_size12 = gr.Slider(
-                            minimum=1,
-                            maximum=50,
-                            step=1,
-                            label=i18n("Batch size per GPU:"),
-                            value=default_batch_size,
-                            #value=20,
-                            interactive=True,
-                        )
-
-                    with gr.Row():
-                        if_save_latest13 = gr.Checkbox(
-                            label=i18n("Whether to save only the latest .ckpt file to save hard drive space"),
-                            value=True,
-                            interactive=True,
-                        )
-                        if_cache_gpu17 = gr.Checkbox(
-                            label=i18n("Cache all training sets to GPU memory. 
Caching small datasets (less than 10 minutes) can speed up training"), - value=False, - interactive=True, - ) - if_save_every_weights18 = gr.Checkbox( - label=i18n("Save a small final model to the 'weights' folder at each save point"), - value=True, - interactive=True, - ) - - with gr.Row(): - pretrained_G14 = gr.Textbox( - lines=4, - label=i18n("Load pre-trained base model G path:"), - value="assets/pretrained_v2/f0G40k.pth", - interactive=True, - ) - pretrained_D15 = gr.Textbox( - lines=4, - label=i18n("Load pre-trained base model D path:"), - value="assets/pretrained_v2/f0D40k.pth", - interactive=True, - ) - gpus16 = gr.Textbox( - label=i18n("Provide the GPU index(es) separated by '-', like 0-1-2 for using GPUs 0, 1, and 2:"), - value=gpus, - interactive=True, - ) - sr2.change( - change_sr2, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15], - ) - version19.change( - change_version19, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15, sr2], - ) - if_f0_3.change( - fn=change_f0, - inputs=[if_f0_3, sr2, version19], - outputs=[f0method8, pretrained_G14, pretrained_D15], - ) - if_f0_3.change(fn=lambda radio: ( - { - "visible": radio in ['mangio-crepe', 'mangio-crepe-tiny'], - "__type__": "update" - } - ), inputs=[f0method8], outputs=[extraction_crepe_hop_length]) - - butstop = gr.Button(i18n("Stop training"), - variant='primary', - visible=False, - ) - but3 = gr.Button(i18n("Train model"), variant="primary", visible=True) - but3.click(fn=stoptraining, inputs=[gr.Number(value=0, visible=False)], outputs=[but3, butstop]) - butstop.click(fn=stoptraining, inputs=[gr.Number(value=1, visible=False)], outputs=[but3, butstop]) - - - with gr.Column(): - info3 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=4) - save_action = gr.Dropdown(label=i18n("Save type"), choices=[i18n("Save all"),i18n("Save D and G"),i18n("Save voice")], value=i18n("Choose the method"), interactive=True) - - but7 = gr.Button(i18n("Save model"), variant="primary") - but4 = gr.Button(i18n("Train feature index"), variant="primary") - - - - if_save_every_weights18.change( - fn=lambda if_save_every_weights: ( - { - "visible": if_save_every_weights, - "__type__": "update" - } - ), - inputs=[if_save_every_weights18], - outputs=[save_epoch10] - ) - - but3.click( - click_train, - [ - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - [info3, butstop, but3], - ) - - but4.click(train_index, [exp_dir1, version19], info3) - but7.click(easy_infer.save_model, [exp_dir1, save_action], info3) - with gr.Group(): - with gr.Row(): - with gr.Accordion(label=i18n("Step 5: Export lowest points on a graph of the model")): - - lowestval_weight_dir = gr.Textbox(visible=False) - ds = gr.Textbox(visible=False) - weights_dir1 = gr.Textbox(visible=False, value=weights_dir) - - - with gr.Row(): - amntlastmdls = gr.Slider( - minimum=1, - maximum=25, - label=i18n('How many lowest points to save:'), - value=3, - step=1, - interactive=True, - ) - lpexport = gr.Button( - value=i18n('Export lowest points of a model'), - variant='primary', - ) - lw_mdls = gr.File( - file_count="multiple", - label=i18n("Output models:"), - interactive=False, - ) ##### - - with gr.Row(): - infolpex = gr.Textbox(label=i18n("Output information:"), value="", max_lines=10) - mdlbl = gr.Dataframe(label=i18n('Stats of selected models:'), datatype='number', type='pandas') - - 
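-                        # Export chain: clicking 'lpexport' first resolves the export folder into
-                        # 'lowestval_weight_dir', then runs tensorlowest.main; its result lands in 'ds',
-                        # whose change event runs tensorlowest.selectweights to fill the info box,
-                        # the model file list, and the stats table.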
lpexport.click( - lambda model_name: os.path.join("logs", model_name, "lowestvals"), - inputs=[exp_dir1], - outputs=[lowestval_weight_dir] - ) - - lpexport.click(fn=tensorlowest.main, inputs=[exp_dir1, save_epoch10, amntlastmdls], outputs=[ds]) - - ds.change( - fn=tensorlowest.selectweights, - inputs=[exp_dir1, ds, weights_dir1, lowestval_weight_dir], - outputs=[infolpex, lw_mdls, mdlbl], - ) - with gr.TabItem(i18n("UVR5")): # UVR section - with gr.Group(): - with gr.Row(): - with gr.Column(): - model_select = gr.Radio( - label=i18n("Model Architecture:"), - choices=["VR", "MDX"], - value="VR", - interactive=True, - ) - dir_wav_input = gr.Textbox( - label=i18n("Enter the path of the audio folder to be processed:"), - value=os.path.join(now_dir, "audios") - ) - wav_inputs = gr.File( - file_count="multiple", label=i18n("You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder.") - ) - - with gr.Column(): - model_choose = gr.Dropdown(label=i18n("Model:"), choices=uvr5_names) - agg = gr.Slider( - minimum=0, - maximum=20, - step=1, - label="Vocal Extraction Aggressive", - value=10, - interactive=True, - visible=False, - ) - opt_vocal_root = gr.Textbox( - label=i18n("Specify the output folder for vocals:"), value="opt" - ) - opt_ins_root = gr.Textbox( - label=i18n("Specify the output folder for accompaniment:"), value="opt" - ) - format0 = gr.Radio( - label=i18n("Export file format:"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - model_select.change( - fn=update_model_choices, - inputs=model_select, - outputs=model_choose, - ) - but2 = gr.Button(i18n("Convert"), variant="primary") - vc_output4 = gr.Textbox(label=i18n("Output information:")) - #wav_inputs.upload(fn=save_to_wav2_edited, inputs=[wav_inputs], outputs=[]) - but2.click( - uvr, - [ - model_choose, - dir_wav_input, - opt_vocal_root, - wav_inputs, - opt_ins_root, - agg, - format0, - model_select - ], - [vc_output4], - ) - with gr.TabItem(i18n("TTS")): - with gr.Group(): - with gr.Column(): - text_test = gr.Textbox(label=i18n("Text:"), placeholder=i18n("Enter the text you want to convert to voice..."), lines=6) - - with gr.Group(): - with gr.Row(): - with gr.Column(): - tts_methods_voice = ["Edge-tts", "Bark-tts"] - ttsmethod_test = gr.Dropdown(tts_methods_voice, value='Edge-tts', label = i18n('TTS Method:'), visible=True) - tts_test = gr.Dropdown(set_edge_voice, label = i18n('TTS Model:'), visible=True) - ttsmethod_test.change( - fn=update_tts_methods_voice, - inputs=ttsmethod_test, - outputs=tts_test, - ) - - with gr.Column(): - model_voice_path07 = gr.Dropdown(label=i18n('RVC Model:'), choices=sorted(names), value=default_weight) - best_match_index_path1 = match_index(model_voice_path07.value) - - file_index2_07 = gr.Dropdown( - label=i18n('Select the .index file:'), - choices=get_indexes(), - value=best_match_index_path1, - interactive=True, - allow_custom_value=True, - ) - #transpose_test = gr.Number(label = i18n('Transpose (integer, number Fof semitones, raise by an octave: 12, lower by an octave: -12):'), value=0, visible=True, interactive= True) - - - - - with gr.Row(): - refresh_button_ = gr.Button(i18n("Refresh"), variant="primary") - refresh_button_.click(fn=change_choices2, inputs=[], outputs=[model_voice_path07, file_index2_07]) - with gr.Row(): - original_ttsvoice = gr.Audio(label=i18n('Audio TTS:')) - ttsvoice = gr.Audio(label=i18n('Audio RVC:')) - - with gr.Row(): - button_test = gr.Button(i18n("Convert"), variant="primary") 
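-                # Wire the TTS convert button: make_test synthesizes speech with the chosen TTS
-                # engine (Edge-tts or Bark-tts), converts it through the selected RVC model and
-                # .index file, and returns both the converted and the raw TTS audio.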
- - - button_test.click(make_test, inputs=[ - text_test, - tts_test, - model_voice_path07, - file_index2_07, - #transpose_test, - vc_transform0, - f0method8, - index_rate1, - crepe_hop_length, - f0_autotune, - ttsmethod_test - ], outputs=[ttsvoice, original_ttsvoice]) - - with gr.TabItem(i18n("Resources")): - easy_infer.download_model() - easy_infer.download_backup() - easy_infer.download_dataset(trainset_dir4) - easy_infer.download_audio() - easy_infer.youtube_separator() - with gr.TabItem(i18n("Extra")): - gr.Markdown( - value=i18n("This section contains some extra utilities that often may be in experimental phases") - ) - with gr.TabItem(i18n("Merge Audios")): - with gr.Group(): - gr.Markdown( - value="## " + i18n("Merge your generated audios with the instrumental") - ) - gr.Markdown(value=".",visible=True) - gr.Markdown(value=".",visible=True) - with gr.Row(): - with gr.Column(): - dropbox = gr.File(label=i18n("Drag your audio here:")) - gr.Markdown(value=i18n("### Instrumental settings:")) - input_audio1 = gr.Dropdown( - label=i18n("Choose your instrumental:"), - choices=sorted(audio_others_paths), - value='', - interactive=True, - ) - input_audio1_scale = gr.Slider( - minimum=0, - maximum=10, - label=i18n("Volume of the instrumental audio:"), - value=1.00, - interactive=True, - ) - gr.Markdown(value=i18n("### Audio settings:")) - input_audio3 = gr.Dropdown( - label=i18n("Select the generated audio"), - choices=sorted(audio_paths), - value='', - interactive=True, - ) - with gr.Row(): - input_audio3_scale = gr.Slider( - minimum=0, - maximum=10, - label=i18n("Volume of the generated audio:"), - value=1.00, - interactive=True, - ) - - gr.Markdown(value=i18n("### Add the effects:")) - reverb_ = gr.Checkbox( - label=i18n("Reverb"), - value=False, - interactive=True, - ) - compressor_ = gr.Checkbox( - label=i18n("Compressor"), - value=False, - interactive=True, - ) - noise_gate_ = gr.Checkbox( - label=i18n("Noise Gate"), - value=False, - interactive=True, - ) - - butnone = gr.Button(i18n("Merge"), variant="primary").style(full_width=True) - - vc_output1 = gr.Textbox(label=i18n("Output information:")) - vc_output2 = gr.Audio(label=i18n("Export audio (click on the three dots in the lower right corner to download)"), type='filepath') - - dropbox.upload(fn=save_to_wav2, inputs=[dropbox], outputs=[input_audio1]) - dropbox.upload(fn=easy_infer.change_choices2, inputs=[], outputs=[input_audio1]) - - refresh_button.click( - fn=lambda: change_choices3(), - inputs=[], - outputs=[input_audio1, input_audio3], - ) - - butnone.click( - fn=audio_combined, - inputs=[input_audio1, input_audio3,input_audio1_scale,input_audio3_scale,reverb_,compressor_,noise_gate_], - outputs=[vc_output1, vc_output2] - ) - - - with gr.TabItem(i18n("Processing")): - with gr.Group(): - - with gr.Accordion(label=i18n("Model fusion, can be used to test timbre fusion")): - with gr.Row(): - with gr.Column(): - name_to_save0 = gr.Textbox( - label=i18n("Name:"), - value="", - max_lines=1, - interactive=True, - placeholder=i18n("Name for saving") - ) - alpha_a = gr.Slider( - minimum=0, - maximum=1, - label=i18n("Weight for Model A:"), - value=0.5, - interactive=True, - ) - if_f0_ = gr.Checkbox( - label=i18n("Whether the model has pitch guidance."), - value=True, - interactive=True, - ) - version_2 = gr.Radio( - label=i18n("Model architecture version:"), - choices=["v1", "v2"], - value="v2", - interactive=True, - ) - sr_ = gr.Radio( - label=i18n("Target sample rate:"), - choices=["40k", "48k"], - value="40k", - interactive=True, - ) - 
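-                            # Fusion interpolates the two checkpoints' parameters; alpha_a
-                            # sets Model A's share of the blend (0.5 = equal mix).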
- - with gr.Column(): - ckpt_a = gr.Textbox(label=i18n("Path to Model A:"), value="", interactive=True, placeholder=i18n("Path to model")) - - ckpt_b = gr.Textbox(label=i18n("Path to Model B:"), value="", interactive=True, placeholder=i18n("Path to model")) - - info__ = gr.Textbox( - label=i18n("Model information to be placed:"), value="", max_lines=8, interactive=True, placeholder=i18n("Model information to be placed") - ) - info4 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=8) - - - but6 = gr.Button(i18n("Fusion"), variant="primary") - - but6.click( - merge, - [ - ckpt_a, - ckpt_b, - alpha_a, - sr_, - if_f0_, - info__, - name_to_save0, - version_2, - ], - info4, - ) # def merge(path1,path2,alpha1,sr,f0,info): - with gr.Group(): - with gr.Accordion(label=i18n("Modify model information")): - with gr.Row(): ###### - with gr.Column(): - ckpt_path0 = gr.Textbox( - label=i18n("Path to Model:"), value="", interactive=True, placeholder=i18n("Path to model") - ) - info_ = gr.Textbox( - label=i18n("Model information to be modified:"), value="", max_lines=8, interactive=True, placeholder=i18n("Model information to be placed") - ) - - with gr.Column(): - name_to_save1 = gr.Textbox( - label=i18n("Save file name:"), - placeholder=i18n("Name for saving"), - value="", - max_lines=8, - interactive=True, - - ) - - info5 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=8) - but7 = gr.Button(i18n("Modify"), variant="primary") - but7.click(change_info, [ckpt_path0, info_, name_to_save1], info5) - with gr.Group(): - with gr.Accordion(label=i18n("View model information")): - with gr.Row(): - with gr.Column(): - ckpt_path1 = gr.Textbox( - label=i18n("Path to Model:"), value="", interactive=True, placeholder=i18n("Path to model") - ) - - info6 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=8) - but8 = gr.Button(i18n("View"), variant="primary") - but8.click(show_info, [ckpt_path1], info6) - with gr.Group(): - with gr.Accordion(label=i18n("Model extraction")): - with gr.Row(): - with gr.Column(): - save_name = gr.Textbox( - label=i18n("Name:"), value="", interactive=True, placeholder=i18n("Name for saving") - ) - if_f0__ = gr.Checkbox( - label=i18n("Whether the model has pitch guidance."), - value=True, - interactive=True, - ) - version_1 = gr.Radio( - label=i18n("Model architecture version:"), - choices=["v1", "v2"], - value="v2", - interactive=True, - ) - sr__ = gr.Radio( - label=i18n("Target sample rate:"), - choices=["32k", "40k", "48k"], - value="40k", - interactive=True, - ) - - with gr.Column(): - ckpt_path2 = gr.Textbox( - - label=i18n("Path to Model:"), - placeholder=i18n("Path to model"), - interactive=True, - ) - info___ = gr.Textbox( - label=i18n("Model information to be placed:"), value="", max_lines=8, interactive=True, placeholder=i18n("Model information to be placed") - ) - info7 = gr.Textbox(label=i18n("Output information:"), value="", max_lines=8) - - with gr.Row(): - - but9 = gr.Button(i18n("Extract"), variant="primary") - ckpt_path2.change( - change_info_, [ckpt_path2], [sr__, if_f0__, version_1] - ) - but9.click( - extract_small_model, - [ckpt_path2, save_name, sr__, if_f0__, info___, version_1], - info7, - ) - - - - - with gr.TabItem(i18n("Settings")): - with gr.Row(): - gr.Markdown(value= - i18n("Pitch settings") - ) - noteshertz = gr.Checkbox( - label = i18n("Whether to use note names instead of their hertz value. E.G. 
[C5, D6] instead of [523.25, 1174.66]Hz"),
-                value = rvc_globals.NotesOrHertz,
-                interactive = True,
-            )
-
-        noteshertz.change(fn=lambda nhertz: rvc_globals.__setattr__('NotesOrHertz', nhertz), inputs=[noteshertz], outputs=[])
-
-        noteshertz.change(
-            fn=switch_pitch_controls,
-            inputs=[f0method0],
-            outputs=[
-                minpitch_slider, minpitch_txtbox,
-                maxpitch_slider, maxpitch_txtbox,]
-        )
-    return app
-
-def GradioRun(app):
-    share_gradio_link = config.iscolab or config.paperspace
-    concurrency_count = 511
-    max_size = 1022
-
-    # Hosted notebook sessions (Colab/Paperspace) need a public share link, and a
-    # forward-slash favicon path works on every platform, so one launch call suffices.
-    app.queue(concurrency_count=concurrency_count, max_size=max_size).launch(
-        share=share_gradio_link,
-        favicon_path="./images/icon.png",
-    )
-
-if __name__ == "__main__":
-    if os.name == 'nt':
-        print(i18n("Any ConnectionResetErrors post-conversion are irrelevant and purely visual; they can be ignored.\n"))
-    app = GradioSetup(UTheme=config.grtheme)
-    GradioRun(app)
\ No newline at end of file
diff --git a/spaces/ServerX/PorcoDiaz/demucs/tasnet.py b/spaces/ServerX/PorcoDiaz/demucs/tasnet.py
deleted file mode 100644
index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/demucs/tasnet.py
+++ /dev/null
@@ -1,452 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-# Created on 2018/12
-# Author: Kaituo XU
-# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels
-# Here is the original license:
-# The MIT License (MIT)
-#
-# Copyright (c) 2018 Kaituo XU
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .utils import capture_init - -EPS = 1e-8 - - -def overlap_and_add(signal, frame_step): - outer_dimensions = signal.size()[:-2] - frames, frame_length = signal.size()[-2:] - - subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor - subframe_step = frame_step // subframe_length - subframes_per_frame = frame_length // subframe_length - output_size = frame_step * (frames - 1) + frame_length - output_subframes = output_size // subframe_length - - subframe_signal = signal.view(*outer_dimensions, -1, subframe_length) - - frame = torch.arange(0, output_subframes, - device=signal.device).unfold(0, subframes_per_frame, subframe_step) - frame = frame.long() # signal may in GPU or CPU - frame = frame.contiguous().view(-1) - - result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length) - result.index_add_(-2, frame, subframe_signal) - result = result.view(*outer_dimensions, -1) - return result - - -class ConvTasNet(nn.Module): - @capture_init - def __init__(self, - sources, - N=256, - L=20, - B=256, - H=512, - P=3, - X=8, - R=4, - audio_channels=2, - norm_type="gLN", - causal=False, - mask_nonlinear='relu', - samplerate=44100, - segment_length=44100 * 2 * 4): - """ - Args: - sources: list of sources - N: Number of filters in autoencoder - L: Length of the filters (in samples) - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(ConvTasNet, self).__init__() - # Hyper-parameter - self.sources = sources - self.C = len(sources) - self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R - self.norm_type = norm_type - self.causal = causal - self.mask_nonlinear = mask_nonlinear - self.audio_channels = audio_channels - self.samplerate = samplerate - self.segment_length = segment_length - # Components - self.encoder = Encoder(L, N, audio_channels) - self.separator = TemporalConvNet( - N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear) - self.decoder = Decoder(N, L, audio_channels) - # init - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def valid_length(self, length): - return length - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - est_source: [M, C, T] - """ - mixture_w = self.encoder(mixture) - est_mask = self.separator(mixture_w) - est_source = self.decoder(mixture_w, est_mask) - - # T changed after conv1d in encoder, fix it here - T_origin = mixture.size(-1) - T_conv = est_source.size(-1) - est_source = F.pad(est_source, (0, T_origin - T_conv)) - return est_source - - -class Encoder(nn.Module): - """Estimation of the nonnegative mixture weight by a 1-D conv layer. 
- """ - def __init__(self, L, N, audio_channels): - super(Encoder, self).__init__() - # Hyper-parameter - self.L, self.N = L, N - # Components - # 50% overlap - self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False) - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1 - """ - mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K] - return mixture_w - - -class Decoder(nn.Module): - def __init__(self, N, L, audio_channels): - super(Decoder, self).__init__() - # Hyper-parameter - self.N, self.L = N, L - self.audio_channels = audio_channels - # Components - self.basis_signals = nn.Linear(N, audio_channels * L, bias=False) - - def forward(self, mixture_w, est_mask): - """ - Args: - mixture_w: [M, N, K] - est_mask: [M, C, N, K] - Returns: - est_source: [M, C, T] - """ - # D = W * M - source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K] - source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N] - # S = DV - est_source = self.basis_signals(source_w) # [M, C, K, ac * L] - m, c, k, _ = est_source.size() - est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous() - est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T - return est_source - - -class TemporalConvNet(nn.Module): - def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'): - """ - Args: - N: Number of filters in autoencoder - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - C: Number of speakers - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(TemporalConvNet, self).__init__() - # Hyper-parameter - self.C = C - self.mask_nonlinear = mask_nonlinear - # Components - # [M, N, K] -> [M, N, K] - layer_norm = ChannelwiseLayerNorm(N) - # [M, N, K] -> [M, B, K] - bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False) - # [M, B, K] -> [M, B, K] - repeats = [] - for r in range(R): - blocks = [] - for x in range(X): - dilation = 2**x - padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2 - blocks += [ - TemporalBlock(B, - H, - P, - stride=1, - padding=padding, - dilation=dilation, - norm_type=norm_type, - causal=causal) - ] - repeats += [nn.Sequential(*blocks)] - temporal_conv_net = nn.Sequential(*repeats) - # [M, B, K] -> [M, C*N, K] - mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False) - # Put together - self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net, - mask_conv1x1) - - def forward(self, mixture_w): - """ - Keep this API same with TasNet - Args: - mixture_w: [M, N, K], M is batch size - returns: - est_mask: [M, C, N, K] - """ - M, N, K = mixture_w.size() - score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K] - score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K] - if self.mask_nonlinear == 'softmax': - est_mask = F.softmax(score, dim=1) - elif self.mask_nonlinear == 'relu': - est_mask = F.relu(score) - else: - raise ValueError("Unsupported mask non-linear function") - return est_mask - - -class TemporalBlock(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(TemporalBlock, self).__init__() - 
# [M, B, K] -> [M, H, K] - conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False) - prelu = nn.PReLU() - norm = chose_norm(norm_type, out_channels) - # [M, H, K] -> [M, B, K] - dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding, - dilation, norm_type, causal) - # Put together - self.net = nn.Sequential(conv1x1, prelu, norm, dsconv) - - def forward(self, x): - """ - Args: - x: [M, B, K] - Returns: - [M, B, K] - """ - residual = x - out = self.net(x) - # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad? - return out + residual # look like w/o F.relu is better than w/ F.relu - # return F.relu(out + residual) - - -class DepthwiseSeparableConv(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(DepthwiseSeparableConv, self).__init__() - # Use `groups` option to implement depthwise convolution - # [M, H, K] -> [M, H, K] - depthwise_conv = nn.Conv1d(in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=False) - if causal: - chomp = Chomp1d(padding) - prelu = nn.PReLU() - norm = chose_norm(norm_type, in_channels) - # [M, H, K] -> [M, B, K] - pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False) - # Put together - if causal: - self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv) - else: - self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv) - - def forward(self, x): - """ - Args: - x: [M, H, K] - Returns: - result: [M, B, K] - """ - return self.net(x) - - -class Chomp1d(nn.Module): - """To ensure the output length is the same as the input. - """ - def __init__(self, chomp_size): - super(Chomp1d, self).__init__() - self.chomp_size = chomp_size - - def forward(self, x): - """ - Args: - x: [M, H, Kpad] - Returns: - [M, H, K] - """ - return x[:, :, :-self.chomp_size].contiguous() - - -def chose_norm(norm_type, channel_size): - """The input of normlization will be (M, C, K), where M is batch size, - C is channel size and K is sequence length. - """ - if norm_type == "gLN": - return GlobalLayerNorm(channel_size) - elif norm_type == "cLN": - return ChannelwiseLayerNorm(channel_size) - elif norm_type == "id": - return nn.Identity() - else: # norm_type == "BN": - # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statics - # along M and K, so this BN usage is right. 
-        return nn.BatchNorm1d(channel_size)
-
-
-# TODO: Use nn.LayerNorm to impl cLN to speed up
-class ChannelwiseLayerNorm(nn.Module):
-    """Channel-wise Layer Normalization (cLN)"""
-    def __init__(self, channel_size):
-        super(ChannelwiseLayerNorm, self).__init__()
-        self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1))  # [1, N, 1]
-        self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1))  # [1, N, 1]
-        self.reset_parameters()
-
-    def reset_parameters(self):
-        self.gamma.data.fill_(1)
-        self.beta.data.zero_()
-
-    def forward(self, y):
-        """
-        Args:
-            y: [M, N, K], M is batch size, N is channel size, K is length
-        Returns:
-            cLN_y: [M, N, K]
-        """
-        mean = torch.mean(y, dim=1, keepdim=True)  # [M, 1, K]
-        var = torch.var(y, dim=1, keepdim=True, unbiased=False)  # [M, 1, K]
-        cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
-        return cLN_y
-
-
-class GlobalLayerNorm(nn.Module):
-    """Global Layer Normalization (gLN)"""
-    def __init__(self, channel_size):
-        super(GlobalLayerNorm, self).__init__()
-        self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1))  # [1, N, 1]
-        self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1))  # [1, N, 1]
-        self.reset_parameters()
-
-    def reset_parameters(self):
-        self.gamma.data.fill_(1)
-        self.beta.data.zero_()
-
-    def forward(self, y):
-        """
-        Args:
-            y: [M, N, K], M is batch size, N is channel size, K is length
-        Returns:
-            gLN_y: [M, N, K]
-        """
-        # TODO: in torch 1.0, torch.mean() support dim list
-        mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True)  # [M, 1, 1]
-        var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True)
-        gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
-        return gLN_y
-
-
-if __name__ == "__main__":
-    torch.manual_seed(123)
-    M, N, L, T = 2, 3, 4, 12
-    K = 2 * T // L - 1
-    B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False
-    # Encoder expects float input shaped [M, audio_channels, T]
-    mixture = torch.randint(3, (M, 1, T)).float()
-    # test Encoder
-    encoder = Encoder(L, N, audio_channels=1)
-    encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size()).float()
-    mixture_w = encoder(mixture)
-    print('mixture', mixture)
-    print('U', encoder.conv1d_U.weight)
-    print('mixture_w', mixture_w)
-    print('mixture_w size', mixture_w.size())
-
-    # test TemporalConvNet
-    separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal)
-    est_mask = separator(mixture_w)
-    print('est_mask', est_mask)
-
-    # test Decoder (masks must be [M, C, N, K] to match Decoder's expected input)
-    decoder = Decoder(N, L, audio_channels=1)
-    est_mask = torch.randint(2, (M, C, N, K)).float()
-    est_source = decoder(mixture_w, est_mask)
-    print('est_source', est_source)
-
-    # test Conv-TasNet (the first argument is the list of sources; C is derived from its length)
-    conv_tasnet = ConvTasNet(['source%d' % i for i in range(C)], N=N, L=L, B=B, H=H, P=P, X=X, R=R,
-                             audio_channels=1, norm_type=norm_type)
-    est_source = conv_tasnet(mixture)
-    print('est_source', est_source)
-    print('est_source size', est_source.size())
diff --git a/spaces/ServerX/PorcoDiaz/utils/backups.py b/spaces/ServerX/PorcoDiaz/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
-    print("Importing Google Drive backup...")
-    weights_exist = False
-    for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
-        for filename in files:
-            filepath = os.path.join(root, filename)
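-            # Route each restored file: model weights (.pth files under 'weights') go to
-            # WEIGHTS_FOLDER, everything else is mirrored into LOGS_FOLDER.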
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if num_copied == 0: - print("No new finished models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. 
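-# Note: get_md5_hash is not referenced anywhere else in this file (the backup loop below
-# tracks modification times instead). A possible use -- sketched here, with a hypothetical
-# file name -- is verifying a copy after the fact:
-#
-#     src = os.path.join(WEIGHTS_FOLDER, 'model.pth')
-#     dst = os.path.join(GOOGLE_DRIVE_PATH, 'weights', 'model.pth')
-#     assert get_md5_hash(src) == get_md5_hash(dst)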
- -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. 
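-# A minimal driver for this module, assuming it is importable as utils.backups (its
-# path in this Space). backup_files() loops forever, so it is typically started on a
-# background thread after the one-off Drive import:
-#
-#     import threading
-#     from utils.backups import import_google_drive_backup, backup_files
-#
-#     import_google_drive_backup()  # restore a previous session first
-#     threading.Thread(target=backup_files, daemon=True).start()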
diff --git "a/spaces/SouthCity/ShuruiXu/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/SouthCity/ShuruiXu/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index ebcad851f58f5d2305292fb38073c32870f34f17..0000000000000000000000000000000000000000 --- "a/spaces/SouthCity/ShuruiXu/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,25 +0,0 @@ -from predict import predict_no_ui_long_connection -from toolbox import CatchException, report_execption, write_results_to_file -import datetime - -@CatchException -def 高阶功能模板函数(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。为了做到简单易读,该函数只有25行代码,所以不会实时反馈文字流或心跳,请耐心等待程序输出完成。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield chatbot, history, '正常' # 由于请求gpt需要一段时间,我们先及时地做一次状态显示 - - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' # 由于请求gpt需要一段时间,我们先及时地做一次状态显示 - - # history = [] 每次询问不携带之前的询问历史 - gpt_say = predict_no_ui_long_connection( - inputs=i_say, top_p=top_p, api_key=api_key, temperature=temperature, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。") # 请求gpt,需要一段时间 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield chatbot, history, '正常' # 显示 \ No newline at end of file diff --git a/spaces/Spico/writing-comrade/README.md b/spaces/Spico/writing-comrade/README.md deleted file mode 100644 index bf46ea295b622bf2bb23ab6ca9386890a078c3f5..0000000000000000000000000000000000000000 --- a/spaces/Spico/writing-comrade/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Writing Comrade -emoji: ✒️ -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -# ✒️ Writing Partner - -ChatGPT as a writing partner. - -🔥 We have a demo on https://huggingface.co/spaces/Spico/writing-comrade . Check it out! 
- -## 🚀 QuickStart - -```bash -$ git clone https://github.com/Spico197/writing-comrade -$ cd writing-comrade -$ pip install -r requirements.txt -$ python app.py -``` - -## ⚔️🥊Abilities - -- Completion (文本补全): Let the model complete a story, or any texts -- Correction (文本纠错): Correcting grammar errors -- Polishing (文本润色): Polishing texts -- Paraphrase (文本改写): Text rewriting -- Translation (机器翻译,需要提供目标语言): Text translation to target language -- Freestyle (直接调用ChatGPT): This will call raw ChatGPT API without leading instruction prefixes, so you may want to use it as you've done on [the official ChatGPT website](https://chat.openai.com/) diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/visqol.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/visqol.py deleted file mode 100644 index 44f4b0a2c3c6c726857db8386491823dd85dde51..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/visqol.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import json -import logging -from pathlib import Path -import tempfile -import typing as tp -import subprocess -import shutil - -import torch -import torchaudio - -logger = logging.getLogger(__name__) - - -class ViSQOL: - """ViSQOL wrapper to run ViSQOL from Python using a pre-installed binary. - - To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the - instructions available in the open source repository: https://github.com/google/visqol - - ViSQOL is capable of running in two modes: - - Audio Mode: - When running in audio mode, input signals must have a 48kHz sample rate. Input should be resampled to 48kHz. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Audio mode uses support vector regression, with the maximum range at ~4.75. - - Speech Mode: - When running in speech mode, ViSQOL uses a wideband model. It therefore expects input sample rates of 16kHz. - Input should be resampled to 16kHz. - As part of the speech mode processing, a root mean square implementation for voice activity detection - is performed on the reference signal to determine what parts of the signal have voice activity and - should therefore be included in the comparison. The signal is normalized before performing the voice - activity detection. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Speech mode is scaled to have a maximum MOS of 5.0 to match previous version behavior. - - For more details, check the guidelines: https://github.com/google/visqol#general-guidelines-for-input - - Args: - visqol_bin (str): Path to the ViSQOL binary. - mode (str): ViSQOL computation mode, expecting "audio" or "speech". - model (str): Name of the model to use for similarity to quality model. - debug (bool): Whether to also get debug metrics from ViSQOL or not. 
- """ - SAMPLE_RATES_MODES = {"audio": 48_000, "speech": 16_000} - ALLOWED_SAMPLE_RATES = frozenset(SAMPLE_RATES_MODES.values()) - - def __init__(self, bin: tp.Union[Path, str], mode: str = "audio", - model: str = "libsvm_nu_svr_model.txt", debug: bool = False): - assert bin is not None and Path(bin).exists(), f"Could not find ViSQOL binary in specified path: {bin}" - self.visqol_bin = str(bin) - self.visqol_mode = mode - self.target_sr = self._get_target_sr(self.visqol_mode) - self.model = model - self.debug = debug - assert Path(self.visqol_model).exists(), \ - f"Could not find the specified model in ViSQOL install: {self.visqol_model}" - - def _get_target_sr(self, mode: str) -> int: - # returns target sampling rate for the corresponding ViSQOL mode. - if mode not in ViSQOL.SAMPLE_RATES_MODES: - raise ValueError( - f"Unsupported mode! Allowed are: {', '.join(ViSQOL.SAMPLE_RATES_MODES.keys())}" - ) - return ViSQOL.SAMPLE_RATES_MODES[mode] - - def _prepare_files( - self, ref_sig: torch.Tensor, deg_sig: torch.Tensor, sr: int, target_sr: int, pad_with_silence: bool = False - ): - # prepare files for ViSQOL evaluation. - assert target_sr in ViSQOL.ALLOWED_SAMPLE_RATES - assert len(ref_sig) == len(deg_sig), ( - "Expects same number of ref and degraded inputs", - f" but ref len {len(ref_sig)} != deg len {len(deg_sig)}" - ) - # resample audio if needed - if sr != target_sr: - transform = torchaudio.transforms.Resample(sr, target_sr) - pad = int(0.5 * target_sr) - rs_ref = [] - rs_deg = [] - for i in range(len(ref_sig)): - rs_ref_i = transform(ref_sig[i]) - rs_deg_i = transform(deg_sig[i]) - if pad_with_silence: - rs_ref_i = torch.nn.functional.pad(rs_ref_i, (pad, pad), mode='constant', value=0) - rs_deg_i = torch.nn.functional.pad(rs_deg_i, (pad, pad), mode='constant', value=0) - rs_ref.append(rs_ref_i) - rs_deg.append(rs_deg_i) - ref_sig = torch.stack(rs_ref) - deg_sig = torch.stack(rs_deg) - # save audio chunks to tmp dir and create csv - tmp_dir = Path(tempfile.mkdtemp()) - try: - tmp_input_csv_path = tmp_dir / "input.csv" - tmp_results_csv_path = tmp_dir / "results.csv" - tmp_debug_json_path = tmp_dir / "debug.json" - with open(tmp_input_csv_path, "w") as csv_file: - csv_writer = csv.writer(csv_file) - csv_writer.writerow(["reference", "degraded"]) - for i in range(len(ref_sig)): - tmp_ref_filename = tmp_dir / f"ref_{i}.wav" - tmp_deg_filename = tmp_dir / f"deg_{i}.wav" - torchaudio.save( - tmp_ref_filename, - torch.clamp(ref_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - torchaudio.save( - tmp_deg_filename, - torch.clamp(deg_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - csv_writer.writerow([str(tmp_ref_filename), str(tmp_deg_filename)]) - return tmp_dir, tmp_input_csv_path, tmp_results_csv_path, tmp_debug_json_path - except Exception as e: - logger.error("Exception occurred when preparing files for ViSQOL: %s", e) - return tmp_dir, None, None, None - - def _flush_files(self, tmp_dir: tp.Union[Path, str]): - # flush tmp files used to compute ViSQOL. - shutil.rmtree(str(tmp_dir)) - - def _collect_moslqo_score(self, results_csv_path: tp.Union[Path, str]) -> float: - # collect results for each evaluated pair and return averaged moslqo score. 
-        with open(results_csv_path, "r") as csv_file:
-            reader = csv.DictReader(csv_file)
-            moslqo_scores = [float(row["moslqo"]) for row in reader]
-            if len(moslqo_scores) > 0:
-                return sum(moslqo_scores) / len(moslqo_scores)
-            else:
-                return 0.0
-
-    def _collect_debug_data(self, debug_json_path: tp.Union[Path, str]) -> dict:
-        # collect debug data for the visqol inference.
-        with open(debug_json_path, "r") as f:
-            data = json.load(f)
-        return data
-
-    @property
-    def visqol_model(self):
-        return f'{self.visqol_bin}/model/{self.model}'
-
-    def _run_visqol(
-        self,
-        input_csv_path: tp.Union[Path, str],
-        results_csv_path: tp.Union[Path, str],
-        debug_csv_path: tp.Optional[tp.Union[Path, str]],
-    ):
-        input_csv_path = str(input_csv_path)
-        results_csv_path = str(results_csv_path)
-        # Only stringify the debug path when it is provided: str(None) == 'None'
-        # would otherwise defeat the None check below.
-        debug_csv_path = str(debug_csv_path) if debug_csv_path is not None else None
-        cmd = [
-            f'{self.visqol_bin}/bazel-bin/visqol',
-            '--batch_input_csv', f'{input_csv_path}',
-            '--results_csv', f'{results_csv_path}'
-        ]
-        if debug_csv_path is not None:
-            cmd += ['--output_debug', f'{debug_csv_path}']
-        if self.visqol_mode == "speech":
-            cmd += ['--use_speech_mode']
-        cmd += ['--similarity_to_quality_model', f'{self.visqol_model}']
-        result = subprocess.run(cmd, capture_output=True)
-        if result.returncode:
-            logger.error("Error with visqol: \n %s \n %s", result.stdout.decode(), result.stderr.decode())
-            raise RuntimeError("Error while executing visqol")
-        result.check_returncode()
-
-    def __call__(
-        self,
-        ref_sig: torch.Tensor,
-        deg_sig: torch.Tensor,
-        sr: int,
-        pad_with_silence: bool = False,
-    ):
-        """Calculate the ViSQOL metric for a pair of audio signals at a given sample rate.
-        Args:
-            ref_sig (torch.Tensor): Reference signals as [B, C, T].
-            deg_sig (torch.Tensor): Degraded signals as [B, C, T].
-            sr (int): Sample rate of the two audio signals.
-            pad_with_silence (bool): Whether to pad the file with silences as recommended
-                in visqol guidelines (see: https://github.com/google/visqol#general-guidelines-for-input).
-        Returns:
-            float: The ViSQOL score or mean score for the batch.
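-
-        Example (shapes are illustrative):
-            >>> metric = ViSQOL(bin='/path/to/visqol', mode='audio')
-            >>> ref, deg = torch.rand(4, 1, 48_000), torch.rand(4, 1, 48_000)
-            >>> score = metric(ref, deg, sr=48_000)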
-        """
-        logger.debug(f"Calculating visqol with mode={self.visqol_mode} on {len(ref_sig)} samples")
-        tmp_dir, input_csv, results_csv, debug_json = self._prepare_files(
-            ref_sig, deg_sig, sr, self.target_sr, pad_with_silence
-        )
-        try:
-            if input_csv and results_csv:
-                self._run_visqol(
-                    input_csv,
-                    results_csv,
-                    debug_json if self.debug else None,
-                )
-                moslqo = self._collect_moslqo_score(results_csv)
-                return moslqo
-            else:
-                raise RuntimeError("Something unexpected happened when running VISQOL!")
-        except Exception as e:
-            logger.error("Exception occurred when running ViSQOL: %s", e)
-            raise  # re-raise after logging so callers see the failure instead of an implicit None
-        finally:
-            self._flush_files(tmp_dir)
diff --git a/spaces/Sujal7/Shiksha-Connect/README.md b/spaces/Sujal7/Shiksha-Connect/README.md
deleted file mode 100644
index 9e34736f5e21afe8edfc3bdd1979ffbbcc7aa73a..0000000000000000000000000000000000000000
--- a/spaces/Sujal7/Shiksha-Connect/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Shiksha Connect
-emoji: 🏆
-colorFrom: purple
-colorTo: yellow
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Sumsub/Sumsub-ffs-demo/README.md b/spaces/Sumsub/Sumsub-ffs-demo/README.md
deleted file mode 100644
index e6a1684daeaad32b8226172a1f2a3665454e336e..0000000000000000000000000000000000000000
--- a/spaces/Sumsub/Sumsub-ffs-demo/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: >-
-  For Fake's Sake: a set of models for detecting deepfakes, generated images and
-  synthetic images
-emoji: 🐠
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_gevent_integration.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_gevent_integration.py
deleted file mode 100644
index f42d909d55a3c4757ffec07b5ef82a02bdf86939..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_gevent_integration.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import pydevd_tracing
-import greenlet
-import gevent
-from _pydev_bundle._pydev_saved_modules import threading
-from _pydevd_bundle.pydevd_custom_frames import add_custom_frame, update_custom_frame, remove_custom_frame
-from _pydevd_bundle.pydevd_constants import GEVENT_SHOW_PAUSED_GREENLETS, get_global_debugger, \
-    thread_get_ident
-from _pydev_bundle import pydev_log
-from pydevd_file_utils import basename
-
-_saved_greenlets_to_custom_frame_thread_id = {}
-
-if GEVENT_SHOW_PAUSED_GREENLETS:
-
-    def _get_paused_name(py_db, g):
-        frame = g.gr_frame
-        use_frame = frame
-
-        # i.e.: Show in the description of the greenlet the last user-code found.
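-        # Walk outward from the topmost frame: apply_files_filter(...) returning
-        # True marks a frame as filtered out (library/internal code), so the loop
-        # below stops at the first user-code frame it reaches.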
- while use_frame is not None: - if py_db.apply_files_filter(use_frame, use_frame.f_code.co_filename, True): - frame = use_frame - use_frame = use_frame.f_back - else: - break - - if use_frame is None: - use_frame = frame - - return '%s: %s - %s' % (type(g).__name__, use_frame.f_code.co_name, basename(use_frame.f_code.co_filename)) - - def greenlet_events(event, args): - if event in ('switch', 'throw'): - py_db = get_global_debugger() - origin, target = args - - if not origin.dead and origin.gr_frame is not None: - frame_custom_thread_id = _saved_greenlets_to_custom_frame_thread_id.get(origin) - if frame_custom_thread_id is None: - _saved_greenlets_to_custom_frame_thread_id[origin] = add_custom_frame( - origin.gr_frame, _get_paused_name(py_db, origin), thread_get_ident()) - else: - update_custom_frame( - frame_custom_thread_id, origin.gr_frame, _get_paused_name(py_db, origin), thread_get_ident()) - else: - frame_custom_thread_id = _saved_greenlets_to_custom_frame_thread_id.pop(origin, None) - if frame_custom_thread_id is not None: - remove_custom_frame(frame_custom_thread_id) - - # This one will be resumed, so, remove custom frame from it. - frame_custom_thread_id = _saved_greenlets_to_custom_frame_thread_id.pop(target, None) - if frame_custom_thread_id is not None: - remove_custom_frame(frame_custom_thread_id) - - # The tracing needs to be reapplied for each greenlet as gevent - # clears the tracing set through sys.settrace for each greenlet. - pydevd_tracing.reapply_settrace() - -else: - - # i.e.: no logic related to showing paused greenlets is needed. - def greenlet_events(event, args): - pydevd_tracing.reapply_settrace() - - -def enable_gevent_integration(): - # References: - # https://greenlet.readthedocs.io/en/latest/api.html#greenlet.settrace - # https://greenlet.readthedocs.io/en/latest/tracing.html - - # Note: gevent.version_info is WRONG (gevent.__version__ must be used). - try: - if tuple(int(x) for x in gevent.__version__.split('.')[:2]) <= (20, 0): - if not GEVENT_SHOW_PAUSED_GREENLETS: - return - - if not hasattr(greenlet, 'settrace'): - # In older versions it was optional. - # We still try to use if available though. - pydev_log.debug('greenlet.settrace not available. GEVENT_SHOW_PAUSED_GREENLETS will have no effect.') - return - try: - greenlet.settrace(greenlet_events) - except: - pydev_log.exception('Error with greenlet.settrace.') - except: - pydev_log.exception('Error setting up gevent %s.', gevent.__version__) - - -def log_gevent_debug_info(): - pydev_log.debug('Greenlet version: %s', greenlet.__version__) - pydev_log.debug('Gevent version: %s', gevent.__version__) - pydev_log.debug('Gevent install location: %s', gevent.__file__) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/keypoints.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/keypoints.py deleted file mode 100644 index b93ebed4f6554e67ba9bde8d3af90e8dbb3246b6..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/keypoints.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Any, List, Tuple, Union -import torch -from torch.nn import functional as F - - -class Keypoints: - """ - Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. 
This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. - - The visibility flag follows the COCO format and must be one of three integers: - - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Convert keypoint annotations to a heatmap of one-hot labels for training, - as described in :paper:`Mask R-CNN`. - - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K), each element is integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. - - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - @staticmethod - def cat(keypoints_list: List["Keypoints"]) -> "Keypoints": - """ - Concatenates a list of Keypoints into a single Keypoints - - Arguments: - keypoints_list (list[Keypoints]) - - Returns: - Keypoints: the concatenated Keypoints - """ - assert isinstance(keypoints_list, (list, tuple)) - assert len(keypoints_list) > 0 - assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list) - - cat_kpts = type(keypoints_list[0])( - torch.cat([kpts.tensor for kpts in keypoints_list], dim=0) - ) - return cat_kpts - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. 
- - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. - """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.jit.script_if_tracing -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. - - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. 
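-
-    Example (shapes are illustrative):
-        >>> maps = torch.rand(2, 17, 56, 56)   # 2 ROIs, 17 keypoints, 56x56 heatmaps
-        >>> rois = torch.tensor([[0., 0., 100., 100.], [10., 20., 60., 90.]])
-        >>> heatmaps_to_keypoints(maps, rois).shape
-        torch.Size([2, 17, 4])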
- """ - - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = F.interpolate(maps[[i]], size=outsize, mode="bicubic", align_corners=False) - - # Although semantically equivalent, `reshape` is used instead of `squeeze` due - # to limitation during ONNX export of `squeeze` in scripting mode - roi_map = roi_map.reshape(roi_map.shape[1:]) # keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/latent_distributions.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/latent_distributions.py deleted file mode 100644 index bc892e7e0be6d3a79b24d95af4840c4667f94940..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/latent_distributions.py +++ /dev/null @@ -1,468 +0,0 @@ -from typing import Optional, Callable, Tuple -import warnings - -from abc import ABC, abstractmethod -from einops import rearrange, repeat -import torch -import torch.nn as nn - - -def relaxed_one_hot_categorical_without_replacement(temperature, logits, num_samples=1): - # See paper Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement (https://arxiv.org/pdf/1903.06059.pdf) - # for explanation of the trick - scores = ( - (torch.distributions.Gumbel(logits, 1).rsample() / temperature) - .softmax(-1) - .clamp_min(1e-10) - ) - top_scores, top_indices = torch.topk( - scores, - num_samples, - dim=-1, - ) - return scores, top_indices - - -class AbstractLatentDistribution(nn.Module, ABC): - """Base class for latent distribution""" - - @abstractmethod - def sample( - self, num_samples: int, *args, **kwargs - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Sample from the latent distribution.""" - - @abstractmethod - def kl_loss( - self, - other: "GaussianLatentDistribution", - threshold: float = 0, - mask_z: Optional[torch.Tensor] = None, - ) -> torch.Tensor: - """Compute the KL divergence between two latent distributions.""" - - 
@abstractmethod - def sampling_loss(self) -> torch.Tensor: - """Loss of the latent distribution.""" - - @abstractmethod - def average( - self, other: "AbstractLatentDistribution", weight_other: torch.Tensor - ) -> "AbstractLatentDistribution": - """Average of the latent distribution.""" - - @abstractmethod - def log_dict(self, type: str) -> dict: - """Log the latent distribution values.""" - - -class GaussianLatentDistribution(AbstractLatentDistribution): - """Gaussian latent distribution""" - - def __init__(self, latent_representation: torch.Tensor): - super().__init__() - mu, logvar = torch.chunk(latent_representation, 2, dim=-1) - self.register_buffer("mu", mu, False) - self.register_buffer("logvar", logvar, False) - - def sample( - self, n_samples: int = 0, *args, **kwargs - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Sample from Gaussian with a reparametrization trick - - Args: - n_samples (optional): number of samples to make, (if 0 one sample with no extra - dimension). Defaults to 0. - Returns: - Random Gaussian sample of size (some_shape, (n_samples), latent_dim) - """ - - std = (self.logvar / 2).exp() - if n_samples <= 0: - eps = torch.randn_like(std) - latent_samples = self.mu + eps * std - weights = torch.ones_like(latent_samples[..., 0]) - else: - eps = torch.randn( - [*std.shape[:-1], n_samples, self.mu.shape[-1]], device=std.device - ) - # Reshape - latent_samples = self.mu.unsqueeze(-2) + eps * std.unsqueeze(-2) - weights = torch.ones_like(latent_samples[..., 0]) / n_samples - return latent_samples, weights - - def kl_loss( - self, - other: "GaussianLatentDistribution", - threshold: float = 0, - mask_z: Optional[torch.Tensor] = None, - ) -> torch.Tensor: - """Compute the KL divergence between two latent distributions.""" - assert type(other) == GaussianLatentDistribution - kl_loss = ( - (other.logvar - - self.logvar - + ((self.mu - other.mu).square() + self.logvar.exp()) / other.logvar.exp() - - 1)*0.5 - ).clamp_min(threshold) - if mask_z is None: - return kl_loss.mean() - else: - assert mask_z.any() - return torch.sum(kl_loss.mean(-1) * mask_z) / torch.sum(mask_z) - - def sampling_loss(self) -> torch.Tensor: - return torch.zeros(1, device=self.mu.device) - - def average( - self, other: "GaussianLatentDistribution", weight_other: torch.Tensor - ) -> "GaussianLatentDistribution": - assert type(other) == GaussianLatentDistribution - assert other.mu.shape == self.mu.shape - average_log_var = ( - self.logvar.exp() * (1 - weight_other) + other.logvar.exp() * weight_other - ).log() - return GaussianLatentDistribution( - torch.cat( - ( - self.mu * (1 - weight_other) + other.mu * weight_other, - average_log_var, - ), - dim=-1, - ) - ) - - def log_dict(self, type: str) -> dict: - return { - f"latent/{type}/abs_mean": self.mu.abs().mean(), - f"latent/{type}/std": (self.logvar * 0.5).exp().mean(), - } - - -class QuantizedLatentDistribution(AbstractLatentDistribution): - """Quantized latent distribution. - It is defined with a codebook of quantized latents and a continuous latent. - The distribution is based on distances of the continuous latent to the codebook. - Sampling is only quantizing the continuous latent. 
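-    (This mirrors VQ-VAE-style quantization: sample() uses the straight-through
-    trick, quantized.detach() + z - z.detach(), so gradients flow to the
-    continuous latent while the forward pass uses the codebook entry.)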
- - Args: - continuous_latent : Continuous latent representation of shape (some_shape, latent_dim) - codebook : Codebook of shape (num_embeddings, latent_dim) - """ - - def __init__( - self, - continuous_latent: torch.Tensor, - codebook: torch.Tensor, - flush_weights: Callable[[], None], - get_weights: Callable[[], torch.Tensor], - index_add_one_weights: Callable[[torch.Tensor], None], - ): - super().__init__() - self.register_buffer("continuous_latent", continuous_latent, False) - self.register_buffer("codebook", codebook, False) - self.flush_weights = flush_weights - self.get_weights = get_weights - self.index_add_one_weights = index_add_one_weights - self.quantization_loss = None - self.accuracy = None - - def sample( - self, n_samples: int = 0, *args, **kwargs - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Quantize the continuous latent from the latent dictionary. - - Args: - latent: (batch_size, num_agents, latent_dim) Continuous latent input - - Returns: - quantized_latent, quantization_loss - """ - assert n_samples == 0, "Only one sample is supported for quantized latent" - - distances_to_quantized = ( - ( - self.codebook.view(1, 1, *self.codebook.shape) - - self.continuous_latent.unsqueeze(-2) - ) - .square() - .sum(-1) - ) - batch_size, num_agents, num_vq = distances_to_quantized.shape - - self.soft_one_hot = ( - (-100 * distances_to_quantized) - .softmax(dim=-1) - .view(batch_size, num_agents, num_vq) - ) - # quantized, args_selected = self.sample(soft_one_hot) - _, args_selected = torch.min(distances_to_quantized, dim=-1) - quantized = self.codebook[args_selected, :] - args_selected = args_selected.view(-1) - - # Update weights - self.index_add_one_weights(args_selected) - - distances_to_quantized = distances_to_quantized.view( - batch_size * num_agents, num_vq - ) - - # Resample useless latent vectors - random_latents = self.continuous_latent.view( - batch_size * num_agents, self.codebook.shape[-1] - )[torch.randint(batch_size * num_agents, (num_vq,))] - codebook_weights = self.get_weights() - total_samples = codebook_weights.sum() - # TODO: The value 100 is arbitrary, should it be a parameter? - # The uselessness of a codebook vector is defined by the number of times it has been sampled - # if it has been sampled less than 1% of the time, it is pushed towards a random continuous latent sample - # this prevents the codebook from being dominated by a few vectors - self.uselessness = ( - ( - torch.where( - (codebook_weights < total_samples / (100 * num_vq)).unsqueeze(-1), - random_latents.detach() - self.codebook, - torch.zeros_like(self.codebook), - ).abs() - + 1 - ) - .log() - .sum(-1) - .mean() - ) - # TODO: The value 1e6 is arbitrary, should it be a parameter? - if total_samples > 1e6 * num_vq: - # Flush the codebook weights when the number of samples is too high - # This prevents the codebook from being dominated by its history - # if a few vectors were visited a lot and also prevents overflows - self.flush_weights() - - # commit_loss = (self.continuous_latent - quantized.detach()).square().clamp_min(self.distance_threshold).sum(-1).mean() - - self.quantization_loss = ( - (self.continuous_latent - quantized).square().sum(-1).mean() - ) - - quantized = ( - quantized.detach() - + self.continuous_latent - - self.continuous_latent.detach() - ) - - self.latent_diversity = ( - (self.continuous_latent[None, ...] 
- self.continuous_latent[:, None, ...])
-            .square()
-            .sum(-1)
-            .mean()
-        )
-
-        return quantized, torch.ones_like(quantized[..., 0]) / num_vq
-
-    def kl_loss(
-        self,
-        other: "ClassifiedLatentDistribution",
-        threshold: float = 0,
-        mask_z: Optional[torch.Tensor] = None,
-    ) -> torch.Tensor:
-        """Compute the cross entropy between two latent distributions."""
-        assert type(other) == ClassifiedLatentDistribution
-        min_logits = -10
-        max_logits = 10
-        pred_log = other.logits.clamp(min_logits, max_logits).log_softmax(-1)
-        self_pred = self.soft_one_hot
-        self.accuracy = (self_pred.argmax(-1) == other.logits.argmax(-1)).float().mean()
-        return -2 * (pred_log * self_pred).sum(-1).mean()
-
-    def sampling_loss(self) -> torch.Tensor:
-        if self.quantization_loss is None:
-            self.sample()
-        return 0.5 * (
-            self.quantization_loss + self.uselessness + 0.001 * self.latent_diversity
-        )
-
-    def average(
-        self, other: "QuantizedLatentDistribution", weight_other: torch.Tensor
-    ) -> "QuantizedLatentDistribution":
-        raise NotImplementedError(
-            "Average is not implemented for QuantizedLatentDistribution"
-        )
-
-    def log_dict(self, type: str) -> dict:
-        log_dict = {
-            f"latent/{type}/quantization_loss": self.quantization_loss,
-            f"latent/{type}/uselessness": self.uselessness,
-            f"latent/{type}/latent_diversity": self.latent_diversity,
-            f"latent/{type}/codebook_abs_mean": self.codebook.abs().mean(),
-            f"latent/{type}/codebook_std": self.codebook.std(),
-            f"latent/{type}/latent_abs_mean": self.continuous_latent.abs().mean(),
-            f"latent/{type}/latent_std": self.continuous_latent.std(),
-        }
-        if self.accuracy is not None:
-            log_dict[f"latent/{type}/accuracy"] = self.accuracy
-        return log_dict
-
-
-class ClassifiedLatentDistribution(AbstractLatentDistribution):
-    """Classified latent distribution.
-    It is defined with a codebook of quantized latents and a probability distribution over the codebook elements.
-
-    Args:
-        logits : Logits of shape (some_shape, num_embeddings)
-        codebook : Codebook of shape (num_embeddings, latent_dim)
-    """
-
-    def __init__(self, logits: torch.Tensor, codebook: torch.Tensor):
-        super().__init__()
-        self.register_buffer("logits", logits, persistent=False)
-        self.register_buffer("codebook", codebook, persistent=False)
-
-    def sample(
-        self, n_samples: int = 0, replacement: bool = True, *args, **kwargs
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
-        batch_size, num_agents, num_vq = self.logits.shape
-        squeeze_out = False
-        if n_samples == 0:
-            squeeze_out = True
-            n_samples = 1
-        elif n_samples > self.codebook.shape[0]:
-            warnings.warn(
-                f"Requested {n_samples} samples but only {self.codebook.shape[0]} are available in the discrete latent space. Switching to replacement=True to support it."
-            )
-            replacement = True
-
-        if self.training:
-            # TODO: should we make the temperature a parameter?
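-            # Gumbel-top-k trick: taking the top-k of Gumbel-perturbed logits is
-            # equivalent to sampling k categories without replacement
-            # (Kool et al., 2019, the paper cited at the top of this file).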
- all_weights, indices = relaxed_one_hot_categorical_without_replacement( - logits=self.logits, temperature=1, num_samples=n_samples - ) - selected_latents = self.codebook[indices, :] - # Cumulative mask of indices that have been sampled in order of probability - mask_selection = torch.nn.functional.one_hot(indices, num_vq).cumsum(-2) - mask_selection[..., 1:, :] = mask_selection[..., :-1, :] - mask_selection[..., 0, :] = 0.0 - # Remove the probability of previous samples to account for sampling without replacement - masked_weights = all_weights.unsqueeze(-2) * (1 - mask_selection.float()) - # Renormalize the probabilities to sum to 1 - masked_weights = masked_weights / masked_weights.sum(-1, keepdim=True) - - latent_samples = ( - masked_weights.unsqueeze(-1) - * self.codebook[None, None, None, ...].detach() - ).sum(-2) - latent_samples = ( - selected_latents.detach() + latent_samples - latent_samples.detach() - ) - probs = torch.gather(self.logits.softmax(-1), -1, indices) - else: - probs = self.logits.softmax(-1) - samples = torch.multinomial( - probs.view(batch_size * num_agents, num_vq), - n_samples, - replacement=replacement, - ) - latent_samples = self.codebook[samples] - probs = torch.gather( - probs, -1, samples.view(batch_size, num_agents, num_vq) - ) - - if squeeze_out: - latent_samples = latent_samples.view( - batch_size, num_agents, self.codebook.shape[-1] - ) - else: - latent_samples = latent_samples.view( - batch_size, num_agents, n_samples, self.codebook.shape[-1] - ) - return latent_samples, probs - - def kl_loss( - self, - other: "ClassifiedLatentDistribution", - threshold: float = 0, - mask_z: Optional[torch.Tensor] = None, - ) -> torch.Tensor: - """Compute the cross entropy between two latent distributions. Self being the reference distribution and other the distribution to compare.""" - assert type(other) == ClassifiedLatentDistribution - min_logits = -10 - max_logits = 10 - pred_log = other.logits.clamp(min_logits, max_logits).log_softmax(-1) - self_pred = ( - (0.5 * (self.logits.detach() + self.logits)) - .clamp(min_logits, max_logits) - .softmax(-1) - ) - return -2 * (pred_log * self_pred).sum(-1).mean() - - def sampling_loss(self) -> torch.Tensor: - return torch.zeros(1, device=self.logits.device) - - def average( - self, other: "ClassifiedLatentDistribution", weight_other: torch.Tensor - ) -> "ClassifiedLatentDistribution": - assert type(other) == ClassifiedLatentDistribution - assert (self.codebook == other.codebook).all() - return ClassifiedLatentDistribution( - ( - self.logits.exp() * (1 - weight_other) - + other.logits.exp() * weight_other - ).log(), - self.codebook, - ) - - def log_dict(self, type: str) -> dict: - max_probs, _ = self.logits.softmax(-1).max(-1) - return { - f"latent/{type}/codebook_abs_mean": self.codebook.abs().mean(), - f"latent/{type}/codebook_std": self.codebook.std(), - f"latent/{type}/class_max_mean": max_probs.mean(), - f"latent/{type}/class_max_std": max_probs.std(), - } - - -class QuantizedDistributionCreator(nn.Module): - """Creates a distribution from a latent vector.""" - - def __init__( - self, - latent_dim: int, - num_embeddings: int, - ): - super().__init__() - self.latent_dim = latent_dim - self.num_embeddings = num_embeddings - self.codebook = nn.Parameter(torch.randn(num_embeddings, latent_dim)) - self.register_buffer( - "codebook_weights", - torch.ones(num_embeddings, requires_grad=False), - persistent=False, - ) - - def _flush_codebook_weights(self): - self.codebook_weights = torch.ones_like(self.codebook_weights) - - def 
_get_codebook_weights(self): - return self.codebook_weights - - def _index_add_one_codebook_weight(self, indices: torch.Tensor): - self.codebook_weights = self.codebook_weights.index_add( - 0, - indices.flatten(), - torch.ones_like(self.codebook_weights[indices]), - ) - - def forward(self, latent: torch.Tensor) -> AbstractLatentDistribution: - if latent.shape[-1] == self.latent_dim: - return QuantizedLatentDistribution( - latent, - self.codebook, - self._flush_codebook_weights, - self._get_codebook_weights, - self._index_add_one_codebook_weight, - ) - elif latent.shape[-1] == self.num_embeddings: - return ClassifiedLatentDistribution( - latent, - self.codebook, - ) - else: - raise ValueError(f"Latent vector has wrong dimension: {latent.shape[-1]}") diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py deleted file mode 100644 index e564438d5bf016bcdbb65b4bbdc215d79f579f8a..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .coco import register_coco_instances # noqa -from .coco_panoptic import register_coco_panoptic_separated # noqa diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py deleted file mode 100644 index d0bdfc9da8cb7afc9ef421baef2c173a63ff1743..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/fpn.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from .backbone import Backbone -from .build import BACKBONE_REGISTRY -from .resnet import build_resnet_backbone - -__all__ = ["build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"] - - -class FPN(Backbone): - """ - This module implements :paper:`FPN`. - It creates pyramid features built on top of some input feature maps. - """ - - _fuse_type: torch.jit.Final[str] - - def __init__( - self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - norm (str): the normalization to use. - top_block (nn.Module or None): if provided, an extra operation will - be performed on the output of the last (smallest resolution) - FPN output, and the result will extend the result list. The top_block - further downsamples the feature map. 
It must have an attribute
-                "num_levels", meaning the number of extra FPN levels added by
-                this block, and "in_feature", which is a string representing
-                its input feature (e.g., p5).
-            fuse_type (str): types for fusing the top down features and the lateral
-                ones. It can be "sum" (default), which sums up element-wise; or "avg",
-                which takes the element-wise mean of the two.
-        """
-        super(FPN, self).__init__()
-        assert isinstance(bottom_up, Backbone)
-        assert in_features, in_features
-
-        # Feature map strides and channels from the bottom up network (e.g. ResNet)
-        input_shapes = bottom_up.output_shape()
-        strides = [input_shapes[f].stride for f in in_features]
-        in_channels_per_feature = [input_shapes[f].channels for f in in_features]
-
-        _assert_strides_are_log2_contiguous(strides)
-        lateral_convs = []
-        output_convs = []
-
-        use_bias = norm == ""
-        for idx, in_channels in enumerate(in_channels_per_feature):
-            lateral_norm = get_norm(norm, out_channels)
-            output_norm = get_norm(norm, out_channels)
-
-            lateral_conv = Conv2d(
-                in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm
-            )
-            output_conv = Conv2d(
-                out_channels,
-                out_channels,
-                kernel_size=3,
-                stride=1,
-                padding=1,
-                bias=use_bias,
-                norm=output_norm,
-            )
-            weight_init.c2_xavier_fill(lateral_conv)
-            weight_init.c2_xavier_fill(output_conv)
-            stage = int(math.log2(strides[idx]))
-            self.add_module("fpn_lateral{}".format(stage), lateral_conv)
-            self.add_module("fpn_output{}".format(stage), output_conv)
-
-            lateral_convs.append(lateral_conv)
-            output_convs.append(output_conv)
-        # Place convs into top-down order (from low to high resolution)
-        # to make the top-down computation in forward clearer.
-        self.lateral_convs = lateral_convs[::-1]
-        self.output_convs = output_convs[::-1]
-        self.top_block = top_block
-        self.in_features = tuple(in_features)
-        self.bottom_up = bottom_up
-        # Return feature names are "p<stage>", like ["p2", "p3", ..., "p6"]
-        self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides}
-        # top block output feature maps.
-        if self.top_block is not None:
-            for s in range(stage, stage + self.top_block.num_levels):
-                self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
-
-        self._out_features = list(self._out_feature_strides.keys())
-        self._out_feature_channels = {k: out_channels for k in self._out_features}
-        self._size_divisibility = strides[-1]
-        assert fuse_type in {"avg", "sum"}
-        self._fuse_type = fuse_type
-
-    @property
-    def size_divisibility(self):
-        return self._size_divisibility
-
-    def forward(self, x):
-        """
-        Args:
-            input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to
-                feature map tensor for each feature level in high to low resolution order.
-
-        Returns:
-            dict[str->Tensor]:
-                mapping from feature map name to FPN feature map tensor
-                in high to low resolution order. Returned feature names follow the FPN
-                paper convention: "p<stage>", where stage has stride = 2 ** stage, e.g.,
-                ["p2", "p3", ..., "p6"].
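-
-        Example (illustrative): with in_features=["res2", "res3", "res4", "res5"]
-        and a LastLevelMaxPool top block, the returned dict has keys
-        ["p2", "p3", "p4", "p5", "p6"].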
- """ - bottom_up_features = self.bottom_up(x) - results = [] - prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]]) - results.append(self.output_convs[0](prev_features)) - - # Reverse feature maps into top-down order (from low to high resolution) - for idx, (lateral_conv, output_conv) in enumerate( - zip(self.lateral_convs, self.output_convs) - ): - # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336 - # Therefore we loop over all modules but skip the first one - if idx > 0: - features = self.in_features[-idx - 1] - features = bottom_up_features[features] - top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest") - lateral_features = lateral_conv(features) - prev_features = lateral_features + top_down_features - if self._fuse_type == "avg": - prev_features /= 2 - results.insert(0, output_conv(prev_features)) - - if self.top_block is not None: - if self.top_block.in_feature in bottom_up_features: - top_block_in_feature = bottom_up_features[self.top_block.in_feature] - else: - top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)] - results.extend(self.top_block(top_block_in_feature)) - assert len(self._out_features) == len(results) - return {f: res for f, res in zip(self._out_features, results)} - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - -def _assert_strides_are_log2_contiguous(strides): - """ - Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". - """ - for i, stride in enumerate(strides[1:], 1): - assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( - stride, strides[i - 1] - ) - - -class LastLevelMaxPool(nn.Module): - """ - This module is used in the original FPN to generate a downsampled - P6 feature from P5. - """ - - def __init__(self): - super().__init__() - self.num_levels = 1 - self.in_feature = "p5" - - def forward(self, x): - return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)] - - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from - C5 feature. - """ - - def __init__(self, in_channels, out_channels, in_feature="res5"): - super().__init__() - self.num_levels = 2 - self.in_feature = in_feature - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -@BACKBONE_REGISTRY.register() -def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelMaxPool(), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
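-
-    Note: unlike build_resnet_fpn_backbone above, the extra P6/P7 levels here
-    come from LastLevelP6P7 (strided 3x3 convolutions on "res5") rather than
-    from max-pooling on P5.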
-    """
-    bottom_up = build_resnet_backbone(cfg, input_shape)
-    in_features = cfg.MODEL.FPN.IN_FEATURES
-    out_channels = cfg.MODEL.FPN.OUT_CHANNELS
-    in_channels_p6p7 = bottom_up.output_shape()["res5"].channels
-    backbone = FPN(
-        bottom_up=bottom_up,
-        in_features=in_features,
-        out_channels=out_channels,
-        norm=cfg.MODEL.FPN.NORM,
-        top_block=LastLevelP6P7(in_channels_p6p7, out_channels),
-        fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
-    )
-    return backbone
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_packaging.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_packaging.py
deleted file mode 100644
index a5b1661e8f341fe66a6e02c59fe172bce445782b..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_packaging.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import unittest
-
-from detectron2.utils.collect_env import collect_env_info
-
-
-class TestProjects(unittest.TestCase):
-    def test_import(self):
-        from detectron2.projects import point_rend
-
-        _ = point_rend.add_pointrend_config
-
-        import detectron2.projects.deeplab as deeplab
-
-        _ = deeplab.add_deeplab_config
-
-        # import detectron2.projects.panoptic_deeplab as panoptic_deeplab
-
-        # _ = panoptic_deeplab.add_panoptic_deeplab_config
-
-
-class TestCollectEnv(unittest.TestCase):
-    def test(self):
-        _ = collect_env_info()
diff --git a/spaces/Thanhdotr/facebook-fastspeech2-en-ljspeech/README.md b/spaces/Thanhdotr/facebook-fastspeech2-en-ljspeech/README.md
deleted file mode 100644
index 7080b28fd809faf2cf2013dd741596c49df11180..0000000000000000000000000000000000000000
--- a/spaces/Thanhdotr/facebook-fastspeech2-en-ljspeech/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Facebook Fastspeech2 En Ljspeech
-emoji: 👁
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Theivaprakasham/yolov6/yolov6/utils/checkpoint.py b/spaces/Theivaprakasham/yolov6/yolov6/utils/checkpoint.py
deleted file mode 100644
index 3ce2ede485e62131554acc374c990331bcbfbac5..0000000000000000000000000000000000000000
--- a/spaces/Theivaprakasham/yolov6/yolov6/utils/checkpoint.py
+++ /dev/null
@@ -1,60 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import shutil
-import torch
-import os.path as osp
-from yolov6.utils.events import LOGGER
-from yolov6.utils.torch_utils import fuse_model
-
-
-def load_state_dict(weights, model, map_location=None):
-    """Load weights from a checkpoint file, assigning only weights whose layer name and shape match."""
-    ckpt = torch.load(weights, map_location=map_location)
-    state_dict = ckpt['model'].float().state_dict()
-    model_state_dict = model.state_dict()
-    state_dict = {k: v for k, v in state_dict.items() if k in model_state_dict and v.shape == model_state_dict[k].shape}
-    model.load_state_dict(state_dict, strict=False)
-    del ckpt, state_dict, model_state_dict
-    return model
-
-
-def load_checkpoint(weights, map_location=None, inplace=True, fuse=True):
-    """Load model from checkpoint file."""
-    LOGGER.info("Loading checkpoint from {}".format(weights))
-    ckpt = torch.load(weights, map_location=map_location)  # load
-    model = ckpt['ema' if ckpt.get('ema') else 'model'].float()
-    if fuse:
-        LOGGER.info("\nFusing model...")
-        model = fuse_model(model).eval()
-    else:
- model = model.eval() - return model - - -def save_checkpoint(ckpt, is_best, save_dir, model_name=""): - """ Save checkpoint to the disk.""" - if not osp.exists(save_dir): - os.makedirs(save_dir) - filename = osp.join(save_dir, model_name + '.pt') - torch.save(ckpt, filename) - if is_best: - best_filename = osp.join(save_dir, 'best_ckpt.pt') - shutil.copyfile(filename, best_filename) - - -def strip_optimizer(ckpt_dir): - for s in ['best', 'last']: - ckpt_path = osp.join(ckpt_dir, '{}_ckpt.pt'.format(s)) - if not osp.exists(ckpt_path): - continue - ckpt = torch.load(ckpt_path, map_location=torch.device('cpu')) - if ckpt.get('ema'): - ckpt['model'] = ckpt['ema'] # replace model with ema - for k in ['optimizer', 'ema', 'updates']: # keys - ckpt[k] = None - ckpt['epoch'] = -1 - ckpt['model'].half() # to FP16 - for p in ckpt['model'].parameters(): - p.requires_grad = False - torch.save(ckpt, ckpt_path) diff --git a/spaces/Vrk/SeeFood/helper_functions.py b/spaces/Vrk/SeeFood/helper_functions.py deleted file mode 100644 index a5d604f3635655a2eae5deca57bb9d2ac3ad6c30..0000000000000000000000000000000000000000 --- a/spaces/Vrk/SeeFood/helper_functions.py +++ /dev/null @@ -1,288 +0,0 @@ -### We create a bunch of helpful functions throughout the course. -### Storing them here so they're easily accessible. - -import tensorflow as tf - -# Create a function to import an image and resize it to be able to be used with our model -def load_and_prep_image(filename, img_shape=224, scale=True): - """ - Reads in an image from filename, turns it into a tensor and reshapes into - (224, 224, 3). - - Parameters - ---------- - filename (str): string filename of target image - img_shape (int): size to resize target image to, default 224 - scale (bool): whether to scale pixel values to range(0, 1), default True - """ - # Read in the image - img = tf.io.read_file(filename) - # Decode it into a tensor - img = tf.image.decode_jpeg(img) - # Resize the image - img = tf.image.resize(img, [img_shape, img_shape]) - if scale: - # Rescale the image (get all values between 0 and 1) - return img/255. - else: - return img - -# Note: The following confusion matrix code is a remix of Scikit-Learn's -# plot_confusion_matrix function - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html -import itertools -import matplotlib.pyplot as plt -import numpy as np -from sklearn.metrics import confusion_matrix - -# Our function needs a different name to sklearn's plot_confusion_matrix -def make_confusion_matrix(y_true, y_pred, classes=None, figsize=(10, 10), text_size=15, norm=False, savefig=False): - """Makes a labelled confusion matrix comparing predictions and ground truth labels. - - If classes is passed, confusion matrix will be labelled, if not, integer class values - will be used. - - Args: - y_true: Array of truth labels (must be same shape as y_pred). - y_pred: Array of predicted labels (must be same shape as y_true). - classes: Array of class labels (e.g. string form). If `None`, integer labels are used. - figsize: Size of output figure (default=(10, 10)). - text_size: Size of output figure text (default=15). - norm: normalize values or not (default=False). - savefig: save confusion matrix to file (default=False). - - Returns: - A labelled confusion matrix plot comparing y_true and y_pred. 
-
-    Example usage:
-      make_confusion_matrix(y_true=test_labels, # ground truth test labels
-                            y_pred=y_preds, # predicted labels
-                            classes=class_names, # array of class label names
-                            figsize=(15, 15),
-                            text_size=10)
-    """
-    # Create the confusion matrix
-    cm = confusion_matrix(y_true, y_pred)
-    cm_norm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis] # normalize it
-    n_classes = cm.shape[0] # find the number of classes we're dealing with
-
-    # Plot the figure and make it pretty
-    fig, ax = plt.subplots(figsize=figsize)
-    cax = ax.matshow(cm, cmap=plt.cm.Blues) # colors will represent how 'correct' a class is, darker == better
-    fig.colorbar(cax)
-
-    # Is there a list of classes?
-    if classes:
-        labels = classes
-    else:
-        labels = np.arange(cm.shape[0])
-
-    # Label the axes
-    ax.set(title="Confusion Matrix",
-           xlabel="Predicted label",
-           ylabel="True label",
-           xticks=np.arange(n_classes), # create enough axis slots for each class
-           yticks=np.arange(n_classes),
-           xticklabels=labels, # axes will be labeled with class names (if they exist) or ints
-           yticklabels=labels)
-
-    # Make x-axis labels appear on bottom
-    ax.xaxis.set_label_position("bottom")
-    ax.xaxis.tick_bottom()
-
-    # Set the threshold for different colors
-    threshold = (cm.max() + cm.min()) / 2.
-
-    # Plot the text on each cell
-    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
-        if norm:
-            plt.text(j, i, f"{cm[i, j]} ({cm_norm[i, j]*100:.1f}%)",
-                     horizontalalignment="center",
-                     color="white" if cm[i, j] > threshold else "black",
-                     size=text_size)
-        else:
-            plt.text(j, i, f"{cm[i, j]}",
-                     horizontalalignment="center",
-                     color="white" if cm[i, j] > threshold else "black",
-                     size=text_size)
-
-    # Save the figure to the current working directory
-    if savefig:
-        fig.savefig("confusion_matrix.png")
-
-# Make a function to predict on images and plot them (works with multi-class)
-def pred_and_plot(model, filename, class_names):
-    """
-    Imports an image located at filename, makes a prediction on it with
-    a trained model and plots the image with the predicted class as the title.
-    """
-    # Import the target image and preprocess it
-    img = load_and_prep_image(filename)
-
-    # Make a prediction
-    pred = model.predict(tf.expand_dims(img, axis=0))
-
-    # Get the predicted class
-    if len(pred[0]) > 1: # check for multi-class
-        pred_class = class_names[pred.argmax()] # if more than one output, take the max
-    else:
-        pred_class = class_names[int(tf.round(pred)[0][0])] # if only one output, round
-
-    # Plot the image and predicted class
-    plt.imshow(img)
-    plt.title(f"Prediction: {pred_class}")
-    plt.axis(False);
-
-import datetime
-
-def create_tensorboard_callback(dir_name, experiment_name):
-    """
-    Creates a TensorBoard callback instance to store log files.
-
-    Stores log files with the filepath:
-      "dir_name/experiment_name/current_datetime/"
-
-    Args:
-      dir_name: target directory to store TensorBoard log files
-      experiment_name: name of experiment directory (e.g. efficientnet_model_1)
-    """
-    log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
-    tensorboard_callback = tf.keras.callbacks.TensorBoard(
-        log_dir=log_dir
-    )
-    print(f"Saving TensorBoard log files to: {log_dir}")
-    return tensorboard_callback
-
-# Plot the validation and training data separately
-import matplotlib.pyplot as plt
-
-def plot_loss_curves(history):
-    """
-    Returns separate loss curves for training and validation metrics.
- - Args: - history: TensorFlow model History object (see: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History) - """ - loss = history.history['loss'] - val_loss = history.history['val_loss'] - - accuracy = history.history['accuracy'] - val_accuracy = history.history['val_accuracy'] - - epochs = range(len(history.history['loss'])) - - # Plot loss - plt.plot(epochs, loss, label='training_loss') - plt.plot(epochs, val_loss, label='val_loss') - plt.title('Loss') - plt.xlabel('Epochs') - plt.legend() - - # Plot accuracy - plt.figure() - plt.plot(epochs, accuracy, label='training_accuracy') - plt.plot(epochs, val_accuracy, label='val_accuracy') - plt.title('Accuracy') - plt.xlabel('Epochs') - plt.legend(); - -def compare_historys(original_history, new_history, initial_epochs=5): - """ - Compares two TensorFlow model History objects. - - Args: - original_history: History object from original model (before new_history) - new_history: History object from continued model training (after original_history) - initial_epochs: Number of epochs in original_history (new_history plot starts from here) - """ - - # Get original history measurements - acc = original_history.history["accuracy"] - loss = original_history.history["loss"] - - val_acc = original_history.history["val_accuracy"] - val_loss = original_history.history["val_loss"] - - # Combine original history with new history - total_acc = acc + new_history.history["accuracy"] - total_loss = loss + new_history.history["loss"] - - total_val_acc = val_acc + new_history.history["val_accuracy"] - total_val_loss = val_loss + new_history.history["val_loss"] - - # Make plots - plt.figure(figsize=(8, 8)) - plt.subplot(2, 1, 1) - plt.plot(total_acc, label='Training Accuracy') - plt.plot(total_val_acc, label='Validation Accuracy') - plt.plot([initial_epochs-1, initial_epochs-1], - plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs - plt.legend(loc='lower right') - plt.title('Training and Validation Accuracy') - - plt.subplot(2, 1, 2) - plt.plot(total_loss, label='Training Loss') - plt.plot(total_val_loss, label='Validation Loss') - plt.plot([initial_epochs-1, initial_epochs-1], - plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs - plt.legend(loc='upper right') - plt.title('Training and Validation Loss') - plt.xlabel('epoch') - plt.show() - -# Create function to unzip a zipfile into current working directory -# (since we're going to be downloading and unzipping a few files) -import zipfile - -def unzip_data(filename): - """ - Unzips filename into the current working directory. - - Args: - filename (str): a filepath to a target zip folder to be unzipped. - """ - zip_ref = zipfile.ZipFile(filename, "r") - zip_ref.extractall() - zip_ref.close() - -# Walk through an image classification directory and find out how many files (images) -# are in each subdirectory. -import os - -def walk_through_dir(dir_path): - """ - Walks through dir_path returning its contents. 
-
-    Args:
-      dir_path (str): target directory
-
-    Returns:
-      A print out of:
-        number of subdirectories in dir_path
-        number of images (files) in each subdirectory
-        name of each subdirectory
-    """
-    for dirpath, dirnames, filenames in os.walk(dir_path):
-        print(f"There are {len(dirnames)} directories and {len(filenames)} images in '{dirpath}'.")
-
-# Function to evaluate: accuracy, precision, recall, f1-score
-from sklearn.metrics import accuracy_score, precision_recall_fscore_support
-
-def calculate_results(y_true, y_pred):
-    """
-    Calculates model accuracy, precision, recall and f1 score of a binary classification model.
-
-    Args:
-        y_true: true labels in the form of a 1D array
-        y_pred: predicted labels in the form of a 1D array
-
-    Returns a dictionary of accuracy, precision, recall, f1-score.
-    """
-    # Calculate model accuracy
-    model_accuracy = accuracy_score(y_true, y_pred) * 100
-    # Calculate model precision, recall and f1 score using "weighted" average
-    model_precision, model_recall, model_f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
-    model_results = {"accuracy": model_accuracy,
-                     "precision": model_precision,
-                     "recall": model_recall,
-                     "f1": model_f1}
-    return model_results
diff --git a/spaces/Xhaheen/Lexica_prompt_search/app.py b/spaces/Xhaheen/Lexica_prompt_search/app.py
deleted file mode 100644
index 6f9fd1fa814f6e522156b823779da0834aee5353..0000000000000000000000000000000000000000
--- a/spaces/Xhaheen/Lexica_prompt_search/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-
-import requests
-import shutil
-from PIL import Image
-from io import BytesIO
-import numpy as np
-import matplotlib.pyplot as plt
-import pandas as pd
-import random
-import gradio as gr
-
-design = 'india'
-def lexica(design, n):
-
-    request = requests.get(f'https://lexica.art/api/v1/search?q={design}')
-    data = request.json()
-    data_items = list(data.items())
-
-    random.shuffle(data_items)
-
-    data = dict(data_items)
-
-    image_urls = []
-    image_prompts = []
-    image_gallery = []
-
-    for key, value in data.items():
-        for i in range(n):
-            image_url = value[i]['src']
-            if isinstance(image_url, list):
-                image_url = image_url[0]
-            image_urls.append(image_url)
-            image_gallery.append(value[i]['gallery'])
-            image_prompts.append(value[i]['prompt'])
-
-    images = []
-
-    # Loop through the image URLs
-    for url in image_urls:
-        # Download the image from the URL
-        response = requests.get(url)
-
-        # Load the image data into PIL format
-        image = Image.open(BytesIO(response.content))
-
-        # Add the image to the list
-        images.append(image)
-
-    # df = pd.DataFrame(image_prompts, columns=["Lexica Prompt"], index=range(1, len(image_prompts)+1))
-    df = pd.DataFrame({'image_prompts': image_prompts, 'image_gallery': image_gallery})
-
-    # df.index.name = "Sr. No."
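-
-    # Note: the Lexica response is assumed to be a JSON object whose values are
-    # lists of image records carrying 'src', 'gallery' and 'prompt' fields,
-    # as parsed in the loop above.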
- - - for image in images: - - array = np.array(image) - - - return images , df -design='india' -# lexica(design) - -inputs =[ gr.Textbox(label = 'Enter prompt to search Lexica.art'), - gr.Slider(label='Number of images ', minimum = 4, maximum = 20, step = 1, value = 4)] - -outputs= [gr.Gallery(lable='Output gallery').style(grid=3,height=200,container=True), - gr.Dataframe(label='prompts for corresponding images')] -interface = gr.Interface(lexica, - inputs=inputs, - outputs=outputs, - examples =[ ['trending digital art', 5], - ['beautiful home', 5], - ['interior design of living room', 5]] - , - title = "" +' 🔍 🖌️🎨 Lexica Art - A Search Engine for Generative Art Prompts and Works '+ "", - description="🔍🖌️ 🎨 lexica huggingface space , Find inspiration and discover new generative artworks with Lexica Art, a search engine built by by @[Sharif shameem](https://twitter.com/sharifshameem) . Explore a vast collection of prompts and corresponding artworks, and let your imagination take over as you create your own masterpieces. \n\n Visit @[baith_al_suroor](https://huggingface.co/spaces/Xhaheen/Baith-al-suroor) to redesign your home interiors for FREE \n\n💡🖌️ spaces built with ❤️ @[Xhaheen](https://www.linkedin.com/in/sallu-mandya)") - -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py deleted file mode 100644 index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000 --- a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py +++ /dev/null @@ -1,31 +0,0 @@ -import ONNXVITS_models -import utils -from text import text_to_sequence -import torch -import commons - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") -symbols = hps.symbols -net_g = ONNXVITS_models.SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("ありがとうございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.tensor([0]) - o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1) \ No newline at end of file diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/README.md b/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/README.md deleted file mode 100644 index a726acbd92043458311dd949cc09c0195cd35400..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/README.md +++ /dev/null @@ -1,73 +0,0 @@ -# Flask REST API - -[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/wiki/API)s are -commonly used to expose Machine Learning (ML) models to other services. This folder contains an example REST API -created using Flask to expose the YOLOv5s model from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). - -## Requirements - -[Flask](https://palletsprojects.com/p/flask/) is required. 
Install with: - -```shell -$ pip install Flask -``` - -## Run - -After Flask installation run: - -```shell -$ python3 restapi.py --port 5000 -``` - -Then use [curl](https://curl.se/) to perform a request: - -```shell -$ curl -X POST -F image=@zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s' -``` - -The model inference results are returned as a JSON response: - -```json -[ - { - "class": 0, - "confidence": 0.8900438547, - "height": 0.9318675399, - "name": "person", - "width": 0.3264600933, - "xcenter": 0.7438579798, - "ycenter": 0.5207948685 - }, - { - "class": 0, - "confidence": 0.8440024257, - "height": 0.7155083418, - "name": "person", - "width": 0.6546785235, - "xcenter": 0.427829951, - "ycenter": 0.6334488392 - }, - { - "class": 27, - "confidence": 0.3771208823, - "height": 0.3902671337, - "name": "tie", - "width": 0.0696444362, - "xcenter": 0.3675483763, - "ycenter": 0.7991207838 - }, - { - "class": 27, - "confidence": 0.3527112305, - "height": 0.1540903747, - "name": "tie", - "width": 0.0336618312, - "xcenter": 0.7814827561, - "ycenter": 0.5065554976 - } -] -``` - -An example python script to perform inference using [requests](https://docs.python-requests.org/en/master/) is given -in `example_request.py` diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/dummy_flax_objects.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/dummy_flax_objects.py deleted file mode 100644 index 8e308bb41bea681993049d8a5ec3ff22987d5d14..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/utils/dummy_flax_objects.py +++ /dev/null @@ -1,184 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -# flake8: noqa - -from ..utils import DummyObject, requires_backends - - -class FlaxModelMixin(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxUNet2DConditionModel(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxAutoencoderKL(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDiffusionPipeline(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDDIMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - 
-class FlaxDDPMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxDPMSolverMultistepScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxKarrasVeScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxLMSDiscreteScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxPNDMScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxSchedulerMixin(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - -class FlaxScoreSdeVeScheduler(metaclass=DummyObject): - _backends = ["flax"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax"]) diff --git a/spaces/Yiqin/ChatVID/model/fastchat/data/alpaca-converter.py b/spaces/Yiqin/ChatVID/model/fastchat/data/alpaca-converter.py deleted file mode 100644 index 392ed2c2beaae92ce0464aecac6c254ffee53300..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/data/alpaca-converter.py +++ /dev/null @@ -1,67 +0,0 @@ -import argparse -import json -import pathlib - -# Prompt from stanford alpaca's training script -PROMPT_DICT = { - "prompt_input": ( - "Below is an instruction that describes a task, paired with an input that provides further context. " - "Write a response that appropriately completes the request.\n\n" - "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" - ), - "prompt_no_input": ( - "Below is an instruction that describes a task. 
" - "Write a response that appropriately completes the request.\n\n" - "### Instruction:\n{instruction}\n\n### Response:" - ), -} - - -def main(args): - data_path = pathlib.Path(args.data_path) - with data_path.open() as f: - data = json.load(f) - - prompt_input, prompt_no_input = ( - PROMPT_DICT["prompt_input"], - PROMPT_DICT["prompt_no_input"], - ) - sources = [ - prompt_input.format_map(example) - if example.get("input", "") != "" - else prompt_no_input.format_map(example) - for example in data - ] - targets = [example["output"] for example in data] - - new_data = [] - cnt = 1 - for s, t in zip(sources, targets): - new_data.append( - { - "id": str(cnt), - "conversations": [ - { - "from": "human", - "value": s, - }, - { - "from": "gpt", - "value": t, - }, - ], - } - ) - cnt += 1 - - json.dump(new_data, open(args.output_path, "w"), indent=2) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--data_path", type=str, default="alpaca-data.json") - parser.add_argument( - "--output_path", type=str, default="alpaca-data-conversation.json" - ) - args = parser.parse_args() - main(args) diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/GETTING_STARTED.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/GETTING_STARTED.md deleted file mode 100644 index 404b0c8f467264d1adf61e8274e5f864e24018e8..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/GETTING_STARTED.md +++ /dev/null @@ -1,79 +0,0 @@ -## Getting Started with Detectron2 - -This document provides a brief intro of the usage of builtin command-line tools in detectron2. - -For a tutorial that involves actual coding with the API, -see our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -which covers how to run inference with an -existing model, and how to train a builtin model on a custom dataset. - - -### Inference Demo with Pre-trained Models - -1. Pick a model and its config file from - [model zoo](MODEL_ZOO.md), - for example, `mask_rcnn_R_50_FPN_3x.yaml`. -2. We provide `demo.py` that is able to demo builtin configs. Run it with: -``` -cd demo/ -python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - --input input1.jpg input2.jpg \ - [--other-options] - --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl -``` -The configs are made for training, therefore we need to specify `MODEL.WEIGHTS` to a model from model zoo for evaluation. -This command will run the inference and show visualizations in an OpenCV window. - -For details of the command line arguments, see `demo.py -h` or look at its source code -to understand its behavior. Some common arguments are: -* To run __on your webcam__, replace `--input files` with `--webcam`. -* To run __on a video__, replace `--input files` with `--video-input video.mp4`. -* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`. -* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`. - - -### Training & Evaluation in Command Line - -We provide two scripts in "tools/plain_train_net.py" and "tools/train_net.py", -that are made to train all the configs provided in detectron2. You may want to -use it as a reference to write your own training script. - -Compared to "train_net.py", "plain_train_net.py" supports fewer default -features. 
It also includes fewer abstractions, and is therefore easier to extend with
-custom logic.
-
-To train a model with "train_net.py", first
-set up the corresponding datasets following
-[datasets/README.md](./datasets/README.md),
-then run:
-```
-cd tools/
-./train_net.py --num-gpus 8 \
-  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
-```
-
-The configs are made for 8-GPU training.
-To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.:
-```
-./train_net.py \
-  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
-  --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
-```
-
-To evaluate a model's performance, use
-```
-./train_net.py \
-  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
-  --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
-```
-For more options, see `./train_net.py -h`.
-
-### Use Detectron2 APIs in Your Code
-
-See our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-to learn how to use detectron2 APIs to:
-1. run inference with an existing model
-2. train a builtin model on a custom dataset
-
-See [detectron2/projects](https://github.com/facebookresearch/detectron2/tree/main/projects)
-for more ways to build your project on detectron2.
diff --git a/spaces/YlcldKlns/bing/src/pages/api/sydney.ts b/spaces/YlcldKlns/bing/src/pages/api/sydney.ts
deleted file mode 100644
index a5b99574289f532e6ef7c5e70a6360a556db9643..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/src/pages/api/sydney.ts
+++ /dev/null
@@ -1,61 +0,0 @@
-import { NextApiRequest, NextApiResponse } from 'next'
-import { WebSocket, debug } from '@/lib/isomorphic'
-import { BingWebBot } from '@/lib/bots/bing'
-import { websocketUtils } from '@/lib/bots/bing/utils'
-import { WatchDog, createHeaders } from '@/lib/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
-  const conversationContext = req.body
-  const headers = createHeaders(req.cookies)
-  debug(headers)
-  res.setHeader('Content-Type', 'text/stream; charset=UTF-8')
-
-  const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', {
-    headers: {
-      ...headers,
-      'accept-language': 'zh-CN,zh;q=0.9',
-      'cache-control': 'no-cache',
-      'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
-      pragma: 'no-cache',
-    }
-  })
-
-  const closeDog = new WatchDog()
-  const timeoutDog = new WatchDog()
-  ws.onmessage = (event) => {
-    timeoutDog.watch(() => {
-      ws.send(websocketUtils.packMessage({ type: 6 }))
-    }, 1500)
-    closeDog.watch(() => {
-      ws.close()
-    }, 10000)
-    res.write(event.data)
-    if (/\{"type":([367])\}/.test(String(event.data))) {
-      const type = parseInt(RegExp.$1, 10)
-      debug('connection type', type)
-      if (type === 3) {
-        ws.close()
-      } else {
-        ws.send(websocketUtils.packMessage({ type }))
-      }
-    }
-  }
-
-  ws.onclose = () => {
-    timeoutDog.reset()
-    closeDog.reset()
-    debug('connection close')
-    res.end()
-  }
-
-  await new Promise((resolve) => ws.onopen = resolve)
-  ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 }))
-  ws.send(websocketUtils.packMessage({ type: 6 }))
-  ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!)))
-  req.socket.once('close', () => {
-    ws.close()
-    if (!res.closed) {
-      res.end()
-    }
-  })
-}
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/utils/export.py b/spaces/Yudha515/Rvc-Models/audiocraft/utils/export.py
deleted
file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/Yuliang/ECON/lib/torch_utils/ops/conv2d_resample.py b/spaces/Yuliang/ECON/lib/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index 2529947e6f5d34f9aa65bd21ede9e0fac87190ab..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix, upfirdn2d -from .upfirdn2d import _get_filter_size, _parse_padding - -#---------------------------------------------------------------------------- - - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - - -#---------------------------------------------------------------------------- - - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. 
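-
-    With flip_weight=True (the default) the wrapper keeps PyTorch's native
-    correlation behavior; flip_weight=False flips the kernel spatially first,
-    so the call performs a true convolution.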
- """ - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - if not flip_weight: # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - w = w.flip([2, 3]) - - # Workaround performance pitfall in cuDNN 8.0.5, triggered when using - # 1x1 kernel + memory_format=channels_last + less than 64 channels. - if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose: - if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64: - if out_channels <= 4 and groups == 1: - in_shape = x.shape - x = w.squeeze(3).squeeze(2) @ x.reshape([in_shape[0], in_channels_per_group, -1]) - x = x.reshape([in_shape[0], out_channels, in_shape[2], in_shape[3]]) - else: - x = x.to(memory_format=torch.contiguous_format) - w = w.to(memory_format=torch.contiguous_format) - x = conv2d_gradfix.conv2d(x, w, groups=groups) - return x.to(memory_format=torch.channels_last) - - # Otherwise => execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - - -#---------------------------------------------------------------------------- - - -@misc.profiled_function -def conv2d_resample( - x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False -): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. - f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or ( - isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32 - ) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. 
- if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d( - x=x, f=f, down=down, padding=[px0, px1, py0, py1], flip_filter=flip_filter - ) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. - if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d( - x=x, f=f, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter - ) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0, px1, py0, py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper( - x=x, - w=w, - stride=up, - padding=[pyt, pxt], - groups=groups, - transpose=True, - flip_weight=(not flip_weight) - ) - x = upfirdn2d.upfirdn2d( - x=x, - f=f, - padding=[px0 + pxt, px1 + pxt, py0 + pyt, py1 + pyt], - gain=up**2, - flip_filter=flip_filter - ) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. - if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper( - x=x, w=w, padding=[py0, px0], groups=groups, flip_weight=flip_weight - ) - - # Fallback: Generic reference implementation. 
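-    # One upfirdn2d pass handles upsampling, padding and filtering; the plain
-    # convolution follows, and a second upfirdn2d pass downsamples if needed.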
- x = upfirdn2d.upfirdn2d( - x=x, - f=(f if up > 1 else None), - up=up, - padding=[px0, px1, py0, py1], - gain=up**2, - flip_filter=flip_filter - ) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - -#---------------------------------------------------------------------------- diff --git a/spaces/Zenne/chatbot_for_files_langchain/app.py b/spaces/Zenne/chatbot_for_files_langchain/app.py deleted file mode 100644 index 0687994afdac4ed3cb79abe7ef5a8f77c6df2cc8..0000000000000000000000000000000000000000 --- a/spaces/Zenne/chatbot_for_files_langchain/app.py +++ /dev/null @@ -1,277 +0,0 @@ -# Import required libraries -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.document_loaders import ( - UnstructuredWordDocumentLoader, - PyMuPDFLoader, - UnstructuredFileLoader, -) -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.chat_models import ChatOpenAI -from langchain.vectorstores import Pinecone, Chroma -from langchain.chains import ConversationalRetrievalChain -import os -import langchain -import pinecone -import streamlit as st -import shutil -import json - -OPENAI_API_KEY = '' -PINECONE_API_KEY = '' -PINECONE_API_ENV = '' -langchain.verbose = False - - -@st.cache_data() -def init(): - pinecone_index_name = '' - chroma_collection_name = '' - persist_directory = '' - docsearch_ready = False - directory_name = 'tmp_docs' - return pinecone_index_name, chroma_collection_name, persist_directory, docsearch_ready, directory_name - - -@st.cache_data() -def save_file(files): - # Remove existing files in the directory - if os.path.exists(directory_name): - for filename in os.listdir(directory_name): - file_path = os.path.join(directory_name, filename) - try: - if os.path.isfile(file_path): - os.remove(file_path) - except Exception as e: - print(f"Error: {e}") - # Save the new file with original filename - if files is not None: - for file in files: - file_name = file.name - file_path = os.path.join(directory_name, file_name) - with open(file_path, 'wb') as f: - shutil.copyfileobj(file, f) - - -def load_files(): - all_texts = [] - n_files = 0 - n_char = 0 - n_texts = 0 - - text_splitter = RecursiveCharacterTextSplitter( - chunk_size=400, chunk_overlap=50 - ) - for filename in os.listdir(directory_name): - file = os.path.join(directory_name, filename) - if os.path.isfile(file): - if file.endswith(".docx"): - loader = UnstructuredWordDocumentLoader(file) - elif file.endswith(".pdf"): - loader = PyMuPDFLoader(file) - else: # assume a pure text format and attempt to load it - loader = UnstructuredFileLoader(file) - data = loader.load() - texts = text_splitter.split_documents(data) - n_files += 1 - n_char += len(data[0].page_content) - n_texts += len(texts) - all_texts.extend(texts) - st.write( - f"Loaded {n_files} file(s) with {n_char} characters, and split into {n_texts} split-documents." 
- ) - return all_texts, n_texts - - -@st.cache_resource() -def ingest(_all_texts, use_pinecone, _embeddings, pinecone_index_name, chroma_collection_name, persist_directory): - if use_pinecone: - docsearch = Pinecone.from_texts( - [t.page_content for t in _all_texts], _embeddings, index_name=pinecone_index_name) # add namespace=pinecone_namespace if provided - else: - docsearch = Chroma.from_documents( - _all_texts, _embeddings, collection_name=chroma_collection_name, persist_directory=persist_directory) - - return docsearch - - -def setup_retriever(docsearch, k): - retriever = docsearch.as_retriever( - search_type="similarity", search_kwargs={"k": k}, include_metadata=True) - return retriever - - -def setup_docsearch(use_pinecone, pinecone_index_name, embeddings, chroma_collection_name, persist_directory): - docsearch = [] - n_texts = 0 - if use_pinecone: - # Load the pre-created Pinecone index. - # The index which has already be stored in pinecone.io as long-term memory - if pinecone_index_name in pinecone.list_indexes(): - docsearch = Pinecone.from_existing_index( - pinecone_index_name, embeddings) # add namespace=pinecone_namespace if provided - index_client = pinecone.Index(pinecone_index_name) - # Get the index information - index_info = index_client.describe_index_stats() - namespace_name = '' - n_texts = index_info['namespaces'][namespace_name]['vector_count'] - else: - raise ValueError('''Cannot find the specified Pinecone index. - Create one in pinecone.io or using, e.g., - pinecone.create_index( - name=index_name, dimension=1536, metric="cosine", shards=1)''') - else: - docsearch = Chroma(persist_directory=persist_directory, embedding_function=embeddings, - collection_name=chroma_collection_name) - - n_texts = docsearch._collection.count() - - return docsearch, n_texts - - -def get_response(query, chat_history, CRqa): - result = CRqa({"question": query, "chat_history": chat_history}) - return result['answer'], result['source_documents'] - - -def setup_em_llm(OPENAI_API_KEY, temperature, r_llm): - # Set up OpenAI embeddings - embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY) - # Use Open AI LLM with gpt-3.5-turbo or gpt-4. 
- # Set the temperature to be 0 if you do not want it to make up things - llm = ChatOpenAI(temperature=temperature, model_name=r_llm, streaming=True, - openai_api_key=OPENAI_API_KEY) - return embeddings, llm - - -def load_chat_history(CHAT_HISTORY_FILENAME): - try: - with open(CHAT_HISTORY_FILENAME, 'r') as f: - chat_history = json.load(f) - except (FileNotFoundError, json.JSONDecodeError): - chat_history = [] - return chat_history - - -def save_chat_history(chat_history, CHAT_HISTORY_FILENAME): - with open(CHAT_HISTORY_FILENAME, 'w') as f: - json.dump(chat_history, f) - - -pinecone_index_name, chroma_collection_name, persist_directory, docsearch_ready, directory_name = init() - - -def main(pinecone_index_name, chroma_collection_name, persist_directory, docsearch_ready, directory_name): - docsearch_ready = False - chat_history = [] - latest_chats = [] - reply = '' - source = '' - # Get user input of whether to use Pinecone or not - col1, col2, col3 = st.columns([1, 1, 1]) - # create the radio buttons and text input fields - with col1: - r_pinecone = st.radio('Use Pinecone?', ('Yes', 'No')) - r_ingest = st.radio( - 'Ingest file(s)?', ('Yes', 'No')) - r_llm = st.multiselect( - 'LLM:', ['gpt-3.5-turbo', 'gpt-4'], 'gpt-3.5-turbo') - r_llm = r_llm[0] - with col2: - OPENAI_API_KEY = st.text_input( - "OpenAI API key:", type="password") - temperature = st.slider('Temperature', 0.0, 1.0, 0.1) - k_sources = st.slider('# source(s) to print out', 0, 20, 2) - with col3: - if OPENAI_API_KEY: - embeddings, llm = setup_em_llm(OPENAI_API_KEY, temperature, r_llm) - if r_pinecone.lower() == 'yes': - use_pinecone = True - PINECONE_API_KEY = st.text_input( - "Pinecone API key:", type="password") - PINECONE_API_ENV = st.text_input( - "Pinecone API env:", type="password") - pinecone_index_name = st.text_input('Pinecone index:') - pinecone.init(api_key=PINECONE_API_KEY, - environment=PINECONE_API_ENV) - else: - use_pinecone = False - chroma_collection_name = st.text_input( - '''Chroma collection name of 3-63 characters:''') - persist_directory = "./vectorstore" - - if pinecone_index_name or chroma_collection_name: - session_name = pinecone_index_name + chroma_collection_name - if r_ingest.lower() == 'yes': - files = st.file_uploader( - 'Upload Files', accept_multiple_files=True) - if files: - save_file(files) - all_texts, n_texts = load_files() - docsearch = ingest(all_texts, use_pinecone, embeddings, pinecone_index_name, - chroma_collection_name, persist_directory) - docsearch_ready = True - else: - st.write( - 'No data is to be ingested. Make sure the Pinecone index or Chroma collection name you provided contains data.') - docsearch, n_texts = setup_docsearch(use_pinecone, pinecone_index_name, - embeddings, chroma_collection_name, persist_directory) - docsearch_ready = True - if docsearch_ready: - # number of sources (split-documents when ingesting files); default is 4 - k = min([20, n_texts]) - retriever = setup_retriever(docsearch, k) - CRqa = ConversationalRetrievalChain.from_llm( - llm, retriever=retriever, return_source_documents=True) - - st.title(':blue[Chatbot]') - # Get user input - query = st.text_area('Enter your question:', height=10, - placeholder='''Summarize the context. 
- \nAfter typing your question, click on SUBMIT to send it to the bot.''') - submitted = st.button('SUBMIT') - - CHAT_HISTORY_FILENAME = f"chat_history/{session_name}_chat_hist.json" - chat_history = load_chat_history(CHAT_HISTORY_FILENAME) - st.markdown('', unsafe_allow_html=True) - - if query and submitted: - # Generate a reply based on the user input and chat history - chat_history = [(user, bot) - for user, bot in chat_history] - reply, source = get_response(query, chat_history, CRqa) - # Update the chat history with the user input and system response - chat_history.append(('User', query)) - chat_history.append(('Bot', reply)) - save_chat_history(chat_history, CHAT_HISTORY_FILENAME) - c = chat_history[-4:] - if len(chat_history) >= 4: - latest_chats = [c[2],c[3],c[0],c[1]] - else: - latest_chats = c - - if latest_chats: - chat_history_str1 = '
<br>'.join([f'{x[0]}: {x[1]}' for x in latest_chats])
-            st.markdown(f'<p>{chat_history_str1}</p>', unsafe_allow_html=True)
-
-        if reply and source:
-            # Display sources
-            for i, source_i in enumerate(source):
-                if i < k_sources:
-                    if len(source_i.page_content) > 400:
-                        page_content = source_i.page_content[:400]
-                    else:
-                        page_content = source_i.page_content
-                    if source_i.metadata:
-                        metadata_source = source_i.metadata['source']
-                        st.markdown(f"<b>Source {i+1}: {metadata_source}</b><br><br>{page_content}", unsafe_allow_html=True)
-                    else:
-                        st.markdown(f"<b>Source {i+1}:</b><br><br>{page_content}", unsafe_allow_html=True)
-
-        all_chats = chat_history
-        all_chat_history_str = '\n'.join(
-            [f'{x[0]}: {x[1]}' for x in all_chats])
-        st.title(':blue[All chat records]')
-        st.text_area('', value=all_chat_history_str, height=250, label_visibility='collapsed')
-if __name__ == '__main__':
-    main(pinecone_index_name, chroma_collection_name, persist_directory,
-         docsearch_ready, directory_name)
diff --git a/spaces/aabyzov/playground/app.py b/spaces/aabyzov/playground/app.py
deleted file mode 100644
index a4491fa68b763a8a344f905b856e79f8ff7aabf7..0000000000000000000000000000000000000000
--- a/spaces/aabyzov/playground/app.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import streamlit as st
-
-x = st.slider('Select a value')
-st.write(x, 'squared is', x * x)
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py
deleted file mode 100644
index 6d07656d173744426795c81c14c6bcdb4e63a406..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/approx_max_iou_assigner.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .max_iou_assigner import MaxIoUAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class ApproxMaxIoUAssigner(MaxIoUAssigner):
-    """Assign a corresponding gt bbox or background to each bbox.
-
-    Each proposal will be assigned with an integer indicating the ground truth
-    index. (semi-positive index: gt label (0-based), -1: background)
-
-    - -1: negative sample, no assigned gt
-    - semi-positive integer: positive sample, index (0-based) of assigned gt
-
-    Args:
-        pos_iou_thr (float): IoU threshold for positive bboxes.
-        neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
-        min_pos_iou (float): Minimum iou for a bbox to be considered as a
-            positive bbox. Positive samples can have smaller IoU than
-            pos_iou_thr due to the 5th step (assign max IoU sample to each gt).
-        gt_max_assign_all (bool): Whether to assign all bboxes with the same
-            highest overlap with some gt to that gt.
-        ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
-            `gt_bboxes_ignore` is specified). Negative values mean not
-            ignoring any bboxes.
-        ignore_wrt_candidates (bool): Whether to compute the iof between
-            `bboxes` and `gt_bboxes_ignore`, or the contrary.
-        match_low_quality (bool): Whether to allow low-quality matches. This is
-            usually allowed for RPN and single-stage detectors, but not allowed
-            in the second stage.
-        gpu_assign_thr (int): The upper bound of the number of GT for GPU
-            assign. When the number of gt boxes exceeds this threshold,
-            assignment is done on CPU. Negative values mean never assigning
-            on CPU.
- """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - ignore_iof_thr=-1, - ignore_wrt_candidates=True, - match_low_quality=True, - gpu_assign_thr=-1, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - approxs, - squares, - approxs_per_octave, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to approxs. - - This method assign a gt bbox to each group of approxs (bboxes), - each group of approxs is represent by a base approx (bbox) and - will be assigned with -1, or a semi-positive number. - background_label (-1) means negative sample, - semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to background_label (-1) - 2. use the max IoU of each group of approxs to assign - 2. assign proposals whose iou with all gts < neg_iou_thr to background - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - approxs (Tensor): Bounding boxes to be assigned, - shape(approxs_per_octave*n, 4). - squares (Tensor): Base Bounding boxes to be assigned, - shape(n, 4). - approxs_per_octave (int): number of approxs per octave - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
- """ - num_squares = squares.size(0) - num_gts = gt_bboxes.size(0) - - if num_squares == 0 or num_gts == 0: - # No predictions and/or truth, return empty assignment - overlaps = approxs.new(num_gts, num_squares) - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - return assign_result - - # re-organize anchors by approxs_per_octave x num_squares - approxs = torch.transpose( - approxs.view(num_squares, approxs_per_octave, 4), 0, - 1).contiguous().view(-1, 4) - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - num_gts > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = approxs.device - approxs = approxs.cpu() - gt_bboxes = gt_bboxes.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - if gt_labels is not None: - gt_labels = gt_labels.cpu() - all_overlaps = self.iou_calculator(approxs, gt_bboxes) - - overlaps, _ = all_overlaps.view(approxs_per_octave, num_squares, - num_gts).max(dim=0) - overlaps = torch.transpose(overlaps, 0, 1) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and squares.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - squares, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, squares, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py deleted file mode 100644 index 0c70ce1235cfd978e4d85e5e8f67704b7b993ff4..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/openal/adaptation.py +++ /dev/null @@ -1,366 +0,0 @@ -import weakref - -from . 
import interface -from pyglet.util import debug_print -from pyglet.media.drivers.base import AbstractAudioDriver, AbstractAudioPlayer, MediaEvent -from pyglet.media.mediathreads import PlayerWorkerThread -from pyglet.media.drivers.listener import AbstractListener - -_debug = debug_print('debug_media') - - -class OpenALDriver(AbstractAudioDriver): - def __init__(self, device_name=None): - super().__init__() - - self.device = interface.OpenALDevice(device_name) - self.context = self.device.create_context() - self.context.make_current() - - self._listener = OpenALListener(self) - - self.worker = PlayerWorkerThread() - self.worker.start() - - def __del__(self): - assert _debug("Delete OpenALDriver") - self.delete() - - def create_audio_player(self, source, player): - assert self.device is not None, "Device was closed" - return OpenALAudioPlayer(self, source, player) - - def delete(self): - self.worker.stop() - self.context = None - - def have_version(self, major, minor): - return (major, minor) <= self.get_version() - - def get_version(self): - assert self.device is not None, "Device was closed" - return self.device.get_version() - - def get_extensions(self): - assert self.device is not None, "Device was closed" - return self.device.get_extensions() - - def have_extension(self, extension): - return extension in self.get_extensions() - - def get_listener(self): - return self._listener - - -class OpenALListener(AbstractListener): - def __init__(self, driver): - self._driver = weakref.proxy(driver) - self._al_listener = interface.OpenALListener() - - def __del__(self): - assert _debug("Delete OpenALListener") - - def _set_volume(self, volume): - self._al_listener.gain = volume - self._volume = volume - - def _set_position(self, position): - self._al_listener.position = position - self._position = position - - def _set_forward_orientation(self, orientation): - self._al_listener.orientation = orientation + self._up_orientation - self._forward_orientation = orientation - - def _set_up_orientation(self, orientation): - self._al_listener.orientation = self._forward_orientation + orientation - self._up_orientation = orientation - - -class OpenALAudioPlayer(AbstractAudioPlayer): - #: Minimum size of an OpenAL buffer worth bothering with, in bytes - min_buffer_size = 512 - - #: Aggregate (desired) buffer size, in seconds - _ideal_buffer_size = 1.0 - - def __init__(self, driver, source, player): - super(OpenALAudioPlayer, self).__init__(source, player) - self.driver = driver - self.alsource = driver.context.create_source() - - # Cursor positions, like DSound and Pulse drivers, refer to a - # hypothetical infinite-length buffer. Cursor units are in bytes. - - # Cursor position of current (head) AL buffer - self._buffer_cursor = 0 - - # Estimated playback cursor position (last seen) - self._play_cursor = 0 - - # Cursor position of end of queued AL buffer. 
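-        # Invariant (asserted in _check_cursors):
-        # buffer_cursor <= play_cursor <= write_cursor.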
- self._write_cursor = 0 - - # List of currently queued buffer sizes (in bytes) - self._buffer_sizes = [] - - # List of currently queued buffer timestamps - self._buffer_timestamps = [] - - # Timestamp at end of last written buffer (timestamp to return in case - # of underrun) - self._underrun_timestamp = None - - # List of (cursor, MediaEvent) - self._events = [] - - # Desired play state (True even if stopped due to underrun) - self._playing = False - - # When clearing, the play cursor can be incorrect - self._clearing = False - - # Up to one audio data may be buffered if too much data was received - # from the source that could not be written immediately into the - # buffer. See _refill(). - self._audiodata_buffer = None - - self._refill(self.ideal_buffer_size) - - def __del__(self): - self.delete() - - def delete(self): - self.driver.worker.remove(self) - self.alsource = None - - @property - def ideal_buffer_size(self): - return int(self._ideal_buffer_size * self.source.audio_format.bytes_per_second) - - def play(self): - assert _debug('OpenALAudioPlayer.play()') - - assert self.driver is not None - assert self.alsource is not None - - if not self.alsource.is_playing: - self.alsource.play() - self._playing = True - self._clearing = False - - self.driver.worker.add(self) - - def stop(self): - self.driver.worker.remove(self) - assert _debug('OpenALAudioPlayer.stop()') - assert self.driver is not None - assert self.alsource is not None - self.alsource.pause() - self._playing = False - - def clear(self): - assert _debug('OpenALAudioPlayer.clear()') - - assert self.driver is not None - assert self.alsource is not None - - super().clear() - self.alsource.stop() - self._handle_processed_buffers() - self.alsource.clear() - self.alsource.byte_offset = 0 - self._playing = False - self._clearing = True - self._audiodata_buffer = None - - self._buffer_cursor = 0 - self._play_cursor = 0 - self._write_cursor = 0 - del self._events[:] - del self._buffer_sizes[:] - del self._buffer_timestamps[:] - - def _update_play_cursor(self): - assert self.driver is not None - assert self.alsource is not None - - self._handle_processed_buffers() - - # Update play cursor using buffer cursor + estimate into current buffer - if self._clearing: - self._play_cursor = self._buffer_cursor - else: - self._play_cursor = self._buffer_cursor + self.alsource.byte_offset - assert self._check_cursors() - - self._dispatch_events() - - def _handle_processed_buffers(self): - processed = self.alsource.unqueue_buffers() - - if processed > 0: - if (len(self._buffer_timestamps) == processed - and self._buffer_timestamps[-1] is not None): - assert _debug('OpenALAudioPlayer: Underrun') - # Underrun, take note of timestamp. - # We check that the timestamp is not None, because otherwise - # our source could have been cleared. 
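-                # The estimate is the last queued buffer's timestamp plus its
-                # duration in seconds (buffer size / bytes per second).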
- self._underrun_timestamp = self._buffer_timestamps[-1] + \ - self._buffer_sizes[-1] / float(self.source.audio_format.bytes_per_second) - self._update_buffer_cursor(processed) - - return processed - - def _update_buffer_cursor(self, processed): - self._buffer_cursor += sum(self._buffer_sizes[:processed]) - del self._buffer_sizes[:processed] - del self._buffer_timestamps[:processed] - - def _dispatch_events(self): - while self._events and self._events[0][0] <= self._play_cursor: - _, event = self._events.pop(0) - event._sync_dispatch_to_player(self.player) - - def _get_write_size(self): - self._update_play_cursor() - buffer_size = int(self._write_cursor - self._play_cursor) - - # Only write when current buffer size is smaller than ideal - write_size = max(self.ideal_buffer_size - buffer_size, 0) - - assert _debug("Write size {} bytes".format(write_size)) - return write_size - - def refill_buffer(self): - write_size = self._get_write_size() - if write_size > self.min_buffer_size: - self._refill(write_size) - return True - return False - - def _refill(self, write_size): - assert _debug('_refill', write_size) - - while write_size > self.min_buffer_size: - audio_data = self._get_audiodata() - - if audio_data is None: - break - - length = min(write_size, audio_data.length) - if length == 0: - assert _debug('Empty AudioData. Discard it.') - - else: - assert _debug('Writing {} bytes'.format(length)) - self._queue_audio_data(audio_data, length) - write_size -= length - - # Check for underrun stopping playback - if self._playing and not self.alsource.is_playing: - assert _debug('underrun') - self.alsource.play() - - def _get_audiodata(self): - if self._audiodata_buffer is None or self._audiodata_buffer.length == 0: - self._get_new_audiodata() - - return self._audiodata_buffer - - def _get_new_audiodata(self): - assert _debug('Getting new audio data buffer.') - compensation_time = self.get_audio_time_diff() - self._audiodata_buffer= self.source.get_audio_data(self.ideal_buffer_size, compensation_time) - - if self._audiodata_buffer is not None: - assert _debug('New audio data available: {} bytes'.format(self._audiodata_buffer.length)) - self._queue_events(self._audiodata_buffer) - else: - assert _debug('No audio data left') - if self._has_underrun(): - assert _debug('Underrun') - MediaEvent('on_eos').sync_dispatch_to_player(self.player) - - def _queue_audio_data(self, audio_data, length): - buf = self.alsource.get_buffer() - buf.data(audio_data, self.source.audio_format, length) - self.alsource.queue_buffer(buf) - self._update_write_cursor(audio_data, length) - - def _update_write_cursor(self, audio_data, length): - self._write_cursor += length - self._buffer_sizes.append(length) - self._buffer_timestamps.append(audio_data.timestamp) - audio_data.consume(length, self.source.audio_format) - assert self._check_cursors() - - def _queue_events(self, audio_data): - for event in audio_data.events: - cursor = self._write_cursor + event.timestamp * \ - self.source.audio_format.bytes_per_second - self._events.append((cursor, event)) - - def _has_underrun(self): - return self.alsource.buffers_queued == 0 - - def get_time(self): - # Update first, might remove buffers - self._update_play_cursor() - - if not self._buffer_timestamps: - timestamp = self._underrun_timestamp - assert _debug('OpenALAudioPlayer: Return underrun timestamp') - else: - timestamp = self._buffer_timestamps[0] - assert _debug('OpenALAudioPlayer: Buffer timestamp: {}'.format(timestamp)) - - if timestamp is not None: - timestamp += 
((self._play_cursor - self._buffer_cursor) / - float(self.source.audio_format.bytes_per_second)) - - assert _debug('OpenALAudioPlayer: get_time = {}'.format(timestamp)) - - return timestamp - - def _check_cursors(self): - assert self._play_cursor >= 0 - assert self._buffer_cursor >= 0 - assert self._write_cursor >= 0 - assert self._buffer_cursor <= self._play_cursor - assert self._play_cursor <= self._write_cursor - assert _debug('Buffer[{}], Play[{}], Write[{}]'.format(self._buffer_cursor, - self._play_cursor, - self._write_cursor)) - return True # Return true so it can be called in an assert (and optimized out) - - def set_volume(self, volume): - self.alsource.gain = volume - - def set_position(self, position): - self.alsource.position = position - - def set_min_distance(self, min_distance): - self.alsource.reference_distance = min_distance - - def set_max_distance(self, max_distance): - self.alsource.max_distance = max_distance - - def set_pitch(self, pitch): - self.alsource.pitch = pitch - - def set_cone_orientation(self, cone_orientation): - self.alsource.direction = cone_orientation - - def set_cone_inner_angle(self, cone_inner_angle): - self.alsource.cone_inner_angle = cone_inner_angle - - def set_cone_outer_angle(self, cone_outer_angle): - self.alsource.cone_outer_angle = cone_outer_angle - - def set_cone_outer_gain(self, cone_outer_gain): - self.alsource.cone_outer_gain = cone_outer_gain - - def prefill_audio(self): - write_size = self._get_write_size() - self._refill(write_size) diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py deleted file mode 100644 index 5d3a44aa17675f292ed7aff3d1d39667d109456a..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/document.py +++ /dev/null @@ -1,701 +0,0 @@ -"""Formatted and unformatted document interfaces used by text layout. - -Abstract representation -======================= - -Styled text in pyglet is represented by one of the :py:class:`~pyglet.text.document.AbstractDocument` classes, -which manage the state representation of text and style independently of how -it is loaded or rendered. - -A document consists of the document text (a Unicode string) and a set of -named style ranges. For example, consider the following (artificial) -example:: - - 0 5 10 15 20 - The cat sat on the mat. - +++++++ +++++++ "bold" - ++++++ "italic" - -If this example were to be rendered, "The cat" and "the mat" would be in bold, -and "on the" in italics. Note that the second "the" is both bold and italic. - -The document styles recorded for this example would be ``"bold"`` over ranges -(0-7, 15-22) and ``"italic"`` over range (12-18). Overlapping styles are -permitted; unlike HTML and other structured markup, the ranges need not be -nested. - -The document has no knowledge of the semantics of ``"bold"`` or ``"italic"``, -it stores only the style names. The pyglet layout classes give meaning to -these style names in the way they are rendered; but you are also free to -invent your own style names (which will be ignored by the layout classes). -This can be useful to tag areas of interest in a document, or maintain -references back to the source material. - -As well as text, the document can contain arbitrary elements represented by -:py:class:`~pyglet.text.document.InlineElement`. 
An inline element behaves
-like a single character in the document, but can be rendered by the application.
-
-Paragraph breaks
-================
-
-Paragraph breaks are marked with a "newline" character (U+000A). The Unicode
-paragraph break (U+2029) can also be used.
-
-Line breaks (U+2028) can be used to force a line break within a paragraph.
-
-See Unicode recommendation UTR #13 for more information:
-http://unicode.org/reports/tr13/tr13-5.html.
-
-Document classes
-================
-
-Any class implementing :py:class:`~pyglet.text.document.AbstractDocument` provides an interface to a
-document model as described above. In theory a structured document such as
-HTML or XML could export this model, though the classes provided by pyglet
-implement only unstructured documents.
-
-The :py:class:`~pyglet.text.document.UnformattedDocument` class assumes any styles set are set over the entire
-document. So, regardless of the range specified when setting a ``"bold"``
-style attribute, for example, the entire document will receive that style.
-
-The :py:class:`~pyglet.text.document.FormattedDocument` class implements the document model directly, using
-the `RunList` class to represent style runs efficiently.
-
-Style attributes
-================
-
-The following character style attribute names are recognised by pyglet:
-
-``font_name``
-    Font family name, as given to :py:func:`pyglet.font.load`.
-``font_size``
-    Font size, in points.
-``bold``
-    Boolean.
-``italic``
-    Boolean.
-``underline``
-    4-tuple of ints in range (0, 255) giving RGBA underline color, or None
-    (default) for no underline.
-``kerning``
-    Additional space to insert between glyphs, in points. Defaults to 0.
-``baseline``
-    Offset of glyph baseline from line baseline, in points. Positive values
-    give a superscript, negative values give a subscript. Defaults to 0.
-``color``
-    4-tuple of ints in range (0, 255) giving RGBA text color.
-``background_color``
-    4-tuple of ints in range (0, 255) giving RGBA text background color; or
-    ``None`` for no background fill.
-
-The following paragraph style attribute names are recognised by pyglet. Note
-that paragraph styles are handled no differently from character styles by the
-document: it is the application's responsibility to set the style over an
-entire paragraph, otherwise results are undefined.
-
-``align``
-    ``left`` (default), ``center`` or ``right``.
-``indent``
-    Additional horizontal space to insert before the first glyph of the
-    first line of a paragraph.
-``leading``
-    Additional space to insert between consecutive lines within a paragraph,
-    in points. Defaults to 0.
-``line_spacing``
-    Distance between consecutive baselines in a paragraph, in points.
-    Defaults to ``None``, which automatically calculates the tightest line
-    spacing for each line based on the font ascent and descent.
-``margin_left``
-    Left paragraph margin, in pixels.
-``margin_right``
-    Right paragraph margin, in pixels.
-``margin_top``
-    Margin above paragraph, in pixels.
-``margin_bottom``
-    Margin below paragraph, in pixels. Adjacent margins do not collapse.
-``tab_stops``
-    List of horizontal tab stops, in pixels, measured from the left edge of
-    the text layout. Defaults to the empty list. When the tab stops
-    are exhausted, they implicitly continue at 50 pixel intervals.
-``wrap``
-    Boolean. If True (the default), text wraps within the width of the layout.
-
-Other attributes can be used to store additional style information within the
-document; it will be ignored by the built-in text classes.
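-
-For example (an illustrative sketch using
-:py:class:`~pyglet.text.document.FormattedDocument`), the "bold" and "italic"
-runs from the artificial example above could be set with::
-
-    from pyglet.text.document import FormattedDocument
-
-    doc = FormattedDocument('The cat sat on the mat.')
-    doc.set_style(0, 7, dict(bold=True))      # "The cat"
-    doc.set_style(15, 22, dict(bold=True))    # "the mat"
-    doc.set_style(12, 18, dict(italic=True))  # "on the"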
- -All style attributes (including those not present in a document) default to -``None`` (including the so-called "boolean" styles listed above). The meaning -of a ``None`` style is style- and application-dependent. - -.. versionadded:: 1.1 -""" - -import re -import sys - -from pyglet import event -from pyglet.text import runlist - -_is_pyglet_doc_run = hasattr(sys, "is_pyglet_doc_run") and sys.is_pyglet_doc_run - -#: The style attribute takes on multiple values in the document. -STYLE_INDETERMINATE = 'indeterminate' - - -class InlineElement: - """Arbitrary inline element positioned within a formatted document. - - Elements behave like a single glyph in the document. They are - measured by their horizontal advance, ascent above the baseline, and - descent below the baseline. - - The pyglet layout classes reserve space in the layout for elements and - call the element's methods to ensure they are rendered at the - appropriate position. - - If the size of a element (any of the `advance`, `ascent`, or `descent` - instance variables) is modified it is the application's responsibility to - trigger a reflow of the appropriate area in the affected layouts. This - can be done by forcing a style change over the element's position. - - :Ivariables: - `ascent` : int - Ascent of the element above the baseline, in pixels. - `descent` : int - Descent of the element below the baseline, in pixels. - Typically negative. - `advance` : int - Width of the element, in pixels. - - """ - - def __init__(self, ascent, descent, advance): - self.ascent = ascent - self.descent = descent - self.advance = advance - self._position = None - - @property - def position(self): - """Position of the element within the document. Read-only. - - :type: int - """ - return self._position - - def place(self, layout, x, y, z): - """Construct an instance of the element at the given coordinates. - - Called when the element's position within a layout changes, either - due to the initial condition, changes in the document or changes in - the layout size. - - It is the responsibility of the element to clip itself against - the layout boundaries, and position itself appropriately with respect - to the layout's position and viewport offset. - - The `TextLayout.top_state` graphics state implements this transform - and clipping into window space. - - :Parameters: - `layout` : `pyglet.text.layout.TextLayout` - The layout the element moved within. - `x` : int - Position of the left edge of the element, relative - to the left edge of the document, in pixels. - `y` : int - Position of the baseline, relative to the top edge of the - document, in pixels. Note that this is typically negative. - - """ - raise NotImplementedError('abstract') - - def remove(self, layout): - """Remove this element from a layout. - - The counterpart of `place`; called when the element is no longer - visible in the given layout. - - :Parameters: - `layout` : `pyglet.text.layout.TextLayout` - The layout the element was removed from. - - """ - raise NotImplementedError('abstract') - - -class AbstractDocument(event.EventDispatcher): - """Abstract document interface used by all :py:mod:`pyglet.text` classes. - - This class can be overridden to interface pyglet with a third-party - document format. It may be easier to implement the document format in - terms of one of the supplied concrete classes :py:class:`~pyglet.text.document.FormattedDocument` or - :py:class:`~pyglet.text.document.UnformattedDocument`. 
- """ - _previous_paragraph_re = re.compile(u'\n[^\n\u2029]*$') - _next_paragraph_re = re.compile(u'[\n\u2029]') - - def __init__(self, text=''): - super().__init__() - self._text = u'' - self._elements = [] - if text: - self.insert_text(0, text) - - @property - def text(self): - """Document text. - - For efficient incremental updates, use the :py:func:`~pyglet.text.document.AbstractDocument.insert_text` and - :py:func:`~pyglet.text.document.AbstractDocument.delete_text` methods instead of replacing this property. - - :type: str - """ - return self._text - - @text.setter - def text(self, text): - if text == self._text: - return - self.delete_text(0, len(self._text)) - self.insert_text(0, text) - - def get_paragraph_start(self, pos): - """Get the starting position of a paragraph. - - :Parameters: - `pos` : int - Character position within paragraph. - - :rtype: int - """ - # Tricky special case where the $ in pattern matches before the - # \n at the end of the string instead of the end of the string. - if self._text[:pos + 1].endswith('\n') or self._text[:pos + 1].endswith(u'\u2029'): - return pos - - m = self._previous_paragraph_re.search(self._text, 0, pos + 1) - if not m: - return 0 - return m.start() + 1 - - def get_paragraph_end(self, pos): - """Get the end position of a paragraph. - - :Parameters: - `pos` : int - Character position within paragraph. - - :rtype: int - """ - m = self._next_paragraph_re.search(self._text, pos) - if not m: - return len(self._text) - return m.start() + 1 - - def get_style_runs(self, attribute): - """Get a style iterator over the given style attribute. - - :Parameters: - `attribute` : str - Name of style attribute to query. - - :rtype: `AbstractRunIterator` - """ - raise NotImplementedError('abstract') - - def get_style(self, attribute, position=0): - """Get an attribute style at the given position. - - :Parameters: - `attribute` : str - Name of style attribute to query. - `position` : int - Character position of document to query. - - :return: The style set for the attribute at the given position. - """ - raise NotImplementedError('abstract') - - def get_style_range(self, attribute, start, end): - """Get an attribute style over the given range. - - If the style varies over the range, `STYLE_INDETERMINATE` is returned. - - :Parameters: - `attribute` : str - Name of style attribute to query. - `start` : int - Starting character position. - `end` : int - Ending character position (exclusive). - - :return: The style set for the attribute over the given range, or - `STYLE_INDETERMINATE` if more than one value is set. - """ - iterable = self.get_style_runs(attribute) - _, value_end, value = next(iterable.ranges(start, end)) - if value_end < end: - return STYLE_INDETERMINATE - else: - return value - - def get_font_runs(self, dpi=None): - """Get a style iterator over the `pyglet.font.Font` instances used in - the document. - - The font instances are created on-demand by inspection of the - ``font_name``, ``font_size``, ``bold`` and ``italic`` style - attributes. - - :Parameters: - `dpi` : float - Optional resolution to construct fonts at. See - :py:func:`pyglet.font.load`. - - :rtype: `AbstractRunIterator` - """ - raise NotImplementedError('abstract') - - def get_font(self, position, dpi=None): - """Get the font instance used at the given position. - - :see: `get_font_runs` - - :Parameters: - `position` : int - Character position of document to query. - `dpi` : float - Optional resolution to construct fonts at. See - :py:func:`pyglet.font.load`. 
- - :rtype: `pyglet.font.Font` - :return: The font at the given position. - """ - raise NotImplementedError('abstract') - - def insert_text(self, start, text, attributes=None): - """Insert text into the document. - - :Parameters: - `start` : int - Character insertion point within document. - `text` : str - Text to insert. - `attributes` : dict - Optional dictionary giving named style attributes of the - inserted text. - - """ - self._insert_text(start, text, attributes) - self.dispatch_event('on_insert_text', start, text) - - def _insert_text(self, start, text, attributes): - self._text = u''.join((self._text[:start], text, self._text[start:])) - len_text = len(text) - for element in self._elements: - if element._position >= start: - element._position += len_text - - def delete_text(self, start, end): - """Delete text from the document. - - :Parameters: - `start` : int - Starting character position to delete from. - `end` : int - Ending character position to delete to (exclusive). - - """ - self._delete_text(start, end) - self.dispatch_event('on_delete_text', start, end) - - def _delete_text(self, start, end): - for element in list(self._elements): - if start <= element._position < end: - self._elements.remove(element) - elif element._position >= end: # fix bug 538 - element._position -= (end - start) - - self._text = self._text[:start] + self._text[end:] - - def insert_element(self, position, element, attributes=None): - """Insert a element into the document. - - See the :py:class:`~pyglet.text.document.InlineElement` class - documentation for details of usage. - - :Parameters: - `position` : int - Character insertion point within document. - `element` : `~pyglet.text.document.InlineElement` - Element to insert. - `attributes` : dict - Optional dictionary giving named style attributes of the - inserted text. - - """ - assert element._position is None, 'Element is already in a document.' - self.insert_text(position, '\0', attributes) - element._position = position - self._elements.append(element) - self._elements.sort(key=lambda d: d.position) - - def get_element(self, position): - """Get the element at a specified position. - - :Parameters: - `position` : int - Position in the document of the element. - - :rtype: :py:class:`~pyglet.text.document.InlineElement` - """ - for element in self._elements: - if element._position == position: - return element - raise RuntimeError(f'No element at position {position}') - - def set_style(self, start, end, attributes): - """Set text style of some or all of the document. - - :Parameters: - `start` : int - Starting character position. - `end` : int - Ending character position (exclusive). - `attributes` : dict - Dictionary giving named style attributes of the text. - - """ - self._set_style(start, end, attributes) - self.dispatch_event('on_style_text', start, end, attributes) - - def _set_style(self, start, end, attributes): - raise NotImplementedError('abstract') - - def set_paragraph_style(self, start, end, attributes): - """Set the style for a range of paragraphs. - - This is a convenience method for `set_style` that aligns the - character range to the enclosing paragraph(s). - - :Parameters: - `start` : int - Starting character position. - `end` : int - Ending character position (exclusive). - `attributes` : dict - Dictionary giving named style attributes of the paragraphs. 
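-
-        For example, to centre the whole paragraph containing character
-        position 5 (a minimal sketch; the range is expanded to the
-        enclosing paragraph automatically)::
-
-            doc.set_paragraph_style(5, 5, {'align': 'center'})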
- - """ - start = self.get_paragraph_start(start) - end = self.get_paragraph_end(end) - self._set_style(start, end, attributes) - self.dispatch_event('on_style_text', start, end, attributes) - - if _is_pyglet_doc_run: - def on_insert_text(self, start, text): - """Text was inserted into the document. - - :Parameters: - `start` : int - Character insertion point within document. - `text` : str - The text that was inserted. - - :event: - """ - - def on_delete_text(self, start, end): - """Text was deleted from the document. - - :Parameters: - `start` : int - Starting character position of deleted text. - `end` : int - Ending character position of deleted text (exclusive). - - :event: - """ - - def on_style_text(self, start, end, attributes): - """Text character style was modified. - - :Parameters: - `start` : int - Starting character position of modified text. - `end` : int - Ending character position of modified text (exclusive). - `attributes` : dict - Dictionary giving updated named style attributes of the - text. - - :event: - """ - - -AbstractDocument.register_event_type('on_insert_text') -AbstractDocument.register_event_type('on_delete_text') -AbstractDocument.register_event_type('on_style_text') - - -class UnformattedDocument(AbstractDocument): - """A document having uniform style over all text. - - Changes to the style of text within the document affects the entire - document. For convenience, the ``position`` parameters of the style - methods may therefore be omitted. - """ - - def __init__(self, text=''): - super().__init__(text) - self.styles = {} - - def get_style_runs(self, attribute): - value = self.styles.get(attribute) - return runlist.ConstRunIterator(len(self.text), value) - - def get_style(self, attribute, position=None): - return self.styles.get(attribute) - - def set_style(self, start, end, attributes): - return super().set_style(0, len(self.text), attributes) - - def _set_style(self, start, end, attributes): - self.styles.update(attributes) - - def set_paragraph_style(self, start, end, attributes): - return super().set_paragraph_style(0, len(self.text), attributes) - - def get_font_runs(self, dpi=None): - ft = self.get_font(dpi=dpi) - return runlist.ConstRunIterator(len(self.text), ft) - - def get_font(self, position=None, dpi=None): - from pyglet import font - font_name = self.styles.get('font_name') - font_size = self.styles.get('font_size') - bold = self.styles.get('bold', False) - italic = self.styles.get('italic', False) - stretch = self.styles.get('stretch', False) - return font.load(font_name, font_size, bold=bold, italic=italic, stretch=stretch, dpi=dpi) - - def get_element_runs(self): - return runlist.ConstRunIterator(len(self._text), None) - - -class FormattedDocument(AbstractDocument): - """Simple implementation of a document that maintains text formatting. - - Changes to text style are applied according to the description in - :py:class:`~pyglet.text.document.AbstractDocument`. All styles default to ``None``. 
- """ - - def __init__(self, text=''): - self._style_runs = {} - super().__init__(text) - - def get_style_runs(self, attribute): - try: - return self._style_runs[attribute].get_run_iterator() - except KeyError: - return _no_style_range_iterator - - def get_style(self, attribute, position=0): - try: - return self._style_runs[attribute][position] - except KeyError: - return None - - def _set_style(self, start, end, attributes): - for attribute, value in attributes.items(): - try: - runs = self._style_runs[attribute] - except KeyError: - runs = self._style_runs[attribute] = runlist.RunList(0, None) - runs.insert(0, len(self._text)) - runs.set_run(start, end, value) - - def get_font_runs(self, dpi=None): - return _FontStyleRunsRangeIterator( - self.get_style_runs('font_name'), - self.get_style_runs('font_size'), - self.get_style_runs('bold'), - self.get_style_runs('italic'), - self.get_style_runs('stretch'), - dpi) - - def get_font(self, position, dpi=None): - runs_iter = self.get_font_runs(dpi) - return runs_iter[position] - - def get_element_runs(self): - return _ElementIterator(self._elements, len(self._text)) - - def _insert_text(self, start, text, attributes): - super()._insert_text(start, text, attributes) - - len_text = len(text) - for runs in self._style_runs.values(): - runs.insert(start, len_text) - - if attributes is not None: - for attribute, value in attributes.items(): - try: - runs = self._style_runs[attribute] - except KeyError: - runs = self._style_runs[attribute] = runlist.RunList(0, None) - runs.insert(0, len(self.text)) - runs.set_run(start, start + len_text, value) - - def _delete_text(self, start, end): - super()._delete_text(start, end) - for runs in self._style_runs.values(): - runs.delete(start, end) - - -def _iter_elements(elements, length): - last = 0 - for element in elements: - p = element.position - yield last, p, None - yield p, p + 1, element - last = p + 1 - yield last, length, None - - -class _ElementIterator(runlist.RunIterator): - def __init__(self, elements, length): - self._run_list_iter = _iter_elements(elements, length) - self.start, self.end, self.value = next(self) - - -class _FontStyleRunsRangeIterator: - # XXX subclass runlist - def __init__(self, font_names, font_sizes, bolds, italics, stretch, dpi): - self.zip_iter = runlist.ZipRunIterator((font_names, font_sizes, bolds, italics, stretch)) - self.dpi = dpi - - def ranges(self, start, end): - from pyglet import font - for start, end, styles in self.zip_iter.ranges(start, end): - font_name, font_size, bold, italic, stretch = styles - ft = font.load(font_name, font_size, bold=bool(bold), italic=bool(italic), stretch=stretch, dpi=self.dpi) - yield start, end, ft - - def __getitem__(self, index): - from pyglet import font - font_name, font_size, bold, italic, stretch = self.zip_iter[index] - return font.load(font_name, font_size, bold=bool(bold), italic=bool(italic), stretch=stretch, dpi=self.dpi) - - -class _NoStyleRangeIterator: - # XXX subclass runlist - @staticmethod - def ranges(start, end): - yield start, end, None - - def __getitem__(self, index): - return None - - -_no_style_range_iterator = _NoStyleRangeIterator() diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py deleted file mode 100644 index 3c105518dce6da67e0db106858a3100ae0029ab5..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/setup.py +++ /dev/null @@ -1,96 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -"""Setup Parallel WaveGAN 
library."""
-
-import os
-import pip
-import sys
-
-from distutils.version import LooseVersion
-from setuptools import find_packages
-from setuptools import setup
-
-if LooseVersion(sys.version) < LooseVersion("3.6"):
-    raise RuntimeError(
-        "parallel-wavegan requires Python>=3.6, "
-        "but your Python is {}".format(sys.version)
-    )
-if LooseVersion(pip.__version__) < LooseVersion("19"):
-    raise RuntimeError(
-        "pip>=19.0.0 is required, but your pip is {}. "
-        'Try again after "pip install -U pip"'.format(pip.__version__)
-    )
-
-requirements = {
-    "install": [
-        "torch>=1.0.1",
-        "setuptools>=38.5.1",
-        "librosa>=0.8.0",
-        "soundfile>=0.10.2",
-        "tensorboardX>=1.8",
-        "matplotlib>=3.1.0",
-        "PyYAML>=3.12",
-        "tqdm>=4.26.1",
-        "kaldiio>=2.14.1",
-        "h5py>=2.9.0",
-        "yq>=2.10.0",
-        "gdown",
-        "filelock",
-    ],
-    "setup": [
-        "numpy",
-        "pytest-runner",
-    ],
-    "test": [
-        "pytest>=3.3.0",
-        "hacking>=4.1.0",
-        "flake8-docstrings>=1.3.1",
-        "black",
-    ],
-}
-entry_points = {
-    "console_scripts": [
-        "parallel-wavegan-preprocess=parallel_wavegan.bin.preprocess:main",
-        "parallel-wavegan-compute-statistics=parallel_wavegan.bin.compute_statistics:main",
-        "parallel-wavegan-normalize=parallel_wavegan.bin.normalize:main",
-        "parallel-wavegan-train=parallel_wavegan.bin.train:main",
-        "parallel-wavegan-decode=parallel_wavegan.bin.decode:main",
-    ]
-}
-
-install_requires = requirements["install"]
-setup_requires = requirements["setup"]
-tests_require = requirements["test"]
-extras_require = {
-    k: v for k, v in requirements.items() if k not in ["install", "setup"]
-}
-
-dirname = os.path.dirname(__file__)
-setup(
-    name="parallel_wavegan",
-    version="0.5.3",
-    url="http://github.com/kan-bayashi/ParallelWaveGAN",
-    author="Tomoki Hayashi",
-    author_email="hayashi.tomoki@g.sp.m.is.nagoya-u.ac.jp",
-    description="Parallel WaveGAN implementation",
-    long_description=open(os.path.join(dirname, "README.md"), encoding="utf-8").read(),
-    long_description_content_type="text/markdown",
-    license="MIT License",
-    packages=find_packages(include=["parallel_wavegan*"]),
-    install_requires=install_requires,
-    setup_requires=setup_requires,
-    tests_require=tests_require,
-    extras_require=extras_require,
-    entry_points=entry_points,
-    classifiers=[
-        "Programming Language :: Python :: 3.6",
-        "Programming Language :: Python :: 3.7",
-        "Programming Language :: Python :: 3.8",
-        "Programming Language :: Python :: 3.9",
-        "Intended Audience :: Science/Research",
-        "Operating System :: POSIX :: Linux",
-        "License :: OSI Approved :: MIT License",
-        "Topic :: Software Development :: Libraries :: Python Modules",
-    ],
-)
diff --git a/spaces/alamin655/websurfx/public/templates/search_bar.html b/spaces/alamin655/websurfx/public/templates/search_bar.html
deleted file mode 100644
index 8bb6bd9713a89d7077459c46e0c323dec3fb63e4..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/public/templates/search_bar.html
+++ /dev/null
@@ -1,27 +0,0 @@
-{{>bar this}}
-
          - {{#if engineErrorsInfo}} - - - {{else}} - - - {{/if}} -
          -
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py
deleted file mode 100644
index 44e4309d20dfe3190988905258a4159411a662b3..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-"""
-The cache object API for implementing caches. The default is a thread
-safe in-memory dictionary.
-"""
-from threading import Lock
-
-
-class BaseCache(object):
-
-    def get(self, key):
-        raise NotImplementedError()
-
-    def set(self, key, value, expires=None):
-        raise NotImplementedError()
-
-    def delete(self, key):
-        raise NotImplementedError()
-
-    def close(self):
-        pass
-
-
-class DictCache(BaseCache):
-
-    def __init__(self, init_dict=None):
-        self.lock = Lock()
-        self.data = init_dict or {}
-
-    def get(self, key):
-        return self.data.get(key, None)
-
-    def set(self, key, value, expires=None):
-        with self.lock:
-            self.data.update({key: value})
-
-    def delete(self, key):
-        with self.lock:
-            if key in self.data:
-                self.data.pop(key)
diff --git a/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py b/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py
deleted file mode 100644
index 94daaeebce5604d61999f0b1b354b9a9e299b991..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/gfp-Gans/gfpgan/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-
-# from .version import *
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py
deleted file mode 100644
index df61bf8713419f847d7c2ee8c6036797c7b03ef7..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .infinibatch.infinibatch import datasets, iterators
diff --git a/spaces/alphunt/diffdock-alphunt-demo/datasets/__init__.py b/spaces/alphunt/diffdock-alphunt-demo/datasets/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/amanatid/ArxivGPT_Streamlit/faq.py b/spaces/amanatid/ArxivGPT_Streamlit/faq.py
deleted file mode 100644
index 59592c5dd7088da2edc1eedd05a12b04e307735d..0000000000000000000000000000000000000000
--- a/spaces/amanatid/ArxivGPT_Streamlit/faq.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# flake8: noqa
-import streamlit as st
-
-
-def faq():
-    st.markdown(
-        """
-# FAQ
-## How does ArxivGPT work?
-When you load a PDF, it will be divided into smaller chunks
-and stored in a special type of database called a vector index
-that allows for semantic search and retrieval.
-
-When you ask a question, ArxivGPT will search through the
-PDF chunks and find the most relevant ones using the vector index.
-Then, it will use the powerful language model GPT-4 to generate a final answer.
-
-## Why does ArxivGPT take time to index my document?
-The reason is the following: a free OpenAI API key takes time to
-index the loaded PDF files, since a free API key has
-restricted [rate limits](https://platform.openai.com/docs/guides/rate-limits/overview).
-To make the process fast, you can use a paid API key.
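-
-Under the hood, the two answers above boil down to a pipeline like this
-(an illustrative sketch only; the helper names here are hypothetical, not
-ArxivGPT's actual code):
-
-```python
-chunks = split_pdf(pdf_text)                     # small chunks of the PDF
-index = build_vector_index(embed(chunks))        # vector index for semantic search
-top_chunks = index.search(embed(question), k=5)  # most relevant chunks
-answer = gpt4(question, context=top_chunks)      # final answer
-```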
-
-
-## How accurate is ArxivGPT?
-In our experience and tests, it seems impressively accurate, but keep in
-mind that GPT-4 is a language model and is prone to mistakes. It is
-based on semantic search and extracts the most relevant chunks from the PDF
-files.
-"""
-    )
diff --git a/spaces/amin2809/rvc-models/config.py b/spaces/amin2809/rvc-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/amin2809/rvc-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set to cuda:x, cpu or mps, where x is the GPU index; only NVIDIA GPUs / Apple Silicon are supported for acceleration
-device = "cuda:0"
-
-# Safe to set True on NVIDIA 9/10/20/30/40-series GPUs; it does not affect quality, and 20-series or newer get a speedup
-is_half = True
-
-# 0 (default) uses all threads; set a number to limit CPU usage
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below, do not modify ##################

-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
-    "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
-    "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
-# check `getattr` and try it for compatibility
-def has_mps() -> bool:
-    if sys.platform != "darwin":
-        return False
-    else:
-        if not getattr(torch, "has_mps", False):
-            return False
-        try:
-            torch.zeros(1).to(torch.device("mps"))
-            return True
-        except Exception:
-            return False
-
-
-if not torch.cuda.is_available():
-    if has_mps():
-        print("No supported NVIDIA GPU found, using MPS for inference")
-        device = "mps"
-    else:
-        print("No supported NVIDIA GPU found, using CPU for inference")
-        device = "cpu"
-        is_half = False
-
-if device not in ["cpu", "mps"]:
-    gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
-    if "16" in gpu_name or "MX" in gpu_name:
-        print("16-series / MX-series GPUs are forced to single precision")
-        is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
-    n_cpu = cpu_count()
-if is_half:
-    # Config for 6 GB VRAM
-    x_pad = 3
-    x_query = 10
-    x_center = 60
-    x_max = 65
-else:
-    # Config for 5 GB VRAM
-    x_pad = 1
-    x_query = 6
-    x_center = 38
-    x_max = 41
diff --git a/spaces/anonderpling/repo_uploader/app.py b/spaces/anonderpling/repo_uploader/app.py
deleted file mode 100644
index 04db3848c0947f83b416054e8fb7c40295186378..0000000000000000000000000000000000000000
--- a/spaces/anonderpling/repo_uploader/app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import gradio as gr
-import requests
-import subprocess
-import os
-from huggingface_hub import whoami
-from huggingface_hub import HfApi
-from huggingface_hub import login
-import random
-import time
-
-api=HfApi()
-REPO_TYPES = ["model", "dataset", "space"]
-
-def duplicate(source_url, dst_repo, token, new_name, dst_repo_path, repo_type):
-    try:
-        _ = whoami(token)
-        # ^ this will throw if token is invalid
-
-        # make sure the user fills out the other required paths.
-        if not dst_repo_path[len(dst_repo_path)-1] == '/':
-            raise Exception("Your destination path *must* end with a /")
-        if not source_url:
-            raise Exception("You haven't chosen a file to download!")
-        if not dst_repo:
-            raise Exception("You haven't chosen a repo to download to")
-        login(token=token)
-
-        # keep things separate, partly in case people download different files with the same name
-        # (`download.zip`); it also allows saving the filename to work
-        dir="/home/user/apps/downloads/"+str(int(time.time()))+str(random.getrandbits(8))+"/"
-        subprocess.check_call([r"mkdir","-p",dir])
-        subprocess.check_call([r"aria2c","-x16","--split=16",source_url,"--dir="+dir])
-        files=os.listdir(dir)
-
-        if new_name:
-            dst_repo_path=dst_repo_path.strip("/")+"/"+new_name.strip("/")
-        else:
-            dst_repo_path=dst_repo_path.strip("/")+"/"+files[0]
-
-        api.upload_file(
-            path_or_fileobj=dir+files[0],
-            path_in_repo=dst_repo_path,
-            repo_id=dst_repo,
-            repo_type=repo_type
-        )
-
-        # now clean up
-        os.remove(dir+files[0])
-        os.rmdir(dir)
-        match repo_type:
-            case "space":
-                repo_url=f"https://hf.co/spaces/{dst_repo}"
-            case "dataset":
-                repo_url=f"https://hf.co/datasets/{dst_repo}"
-            case "model":
-                repo_url=f"https://hf.co/{dst_repo}"
-        return (
-            f'Find your repo here',
-            "sp.jpg",
-        )
-
-    except Exception as e:
-        blames=["grandma","my boss","your boss","God","you","you. It's *all* your fault.","the pope"]
-        blameweights=(1,1,1,1,4,2,1)
-        excuses=["I blame it all on "+random.choices(blames,weights=blameweights)[0],"It's my fault, sorry.","I did it on purpose.","That file doesn't want to be downloaded.","You nincompoop!"]
-        excusesweights=(12,1,1,2,3)
-        excuse=random.choices(excuses,weights=excusesweights)[0]
-        return (
-            f"""
-            ### Error 😢😢😢
-
-            {e}
-
-            """ + excuse+"",
-            None,
-        )
-
-
-interface = gr.Interface(
-    fn=duplicate,
-    inputs=[
-        gr.Textbox(placeholder="Source URL (e.g. civitai.com/api/download/models/4324322534)"),
-        gr.Textbox(placeholder="Destination repository (e.g. osanseviero/dst)"),
-        gr.Textbox(placeholder="Write access token", type="password"),
-        gr.Textbox(placeholder="Post-download name of your file, if you want it changed (e.g. stupidmodel.safetensors)"),
-        gr.Textbox(placeholder="Destination for your file within your repo. Don't include the filename (e.g. /models/Stable-diffusion/)"),
-        gr.Dropdown(choices=REPO_TYPES, value="model"),
-    ],
-    outputs=[
-        gr.Markdown(label="output"),
-        gr.Image(show_label=False),
-    ],
-    title="Download a file to your repo!",
-    description="Download a file to your Hugging Face repository! You need to specify a write token obtained in https://hf.co/settings/tokens. This Space is an experimental demo.",
-    article="

          Find your write token at token settings

          ", - allow_flagging="never", - live=False, # since i keep wondering, this prevents it from running again automatically when an input changes -) -interface.launch(enable_queue=True) diff --git a/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js b/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js deleted file mode 100644 index 23002013d70aa52189700305aacd93dba6849067..0000000000000000000000000000000000000000 --- a/spaces/anzorq/hf-spaces-semantic-search/pages/_app.js +++ /dev/null @@ -1,5 +0,0 @@ -import '@/styles/globals.css' - -export default function App({ Component, pageProps }) { - return -} diff --git a/spaces/apsys/normflows/README.md b/spaces/apsys/normflows/README.md deleted file mode 100644 index 824b6c1224adf857741eb61fec9d1f2c336c8c85..0000000000000000000000000000000000000000 --- a/spaces/apsys/normflows/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Normflows -emoji: 🌖 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ari7thomas/bible.ai/README.md b/spaces/ari7thomas/bible.ai/README.md deleted file mode 100644 index 19bc778084383d4c258060b92e28b6598c16eab2..0000000000000000000000000000000000000000 --- a/spaces/ari7thomas/bible.ai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AutoTrain Advanced -emoji: 🚀 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false -duplicated_from: autotrain-projects/autotrain-advanced -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py deleted file mode 100644 index f2e7779c0c109a3ec78f1972ebf1147ec436048a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/train_encoder.py +++ /dev/null @@ -1,319 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -import os -import sys -import time -import traceback - -import torch -from torch.utils.data import DataLoader -from trainer.torch import NoamLR -from trainer.trainer_utils import get_optimizer - -from TTS.encoder.dataset import EncoderDataset -from TTS.encoder.utils.generic_utils import save_best_model, save_checkpoint, setup_encoder_model -from TTS.encoder.utils.training import init_training -from TTS.encoder.utils.visual import plot_embeddings -from TTS.tts.datasets import load_tts_samples -from TTS.utils.audio import AudioProcessor -from TTS.utils.generic_utils import count_parameters, remove_experiment_folder -from TTS.utils.io import copy_model_files -from TTS.utils.samplers import PerfectBatchSampler -from TTS.utils.training import check_update - -torch.backends.cudnn.enabled = True -torch.backends.cudnn.benchmark = True -torch.manual_seed(54321) -use_cuda = torch.cuda.is_available() -num_gpus = torch.cuda.device_count() -print(" > Using CUDA: ", use_cuda) -print(" > Number of GPUs: ", num_gpus) - - -def setup_loader(ap: AudioProcessor, is_val: bool = False, verbose: bool = False): - num_utter_per_class = c.num_utter_per_class if not is_val else c.eval_num_utter_per_class - num_classes_in_batch = c.num_classes_in_batch if not is_val else c.eval_num_classes_in_batch - - dataset = EncoderDataset( - c, - ap, - meta_data_eval if is_val else meta_data_train, - voice_len=c.voice_len, - 
num_utter_per_class=num_utter_per_class, - num_classes_in_batch=num_classes_in_batch, - verbose=verbose, - augmentation_config=c.audio_augmentation if not is_val else None, - use_torch_spec=c.model_params.get("use_torch_spec", False), - ) - # get classes list - classes = dataset.get_class_list() - - sampler = PerfectBatchSampler( - dataset.items, - classes, - batch_size=num_classes_in_batch * num_utter_per_class, # total batch size - num_classes_in_batch=num_classes_in_batch, - num_gpus=1, - shuffle=not is_val, - drop_last=True, - ) - - if len(classes) < num_classes_in_batch: - if is_val: - raise RuntimeError( - f"config.eval_num_classes_in_batch ({num_classes_in_batch}) need to be <= {len(classes)} (Number total of Classes in the Eval dataset) !" - ) - raise RuntimeError( - f"config.num_classes_in_batch ({num_classes_in_batch}) need to be <= {len(classes)} (Number total of Classes in the Train dataset) !" - ) - - # set the classes to avoid get wrong class_id when the number of training and eval classes are not equal - if is_val: - dataset.set_classes(train_classes) - - loader = DataLoader( - dataset, - num_workers=c.num_loader_workers, - batch_sampler=sampler, - collate_fn=dataset.collate_fn, - ) - - return loader, classes, dataset.get_map_classid_to_classname() - - -def evaluation(model, criterion, data_loader, global_step): - eval_loss = 0 - for _, data in enumerate(data_loader): - with torch.no_grad(): - # setup input data - inputs, labels = data - - # agroup samples of each class in the batch. perfect sampler produces [3,2,1,3,2,1] we need [3,3,2,2,1,1] - labels = torch.transpose( - labels.view(c.eval_num_utter_per_class, c.eval_num_classes_in_batch), 0, 1 - ).reshape(labels.shape) - inputs = torch.transpose( - inputs.view(c.eval_num_utter_per_class, c.eval_num_classes_in_batch, -1), 0, 1 - ).reshape(inputs.shape) - - # dispatch data to GPU - if use_cuda: - inputs = inputs.cuda(non_blocking=True) - labels = labels.cuda(non_blocking=True) - - # forward pass model - outputs = model(inputs) - - # loss computation - loss = criterion( - outputs.view(c.eval_num_classes_in_batch, outputs.shape[0] // c.eval_num_classes_in_batch, -1), labels - ) - - eval_loss += loss.item() - - eval_avg_loss = eval_loss / len(data_loader) - # save stats - dashboard_logger.eval_stats(global_step, {"loss": eval_avg_loss}) - # plot the last batch in the evaluation - figures = { - "UMAP Plot": plot_embeddings(outputs.detach().cpu().numpy(), c.num_classes_in_batch), - } - dashboard_logger.eval_figures(global_step, figures) - return eval_avg_loss - - -def train(model, optimizer, scheduler, criterion, data_loader, eval_data_loader, global_step): - model.train() - best_loss = float("inf") - avg_loader_time = 0 - end_time = time.time() - for epoch in range(c.epochs): - tot_loss = 0 - epoch_time = 0 - for _, data in enumerate(data_loader): - start_time = time.time() - - # setup input data - inputs, labels = data - # agroup samples of each class in the batch. 
perfect sampler produces [3,2,1,3,2,1] we need [3,3,2,2,1,1] - labels = torch.transpose(labels.view(c.num_utter_per_class, c.num_classes_in_batch), 0, 1).reshape( - labels.shape - ) - inputs = torch.transpose(inputs.view(c.num_utter_per_class, c.num_classes_in_batch, -1), 0, 1).reshape( - inputs.shape - ) - # ToDo: move it to a unit test - # labels_converted = torch.transpose(labels.view(c.num_utter_per_class, c.num_classes_in_batch), 0, 1).reshape(labels.shape) - # inputs_converted = torch.transpose(inputs.view(c.num_utter_per_class, c.num_classes_in_batch, -1), 0, 1).reshape(inputs.shape) - # idx = 0 - # for j in range(0, c.num_classes_in_batch, 1): - # for i in range(j, len(labels), c.num_classes_in_batch): - # if not torch.all(labels[i].eq(labels_converted[idx])) or not torch.all(inputs[i].eq(inputs_converted[idx])): - # print("Invalid") - # print(labels) - # exit() - # idx += 1 - # labels = labels_converted - # inputs = inputs_converted - - loader_time = time.time() - end_time - global_step += 1 - - # setup lr - if c.lr_decay: - scheduler.step() - optimizer.zero_grad() - - # dispatch data to GPU - if use_cuda: - inputs = inputs.cuda(non_blocking=True) - labels = labels.cuda(non_blocking=True) - - # forward pass model - outputs = model(inputs) - - # loss computation - loss = criterion( - outputs.view(c.num_classes_in_batch, outputs.shape[0] // c.num_classes_in_batch, -1), labels - ) - loss.backward() - grad_norm, _ = check_update(model, c.grad_clip) - optimizer.step() - - step_time = time.time() - start_time - epoch_time += step_time - - # acumulate the total epoch loss - tot_loss += loss.item() - - # Averaged Loader Time - num_loader_workers = c.num_loader_workers if c.num_loader_workers > 0 else 1 - avg_loader_time = ( - 1 / num_loader_workers * loader_time + (num_loader_workers - 1) / num_loader_workers * avg_loader_time - if avg_loader_time != 0 - else loader_time - ) - current_lr = optimizer.param_groups[0]["lr"] - - if global_step % c.steps_plot_stats == 0: - # Plot Training Epoch Stats - train_stats = { - "loss": loss.item(), - "lr": current_lr, - "grad_norm": grad_norm, - "step_time": step_time, - "avg_loader_time": avg_loader_time, - } - dashboard_logger.train_epoch_stats(global_step, train_stats) - figures = { - "UMAP Plot": plot_embeddings(outputs.detach().cpu().numpy(), c.num_classes_in_batch), - } - dashboard_logger.train_figures(global_step, figures) - - if global_step % c.print_step == 0: - print( - " | > Step:{} Loss:{:.5f} GradNorm:{:.5f} " - "StepTime:{:.2f} LoaderTime:{:.2f} AvGLoaderTime:{:.2f} LR:{:.6f}".format( - global_step, loss.item(), grad_norm, step_time, loader_time, avg_loader_time, current_lr - ), - flush=True, - ) - - if global_step % c.save_step == 0: - # save model - save_checkpoint(model, optimizer, criterion, loss.item(), OUT_PATH, global_step, epoch) - - end_time = time.time() - - print("") - print( - ">>> Epoch:{} AvgLoss: {:.5f} GradNorm:{:.5f} " - "EpochTime:{:.2f} AvGLoaderTime:{:.2f} ".format( - epoch, tot_loss / len(data_loader), grad_norm, epoch_time, avg_loader_time - ), - flush=True, - ) - # evaluation - if c.run_eval: - model.eval() - eval_loss = evaluation(model, criterion, eval_data_loader, global_step) - print("\n\n") - print("--> EVAL PERFORMANCE") - print( - " | > Epoch:{} AvgLoss: {:.5f} ".format(epoch, eval_loss), - flush=True, - ) - # save the best checkpoint - best_loss = save_best_model(model, optimizer, criterion, eval_loss, best_loss, OUT_PATH, global_step, epoch) - model.train() - - return best_loss, global_step - - -def 
main(args): # pylint: disable=redefined-outer-name - # pylint: disable=global-variable-undefined - global meta_data_train - global meta_data_eval - global train_classes - - ap = AudioProcessor(**c.audio) - model = setup_encoder_model(c) - - optimizer = get_optimizer(c.optimizer, c.optimizer_params, c.lr, model) - - # pylint: disable=redefined-outer-name - meta_data_train, meta_data_eval = load_tts_samples(c.datasets, eval_split=True) - - train_data_loader, train_classes, map_classid_to_classname = setup_loader(ap, is_val=False, verbose=True) - if c.run_eval: - eval_data_loader, _, _ = setup_loader(ap, is_val=True, verbose=True) - else: - eval_data_loader = None - - num_classes = len(train_classes) - criterion = model.get_criterion(c, num_classes) - - if c.loss == "softmaxproto" and c.model != "speaker_encoder": - c.map_classid_to_classname = map_classid_to_classname - copy_model_files(c, OUT_PATH) - - if args.restore_path: - criterion, args.restore_step = model.load_checkpoint( - c, args.restore_path, eval=False, use_cuda=use_cuda, criterion=criterion - ) - print(" > Model restored from step %d" % args.restore_step, flush=True) - else: - args.restore_step = 0 - - if c.lr_decay: - scheduler = NoamLR(optimizer, warmup_steps=c.warmup_steps, last_epoch=args.restore_step - 1) - else: - scheduler = None - - num_params = count_parameters(model) - print("\n > Model has {} parameters".format(num_params), flush=True) - - if use_cuda: - model = model.cuda() - criterion.cuda() - - global_step = args.restore_step - _, global_step = train(model, optimizer, scheduler, criterion, train_data_loader, eval_data_loader, global_step) - - -if __name__ == "__main__": - args, c, OUT_PATH, AUDIO_PATH, c_logger, dashboard_logger = init_training() - - try: - main(args) - except KeyboardInterrupt: - remove_experiment_folder(OUT_PATH) - try: - sys.exit(0) - except SystemExit: - os._exit(0) # pylint: disable=protected-access - except Exception: # pylint: disable=broad-except - remove_experiment_folder(OUT_PATH) - traceback.print_exc() - sys.exit(1) diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md deleted file mode 100644 index 94508a7f2ecd7d161b16997e415ed4c4935a39f2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# 🐸💬 TTS LJspeech Recipes - -For running the recipes - -1. Download the LJSpeech dataset here either manually from [its official website](https://keithito.com/LJ-Speech-Dataset/) or using ```download_ljspeech.sh```. -2. Go to your desired model folder and run the training. - - Running Python files. (Choose the desired GPU ID for your run and set ```CUDA_VISIBLE_DEVICES```) - ```terminal - CUDA_VISIBLE_DEVICES="0" python train_modelX.py - ``` - - Running bash scripts. - ```terminal - bash run.sh - ``` - -💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best -result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪. 
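
As a quick illustration of the class-grouping trick used in `train_encoder.py`
above, here is a minimal, self-contained sketch with toy sizes (a hedged
editor's sketch, not part of the training script). The perfect sampler emits
utterances interleaved by class, and the `view`/`transpose`/`reshape` chain
regroups them so consecutive entries belong to the same class:

    import torch

    num_utter_per_class, num_classes_in_batch = 2, 3
    labels = torch.tensor([3, 2, 1, 3, 2, 1])  # interleaved by class
    grouped = torch.transpose(
        labels.view(num_utter_per_class, num_classes_in_batch), 0, 1
    ).reshape(labels.shape)
    assert grouped.tolist() == [3, 3, 2, 2, 1, 1]  # utterances grouped per class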
diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py b/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py deleted file mode 100644 index 64906e257bd1f521d8fadb93e877ba83da7764ce..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/Wav2Lip/evaluation/scores_LSE/SyncNetInstance_calc_scores.py +++ /dev/null @@ -1,210 +0,0 @@ -#!/usr/bin/python -#-*- coding: utf-8 -*- -# Video 25 FPS, Audio 16000HZ - -import torch -import numpy -import time, pdb, argparse, subprocess, os, math, glob -import cv2 -import python_speech_features - -from scipy import signal -from scipy.io import wavfile -from SyncNetModel import * -from shutil import rmtree - - -# ==================== Get OFFSET ==================== - -def calc_pdist(feat1, feat2, vshift=10): - - win_size = vshift*2+1 - - feat2p = torch.nn.functional.pad(feat2,(0,0,vshift,vshift)) - - dists = [] - - for i in range(0,len(feat1)): - - dists.append(torch.nn.functional.pairwise_distance(feat1[[i],:].repeat(win_size, 1), feat2p[i:i+win_size,:])) - - return dists - -# ==================== MAIN DEF ==================== - -class SyncNetInstance(torch.nn.Module): - - def __init__(self, dropout = 0, num_layers_in_fc_layers = 1024): - super(SyncNetInstance, self).__init__(); - - self.__S__ = S(num_layers_in_fc_layers = num_layers_in_fc_layers).cuda(); - - def evaluate(self, opt, videofile): - - self.__S__.eval(); - - # ========== ========== - # Convert files - # ========== ========== - - if os.path.exists(os.path.join(opt.tmp_dir,opt.reference)): - rmtree(os.path.join(opt.tmp_dir,opt.reference)) - - os.makedirs(os.path.join(opt.tmp_dir,opt.reference)) - - command = ("ffmpeg -loglevel error -y -i %s -threads 1 -f image2 %s" % (videofile,os.path.join(opt.tmp_dir,opt.reference,'%06d.jpg'))) - output = subprocess.call(command, shell=True, stdout=None) - - command = ("ffmpeg -loglevel error -y -i %s -async 1 -ac 1 -vn -acodec pcm_s16le -ar 16000 %s" % (videofile,os.path.join(opt.tmp_dir,opt.reference,'audio.wav'))) - output = subprocess.call(command, shell=True, stdout=None) - - # ========== ========== - # Load video - # ========== ========== - - images = [] - - flist = glob.glob(os.path.join(opt.tmp_dir,opt.reference,'*.jpg')) - flist.sort() - - for fname in flist: - img_input = cv2.imread(fname) - img_input = cv2.resize(img_input, (224,224)) #HARD CODED, CHANGE BEFORE RELEASE - images.append(img_input) - - im = numpy.stack(images,axis=3) - im = numpy.expand_dims(im,axis=0) - im = numpy.transpose(im,(0,3,4,1,2)) - - imtv = torch.autograd.Variable(torch.from_numpy(im.astype(float)).float()) - - # ========== ========== - # Load audio - # ========== ========== - - sample_rate, audio = wavfile.read(os.path.join(opt.tmp_dir,opt.reference,'audio.wav')) - mfcc = zip(*python_speech_features.mfcc(audio,sample_rate)) - mfcc = numpy.stack([numpy.array(i) for i in mfcc]) - - cc = numpy.expand_dims(numpy.expand_dims(mfcc,axis=0),axis=0) - cct = torch.autograd.Variable(torch.from_numpy(cc.astype(float)).float()) - - # ========== ========== - # Check audio and video input length - # ========== ========== - - #if (float(len(audio))/16000) != (float(len(images))/25) : - # print("WARNING: Audio (%.4fs) and video (%.4fs) lengths are different."%(float(len(audio))/16000,float(len(images))/25)) - - min_length = min(len(images),math.floor(len(audio)/640)) - - # ========== ========== - # Generate video and audio feats - # ========== ========== - - 
lastframe = min_length-5 - im_feat = [] - cc_feat = [] - - tS = time.time() - for i in range(0,lastframe,opt.batch_size): - - im_batch = [ imtv[:,:,vframe:vframe+5,:,:] for vframe in range(i,min(lastframe,i+opt.batch_size)) ] - im_in = torch.cat(im_batch,0) - im_out = self.__S__.forward_lip(im_in.cuda()); - im_feat.append(im_out.data.cpu()) - - cc_batch = [ cct[:,:,:,vframe*4:vframe*4+20] for vframe in range(i,min(lastframe,i+opt.batch_size)) ] - cc_in = torch.cat(cc_batch,0) - cc_out = self.__S__.forward_aud(cc_in.cuda()) - cc_feat.append(cc_out.data.cpu()) - - im_feat = torch.cat(im_feat,0) - cc_feat = torch.cat(cc_feat,0) - - # ========== ========== - # Compute offset - # ========== ========== - - #print('Compute time %.3f sec.' % (time.time()-tS)) - - dists = calc_pdist(im_feat,cc_feat,vshift=opt.vshift) - mdist = torch.mean(torch.stack(dists,1),1) - - minval, minidx = torch.min(mdist,0) - - offset = opt.vshift-minidx - conf = torch.median(mdist) - minval - - fdist = numpy.stack([dist[minidx].numpy() for dist in dists]) - # fdist = numpy.pad(fdist, (3,3), 'constant', constant_values=15) - fconf = torch.median(mdist).numpy() - fdist - fconfm = signal.medfilt(fconf,kernel_size=9) - - numpy.set_printoptions(formatter={'float': '{: 0.3f}'.format}) - #print('Framewise conf: ') - #print(fconfm) - #print('AV offset: \t%d \nMin dist: \t%.3f\nConfidence: \t%.3f' % (offset,minval,conf)) - - dists_npy = numpy.array([ dist.numpy() for dist in dists ]) - return offset.numpy(), conf.numpy(), minval.numpy() - - def extract_feature(self, opt, videofile): - - self.__S__.eval(); - - # ========== ========== - # Load video - # ========== ========== - cap = cv2.VideoCapture(videofile) - - frame_num = 1; - images = [] - while frame_num: - frame_num += 1 - ret, image = cap.read() - if ret == 0: - break - - images.append(image) - - im = numpy.stack(images,axis=3) - im = numpy.expand_dims(im,axis=0) - im = numpy.transpose(im,(0,3,4,1,2)) - - imtv = torch.autograd.Variable(torch.from_numpy(im.astype(float)).float()) - - # ========== ========== - # Generate video feats - # ========== ========== - - lastframe = len(images)-4 - im_feat = [] - - tS = time.time() - for i in range(0,lastframe,opt.batch_size): - - im_batch = [ imtv[:,:,vframe:vframe+5,:,:] for vframe in range(i,min(lastframe,i+opt.batch_size)) ] - im_in = torch.cat(im_batch,0) - im_out = self.__S__.forward_lipfeat(im_in.cuda()); - im_feat.append(im_out.data.cpu()) - - im_feat = torch.cat(im_feat,0) - - # ========== ========== - # Compute offset - # ========== ========== - - print('Compute time %.3f sec.' 
% (time.time()-tS)) - - return im_feat - - - def loadParameters(self, path): - loaded_state = torch.load(path, map_location=lambda storage, loc: storage); - - self_state = self.__S__.state_dict(); - - for name, param in loaded_state.items(): - - self_state[name].copy_(param); diff --git a/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py b/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py deleted file mode 100644 index 31f1d6b4851362ae3af405b09309ec38ac884c36..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/whisper/tests/conftest.py +++ /dev/null @@ -1,14 +0,0 @@ -import random as rand - -import numpy -import pytest - - -def pytest_configure(config): - config.addinivalue_line("markers", "requires_cuda") - - -@pytest.fixture -def random(): - rand.seed(42) - numpy.random.seed(42) diff --git a/spaces/arxify/RVC-beta-v2-0618/i18n.py b/spaces/arxify/RVC-beta-v2-0618/i18n.py deleted file mode 100644 index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = locale.getdefaultlocale()[ - 0 - ] # getlocale can't identify the system's language ((None, None)) - if not os.path.exists(f"./i18n/{language}.json"): - language = "en_US" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - print("Use Language:", self.language) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py deleted file mode 100644 index 820b57dd71f1666fbaa82589cd92b26ccd8c42d6..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/RIPEMD160.py +++ /dev/null @@ -1,169 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - -_raw_ripemd160_lib = load_pycryptodome_raw_lib( - "Crypto.Hash._RIPEMD160", - """ - int ripemd160_init(void **shaState); - int ripemd160_destroy(void *shaState); - int ripemd160_update(void *hs, - const uint8_t *buf, - size_t len); - int ripemd160_digest(const void *shaState, - uint8_t digest[20]); - int ripemd160_copy(const void *src, void *dst); - """) - - -class RIPEMD160Hash(object): - """A RIPEMD-160 hash object. - Do not instantiate directly. - Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar block_size: the size in bytes of the internal message block, - input to the compression function - :vartype block_size: integer - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 20 - # The internal block size of the hash algorithm in bytes. - block_size = 64 - # ASN.1 Object ID - oid = "1.3.36.3.2.1" - - def __init__(self, data=None): - state = VoidPointer() - result = _raw_ripemd160_lib.ripemd160_init(state.address_of()) - if result: - raise ValueError("Error %d while instantiating RIPEMD160" - % result) - self._state = SmartPointer(state.get(), - _raw_ripemd160_lib.ripemd160_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - result = _raw_ripemd160_lib.ripemd160_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while instantiating ripemd160" - % result) - - def digest(self): - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - bfr = create_string_buffer(self.digest_size) - result = _raw_ripemd160_lib.ripemd160_digest(self._state.get(), - bfr) - if result: - raise ValueError("Error %d while instantiating ripemd160" - % result) - - return get_raw_buffer(bfr) - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. 
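-
-        For example (a minimal sketch)::
-
-            h1 = RIPEMD160.new(b"common prefix")
-            h2 = h1.copy()             # same internal state as h1
-            h1.update(b" one")
-            h2.update(b" two")
-            # h1 and h2 now digest "common prefix one" and
-            # "common prefix two" without re-hashing the shared prefix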
- - :return: A hash object of the same type - """ - - clone = RIPEMD160Hash() - result = _raw_ripemd160_lib.ripemd160_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying ripemd160" % result) - return clone - - def new(self, data=None): - """Create a fresh RIPEMD-160 hash object.""" - - return RIPEMD160Hash(data) - - -def new(data=None): - """Create a new hash object. - - :parameter data: - Optional. The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`RIPEMD160Hash.update`. - :type data: byte string/byte array/memoryview - - :Return: A :class:`RIPEMD160Hash` hash object - """ - - return RIPEMD160Hash().new(data) - -# The size of the resulting hash in bytes. -digest_size = RIPEMD160Hash.digest_size - -# The internal block size of the hash algorithm in bytes. -block_size = RIPEMD160Hash.block_size diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py deleted file mode 100644 index 153c5700f1a20666e77f1011aa1a8bbec537611c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_RIPEMD160.py +++ /dev/null @@ -1,71 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/test_RIPEMD160.py: Self-test for the RIPEMD-160 hash function -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -#"""Self-test suite for Crypto.Hash.RIPEMD160""" - -from Crypto.Util.py3compat import * - -# This is a list of (expected_result, input[, description]) tuples. 
-test_data = [ - # Test vectors downloaded 2008-09-12 from - # http://homes.esat.kuleuven.be/~bosselae/ripemd160.html - ('9c1185a5c5e9fc54612808977ee8f548b2258d31', '', "'' (empty string)"), - ('0bdc9d2d256b3ee9daae347be6f4dc835a467ffe', 'a'), - ('8eb208f7e05d987a9b044a8e98c6b087f15a0bfc', 'abc'), - ('5d0689ef49d2fae572b881b123a85ffa21595f36', 'message digest'), - - ('f71c27109c692c1b56bbdceb5b9d2865b3708dbc', - 'abcdefghijklmnopqrstuvwxyz', - 'a-z'), - - ('12a053384a9c0c88e405a06c27dcf49ada62eb2b', - 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq', - 'abcdbcd...pnopq'), - - ('b0e20b6e3116640286ed3a87a5713079b21f5189', - 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', - 'A-Z, a-z, 0-9'), - - ('9b752e45573d4b39f4dbd3323cab82bf63326bfb', - '1234567890' * 8, - "'1234567890' * 8"), - - ('52783243c1697bdbe16d37f97f68f08325dc1528', - 'a' * 10**6, - '"a" * 10**6'), -] - -def get_tests(config={}): - from Crypto.Hash import RIPEMD160 - from .common import make_hash_tests - return make_hash_tests(RIPEMD160, "RIPEMD160", test_data, - digest_size=20, - oid="1.3.36.3.2.1") - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py deleted file mode 100644 index 442ac70c49dbf3e0d3da0d321ce41f70ef2546f8..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcfFontFile.py +++ /dev/null @@ -1,246 +0,0 @@ -# -# THIS IS WORK IN PROGRESS -# -# The Python Imaging Library -# $Id$ -# -# portable compiled font file parser -# -# history: -# 1997-08-19 fl created -# 2003-09-13 fl fixed loading of unicode fonts -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import io - -from . 
import FontFile, Image -from ._binary import i8 -from ._binary import i16be as b16 -from ._binary import i16le as l16 -from ._binary import i32be as b32 -from ._binary import i32le as l32 - -# -------------------------------------------------------------------- -# declarations - -PCF_MAGIC = 0x70636601 # "\x01fcp" - -PCF_PROPERTIES = 1 << 0 -PCF_ACCELERATORS = 1 << 1 -PCF_METRICS = 1 << 2 -PCF_BITMAPS = 1 << 3 -PCF_INK_METRICS = 1 << 4 -PCF_BDF_ENCODINGS = 1 << 5 -PCF_SWIDTHS = 1 << 6 -PCF_GLYPH_NAMES = 1 << 7 -PCF_BDF_ACCELERATORS = 1 << 8 - -BYTES_PER_ROW = [ - lambda bits: ((bits + 7) >> 3), - lambda bits: ((bits + 15) >> 3) & ~1, - lambda bits: ((bits + 31) >> 3) & ~3, - lambda bits: ((bits + 63) >> 3) & ~7, -] - - -def sz(s, o): - return s[o : s.index(b"\0", o)] - - -class PcfFontFile(FontFile.FontFile): - """Font file plugin for the X11 PCF format.""" - - name = "name" - - def __init__(self, fp, charset_encoding="iso8859-1"): - - self.charset_encoding = charset_encoding - - magic = l32(fp.read(4)) - if magic != PCF_MAGIC: - raise SyntaxError("not a PCF file") - - super().__init__() - - count = l32(fp.read(4)) - self.toc = {} - for i in range(count): - type = l32(fp.read(4)) - self.toc[type] = l32(fp.read(4)), l32(fp.read(4)), l32(fp.read(4)) - - self.fp = fp - - self.info = self._load_properties() - - metrics = self._load_metrics() - bitmaps = self._load_bitmaps(metrics) - encoding = self._load_encoding() - - # - # create glyph structure - - for ch, ix in enumerate(encoding): - if ix is not None: - x, y, l, r, w, a, d, f = metrics[ix] - glyph = (w, 0), (l, d - y, x + l, d), (0, 0, x, y), bitmaps[ix] - self.glyph[ch] = glyph - - def _getformat(self, tag): - - format, size, offset = self.toc[tag] - - fp = self.fp - fp.seek(offset) - - format = l32(fp.read(4)) - - if format & 4: - i16, i32 = b16, b32 - else: - i16, i32 = l16, l32 - - return fp, format, i16, i32 - - def _load_properties(self): - - # - # font properties - - properties = {} - - fp, format, i16, i32 = self._getformat(PCF_PROPERTIES) - - nprops = i32(fp.read(4)) - - # read property description - p = [] - for i in range(nprops): - p.append((i32(fp.read(4)), i8(fp.read(1)), i32(fp.read(4)))) - if nprops & 3: - fp.seek(4 - (nprops & 3), io.SEEK_CUR) # pad - - data = fp.read(i32(fp.read(4))) - - for k, s, v in p: - k = sz(data, k) - if s: - v = sz(data, v) - properties[k] = v - - return properties - - def _load_metrics(self): - - # - # font metrics - - metrics = [] - - fp, format, i16, i32 = self._getformat(PCF_METRICS) - - append = metrics.append - - if (format & 0xFF00) == 0x100: - - # "compressed" metrics - for i in range(i16(fp.read(2))): - left = i8(fp.read(1)) - 128 - right = i8(fp.read(1)) - 128 - width = i8(fp.read(1)) - 128 - ascent = i8(fp.read(1)) - 128 - descent = i8(fp.read(1)) - 128 - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, 0)) - - else: - - # "jumbo" metrics - for i in range(i32(fp.read(4))): - left = i16(fp.read(2)) - right = i16(fp.read(2)) - width = i16(fp.read(2)) - ascent = i16(fp.read(2)) - descent = i16(fp.read(2)) - attributes = i16(fp.read(2)) - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, attributes)) - - return metrics - - def _load_bitmaps(self, metrics): - - # - # bitmap data - - bitmaps = [] - - fp, format, i16, i32 = self._getformat(PCF_BITMAPS) - - nbitmaps = i32(fp.read(4)) - - if nbitmaps != len(metrics): - raise OSError("Wrong number of bitmaps") - - offsets = [] - 
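        # The PCF bitmap table lays out one 32-bit offset per glyph into a
        # single shared data block, followed by four candidate total sizes
        # (one per row-padding option in BYTES_PER_ROW); the format word
        # parsed below picks which padding, and hence which size, applies.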
for i in range(nbitmaps): - offsets.append(i32(fp.read(4))) - - bitmap_sizes = [] - for i in range(4): - bitmap_sizes.append(i32(fp.read(4))) - - # byteorder = format & 4 # non-zero => MSB - bitorder = format & 8 # non-zero => MSB - padindex = format & 3 - - bitmapsize = bitmap_sizes[padindex] - offsets.append(bitmapsize) - - data = fp.read(bitmapsize) - - pad = BYTES_PER_ROW[padindex] - mode = "1;R" - if bitorder: - mode = "1" - - for i in range(nbitmaps): - x, y, l, r, w, a, d, f = metrics[i] - b, e = offsets[i], offsets[i + 1] - bitmaps.append(Image.frombytes("1", (x, y), data[b:e], "raw", mode, pad(x))) - - return bitmaps - - def _load_encoding(self): - fp, format, i16, i32 = self._getformat(PCF_BDF_ENCODINGS) - - first_col, last_col = i16(fp.read(2)), i16(fp.read(2)) - first_row, last_row = i16(fp.read(2)), i16(fp.read(2)) - - i16(fp.read(2)) # default - - nencoding = (last_col - first_col + 1) * (last_row - first_row + 1) - - # map character code to bitmap index - encoding = [None] * min(256, nencoding) - - encoding_offsets = [i16(fp.read(2)) for _ in range(nencoding)] - - for i in range(first_col, len(encoding)): - try: - encoding_offset = encoding_offsets[ - ord(bytearray([i]).decode(self.charset_encoding)) - ] - if encoding_offset != 0xFFFF: - encoding[i] = encoding_offset - except UnicodeDecodeError: - # character is not supported in selected encoding - pass - - return encoding diff --git a/spaces/asalhi85/ArabiToolsDialecRecognition/app.py b/spaces/asalhi85/ArabiToolsDialecRecognition/app.py deleted file mode 100644 index 18ed16681eaf985664c52e744c88748c64c4c3ae..0000000000000000000000000000000000000000 --- a/spaces/asalhi85/ArabiToolsDialecRecognition/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr -# Creating a gradio app using the inferene API -App = gr.Interface.load("models/asalhi85/Dialect_Recognitionv2", - title="
          مثال لتحليل اللهجات - أدوات عربي", description ="
          نموذج لغوي عميق خاص في كشف اللهجات وحاليا يدعم اللهجات التالية : اللغة العربية الفصحى الحديثة ، اللهجة النجدية، اللهجة الحجازية، اللهجة الخليجية
          ", - allow_flagging=False, examples=[["اتفضل يا خوي تفضل روح ارتاح بغرفتك انا ماني غريب"], ["استاذ عبود هل تعتقد معي ان عدم مجيء هذا النجم الجماهيري الكبير الى هذا المهرجان سيقلل من نجاح هذا الحفل"], ["طب انت مستعدة قولي ايش الحل الاول وانا اروح له وعد شرف اننا اسعى لك واحاول انفذ طلبك بقد ما اقدر"]] -) - -App.launch() diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py deleted file mode 100644 index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/inference_main.py +++ /dev/null @@ -1,100 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False, - help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='聚类方案占比,范围0-1,若没有训练聚类模型则填0即可') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式') - - args = parser.parse_args() - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path) - infer_tool.mkdir(["raw", "results"]) - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - -if __name__ == '__main__': - main() diff --git a/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md b/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md deleted file mode 100644 index d7d2fcf9f3c7db1ac8e615510fd4916de548b400..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📚CSV Dataset Analyzer Streamlit -emoji: 📚 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Emoji-Short-Codes/README.md b/spaces/awacke1/Emoji-Short-Codes/README.md deleted file mode 100644 index 28aea7ab434423b31c61676ce7a48375affbff63..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Emoji-Short-Codes/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🇦🇼 Aaron Wacker 💝 Emojis -emoji: 🇦🇼💝 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/MusicGenStreamFacebook/app.py b/spaces/awacke1/MusicGenStreamFacebook/app.py deleted file mode 100644 index 58ef6ed236e7ce584c6a3db46419435266f67473..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MusicGenStreamFacebook/app.py +++ /dev/null @@ -1,214 +0,0 @@ -import numpy as np -import torch -import gradio as gr -import spaces -from queue import Queue -from threading import Thread -from typing import Optional -from transformers import MusicgenForConditionalGeneration, MusicgenProcessor, set_seed -from transformers.generation.streamers import BaseStreamer - -model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") -processor = MusicgenProcessor.from_pretrained("facebook/musicgen-small") - -title = "MusicGenStream with Facebook MusicGen-Small Model" -description = """ -Generate and stream music using https://huggingface.co/facebook/musicgen-small -""" - -article = 
""" -## How Does It Work? -MusicGen is an auto-regressive transformer-based model, meaning generates audio codes (tokens) in a causal fashion. -At each decoding step, the model generates a new set of audio codes, conditional on the text input and all previous audio codes. From the -frame rate of the [EnCodec model](https://huggingface.co/facebook/encodec_32khz) used to decode the generated codes to audio waveform. -""" - - -class MusicgenStreamer(BaseStreamer): - def __init__( - self, - model: MusicgenForConditionalGeneration, - device: Optional[str] = None, - play_steps: Optional[int] = 10, - stride: Optional[int] = None, - timeout: Optional[float] = None, - ): - """ - Streamer that stores playback-ready audio in a queue, to be used by a downstream application as an iterator. This is - useful for applications that benefit from acessing the generated audio in a non-blocking way (e.g. in an interactive - Gradio demo). - Parameters: - model (`MusicgenForConditionalGeneration`): - The MusicGen model used to generate the audio waveform. - device (`str`, *optional*): - The torch device on which to run the computation. If `None`, will default to the device of the model. - play_steps (`int`, *optional*, defaults to 10): - The number of generation steps with which to return the generated audio array. Using fewer steps will - mean the first chunk is ready faster, but will require more codec decoding steps overall. This value - should be tuned to your device and latency requirements. - stride (`int`, *optional*): - The window (stride) between adjacent audio samples. Using a stride between adjacent audio samples reduces - the hard boundary between them, giving smoother playback. If `None`, will default to a value equivalent to - play_steps // 6 in the audio space. - timeout (`int`, *optional*): - The timeout for the audio queue. If `None`, the queue will block indefinitely. Useful to handle exceptions - in `.generate()`, when it is called in a separate thread. - """ - self.decoder = model.decoder - self.audio_encoder = model.audio_encoder - self.generation_config = model.generation_config - self.device = device if device is not None else model.device - - # variables used in the streaming process - self.play_steps = play_steps - if stride is not None: - self.stride = stride - else: - hop_length = np.prod(self.audio_encoder.config.upsampling_ratios) - self.stride = hop_length * (play_steps - self.decoder.num_codebooks) // 6 - self.token_cache = None - self.to_yield = 0 - - # varibles used in the thread process - self.audio_queue = Queue() - self.stop_signal = None - self.timeout = timeout - - def apply_delay_pattern_mask(self, input_ids): - # build the delay pattern mask for offsetting each codebook prediction by 1 (this behaviour is specific to MusicGen) - _, decoder_delay_pattern_mask = self.decoder.build_delay_pattern_mask( - input_ids[:, :1], - pad_token_id=self.generation_config.decoder_start_token_id, - max_length=input_ids.shape[-1], - ) - # apply the pattern mask to the input ids - input_ids = self.decoder.apply_delay_pattern_mask(input_ids, decoder_delay_pattern_mask) - - # revert the pattern delay mask by filtering the pad token id - input_ids = input_ids[input_ids != self.generation_config.pad_token_id].reshape( - 1, self.decoder.num_codebooks, -1 - ) - - # append the frame dimension back to the audio codes - input_ids = input_ids[None, ...] 
- - # send the input_ids to the correct device - input_ids = input_ids.to(self.audio_encoder.device) - - output_values = self.audio_encoder.decode( - input_ids, - audio_scales=[None], - ) - audio_values = output_values.audio_values[0, 0] - return audio_values.cpu().float().numpy() - - def put(self, value): - batch_size = value.shape[0] // self.decoder.num_codebooks - if batch_size > 1: - raise ValueError("MusicgenStreamer only supports batch size 1") - - if self.token_cache is None: - self.token_cache = value - else: - self.token_cache = torch.concatenate([self.token_cache, value[:, None]], dim=-1) - - if self.token_cache.shape[-1] % self.play_steps == 0: - audio_values = self.apply_delay_pattern_mask(self.token_cache) - self.on_finalized_audio(audio_values[self.to_yield : -self.stride]) - self.to_yield += len(audio_values) - self.to_yield - self.stride - - def end(self): - """Flushes any remaining cache and appends the stop symbol.""" - if self.token_cache is not None: - audio_values = self.apply_delay_pattern_mask(self.token_cache) - else: - audio_values = np.zeros(self.to_yield) - - self.on_finalized_audio(audio_values[self.to_yield :], stream_end=True) - - def on_finalized_audio(self, audio: np.ndarray, stream_end: bool = False): - """Put the new audio in the queue. If the stream is ending, also put a stop signal in the queue.""" - self.audio_queue.put(audio, timeout=self.timeout) - if stream_end: - self.audio_queue.put(self.stop_signal, timeout=self.timeout) - - def __iter__(self): - return self - - def __next__(self): - value = self.audio_queue.get(timeout=self.timeout) - if not isinstance(value, np.ndarray) and value == self.stop_signal: - raise StopIteration() - else: - return value - - -sampling_rate = model.audio_encoder.config.sampling_rate -frame_rate = model.audio_encoder.config.frame_rate - -target_dtype = np.int16 -max_range = np.iinfo(target_dtype).max - - -@spaces.GPU -def generate_audio(text_prompt, audio_length_in_s=10.0, play_steps_in_s=2.0, seed=0): - max_new_tokens = int(frame_rate * audio_length_in_s) - play_steps = int(frame_rate * play_steps_in_s) - - device = "cuda:0" if torch.cuda.is_available() else "cpu" - if device != model.device: - model.to(device) - if device == "cuda:0": - model.half() - - inputs = processor( - text=text_prompt, - padding=True, - return_tensors="pt", - ) - - streamer = MusicgenStreamer(model, device=device, play_steps=play_steps) - - generation_kwargs = dict( - **inputs.to(device), - streamer=streamer, - max_new_tokens=max_new_tokens, - ) - thread = Thread(target=model.generate, kwargs=generation_kwargs) - thread.start() - - set_seed(seed) - for new_audio in streamer: - print(f"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds") - new_audio = (new_audio * max_range).astype(np.int16) - yield (sampling_rate, new_audio) - - -demo = gr.Interface( - fn=generate_audio, - inputs=[ - gr.Text(label="Prompt", value="80s pop track with synth and instrumentals"), - gr.Slider(10, 30, value=15, step=5, label="Audio length in seconds"), - gr.Slider(0.5, 2.5, value=0.5, step=0.5, label="Streaming interval in seconds", info="Lower = shorter chunks, lower latency, more codec steps"), - gr.Slider(0, 10, value=5, step=1, label="Seed for random generations"), - ], - outputs=[ - gr.Audio(label="Generated Music", streaming=True, autoplay=True) - ], - examples = [ - ["Country acoustic guitar fast line dance singer like Kenny Chesney and Garth brooks and Luke Combs and Chris Stapleton. 
bpm: 100", 30, 0.5, 5], - ["Electronic Dance track with pulsating bass and high energy synths. bpm: 126", 30, 0.5, 5], - ["Rap Beats with deep bass and snappy snares. bpm: 80", 30, 0.5, 5], - ["Lo-Fi track with smooth beats and chill vibes. bpm: 100", 30, 0.5, 5], - ["Global Groove track with international instruments and dance rhythms. bpm: 128", 30, 0.5, 5], - ["Relaxing Meditation music with ambient pads and soothing melodies. bpm: 80", 30, 0.5, 5], - ["Rave Dance track with hard-hitting beats and euphoric synths. bpm: 128", 30, 0.5, 5] - ], - - title=title, - description=description, - article=article, - cache_examples=False, -) - -demo.queue().launch() \ No newline at end of file diff --git a/spaces/awacke1/SpeechStoryReadAloud/README.md b/spaces/awacke1/SpeechStoryReadAloud/README.md deleted file mode 100644 index da1543f87bfdced1b221f99e502dae05c23157bc..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SpeechStoryReadAloud/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🗣️NLP Speech Story Read Aloud📚 -emoji: 🗣️📚💕 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.0.11 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/VizLib-Mahotas/app.py b/spaces/awacke1/VizLib-Mahotas/app.py deleted file mode 100644 index 8e9303e7a7c92b11ba8095b07f308be43dba003b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VizLib-Mahotas/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import streamlit as st -import mahotas as mh -import pandas as pd -import plotly.express as px -import urllib.request -from skimage import io - -# Define a list of medical conditions -conditions = [ - {"name": "Depression", "test_for": "Patient Health Questionnaire-9 (PHQ-9)"}, - {"name": "Anxiety", "test_for": "Generalized Anxiety Disorder-7 (GAD-7)"}, - {"name": "Diabetes", "test_for": "Hemoglobin A1C test"}, - {"name": "Hypertension", "test_for": "Blood pressure measurement"}, - {"name": "Asthma", "test_for": "Pulmonary function test"}, - {"name": "Cancer", "test_for": "Biopsy or imaging tests (e.g., CT scan, MRI)"}, - {"name": "Arthritis", "test_for": "X-ray, MRI, or ultrasound"}, - {"name": "Heart disease", "test_for": "Electrocardiogram (ECG)"}, - {"name": "Obesity", "test_for": "Body mass index (BMI)"}, - {"name": "Substance use disorder", "test_for": "Substance Abuse Subtle Screening Inventory (SASSI)"} -] - -# Define a function to process images using Mahotas -def process_image(image): - # Convert the image to grayscale - grayscale_image = mh.colors.rgb2gray(image) - # Apply a Gaussian filter to the image to reduce noise - filtered_image = mh.gaussian_filter(grayscale_image, 4) - # Threshold the image to create a binary image - binary_image = filtered_image > mh.otsu(filtered_image) - # Compute the connected components in the binary image - labels, num_labels = mh.label(binary_image) - # Compute the size of each connected component - sizes = mh.labeled.labeled_size(labels) - # Sort the sizes in descending order - sorted_sizes = sorted(sizes, reverse=True) - # Return the top 10 sizes - return sorted_sizes[:10] - -# Define the Streamlit app -def app(): - # Add a title to the app - st.title("Mahotas Demo") - - # Add a sidebar to the app - st.sidebar.title("Medical Conditions") - selected_condition = st.sidebar.selectbox("Select a condition", [c["name"] for c in conditions]) - - # Get the selected condition - condition = next(c for c in conditions if c["name"] == 
selected_condition) - - # Display the selected condition - st.header(condition["name"]) - st.write("Test for:", condition["test_for"]) - - # Load an example medical image - if selected_condition == "Depression": - image_url = "https://i.imgur.com/kPQoD8C.jpg" - elif selected_condition == "Anxiety": - image_url = "https://i.imgur.com/ZWyKjJN.jpg" - elif selected_condition == "Diabetes": - image_url = "https://i.imgur.com/1gOEMO5.jpg" - elif selected_condition == "Hypertension": - image_url = "https://i.imgur.com/BoSUwio.jpg" - elif selected_condition == "Asthma": - image_url = "https://i.imgur.com/BLKjzJF.jpg" - elif selected_condition == "Cancer": - image_url = "https://i.imgur.com/nq3vV8.jpg" - elif selected_condition == "Arthritis": - image_url = "https://i.imgur.com/ffzd6Fo.jpg" - elif selected_condition == "Heart disease": - image_url = "https://i.imgur.com/1I7axhd.jpg" - elif selected_condition == "Obesity": - image_url = "https://i.imgur.com/nZ1EjJr.jpg" - else: - image_url = "https://i.imgur.com/RUBZOWF.jpg" - - image = io.imread(image_url) - - # Process the image using Mahotas - sizes = process_image(image) - - # Display the top 10 connected component sizes - df = pd.DataFrame({"Size": sizes}) - st.write(df) - - # Create a sunburst chart using Plotly - fig = px.sunburst( - df, - path=["Size"], - values="Size", - color="Size", - color_continuous_scale="blues" - ) - st.plotly_chart(fig) - -st.markdown(""" -# Alternate Image Links Per Condition: -Depression: https://www.pexels.com/photo/woman-sitting-on-grass-field-while-holding-her-head-7127866/ -Anxiety: https://www.pexels.com/photo/woman-sitting-on-rock-and-looking-at-the-ocean-7119798/ -Diabetes: https://www.pexels.com/photo/man-taking-blood-sugar-test-4050305/ -Hypertension: https://www.pexels.com/photo/woman-measuring-blood-pressure-with-sphygmomanometer-5691686/ -Asthma: https://www.pexels.com/photo/woman-having-asthma-attack-in-park-7127511/ -Cancer: https://www.pexels.com/photo/close-up-of-pink-ribbon-on-cancer-awareness-banner-4219366/ -Arthritis: https://www.pexels.com/photo/man-with-back-pain-lying-on-bed-4050323/ -Heart disease: https://www.pexels.com/photo/woman-touching-chest-during-chest-pain-7127487/ -Obesity: https://www.pexels.com/photo/woman-in-black-pants-lying-on-bed-7127516/ - """) - -app() \ No newline at end of file diff --git a/spaces/azamat/twitter_geocoder/README.md b/spaces/azamat/twitter_geocoder/README.md deleted file mode 100644 index 473226b08bee9cd89efc25f9bd6b352f12848022..0000000000000000000000000000000000000000 --- a/spaces/azamat/twitter_geocoder/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Twitter Geocoder -emoji: 💩 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py b/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py deleted file mode 100644 index e9eb6fc59b50654ddbe19ed56ad8c0abd1b8efef..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/configuration_mpt.py +++ /dev/null @@ -1,118 +0,0 @@ -"""A HuggingFace-style model configuration.""" -from typing import Dict, Optional, Union -from transformers import PretrainedConfig -attn_config_defaults: Dict = {'attn_type': 'multihead_attention', 'attn_pdrop': 0.0, 'attn_impl': 'triton', 'qk_ln': False, 'clip_qkv': 
None, 'softmax_scale': None, 'prefix_lm': False, 'attn_uses_sequence_id': False, 'alibi': False, 'alibi_bias_max': 8} -init_config_defaults: Dict = {'name': 'kaiming_normal_', 'fan_mode': 'fan_in', 'init_nonlinearity': 'relu', 'init_div_is_residual': True, 'emb_init_std': None, 'emb_init_uniform_lim': None, 'init_std': None, 'init_gain': 0.0} - -class MPTConfig(PretrainedConfig): - model_type = 'mpt' - - def __init__(self, d_model: int=2048, n_heads: int=16, n_layers: int=24, expansion_ratio: int=4, max_seq_len: int=2048, vocab_size: int=50368, resid_pdrop: float=0.0, emb_pdrop: float=0.0, learned_pos_emb: bool=True, attn_config: Dict=attn_config_defaults, init_device: str='cpu', logit_scale: Optional[Union[float, str]]=None, no_bias: bool=False, verbose: int=0, embedding_fraction: float=1.0, norm_type: str='low_precision_layernorm', use_cache: bool=False, init_config: Dict=init_config_defaults, **kwargs): - """The MPT configuration class. - - Args: - d_model (int): The size of the embedding dimension of the model. - n_heads (int): The number of attention heads. - n_layers (int): The number of layers in the model. - expansion_ratio (int): The ratio of the up/down scale in the MLP. - max_seq_len (int): The maximum sequence length of the model. - vocab_size (int): The size of the vocabulary. - resid_pdrop (float): The dropout probability applied to the attention output before combining with residual. - emb_pdrop (float): The dropout probability for the embedding layer. - learned_pos_emb (bool): Whether to use learned positional embeddings - attn_config (Dict): A dictionary used to configure the model's attention module: - attn_type (str): type of attention to use. Options: multihead_attention, multiquery_attention - attn_pdrop (float): The dropout probability for the attention layers. - attn_impl (str): The attention implementation to use. One of 'torch', 'flash', or 'triton'. - qk_ln (bool): Whether to apply layer normalization to the queries and keys in the attention layer. - clip_qkv (Optional[float]): If not None, clip the queries, keys, and values in the attention layer to - this value. - softmax_scale (Optional[float]): If not None, scale the softmax in the attention layer by this value. If None, - use the default scale of ``1/sqrt(d_keys)``. - prefix_lm (Optional[bool]): Whether the model should operate as a Prefix LM. This requires passing an - extra `prefix_mask` argument which indicates which tokens belong to the prefix. Tokens in the prefix - can attend to one another bi-directionally. Tokens outside the prefix use causal attention. - attn_uses_sequence_id (Optional[bool]): Whether to restrict attention to tokens that have the same sequence_id. - When the model is in `train` mode, this requires passing an extra `sequence_id` argument which indicates - which sub-sequence each token belongs to. - Defaults to ``False`` meaning any provided `sequence_id` will be ignored. - alibi (bool): Whether to use the alibi bias instead of position embeddings. - alibi_bias_max (int): The maximum value of the alibi bias. - init_device (str): The device to use for parameter initialization. - logit_scale (Optional[Union[float, str]]): If not None, scale the logits by this value. - no_bias (bool): Whether to use bias in all layers. - verbose (int): The verbosity level. 0 is silent. - embedding_fraction (float): The fraction to scale the gradients of the embedding layer by. - norm_type (str): choose type of norm to use - multiquery_attention (bool): Whether to use multiquery attention implementation. 
- use_cache (bool): Whether or not the model should return the last key/values attentions - init_config (Dict): A dictionary used to configure the model initialization: - init_config.name: The parameter initialization scheme to use. Options: 'default_', 'baseline_', - 'kaiming_uniform_', 'kaiming_normal_', 'neox_init_', 'small_init_', 'xavier_uniform_', or - 'xavier_normal_'. These mimic the parameter initialization methods in PyTorch. - init_div_is_residual (Union[int, float, str, bool]): Value to divide initial weights by if ``module._is_residual`` is True. - emb_init_std (Optional[float]): The standard deviation of the normal distribution used to initialize the embedding layer. - emb_init_uniform_lim (Optional[Union[Tuple[float, float], float]]): The lower and upper limits of the uniform distribution - used to initialize the embedding layer. Mutually exclusive with ``emb_init_std``. - init_std (float): The standard deviation of the normal distribution used to initialize the model, - if using the baseline_ parameter initialization scheme. - init_gain (float): The gain to use for parameter initialization with kaiming or xavier initialization schemes. - fan_mode (str): The fan mode to use for parameter initialization with kaiming initialization schemes. - init_nonlinearity (str): The nonlinearity to use for parameter initialization with kaiming initialization schemes. - --- - See llmfoundry.models.utils.param_init_fns.py for info on other param init config options - """ - self.d_model = d_model - self.n_heads = n_heads - self.n_layers = n_layers - self.expansion_ratio = expansion_ratio - self.max_seq_len = max_seq_len - self.vocab_size = vocab_size - self.resid_pdrop = resid_pdrop - self.emb_pdrop = emb_pdrop - self.learned_pos_emb = learned_pos_emb - self.attn_config = attn_config - self.init_device = init_device - self.logit_scale = logit_scale - self.no_bias = no_bias - self.verbose = verbose - self.embedding_fraction = embedding_fraction - self.norm_type = norm_type - self.use_cache = use_cache - self.init_config = init_config - if 'name' in kwargs: - del kwargs['name'] - if 'loss_fn' in kwargs: - del kwargs['loss_fn'] - super().__init__(**kwargs) - self._validate_config() - - def _set_config_defaults(self, config, config_defaults): - for (k, v) in config_defaults.items(): - if k not in config: - config[k] = v - return config - - def _validate_config(self): - self.attn_config = self._set_config_defaults(self.attn_config, attn_config_defaults) - self.init_config = self._set_config_defaults(self.init_config, init_config_defaults) - if self.d_model % self.n_heads != 0: - raise ValueError('d_model must be divisible by n_heads') - if any((prob < 0 or prob > 1 for prob in [self.attn_config['attn_pdrop'], self.resid_pdrop, self.emb_pdrop])): - raise ValueError("self.attn_config['attn_pdrop'], resid_pdrop, emb_pdrop are probabilities and must be between 0 and 1") - if self.attn_config['attn_impl'] not in ['torch', 'flash', 'triton']: - raise ValueError(f"Unknown attn_impl={self.attn_config['attn_impl']}") - if self.attn_config['prefix_lm'] and self.attn_config['attn_impl'] not in ['torch', 'triton']: - raise NotImplementedError('prefix_lm only implemented with torch and triton attention.') - if self.attn_config['alibi'] and self.attn_config['attn_impl'] not in ['torch', 'triton']: - raise NotImplementedError('alibi only implemented with torch and triton attention.') - if self.attn_config['attn_uses_sequence_id'] and self.attn_config['attn_impl'] not in ['torch', 'triton']: - raise 
NotImplementedError('attn_uses_sequence_id only implemented with torch and triton attention.') - if self.embedding_fraction > 1 or self.embedding_fraction <= 0: - raise ValueError('model.embedding_fraction must be between 0 (exclusive) and 1 (inclusive)!') - if isinstance(self.logit_scale, str) and self.logit_scale != 'inv_sqrt_d_model': - raise ValueError(f"self.logit_scale={self.logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'.") - if self.init_config.get('name', None) is None: - raise ValueError(f"self.init_config={self.init_config!r} 'name' needs to be set.") - if not self.learned_pos_emb and (not self.attn_config['alibi']): - raise ValueError(f'Positional information must be provided to the model using either learned_pos_emb or alibi.') \ No newline at end of file diff --git a/spaces/banana-projects/datasets-card-creator/postcss.config.js b/spaces/banana-projects/datasets-card-creator/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/datasets-card-creator/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts deleted file mode 100644 index 82a24d4dc0d7faf0084295204a80e75415c16b2e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/core/Clock.d.ts +++ /dev/null @@ -1,59 +0,0 @@ -/** - * Object for keeping track of time. - * - * @see src/core/Clock.js - */ -export class Clock { - /** - * @param autoStart Automatically start the clock. - */ - constructor(autoStart?: boolean); - - /** - * If set, starts the clock automatically when the first update is called. - */ - autoStart: boolean; - - /** - * When the clock is running, It holds the starttime of the clock. - * This counted from the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC. - */ - startTime: number; - - /** - * When the clock is running, It holds the previous time from a update. - * This counted from the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC. - */ - oldTime: number; - - /** - * When the clock is running, It holds the time elapsed between the start of the clock to the previous update. - * This parameter is in seconds of three decimal places. - */ - elapsedTime: number; - - /** - * This property keeps track whether the clock is running or not. - */ - running: boolean; - - /** - * Starts clock. - */ - start(): void; - - /** - * Stops clock. - */ - stop(): void; - - /** - * Get the seconds passed since the clock started. - */ - getElapsedTime(): number; - - /** - * Get the seconds passed since the last call to this method. 
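   * A typical pattern (illustrative) is to call this once per render loop and
   * advance animations by the returned value, e.g. `mixer.update( clock.getDelta() )`.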
- */ - getDelta(): number; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js deleted file mode 100644 index c984a853942011a89bcd2add1be38392c751a639..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/Curves.js +++ /dev/null @@ -1,10 +0,0 @@ -export { ArcCurve } from './ArcCurve.js'; -export { CatmullRomCurve3 } from './CatmullRomCurve3.js'; -export { CubicBezierCurve } from './CubicBezierCurve.js'; -export { CubicBezierCurve3 } from './CubicBezierCurve3.js'; -export { EllipseCurve } from './EllipseCurve.js'; -export { LineCurve } from './LineCurve.js'; -export { LineCurve3 } from './LineCurve3.js'; -export { QuadraticBezierCurve } from './QuadraticBezierCurve.js'; -export { QuadraticBezierCurve3 } from './QuadraticBezierCurve3.js'; -export { SplineCurve } from './SplineCurve.js'; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js deleted file mode 100644 index 649ac6b4c7aceb726a4a568fc0d0663930e0bac5..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/GridHelper.js +++ /dev/null @@ -1,72 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -import { LineSegments } from '../objects/LineSegments.js'; -import { VertexColors } from '../constants.js'; -import { LineBasicMaterial } from '../materials/LineBasicMaterial.js'; -import { Float32BufferAttribute } from '../core/BufferAttribute.js'; -import { BufferGeometry } from '../core/BufferGeometry.js'; -import { Color } from '../math/Color.js'; - -function GridHelper( size, divisions, color1, color2 ) { - - size = size || 10; - divisions = divisions || 10; - color1 = new Color( color1 !== undefined ? color1 : 0x444444 ); - color2 = new Color( color2 !== undefined ? color2 : 0x888888 ); - - var center = divisions / 2; - var step = size / divisions; - var halfSize = size / 2; - - var vertices = [], colors = []; - - for ( var i = 0, j = 0, k = - halfSize; i <= divisions; i ++, k += step ) { - - vertices.push( - halfSize, 0, k, halfSize, 0, k ); - vertices.push( k, 0, - halfSize, k, 0, halfSize ); - - var color = i === center ? 
color1 : color2; - - color.toArray( colors, j ); j += 3; - color.toArray( colors, j ); j += 3; - color.toArray( colors, j ); j += 3; - color.toArray( colors, j ); j += 3; - - } - - var geometry = new BufferGeometry(); - geometry.addAttribute( 'position', new Float32BufferAttribute( vertices, 3 ) ); - geometry.addAttribute( 'color', new Float32BufferAttribute( colors, 3 ) ); - - var material = new LineBasicMaterial( { vertexColors: VertexColors } ); - - LineSegments.call( this, geometry, material ); - -} - -GridHelper.prototype = Object.assign( Object.create( LineSegments.prototype ), { - - constructor: GridHelper, - - copy: function ( source ) { - - LineSegments.prototype.copy.call( this, source ); - - this.geometry.copy( source.geometry ); - this.material.copy( source.material ); - - return this; - - }, - - clone: function () { - - return new this.constructor().copy( this ); - - } - -} ); - -export { GridHelper }; diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py deleted file mode 100644 index c32292a4790046adc0db8f67474deb6748a2b2a8..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621100808.py +++ /dev/null @@ -1,51 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) -background = st.selectbox("表格线条是否隐藏",(False,True),) -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") - - #tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background) - result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter') - # for i in range(0,len(tables_all)): - # table = tables_all[i].df - # sheetname = str(i) - # table.to_excel(result_all, sheetname,index=False) - with open('result_all.xlsx','rb') as f: - st.download_button('一件抽取完成,点击下载!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel") - - -row9_spacer1, row9_1, row9_spacer2, row9_2, row9_spacer3 = st.columns((.2, 2.3, .4, 4.4, .2)) -with row9_1: - if st.button('单页抽取'): - st.write('单页抽取') -with row9_2: - if st.button('全文抽取'): - st.write('全文抽取') - - diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md deleted file mode 100644 index e16bbc58197c288fa78262ad2b7ead2b798b6ac3..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (moonu Tamil Movie Download Dvdrip) TOP.md +++ /dev/null @@ -1,114 +0,0 @@ - -

          HD Online Player (moonu tamil movie download dvdrip) - A Guide to Watch the Romantic Drama

          - -

          Moonu is a 2012 Tamil romantic drama film written and directed by Aishwarya R. Dhanush. It stars Dhanush and Shruti Haasan as two young lovers who face various challenges and tragedies in their relationship. The film is also known as 3 or Three, and has a famous song called "Why This Kolaveri Di" that became a viral sensation.

          -




          - -

          Moonu was a commercial and critical success, winning several awards and nominations, including three National Film Awards. The film is widely praised for its realistic portrayal of love, emotions, and mental health issues. The film is also known for its nonlinear narrative and twist ending.

          - -

If you are a fan of Moonu or want to watch it for the first time, you might be wondering how to download it in dvdrip format. A dvdrip is a video encoded directly from a DVD source, so it keeps near-DVD picture and sound quality in a compact file. Such files also play on most media players and devices.

          - -

          In this article, we will show you how to download Moonu movie in dvdrip format from various sources. We will also give you some tips on how to watch it online or offline with HD online player.

          - -

          How to Download Moonu Movie in Dvdrip Format

          - -

          There are many websites that offer Moonu movie download in dvdrip format. However, not all of them are safe, legal, or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also have broken links, low-quality files, or annoying ads and pop-ups.

          -

          - -

          To avoid these risks, you should always use trusted and reputable sources to download Moonu movie in dvdrip format. Here are some of the best options that we recommend:

          - -
            -
          • Isaimini.store: This is a popular entertainment platform that offers movies, music, TV shows, and more. You can download Moonu Tamil movie (2012) to your Isaimini account and watch it online or offline. You can also enjoy other features like playlists, radio stations, music videos, and more.
          • -
          • Filmyzon.com: This is a free film streaming website that offers Bollywood movies, Tamil movies, and more. You can watch Moonu movie online or download it in HD quality. You can also find other related movies like Rekka, Race 3, Race 2, etc.
          • -
          • Bivinpabe.wixsite.com: This is a website builder that allows you to create your own website for free. You can find Moonu movie download in dvdrip format on this website as a torrent file. You can also view other content like blogs, photos, videos, etc.
          • -
          • Trello.com: This is a web-based project management tool that helps you organize your tasks and collaborate with others. You can find Moonu movie download in dvdrip format on this website as a link. You can also find other Bollywood movie downloads, TV show downloads, etc.
          • -
          - -

          How to Watch Moonu Movie Online or Offline with HD Online Player

          - -

          After downloading Moonu movie in dvdrip format from any of the sources mentioned above, you can watch it online or offline with HD online player. HD online player is a media player that supports various file formats and provides high-quality video and audio playback. It also has features like subtitles, playlists, screen capture, etc.

          - -

          Here are some tips on how to watch Moonu movie online or offline with HD online player:

          - -
            -
• Online: To watch Moonu movie in dvdrip format as soon as it is downloaded, you need a media player that supports this file format. Some of the best media players that we recommend are VLC Media Player, KMPlayer, PotPlayer, Media Player Classic Home Cinema (MPC-HC), etc. You can download any of these media players from their official websites and install them on your device. Then you can open the dvdrip file with the media player and enjoy the movie right away.
          • -
          • Offline: To watch Moonu movie offline in dvdrip format, you need to transfer the dvdrip file to your device's storage or external storage device like USB flash drive, SD card, etc. Then you can use any of the media players mentioned above to play the dvdrip file offline.
          • -
          - -

          Conclusion

          - -

          Moonu is a Tamil classic that you should not miss if you love romantic dramas. It tells a beautiful story of love and loss that will touch your heart and make you cry. You can download Moonu movie in dvdrip format from various sources and watch it online or offline with HD online player.

          - -

          We hope this article has helped you find the best way to enjoy Moonu movie in dvdrip format. If you have any questions or suggestions, please feel free to leave a comment below.

          -

          Why You Should Watch Moonu Movie

          - -

          Moonu movie is not just a typical Tamil romance. It is a film that explores the depth and complexity of love, emotions, and mental health issues. It shows how two people can be deeply connected and yet face various challenges and tragedies in their relationship. It also portrays the importance of family and social support in coping with life's difficulties.

          - -

          Moonu movie is a film that will make you feel and think. It has a powerful and realistic story, brilliant and natural performances, catchy and meaningful songs, and stunning and authentic visuals. It is a film that will appeal to all kinds of audiences and preferences.

          - -

          Moonu movie is a film that you should watch if you want to experience the magic of Tamil cinema. It is a film that will make you laugh, cry, and fall in love. It is a film that will stay with you long after it ends.

          - -

          How to Download Moonu Movie Songs in MP3 Format

          - -

          Moonu movie has a wonderful soundtrack that complements the story and mood of the film. The songs are composed by Anirudh Ravichander and sung by various artists like Dhanush, Shruti Haasan, Mohit Chauhan, etc. The songs are catchy, romantic, and emotional.

          - -

          If you want to download Moonu movie songs in MP3 format, you can use any of the following sources:

          - -
            -
          • Isaimini.store: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like playlists, radio stations, music videos, and more.
          • -
          • Gaana.com: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like lyrics, podcasts, stories, and more.
          • -
          • Saavn.com: This website offers Moonu movie songs in MP3 format for free. You can also listen to them online or offline. You can also enjoy other features like recommendations, charts, originals, and more.
          • -
          • Pagalworld.com: This website offers Moonu movie songs in MP3 format for free. You can also download them in various qualities and sizes. You can also find other Tamil songs, ringtones, wallpapers, etc.
          • -
          - -

          Conclusion

          - -

          Moonu movie is a masterpiece of Tamil cinema that you should not miss. It is a film that will make you appreciate the beauty and complexity of love and life. You can download Moonu movie in dvdrip format from various sources and watch it online or offline with HD online player. You can also download Moonu movie songs in MP3 format from various sources and listen to them online or offline.

          - -

          We hope this article has helped you find the best way to enjoy Moonu movie and its related content. If you have any questions or suggestions, please feel free to leave a comment below.

          -

          HD Online Player (moonu tamil movie download dvdrip) - A Guide to Watch the Romantic Drama

          - -

          Moonu is a 2012 Tamil romantic drama film written and directed by Aishwarya R. Dhanush. It stars Dhanush and Shruti Haasan as two young lovers who face various challenges and tragedies in their relationship. The film is also known as 3 or Three, and has a famous song called "Why This Kolaveri Di" that became a viral sensation.

          - -

          Moonu was a commercial and critical success, winning several awards and nominations, including three National Film Awards. The film is widely praised for its realistic portrayal of love, emotions, and mental health issues. The film is also known for its nonlinear narrative and twist ending.

          - -

          If you are a fan of Moonu or want to watch it for the first time, you might be wondering how to download it in dvdrip format. Dvdrip is a popular video file format that can store high-quality video and audio data. It is also compatible with most media players and devices.

          - -

          In this article, we will show you how to download Moonu movie in dvdrip format from various sources. We will also give you some tips on how to watch it online or offline with HD online player.

          - -

          How to Download Moonu Movie in Dvdrip Format

          - -

          There are many websites that offer Moonu movie download in dvdrip format. However, not all of them are safe, legal, or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also have broken links, low-quality files, or annoying ads and pop-ups.

          - -

          To avoid these risks, you should always use trusted and reputable sources to download Moonu movie in dvdrip format. Here are some of the best options that we recommend:

          - -
            -
          • Isaimini.store: This is a popular entertainment platform that offers movies, music, TV shows, and more. You can download Moonu Tamil movie (2012) to your Isaimini account and watch it online or offline. You can also enjoy other features like playlists, radio stations, music videos, and more.
          • -
          • Filmyzon.com: This is a free film streaming website that offers Bollywood movies, Tamil movies, and more. You can watch Moonu movie online or download it in HD quality. You can also find other related movies like Rekka, Race 3, Race 2, etc.
          • -
          • Bivinpabe.wixsite.com: This is a website builder that allows you to create your own website for free. You can find Moonu movie download in dvdrip format on this website as a torrent file. You can also view other content like blogs, photos, videos, etc.
          • -
          • Trello.com: This is a web-based project management tool that helps you organize your tasks and collaborate with others. You can find Moonu movie download in dvdrip format on this website as a link. You can also find other Bollywood movie downloads, TV show downloads, etc.
          • -
          - -

          How to Watch Moonu Movie Online or Offline with HD Online Player

          - -

          After downloading Moonu movie in dvdrip format from any of the sources mentioned above, you can watch it online or offline with HD online player. HD online player is a media player that supports various file formats and provides high-quality video and audio playback. It also has features like subtitles, playlists, screen capture, etc.

          - -

Here are some tips on how to watch the Moonu movie online or offline with an HD online player:

• Online: To play the Moonu movie in DVDRip format, you need a media player that supports this file type. Some of the best media players that we recommend are VLC Media Player, KMPlayer, PotPlayer, and Media Player Classic Home Cinema (MPC-HC). Download any of these from its official website and install it on your device, then open the DVDRip file with the player and enjoy the movie (see the short script sketch after this list).
• Offline: To watch the Moonu movie offline, transfer the DVDRip file to your device's internal storage or to an external storage device such as a USB flash drive or SD card. Then use any of the media players mentioned above to play the file offline.
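If you prefer to launch the player from a script rather than double-clicking the file, the idea is the same: point the media player at the downloaded file. The following is a minimal Python sketch, assuming VLC is installed and its `vlc` executable is on your PATH; the file name used here is only a hypothetical placeholder.

```python
# Minimal sketch: play a local DVDRip file with VLC from Python.
# Assumes the "vlc" executable is installed and available on the PATH;
# the file name below is a hypothetical placeholder.
import subprocess
from pathlib import Path

video = Path("Moonu.2012.DVDRip.avi")  # replace with your actual file name

if video.exists():
    # Hand the file to VLC; this call blocks until the player is closed.
    subprocess.run(["vlc", str(video)])
else:
    print(f"File not found: {video}")
```

The same approach works with any of the other players listed above; just swap in the executable name used by your installation.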

Conclusion

Moonu is a Tamil classic that you should not miss if you love romantic dramas. It tells a beautiful story of love and loss that will touch your heart and may even bring you to tears. You can download the Moonu movie in DVDRip format from various sources and watch it online or offline with an HD online player.

We hope this article has helped you find the best way to enjoy the Moonu movie in DVDRip format. If you have any questions or suggestions, please feel free to leave a comment below.

          \ No newline at end of file diff --git a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py b/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py deleted file mode 100644 index 338b92516af241c6a07a427371611e35b227eacf..0000000000000000000000000000000000000000 --- a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/hifi-gan/models.py +++ /dev/null @@ -1,285 +0,0 @@ -""" from https://github.com/jik876/hifi-gan """ - -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from xutils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm(Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel//(2**i), h.upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel//(2**(i+1)) - for j, (k, d) in 
enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = 
self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i-1](y) - y_hat = self.meanpools[i-1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss*2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py deleted file mode 100644 index 4248f6c91b641a4ad1d00d0316ee82d701f9152f..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Dict -import torch -from torch.nn import functional as F - -from detectron2.structures.boxes import Boxes, BoxMode - -from ..structures import ( - DensePoseChartPredictorOutput, - DensePoseChartResult, - DensePoseChartResultWithConfidences, -) -from . 
import resample_fine_and_coarse_segm_to_bbox -from .base import IntTupleBox, make_int_box - - -def resample_uv_tensors_to_bbox( - u: torch.Tensor, - v: torch.Tensor, - labels: torch.Tensor, - box_xywh_abs: IntTupleBox, -) -> torch.Tensor: - """ - Resamples U and V coordinate estimates for the given bounding box - - Args: - u (tensor [1, C, H, W] of float): U coordinates - v (tensor [1, C, H, W] of float): V coordinates - labels (tensor [H, W] of long): labels obtained by resampling segmentation - outputs for the given bounding box - box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs - Return: - Resampled U and V coordinates - a tensor [2, H, W] of float - """ - x, y, w, h = box_xywh_abs - w = max(int(w), 1) - h = max(int(h), 1) - u_bbox = F.interpolate(u, (h, w), mode="bilinear", align_corners=False) - v_bbox = F.interpolate(v, (h, w), mode="bilinear", align_corners=False) - uv = torch.zeros([2, h, w], dtype=torch.float32, device=u.device) - for part_id in range(1, u_bbox.size(1)): - uv[0][labels == part_id] = u_bbox[0, part_id][labels == part_id] - uv[1][labels == part_id] = v_bbox[0, part_id][labels == part_id] - return uv - - -def resample_uv_to_bbox( - predictor_output: DensePoseChartPredictorOutput, - labels: torch.Tensor, - box_xywh_abs: IntTupleBox, -) -> torch.Tensor: - """ - Resamples U and V coordinate estimates for the given bounding box - - Args: - predictor_output (DensePoseChartPredictorOutput): DensePose predictor - output to be resampled - labels (tensor [H, W] of long): labels obtained by resampling segmentation - outputs for the given bounding box - box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs - Return: - Resampled U and V coordinates - a tensor [2, H, W] of float - """ - return resample_uv_tensors_to_bbox( - predictor_output.u, - predictor_output.v, - labels, - box_xywh_abs, - ) - - -def densepose_chart_predictor_output_to_result( - predictor_output: DensePoseChartPredictorOutput, boxes: Boxes -) -> DensePoseChartResult: - """ - Convert densepose chart predictor outputs to results - - Args: - predictor_output (DensePoseChartPredictorOutput): DensePose predictor - output to be converted to results, must contain only 1 output - boxes (Boxes): bounding box that corresponds to the predictor output, - must contain only 1 bounding box - Return: - DensePose chart-based result (DensePoseChartResult) - """ - assert len(predictor_output) == 1 and len(boxes) == 1, ( - f"Predictor output to result conversion can operate only single outputs" - f", got {len(predictor_output)} predictor outputs and {len(boxes)} boxes" - ) - - boxes_xyxy_abs = boxes.tensor.clone() - boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - box_xywh = make_int_box(boxes_xywh_abs[0]) - - labels = resample_fine_and_coarse_segm_to_bbox(predictor_output, box_xywh).squeeze(0) - uv = resample_uv_to_bbox(predictor_output, labels, box_xywh) - return DensePoseChartResult(labels=labels, uv=uv) - - -def resample_confidences_to_bbox( - predictor_output: DensePoseChartPredictorOutput, - labels: torch.Tensor, - box_xywh_abs: IntTupleBox, -) -> Dict[str, torch.Tensor]: - """ - Resamples confidences for the given bounding box - - Args: - predictor_output (DensePoseChartPredictorOutput): DensePose predictor - output to be resampled - labels (tensor [H, W] of long): labels obtained by resampling segmentation - outputs for the given bounding box - box_xywh_abs (tuple of 4 int): bounding box that corresponds to predictor outputs 
- Return: - Resampled confidences - a dict of [H, W] tensors of float - """ - - x, y, w, h = box_xywh_abs - w = max(int(w), 1) - h = max(int(h), 1) - - confidence_names = [ - "sigma_1", - "sigma_2", - "kappa_u", - "kappa_v", - "fine_segm_confidence", - "coarse_segm_confidence", - ] - confidence_results = {key: None for key in confidence_names} - confidence_names = [ - key for key in confidence_names if getattr(predictor_output, key) is not None - ] - confidence_base = torch.zeros([h, w], dtype=torch.float32, device=predictor_output.u.device) - - # assign data from channels that correspond to the labels - for key in confidence_names: - resampled_confidence = F.interpolate( - getattr(predictor_output, key), - (h, w), - mode="bilinear", - align_corners=False, - ) - result = confidence_base.clone() - for part_id in range(1, predictor_output.u.size(1)): - if resampled_confidence.size(1) != predictor_output.u.size(1): - # confidence is not part-based, don't try to fill it part by part - continue - result[labels == part_id] = resampled_confidence[0, part_id][labels == part_id] - - if resampled_confidence.size(1) != predictor_output.u.size(1): - # confidence is not part-based, fill the data with the first channel - # (targeted for segmentation confidences that have only 1 channel) - result = resampled_confidence[0, 0] - - confidence_results[key] = result - - return confidence_results # pyre-ignore[7] - - -def densepose_chart_predictor_output_to_result_with_confidences( - predictor_output: DensePoseChartPredictorOutput, boxes: Boxes -) -> DensePoseChartResultWithConfidences: - """ - Convert densepose chart predictor outputs to results - - Args: - predictor_output (DensePoseChartPredictorOutput): DensePose predictor - output with confidences to be converted to results, must contain only 1 output - boxes (Boxes): bounding box that corresponds to the predictor output, - must contain only 1 bounding box - Return: - DensePose chart-based result with confidences (DensePoseChartResultWithConfidences) - """ - assert len(predictor_output) == 1 and len(boxes) == 1, ( - f"Predictor output to result conversion can operate only single outputs" - f", got {len(predictor_output)} predictor outputs and {len(boxes)} boxes" - ) - - boxes_xyxy_abs = boxes.tensor.clone() - boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - box_xywh = make_int_box(boxes_xywh_abs[0]) - - labels = resample_fine_and_coarse_segm_to_bbox(predictor_output, box_xywh).squeeze(0) - uv = resample_uv_to_bbox(predictor_output, labels, box_xywh) - confidences = resample_confidences_to_bbox(predictor_output, labels, box_xywh) - return DensePoseChartResultWithConfidences(labels=labels, uv=uv, **confidences) diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py deleted file mode 100644 index 6d40a6b02905c95911674ac33b8d7bc0f1eda5ec..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/tests/test_inference.py +++ /dev/null @@ -1,17 +0,0 @@ -import io -import torch -from PIL import Image - -# Model -model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True, force_reload=True) - -# img = Image.open("zidane.jpg") # PIL image direct open - -# Read from bytes as we do in app -with open("zidane.jpg", "rb") as file: - img_bytes = file.read() -img = Image.open(io.BytesIO(img_bytes)) - 
-results = model(img, size=640) # includes NMS - -print(results.pandas().xyxy[0]) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py deleted file mode 100644 index ada19bdb4a2aa74874da4dba5d179ce38201c85d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/detection_utils.py +++ /dev/null @@ -1,659 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Common data processing utilities that are used in a -typical object detection data pipeline. -""" -import logging -import numpy as np -from typing import List, Union -import pycocotools.mask as mask_util -import torch -from PIL import Image - -from detectron2.structures import ( - BitMasks, - Boxes, - BoxMode, - Instances, - Keypoints, - PolygonMasks, - RotatedBoxes, - polygons_to_bitmask, -) -from detectron2.utils.file_io import PathManager - -from . import transforms as T -from .catalog import MetadataCatalog - -__all__ = [ - "SizeMismatchError", - "convert_image_to_rgb", - "check_image_size", - "transform_proposals", - "transform_instance_annotations", - "annotations_to_instances", - "annotations_to_instances_rotated", - "build_augmentation", - "build_transform_gen", - "create_keypoint_hflip_indices", - "filter_empty_instances", - "read_image", -] - - -class SizeMismatchError(ValueError): - """ - When loaded image has difference width/height compared with annotation. - """ - - -# https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601 -_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]] -_M_YUV2RGB = [[1.0, 0.0, 1.13983], [1.0, -0.39465, -0.58060], [1.0, 2.03211, 0.0]] - -# https://www.exiv2.org/tags.html -_EXIF_ORIENT = 274 # exif 'Orientation' tag - - -def convert_PIL_to_numpy(image, format): - """ - Convert PIL image to numpy array of target format. - - Args: - image (PIL.Image): a PIL image - format (str): the format of output image - - Returns: - (np.ndarray): also see `read_image` - """ - if format is not None: - # PIL only supports RGB, so convert to RGB and flip channels over below - conversion_format = format - if format in ["BGR", "YUV-BT.601"]: - conversion_format = "RGB" - image = image.convert(conversion_format) - image = np.asarray(image) - # PIL squeezes out the channel dimension for "L", so make it HWC - if format == "L": - image = np.expand_dims(image, -1) - - # handle formats not supported by PIL - elif format == "BGR": - # flip channels if needed - image = image[:, :, ::-1] - elif format == "YUV-BT.601": - image = image / 255.0 - image = np.dot(image, np.array(_M_RGB2YUV).T) - - return image - - -def convert_image_to_rgb(image, format): - """ - Convert an image from given format to RGB. 
- - Args: - image (np.ndarray or Tensor): an HWC image - format (str): the format of input image, also see `read_image` - - Returns: - (np.ndarray): (H,W,3) RGB image in 0-255 range, can be either float or uint8 - """ - if isinstance(image, torch.Tensor): - image = image.cpu().numpy() - if format == "BGR": - image = image[:, :, [2, 1, 0]] - elif format == "YUV-BT.601": - image = np.dot(image, np.array(_M_YUV2RGB).T) - image = image * 255.0 - else: - if format == "L": - image = image[:, :, 0] - image = image.astype(np.uint8) - image = np.asarray(Image.fromarray(image, mode=format).convert("RGB")) - return image - - -def _apply_exif_orientation(image): - """ - Applies the exif orientation correctly. - - This code exists per the bug: - https://github.com/python-pillow/Pillow/issues/3973 - with the function `ImageOps.exif_transpose`. The Pillow source raises errors with - various methods, especially `tobytes` - - Function based on: - https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59 - https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527 - - Args: - image (PIL.Image): a PIL image - - Returns: - (PIL.Image): the PIL image with exif orientation applied, if applicable - """ - if not hasattr(image, "getexif"): - return image - - try: - exif = image.getexif() - except Exception: # https://github.com/facebookresearch/detectron2/issues/1885 - exif = None - - if exif is None: - return image - - orientation = exif.get(_EXIF_ORIENT) - - method = { - 2: Image.FLIP_LEFT_RIGHT, - 3: Image.ROTATE_180, - 4: Image.FLIP_TOP_BOTTOM, - 5: Image.TRANSPOSE, - 6: Image.ROTATE_270, - 7: Image.TRANSVERSE, - 8: Image.ROTATE_90, - }.get(orientation) - - if method is not None: - return image.transpose(method) - return image - - -def read_image(file_name, format=None): - """ - Read an image into the given format. - Will apply rotation and flipping if the image has such exif information. - - Args: - file_name (str): image file path - format (str): one of the supported image modes in PIL, or "BGR" or "YUV-BT.601". - - Returns: - image (np.ndarray): - an HWC image in the given format, which is 0-255, uint8 for - supported image modes in PIL or "BGR"; float (0-1 for Y) for YUV-BT.601. - """ - with PathManager.open(file_name, "rb") as f: - image = Image.open(f) - - # work around this bug: https://github.com/python-pillow/Pillow/issues/3973 - image = _apply_exif_orientation(image) - return convert_PIL_to_numpy(image, format) - - -def check_image_size(dataset_dict, image): - """ - Raise an error if the image does not match the size specified in the dict. - """ - if "width" in dataset_dict or "height" in dataset_dict: - image_wh = (image.shape[1], image.shape[0]) - expected_wh = (dataset_dict["width"], dataset_dict["height"]) - if not image_wh == expected_wh: - raise SizeMismatchError( - "Mismatched image shape{}, got {}, expect {}.".format( - " for image " + dataset_dict["file_name"] - if "file_name" in dataset_dict - else "", - image_wh, - expected_wh, - ) - + " Please check the width/height in your annotation." - ) - - # To ensure bbox always remap to original image size - if "width" not in dataset_dict: - dataset_dict["width"] = image.shape[1] - if "height" not in dataset_dict: - dataset_dict["height"] = image.shape[0] - - -def transform_proposals(dataset_dict, image_shape, transforms, *, proposal_topk, min_box_size=0): - """ - Apply transformations to the proposals in dataset_dict, if any. 
- - Args: - dataset_dict (dict): a dict read from the dataset, possibly - contains fields "proposal_boxes", "proposal_objectness_logits", "proposal_bbox_mode" - image_shape (tuple): height, width - transforms (TransformList): - proposal_topk (int): only keep top-K scoring proposals - min_box_size (int): proposals with either side smaller than this - threshold are removed - - The input dict is modified in-place, with abovementioned keys removed. A new - key "proposals" will be added. Its value is an `Instances` - object which contains the transformed proposals in its field - "proposal_boxes" and "objectness_logits". - """ - if "proposal_boxes" in dataset_dict: - # Transform proposal boxes - boxes = transforms.apply_box( - BoxMode.convert( - dataset_dict.pop("proposal_boxes"), - dataset_dict.pop("proposal_bbox_mode"), - BoxMode.XYXY_ABS, - ) - ) - boxes = Boxes(boxes) - objectness_logits = torch.as_tensor( - dataset_dict.pop("proposal_objectness_logits").astype("float32") - ) - - boxes.clip(image_shape) - keep = boxes.nonempty(threshold=min_box_size) - boxes = boxes[keep] - objectness_logits = objectness_logits[keep] - - proposals = Instances(image_shape) - proposals.proposal_boxes = boxes[:proposal_topk] - proposals.objectness_logits = objectness_logits[:proposal_topk] - dataset_dict["proposals"] = proposals - - -def get_bbox(annotation): - """ - Get bbox from data - Args: - annotation (dict): dict of instance annotations for a single instance. - Returns: - bbox (ndarray): x1, y1, x2, y2 coordinates - """ - # bbox is 1d (per-instance bounding box) - bbox = BoxMode.convert(annotation["bbox"], annotation["bbox_mode"], BoxMode.XYXY_ABS) - return bbox - - -def transform_instance_annotations( - annotation, transforms, image_size, *, keypoint_hflip_indices=None -): - """ - Apply transforms to box, segmentation and keypoints annotations of a single instance. - - It will use `transforms.apply_box` for the box, and - `transforms.apply_coords` for segmentation polygons & keypoints. - If you need anything more specially designed for each data structure, - you'll need to implement your own version of this function or the transforms. - - Args: - annotation (dict): dict of instance annotations for a single instance. - It will be modified in-place. - transforms (TransformList or list[Transform]): - image_size (tuple): the height, width of the transformed image - keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`. - - Returns: - dict: - the same input dict with fields "bbox", "segmentation", "keypoints" - transformed according to `transforms`. - The "bbox_mode" field will be set to XYXY_ABS. 
- """ - if isinstance(transforms, (tuple, list)): - transforms = T.TransformList(transforms) - # bbox is 1d (per-instance bounding box) - bbox = BoxMode.convert(annotation["bbox"], annotation["bbox_mode"], BoxMode.XYXY_ABS) - # clip transformed bbox to image size - bbox = transforms.apply_box(np.array([bbox]))[0].clip(min=0) - annotation["bbox"] = np.minimum(bbox, list(image_size + image_size)[::-1]) - annotation["bbox_mode"] = BoxMode.XYXY_ABS - - if "segmentation" in annotation: - # each instance contains 1 or more polygons - segm = annotation["segmentation"] - if isinstance(segm, list): - # polygons - polygons = [np.asarray(p).reshape(-1, 2) for p in segm] - annotation["segmentation"] = [ - p.reshape(-1) for p in transforms.apply_polygons(polygons) - ] - elif isinstance(segm, dict): - # RLE - mask = mask_util.decode(segm) - mask = transforms.apply_segmentation(mask) - assert tuple(mask.shape[:2]) == image_size - annotation["segmentation"] = mask - else: - raise ValueError( - "Cannot transform segmentation of type '{}'!" - "Supported types are: polygons as list[list[float] or ndarray]," - " COCO-style RLE as a dict.".format(type(segm)) - ) - - if "keypoints" in annotation: - keypoints = transform_keypoint_annotations( - annotation["keypoints"], transforms, image_size, keypoint_hflip_indices - ) - annotation["keypoints"] = keypoints - - return annotation - - -def transform_keypoint_annotations(keypoints, transforms, image_size, keypoint_hflip_indices=None): - """ - Transform keypoint annotations of an image. - If a keypoint is transformed out of image boundary, it will be marked "unlabeled" (visibility=0) - - Args: - keypoints (list[float]): Nx3 float in Detectron2's Dataset format. - Each point is represented by (x, y, visibility). - transforms (TransformList): - image_size (tuple): the height, width of the transformed image - keypoint_hflip_indices (ndarray[int]): see `create_keypoint_hflip_indices`. - When `transforms` includes horizontal flip, will use the index - mapping to flip keypoints. - """ - # (N*3,) -> (N, 3) - keypoints = np.asarray(keypoints, dtype="float64").reshape(-1, 3) - keypoints_xy = transforms.apply_coords(keypoints[:, :2]) - - # Set all out-of-boundary points to "unlabeled" - inside = (keypoints_xy >= np.array([0, 0])) & (keypoints_xy <= np.array(image_size[::-1])) - inside = inside.all(axis=1) - keypoints[:, :2] = keypoints_xy - keypoints[:, 2][~inside] = 0 - - # This assumes that HorizFlipTransform is the only one that does flip - do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms.transforms) % 2 == 1 - - # Alternative way: check if probe points was horizontally flipped. 
- # probe = np.asarray([[0.0, 0.0], [image_width, 0.0]]) - # probe_aug = transforms.apply_coords(probe.copy()) - # do_hflip = np.sign(probe[1][0] - probe[0][0]) != np.sign(probe_aug[1][0] - probe_aug[0][0]) # noqa - - # If flipped, swap each keypoint with its opposite-handed equivalent - if do_hflip: - if keypoint_hflip_indices is None: - raise ValueError("Cannot flip keypoints without providing flip indices!") - if len(keypoints) != len(keypoint_hflip_indices): - raise ValueError( - "Keypoint data has {} points, but metadata " - "contains {} points!".format(len(keypoints), len(keypoint_hflip_indices)) - ) - keypoints = keypoints[np.asarray(keypoint_hflip_indices, dtype=np.int32), :] - - # Maintain COCO convention that if visibility == 0 (unlabeled), then x, y = 0 - keypoints[keypoints[:, 2] == 0] = 0 - return keypoints - - -def annotations_to_instances(annos, image_size, mask_format="polygon"): - """ - Create an :class:`Instances` object used by the models, - from instance annotations in the dataset dict. - - Args: - annos (list[dict]): a list of instance annotations in one image, each - element for one instance. - image_size (tuple): height, width - - Returns: - Instances: - It will contain fields "gt_boxes", "gt_classes", - "gt_masks", "gt_keypoints", if they can be obtained from `annos`. - This is the format that builtin models expect. - """ - boxes = ( - np.stack( - [BoxMode.convert(obj["bbox"], obj["bbox_mode"], BoxMode.XYXY_ABS) for obj in annos] - ) - if len(annos) - else np.zeros((0, 4)) - ) - target = Instances(image_size) - target.gt_boxes = Boxes(boxes) - - classes = [int(obj["category_id"]) for obj in annos] - classes = torch.tensor(classes, dtype=torch.int64) - target.gt_classes = classes - - if len(annos) and "segmentation" in annos[0]: - segms = [obj["segmentation"] for obj in annos] - if mask_format == "polygon": - try: - masks = PolygonMasks(segms) - except ValueError as e: - raise ValueError( - "Failed to use mask_format=='polygon' from the given annotations!" - ) from e - else: - assert mask_format == "bitmask", mask_format - masks = [] - for segm in segms: - if isinstance(segm, list): - # polygon - masks.append(polygons_to_bitmask(segm, *image_size)) - elif isinstance(segm, dict): - # COCO RLE - masks.append(mask_util.decode(segm)) - elif isinstance(segm, np.ndarray): - assert segm.ndim == 2, "Expect segmentation of 2 dimensions, got {}.".format( - segm.ndim - ) - # mask array - masks.append(segm) - else: - raise ValueError( - "Cannot convert segmentation of type '{}' to BitMasks!" - "Supported types are: polygons as list[list[float] or ndarray]," - " COCO-style RLE as a dict, or a binary segmentation mask " - " in a 2D numpy array of shape HxW.".format(type(segm)) - ) - # torch.from_numpy does not support array with negative stride. - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x)) for x in masks]) - ) - target.gt_masks = masks - - if len(annos) and "keypoints" in annos[0]: - kpts = [obj.get("keypoints", []) for obj in annos] - target.gt_keypoints = Keypoints(kpts) - - return target - - -def annotations_to_instances_rotated(annos, image_size): - """ - Create an :class:`Instances` object used by the models, - from instance annotations in the dataset dict. - Compared to `annotations_to_instances`, this function is for rotated boxes only - - Args: - annos (list[dict]): a list of instance annotations in one image, each - element for one instance. 
- image_size (tuple): height, width - - Returns: - Instances: - Containing fields "gt_boxes", "gt_classes", - if they can be obtained from `annos`. - This is the format that builtin models expect. - """ - boxes = [obj["bbox"] for obj in annos] - target = Instances(image_size) - boxes = target.gt_boxes = RotatedBoxes(boxes) - boxes.clip(image_size) - - classes = [obj["category_id"] for obj in annos] - classes = torch.tensor(classes, dtype=torch.int64) - target.gt_classes = classes - - return target - - -def filter_empty_instances( - instances, by_box=True, by_mask=True, box_threshold=1e-5, return_mask=False -): - """ - Filter out empty instances in an `Instances` object. - - Args: - instances (Instances): - by_box (bool): whether to filter out instances with empty boxes - by_mask (bool): whether to filter out instances with empty masks - box_threshold (float): minimum width and height to be considered non-empty - return_mask (bool): whether to return boolean mask of filtered instances - - Returns: - Instances: the filtered instances. - tensor[bool], optional: boolean mask of filtered instances - """ - assert by_box or by_mask - r = [] - if by_box: - r.append(instances.gt_boxes.nonempty(threshold=box_threshold)) - if instances.has("gt_masks") and by_mask: - r.append(instances.gt_masks.nonempty()) - - # TODO: can also filter visible keypoints - - if not r: - return instances - m = r[0] - for x in r[1:]: - m = m & x - if return_mask: - return instances[m], m - return instances[m] - - -def create_keypoint_hflip_indices(dataset_names: Union[str, List[str]]) -> List[int]: - """ - Args: - dataset_names: list of dataset names - - Returns: - list[int]: a list of size=#keypoints, storing the - horizontally-flipped keypoint indices. - """ - if isinstance(dataset_names, str): - dataset_names = [dataset_names] - - check_metadata_consistency("keypoint_names", dataset_names) - check_metadata_consistency("keypoint_flip_map", dataset_names) - - meta = MetadataCatalog.get(dataset_names[0]) - names = meta.keypoint_names - # TODO flip -> hflip - flip_map = dict(meta.keypoint_flip_map) - flip_map.update({v: k for k, v in flip_map.items()}) - flipped_names = [i if i not in flip_map else flip_map[i] for i in names] - flip_indices = [names.index(i) for i in flipped_names] - return flip_indices - - -def get_fed_loss_cls_weights(dataset_names: Union[str, List[str]], freq_weight_power=1.0): - """ - Get frequency weight for each class sorted by class id. - We now calcualte freqency weight using image_count to the power freq_weight_power. - - Args: - dataset_names: list of dataset names - freq_weight_power: power value - """ - if isinstance(dataset_names, str): - dataset_names = [dataset_names] - - check_metadata_consistency("class_image_count", dataset_names) - - meta = MetadataCatalog.get(dataset_names[0]) - class_freq_meta = meta.class_image_count - class_freq = torch.tensor( - [c["image_count"] for c in sorted(class_freq_meta, key=lambda x: x["id"])] - ) - class_freq_weight = class_freq.float() ** freq_weight_power - return class_freq_weight - - -def gen_crop_transform_with_instance(crop_size, image_size, instance): - """ - Generate a CropTransform so that the cropping region contains - the center of the given instance. - - Args: - crop_size (tuple): h, w in pixels - image_size (tuple): h, w - instance (dict): an annotation dict of one instance, in Detectron2's - dataset format. 
- """ - crop_size = np.asarray(crop_size, dtype=np.int32) - bbox = BoxMode.convert(instance["bbox"], instance["bbox_mode"], BoxMode.XYXY_ABS) - center_yx = (bbox[1] + bbox[3]) * 0.5, (bbox[0] + bbox[2]) * 0.5 - assert ( - image_size[0] >= center_yx[0] and image_size[1] >= center_yx[1] - ), "The annotation bounding box is outside of the image!" - assert ( - image_size[0] >= crop_size[0] and image_size[1] >= crop_size[1] - ), "Crop size is larger than image size!" - - min_yx = np.maximum(np.floor(center_yx).astype(np.int32) - crop_size, 0) - max_yx = np.maximum(np.asarray(image_size, dtype=np.int32) - crop_size, 0) - max_yx = np.minimum(max_yx, np.ceil(center_yx).astype(np.int32)) - - y0 = np.random.randint(min_yx[0], max_yx[0] + 1) - x0 = np.random.randint(min_yx[1], max_yx[1] + 1) - return T.CropTransform(x0, y0, crop_size[1], crop_size[0]) - - -def check_metadata_consistency(key, dataset_names): - """ - Check that the datasets have consistent metadata. - - Args: - key (str): a metadata key - dataset_names (list[str]): a list of dataset names - - Raises: - AttributeError: if the key does not exist in the metadata - ValueError: if the given datasets do not have the same metadata values defined by key - """ - if len(dataset_names) == 0: - return - logger = logging.getLogger(__name__) - entries_per_dataset = [getattr(MetadataCatalog.get(d), key) for d in dataset_names] - for idx, entry in enumerate(entries_per_dataset): - if entry != entries_per_dataset[0]: - logger.error( - "Metadata '{}' for dataset '{}' is '{}'".format(key, dataset_names[idx], str(entry)) - ) - logger.error( - "Metadata '{}' for dataset '{}' is '{}'".format( - key, dataset_names[0], str(entries_per_dataset[0]) - ) - ) - raise ValueError("Datasets have different metadata '{}'!".format(key)) - - -def build_augmentation(cfg, is_train): - """ - Create a list of default :class:`Augmentation` from config. - Now it includes resizing and flipping. - - Returns: - list[Augmentation] - """ - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)] - if is_train and cfg.INPUT.RANDOM_FLIP != "none": - augmentation.append( - T.RandomFlip( - horizontal=cfg.INPUT.RANDOM_FLIP == "horizontal", - vertical=cfg.INPUT.RANDOM_FLIP == "vertical", - ) - ) - return augmentation - - -build_transform_gen = build_augmentation -""" -Alias for backward-compatibility. -""" diff --git a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py b/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py deleted file mode 100644 index 2c7011ebb147bbe8663d45262f8e954fbf481952..0000000000000000000000000000000000000000 --- a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/scripts/download_models.py +++ /dev/null @@ -1,70 +0,0 @@ -# ########################################################################### -# -# CLOUDERA APPLIED MACHINE LEARNING PROTOTYPE (AMP) -# (C) Cloudera, Inc. 2022 -# All rights reserved. -# -# Applicable Open Source License: Apache 2.0 -# -# NOTE: Cloudera open source products are modular software products -# made up of hundreds of individual components, each of which was -# individually copyrighted. Each Cloudera open source product is a -# collective work under U.S. Copyright Law. 
Your license to use the -# collective work is as provided in your written agreement with -# Cloudera. Used apart from the collective work, this file is -# licensed for your use pursuant to the open source license -# identified above. -# -# This code is provided to you pursuant a written agreement with -# (i) Cloudera, Inc. or (ii) a third-party authorized to distribute -# this code. If you do not have a written agreement with Cloudera nor -# with an authorized and properly licensed third party, you do not -# have any rights to access nor to use this code. -# -# Absent a written agreement with Cloudera, Inc. (“Cloudera”) to the -# contrary, A) CLOUDERA PROVIDES THIS CODE TO YOU WITHOUT WARRANTIES OF ANY -# KIND; (B) CLOUDERA DISCLAIMS ANY AND ALL EXPRESS AND IMPLIED -# WARRANTIES WITH RESPECT TO THIS CODE, INCLUDING BUT NOT LIMITED TO -# IMPLIED WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY AND -# FITNESS FOR A PARTICULAR PURPOSE; (C) CLOUDERA IS NOT LIABLE TO YOU, -# AND WILL NOT DEFEND, INDEMNIFY, NOR HOLD YOU HARMLESS FOR ANY CLAIMS -# ARISING FROM OR RELATED TO THE CODE; AND (D)WITH RESPECT TO YOUR EXERCISE -# OF ANY RIGHTS GRANTED TO YOU FOR THE CODE, CLOUDERA IS NOT LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, PUNITIVE OR -# CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO, DAMAGES -# RELATED TO LOST REVENUE, LOST PROFITS, LOSS OF INCOME, LOSS OF -# BUSINESS ADVANTAGE OR UNAVAILABILITY, OR LOSS OR CORRUPTION OF -# DATA. -# -# ########################################################################### - -from apps.data_utils import DATA_PACKET -from src.style_transfer import StyleTransfer -from src.style_classification import StyleIntensityClassifier -from src.content_preservation import ContentPreservationScorer - - -def load_and_cache_HF_models(style_data_packet): - """ - This utility function is used to download and cache models needed for all style - attributes in `apps.data_utils.DATA_PACKET` - - Args: - style_data_packet (dict) - """ - - for style_data in style_data_packet.keys(): - try: - st = StyleTransfer(model_identifier=style_data.seq2seq_model_path) - sic = StyleIntensityClassifier(style_data.cls_model_path) - cps = ContentPreservationScorer( - cls_model_identifier=style_data.cls_model_path, - sbert_model_identifier=style_data.sbert_model_path, - ) - - del st, sic, cps - except Exception as e: - print(e) - -if __name__=="__main__": - load_and_cache_HF_models(DATA_PACKET) \ No newline at end of file diff --git a/spaces/changlisheng/shangChat/modules/presets.py b/spaces/changlisheng/shangChat/modules/presets.py deleted file mode 100644 index a6e601700ba70e4e2167345be8540cca78797b00..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/modules/presets.py +++ /dev/null @@ -1,198 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr -from pathlib import Path - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." 
-API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 -no_input_msg = "请输入对话内容。" # 未输入对话内容 - -timeout_streaming = 30 # 流式对话时的超时时间 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

          川虎ChatGPT 🚀

          """ -description = """\ -
          - -Developed by Bilibili users [土川虎虎虎](https://space.bilibili.com/29125536) and [明昭MZhao](https://space.bilibili.com/24807452) - -Visit the 川虎ChatGPT [GitHub project](https://github.com/GaiZhenbiao/ChuanhuChatGPT) to download the latest version of the script - -This app uses the `gpt-3.5-turbo` large language model -
          -""" - -footer = """\ -
          {versions}
          -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - -MODEL_SOFT_TOKEN_LIMIT = { - "gpt-3.5-turbo": { - "streaming": 3500, - "all": 3500 - }, - "gpt-3.5-turbo-0301": { - "streaming": 3500, - "all": 3500 - }, - "gpt-4": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-0314": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-32k": { - "streaming": 31000, - "all": 31000 - }, - "gpt-4-32k-0314": { - "streaming": 31000, - "all": 31000 - } -} - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/chansung/llama2-with-gradio-chat/styles.py b/spaces/chansung/llama2-with-gradio-chat/styles.py deleted file mode 100644 index 5971a234203ca3e07f96a02d31b028671d0df337..0000000000000000000000000000000000000000 --- a/spaces/chansung/llama2-with-gradio-chat/styles.py +++ /dev/null @@ -1,389 +0,0 @@ -PARENT_BLOCK_CSS = """ -#col-container { - width: 95%; - height: 100%; - margin-left: auto; - margin-right: auto; -} - -#chatbot { - height: 800px; - overflow: auto; -} - -#chatbot > .wrap { - max-height: 780px; -} -""" - -MODEL_SELECTION_CSS = """ - -#internet_option_radio { - text-align: center !important; - padding: 0px !important; - background-color: snow; - - @media (prefers-color-scheme: dark) { - background-color: #0b0f19; - } -} - -#internet_option_radio > .wrap:nth-child(3) { - display: block !important; -} - -.template-txt { - text-align: center; - font-size: 15pt !important; -} - -.message { - margin: 0px !important; -} - -.load-mode-selector:nth-child(3) { - margin: auto !important; - text-align: center !important; - width: fit-content !important; -} - -code { - white-space: break-spaces !important; -} - -.progress-view { - background: transparent !important; - border-radius: 25px !important; -} - -#landing-container { - width: 85%; - margin: auto; -} - -.landing-btn { - font-size: 2.3vw !important; - margin-top: 25px !important; - border-radius: 25px !important; - height: 120px !important; - - @media screen and (max-width: 1000px) { - font-size: 20px !important; - } -} - -#landing-bottom { - margin-top: 20px !important; -} - -.custom-btn { - border: none !important; - background: none !important; - box-shadow: none !important; - display: block !important; - text-align: left !important; -} -.custom-btn:hover { - background: rgb(243 244 246) !important; -} - -.custom-btn-highlight { - border: none !important; - background: rgb(243 244 246) 
!important; - box-shadow: none !important; - display: block !important; - text-align: left !important; - - @media (prefers-color-scheme: dark) { - background-color: rgba(17,24,39,255) !important; - } -} - -#prompt-txt > label > span { - display: none !important; -} -#prompt-txt > label > textarea { - border: transparent; - border-radius: 20px; -} -#chatbot { - height: 800px !important; - overflow: auto; - box-shadow: none !important; - border: none !important; -} -#chatbot > .wrap { - max-height: 780px !important; -} -#chatbot + div { - border-radius: 35px !important; - width: 80% !important; - margin: auto !important; -} - -#left-pane { - background-color: transparent; - border-radius: 15px; - padding: 10px; - - @media (prefers-color-scheme: dark) { - background-color: rgba(31,41,55,255) !important; - } -} - -#left-top { - padding-left: 10px; - padding-right: 10px; - text-align: center; - font-weight: bold; - font-size: large; -} - -#chat-history-accordion { - background: transparent; - border: 0.8px !important; -} - -#right-pane { - margin-left: 20px; - margin-right: 20px; - background: transparent; - border-radius: 20px; - - @media (prefers-color-scheme: dark) { - background-color: rgba(31,41,55,255) !important; - } - - @media screen and (max-width: 1000px) { - margin: 0px !important; - } -} - -#initial-popup { - z-index: 100; - position: absolute; - width: 50%; - top: 50%; - height: 50%; - left: 50%; - transform: translate(-50%, -50%); - border-radius: 35px; - padding: 15px; -} - -#initial-popup-title { - text-align: center; - font-size: 18px; - font-weight: bold; -} - -#initial-popup-left-pane { - min-width: 150px !important; -} - -#initial-popup-right-pane { - text-align: right; -} - -.example-btn { - padding-top: 20px !important; - padding-bottom: 20px !important; - padding-left: 5px !important; - padding-right: 5px !important; - background: linear-gradient(to bottom right, #f7faff, #ffffff) !important; - box-shadow: none !important; - border-radius: 20px !important; - - @media (prefers-color-scheme: dark) { - background: rgba(70,79,86,255) !important; - } -} - -.example-btn:hover { - box-shadow: 0.3px 0.3px 0.3px gray !important; - - @media (prefers-color-scheme: dark) { - background: rgba(34,37,42,255) !important; - } -} - -.example-btn:active { - @media (prefers-color-scheme: dark) { - background: rgba(70,79,86,255) !important; - } -} - -#example-title { - margin-bottom: 15px; -} - -#aux-btns-popup { - z-index: 200; - position: absolute !important; - bottom: 75px !important; - right: 40px !important; -} - -#aux-btns-popup > div { - flex-wrap: nowrap; - width: fit-content; - margin: auto; -} - -.aux-btn { - height: 30px !important; - flex-wrap: initial !important; - flex: none !important; - min-width: min(100px,100%) !important; - font-weight: unset !important; - font-size: 10pt !important; - - background: linear-gradient(to bottom right, #f7faff, #ffffff) !important; - box-shadow: none !important; - border-radius: 20px !important; - - opacity: 0.5; - border-width: 0.5px; - border-color: grey; - - color: red !important; - - @media (prefers-color-scheme: dark) { - opacity: 0.2 !important; - color: black !important; - } -} - -.aux-btn:hover { - opacity: 1.0; - box-shadow: 0.3px 0.3px 0.3px gray !important; - - @media (prefers-color-scheme: dark) { - opacity: 1.0 !important; - box-shadow: 0.3px 0.3px 0.3px gray !important; - } -} - -#aux-viewer { - position: absolute !important; - border-style: solid !important; - overflow: visible !important; - border: none !important; - box-shadow: 
none !important; - z-index: 1000 !important; - opacity: 0.0 !important; - width: 75% !important; - right: 1px !important; - transition: all 0.5s; -} - -#aux-viewer:hover { - opacity: 1.0 !important; - box-shadow: 0px 0.5px 0px 0px gray !important; -} - -#aux-viewer > .label-wrap { - justify-content: end; -} - -#aux-viewer > .label-wrap > span { - margin-right: 10px; -} - -#aux-viewer-inspector { - padding: 0px; -} - -#aux-viewer-inspector > label > span { - display: none !important; -} - -#aux-viewer-inspector > label > textarea { - box-shadow: none; - border-color: transparent; -} - -#global-context > label > span { - display: none !important; -} - -#chat-back-btn { - background: transparent !important; -} - -#chat-back-btn:hover { - @media (prefers-color-scheme: dark) { - background: rgb(75,85,99) !important; - } -} - -#chat-back-btn:active { - @media (prefers-color-scheme: dark) { - background: transparent !important; - } -} - -#col-container { - max-width: 70%; - height: 100%; - margin-left: auto; - margin-right: auto; -} - - -#container { - max-width: 70%; - margin: auto; -} - -#container2 { - max-width: 60%; - margin: auto; -} - -#container3 { - max-width: 60%; - margin: auto; -} - -.square { - height: 100px; - - @media (prefers-color-scheme: dark) { - background-color: rgba(70,79,86,255) !important; - } -} - -.square:hover { - @media (prefers-color-scheme: dark) { - background-color: rgba(34,37,42,255) !important; - } -} - -.square:active { - @media (prefers-color-scheme: dark) { - background-color: rgba(70,79,86,255) !important; - } -} - -.placeholders { - min-width: max-content !important; -} - -.placeholders > button { - border-color: transparent !important; - background-color: transparent !important; - box-shadow: none !important; - cursor: default !important; -} - -.center { - text-align: center; -} - -.sub-container > div { - min-width: max-content !important; -} - -.normal_ -""" diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/README.md deleted file mode 100644 index 2197ffe9a348d20f541d0e664363e07dfaf425ac..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# YOLOX-Android-ncnn - -Andoird app of YOLOX object detection base on [ncnn](https://github.com/Tencent/ncnn) - - -## Tutorial - -### Step1 - -Download ncnn-android-vulkan.zip from [releases of ncnn](https://github.com/Tencent/ncnn/releases). This repo uses -[20210525 release](https://github.com/Tencent/ncnn/releases/download/20210525/ncnn-20210525-android-vulkan.zip) for building. - -### Step2 - -After downloading, please extract your zip file. Then, there are two ways to finish this step: -* put your extracted directory into **app/src/main/jni** -* change the **ncnn_DIR** path in **app/src/main/jni/CMakeLists.txt** to your extracted directory - -### Step3 -Download example param and bin file from [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ESXBH_GSSmFMszWJ6YG2VkQB5cWDfqVWXgk0D996jH0rpQ?e=qzEqUh) or [github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s_ncnn.tar.gz). Unzip the file to **app/src/main/assets**. - -### Step4 -Open this project with Android Studio, build it and enjoy! 
- -## Reference - -* [ncnn-android-yolov5](https://github.com/nihui/ncnn-android-yolov5) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag_ray_end2end.sh b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag_ray_end2end.sh deleted file mode 100644 index cef1a264c935ca4d4af4f85907cb7dbda6e4e9f4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag_ray_end2end.sh +++ /dev/null @@ -1,68 +0,0 @@ -# Sample script to finetune RAG using Ray for distributed retrieval. - -# Add parent directory to python path to access lightning_base.py -export PYTHONPATH="../":"${PYTHONPATH}" - -#creates the custom knowlegebase -python use_own_knowledge_dataset.py \ - --csv_path /DIR/SQUAD-KB/squad-kb.csv \ - --output_dir /DIR/SQUAD-KB - -# Start a single-node Ray cluster. -ray start --head - -# A sample finetuning run, you need to specify data_dir, output_dir and model_name_or_path -# run ./examples/rag/finetune_rag_ray.sh --help to see all the possible options - - - -python finetune_rag.py \ - --data_dir /DIR/squad-training-data \ - --output_dir /DIR/model_checkpoints \ - --model_name_or_path facebook/rag-token-base \ - --model_type rag_token \ - --fp16 \ - --gpus 2 \ - --profile \ - --do_train \ - --end2end \ - --do_predict \ - --n_val -1 \ - --train_batch_size 4 \ - --eval_batch_size 1 \ - --max_source_length 128 \ - --max_target_length 25 \ - --val_max_target_length 25 \ - --test_max_target_length 25 \ - --label_smoothing 0.1 \ - --dropout 0.1 \ - --attention_dropout 0.1 \ - --weight_decay 0.001 \ - --adam_epsilon 1e-08 \ - --max_grad_norm 0.1 \ - --lr_scheduler polynomial \ - --learning_rate 3e-05 \ - --num_train_epochs 10 \ - --warmup_steps 500 \ - --gradient_accumulation_steps 8 \ - --distributed_retriever ray \ - --num_retrieval_workers 4 \ - --passages_path /DIR/SQUAD-KB/my_knowledge_dataset \ - --index_path /DIR/SQUAD-KB/my_knowledge_dataset_hnsw_index.faiss \ - --index_name custom \ - --context_encoder_name facebook/dpr-ctx_encoder-multiset-base \ - --csv_path /DIR/SQUAD-KB/squad-kb.csv \ - --index_gpus 1 \ - --gpu_order [5,6,7,8,9,0,1,2,3,4] \ - --shard_dir ./test_dir/kb-shards \ - --indexing_freq 500 - - - -# Stop the Ray cluster. -ray stop - - -#this script was used to test the SQuAD data. -#change the dir paramater acording to your prefernece. -#please use the same device ordere when running CUDA_VISIBLE_DEVICES=5,6,7,8,9,0,1,2,3,4 sh finetune_rag_ray_end2end.sh \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/notebooks/README.md b/spaces/chendl/compositional_test/transformers/notebooks/README.md deleted file mode 100644 index 97f804eb6d935b038f1e2513ee046330368c2ef1..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/notebooks/README.md +++ /dev/null @@ -1,146 +0,0 @@ - - -# 🤗 Transformers Notebooks - -You can find here a list of the official notebooks provided by Hugging Face. - -Also, we would like to list here interesting content created by the community. -If you wrote some notebook(s) leveraging 🤗 Transformers and would like be listed here, please open a -Pull Request so it can be included under the Community notebooks. 
-
-
-## Hugging Face's notebooks 🤗
-
-### Documentation notebooks
-
-You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages), but they are also listed here if you need them:
-
-| Notebook | Description | | |
-|:----------|:-------------|:-------------|------:|
-| [Quicktour of the library](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb) | A presentation of the various APIs in Transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb)|
-| [Summary of the tasks](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb) | How to run the models of the Transformers library task by task |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)|
-| [Preprocessing data](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb) | How to use a tokenizer to preprocess your data |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)|
-| [Fine-tuning a pretrained model](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb) | How to use the Trainer to fine-tune a pretrained model |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)|
-| [Summary of the tokenizers](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb) | The differences between the tokenizer algorithms |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)|
-| [Multilingual models](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb) | How to use the multilingual models of the library |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| [![Open in AWS 
Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| - - -### PyTorch Examples - -#### Natural Language Processing[[pytorch-nlp]] - -| Notebook | Description | | | -|:----------|:-------------|:-------------|------:| -| [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| -| [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb) | How to easily start using transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| -| [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| -| [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| -| [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| -| [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| -| [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| -| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| -| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| -| [How to train a language model from scratch](https://github.com/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| Highlight all the steps to effectively train Transformer model on custom data | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| -| [How to generate text](https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| How to use different decoding methods for language generation with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| -| [How to generate text (with constraints)](https://github.com/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| How to guide language generation with user-provided constraints | [![Open in 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| -| [Reformer](https://github.com/huggingface/blog/blob/main/notebooks/03_reformer.ipynb)| How Reformer pushes the limits of language modeling | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| - -#### Computer Vision[[pytorch-cv]] - -| Notebook | Description | | | -|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------:| -| [How to fine-tune a model on image classification (Torchvision)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)| -| [How to fine-tune a model on image classification (Albumentations)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)| -| [How to fine-tune a model on image classification (Kornia)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)| -| [How to perform zero-shot object detection with 
OWL-ViT](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) | Show how to perform zero-shot object detection on images with text queries | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| -| [How to fine-tune an image captioning model](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | Show how to fine-tune BLIP for image captioning on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb)| -| [How to build an image similarity system with Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | Show how to build an image similarity system | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb)| -| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb)| -| [How to fine-tune a VideoMAE model on video classification](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb)| - -#### Audio[[pytorch-audio]] - -| Notebook | Description | | | -|:----------|:-------------|:-------------|------:| -| [How to fine-tune a speech recognition model in English](https://github.com/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| 
[![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| -| [How to fine-tune a speech recognition model in any language](https://github.com/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| -| [How to fine-tune a model on audio classification](https://github.com/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| - -#### Other modalities[[pytorch-other]] - -| Notebook | Description | | | -|:----------|:----------------------------------------------------------------------------------------|:-------------|------:| -| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | -| [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | See how to go from protein sequence to a full protein model and PDB file | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | -| [Probabilistic Time Series Forecasting](https://github.com/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | See how to train Time Series Transformer on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | - -#### Utility notebooks[[pytorch-utility]] - -| Notebook | Description | | | 
-|:----------|:-------------|:-------------|------:| -| [How to export model to ONNX](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)| Highlight how to export and run inference workloads through ONNX | -| [How to use Benchmarks](https://github.com/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| How to benchmark models with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| - -### TensorFlow Examples - -#### Natural Language Processing[[tensorflow-nlp]] - -| Notebook | Description | | | -|:----------|:-------------|:-------------|------:| -| [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| -| [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb) | How to easily start using transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| -| [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| -| [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| -| [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| -| [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| -| [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| -| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| -| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| - -#### Computer Vision[[tensorflow-cv]] - -| Notebook | Description | | | -|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|:-------------|------:| -| [How to fine-tune a model on image classification](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| -| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| - -#### Other modalities[[tensorflow-other]] - -| Notebook | Description | | | -|:----------|:-------------|:-------------|------:| -| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | - -#### Utility notebooks[[tensorflow-utility]] - -| Notebook | Description | | | -|:----------|:-------------|:-------------|------:| -| [How to train TF/Keras models on TPU](https://github.com/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | See how to train at high speed on Google's TPU hardware | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | - -### Optimum notebooks - -🤗 [Optimum](https://github.com/huggingface/optimum) is an extension of 🤗 
Transformers, providing a set of performance optimization tools to train and run models with maximum efficiency on targeted hardware.
-
-| Notebook | Description | | |
-|:----------|:-------------|:-------------|------:|
-| [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)|
-| [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| Show how to apply static quantization, dynamic quantization and quantization-aware training on a model using [Intel Neural Compressor (INC)](https://github.com/intel/neural-compressor) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)|
-| [How to fine-tune a model on text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| Show how to preprocess the data and fine-tune a model on any GLUE task using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)|
-| [How to fine-tune a model on summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| Show how to preprocess the data and fine-tune a model on XSUM using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)|
-
-## Community notebooks
-
-More notebooks developed by the community are available [here](https://hf.co/docs/transformers/community#community-notebooks).
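-
-Running the Optimum notebooks locally also requires the matching extras of the `optimum` package. A minimal sketch, assuming a standard pip environment; the extra names follow the Optimum documentation and should be treated as assumptions rather than part of this list:
-
-```bash
-# ONNX Runtime-based quantization and training notebooks
-pip install "optimum[onnxruntime]"
-# Intel Neural Compressor quantization notebook
-pip install "optimum[neural-compressor]"
-```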
diff --git a/spaces/chendl/compositional_test/transformers/scripts/check_tokenizers.py b/spaces/chendl/compositional_test/transformers/scripts/check_tokenizers.py deleted file mode 100644 index cfd0a7f3a1defc9d2a5cbed51b6b95d326c7a2b8..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/scripts/check_tokenizers.py +++ /dev/null @@ -1,169 +0,0 @@ -from collections import Counter -import datasets -import transformers -from transformers.convert_slow_tokenizer import SLOW_TO_FAST_CONVERTERS - -from transformers.utils import logging - -logging.set_verbosity_info() - -TOKENIZER_CLASSES = { - name: (getattr(transformers, name), getattr(transformers, name + "Fast")) for name in SLOW_TO_FAST_CONVERTERS -} - -dataset = datasets.load_dataset("xnli", split="test+validation") - -total = 0 -perfect = 0 -imperfect = 0 -wrong = 0 - - -def check_diff(spm_diff, tok_diff, slow, fast): - if spm_diff == list(reversed(tok_diff)): - # AAA -> AA+A vs A+AA case. - return True - elif len(spm_diff) == len(tok_diff) and fast.decode(spm_diff) == fast.decode(tok_diff): - # Second order OK - # Barrich -> Barr + ich vs Bar + rich - return True - spm_reencoded = slow.encode(slow.decode(spm_diff)) - tok_reencoded = fast.encode(fast.decode(spm_diff)) - if spm_reencoded != spm_diff and spm_reencoded == tok_reencoded: - # Type 3 error. - # Snehagatha -> - # Sne, h, aga, th, a - # Sne, ha, gat, ha - # Encoding the wrong with sp does not even recover what spm gave us - # It fits tokenizer however... - return True - return False - - -def check_LTR_mark(line, idx, fast): - enc = fast.encode_plus(line)[0] - offsets = enc.offsets - curr, prev = offsets[idx], offsets[idx - 1] - if curr is not None and line[curr[0] : curr[1]] == "\u200f": - return True - if prev is not None and line[prev[0] : prev[1]] == "\u200f": - return True - - -def check_details(line, spm_ids, tok_ids, slow, fast): - # Encoding can be the same with same result AAA -> A + AA vs AA + A - # We can check that we use at least exactly the same number of tokens. 
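-    # The two loops below strip the longest common prefix and suffix of the two
-    # id sequences, so only the divergent span [first:last) is analyzed further.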
-    for i, (spm_id, tok_id) in enumerate(zip(spm_ids, tok_ids)):
-        if spm_id != tok_id:
-            break
-    first = i
-    for i, (spm_id, tok_id) in enumerate(zip(reversed(spm_ids), reversed(tok_ids))):
-        if spm_id != tok_id:
-            break
-    last = len(spm_ids) - i
-
-    spm_diff = spm_ids[first:last]
-    tok_diff = tok_ids[first:last]
-
-    if check_diff(spm_diff, tok_diff, slow, fast):
-        return True
-
-    if check_LTR_mark(line, first, fast):
-        return True
-
-    if last - first > 5:
-        # We might have two separate problems; attempt to subdivide the disjoint tokens into smaller problems
-        spms = Counter(spm_ids[first:last])
-        toks = Counter(tok_ids[first:last])
-
-        removable_tokens = {spm_ for (spm_, si) in spms.items() if toks.get(spm_, 0) == si}
-        min_width = 3
-        for i in range(last - first - min_width):
-            if all(spm_ids[first + i + j] in removable_tokens for j in range(min_width)):
-                possible_matches = [
-                    k
-                    for k in range(last - first - min_width)
-                    if tok_ids[first + k : first + k + min_width] == spm_ids[first + i : first + i + min_width]
-                ]
-                for j in possible_matches:
-                    if check_diff(spm_ids[first : first + i], tok_ids[first : first + j], slow, fast) and check_details(
-                        line,
-                        spm_ids[first + i : last],
-                        tok_ids[first + j : last],
-                        slow,
-                        fast,
-                    ):
-                        return True
-
-    print(f"Spm: {[fast.decode([spm_ids[i]]) for i in range(first, last)]}")
-    try:
-        print(f"Tok: {[fast.decode([tok_ids[i]]) for i in range(first, last)]}")
-    except Exception:
-        pass
-
-    ok_start = fast.decode(spm_ids[:first])
-    ok_end = fast.decode(spm_ids[last:])
-    wrong = fast.decode(spm_ids[first:last])
-    print()
-    print(wrong)
-    return False
-
-
-def test_string(slow, fast, text):
-    global perfect
-    global imperfect
-    global wrong
-    global total
-
-    slow_ids = slow.encode(text)
-    fast_ids = fast.encode(text)
-
-    skip_assert = False
-    total += 1
-
-    if slow_ids != fast_ids:
-        if check_details(text, slow_ids, fast_ids, slow, fast):
-            skip_assert = True
-            imperfect += 1
-        else:
-            wrong += 1
-    else:
-        perfect += 1
-
-    if total % 10000 == 0:
-        print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})")
-
-    if skip_assert:
-        return
-
-    assert (
-        slow_ids == fast_ids
-    ), f"line {text} : \n\n{slow_ids}\n{fast_ids}\n\n{slow.tokenize(text)}\n{fast.tokenize(text)}"
-
-
-def test_tokenizer(slow, fast):
-    for i in range(len(dataset)):
-        # premise, all languages
-        for text in dataset[i]["premise"].values():
-            test_string(slow, fast, text)
-
-        # hypothesis, all languages
-        for text in dataset[i]["hypothesis"]["translation"]:
-            test_string(slow, fast, text)
-
-
-if __name__ == "__main__":
-    for name, (slow_class, fast_class) in TOKENIZER_CLASSES.items():
-        checkpoint_names = list(slow_class.max_model_input_sizes.keys())
-        for checkpoint in checkpoint_names:
-            imperfect = 0
-            perfect = 0
-            wrong = 0
-            total = 0
-
-            print(f"========================== Checking {name}: {checkpoint} ==========================")
-            slow = slow_class.from_pretrained(checkpoint, force_download=True)
-            fast = fast_class.from_pretrained(checkpoint, force_download=True)
-            test_tokenizer(slow, fast)
-            print(f"Accuracy {perfect * 100 / total:.2f}")
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/client.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/client.py
deleted file mode 100644
index 0d0f4c16c0cfa3751343e2ee60104e3e1a3db04c..0000000000000000000000000000000000000000
--- 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/client.py +++ /dev/null @@ -1,1305 +0,0 @@ -"""HTTP Client for asyncio.""" - -import asyncio -import base64 -import hashlib -import json -import os -import sys -import traceback -import warnings -from contextlib import suppress -from types import SimpleNamespace, TracebackType -from typing import ( - Any, - Awaitable, - Callable, - Coroutine, - FrozenSet, - Generator, - Generic, - Iterable, - List, - Mapping, - Optional, - Set, - Tuple, - Type, - TypeVar, - Union, -) - -import attr -from multidict import CIMultiDict, MultiDict, MultiDictProxy, istr -from yarl import URL - -from . import hdrs, http, payload -from .abc import AbstractCookieJar -from .client_exceptions import ( - ClientConnectionError as ClientConnectionError, - ClientConnectorCertificateError as ClientConnectorCertificateError, - ClientConnectorError as ClientConnectorError, - ClientConnectorSSLError as ClientConnectorSSLError, - ClientError as ClientError, - ClientHttpProxyError as ClientHttpProxyError, - ClientOSError as ClientOSError, - ClientPayloadError as ClientPayloadError, - ClientProxyConnectionError as ClientProxyConnectionError, - ClientResponseError as ClientResponseError, - ClientSSLError as ClientSSLError, - ContentTypeError as ContentTypeError, - InvalidURL as InvalidURL, - ServerConnectionError as ServerConnectionError, - ServerDisconnectedError as ServerDisconnectedError, - ServerFingerprintMismatch as ServerFingerprintMismatch, - ServerTimeoutError as ServerTimeoutError, - TooManyRedirects as TooManyRedirects, - WSServerHandshakeError as WSServerHandshakeError, -) -from .client_reqrep import ( - ClientRequest as ClientRequest, - ClientResponse as ClientResponse, - Fingerprint as Fingerprint, - RequestInfo as RequestInfo, - _merge_ssl_params, -) -from .client_ws import ClientWebSocketResponse as ClientWebSocketResponse -from .connector import ( - BaseConnector as BaseConnector, - NamedPipeConnector as NamedPipeConnector, - TCPConnector as TCPConnector, - UnixConnector as UnixConnector, -) -from .cookiejar import CookieJar -from .helpers import ( - DEBUG, - PY_36, - BasicAuth, - TimeoutHandle, - ceil_timeout, - get_env_proxy_for_url, - get_running_loop, - sentinel, - strip_auth_from_url, -) -from .http import WS_KEY, HttpVersion, WebSocketReader, WebSocketWriter -from .http_websocket import WSHandshakeError, WSMessage, ws_ext_gen, ws_ext_parse -from .streams import FlowControlDataQueue -from .tracing import Trace, TraceConfig -from .typedefs import Final, JSONEncoder, LooseCookies, LooseHeaders, StrOrURL - -__all__ = ( - # client_exceptions - "ClientConnectionError", - "ClientConnectorCertificateError", - "ClientConnectorError", - "ClientConnectorSSLError", - "ClientError", - "ClientHttpProxyError", - "ClientOSError", - "ClientPayloadError", - "ClientProxyConnectionError", - "ClientResponseError", - "ClientSSLError", - "ContentTypeError", - "InvalidURL", - "ServerConnectionError", - "ServerDisconnectedError", - "ServerFingerprintMismatch", - "ServerTimeoutError", - "TooManyRedirects", - "WSServerHandshakeError", - # client_reqrep - "ClientRequest", - "ClientResponse", - "Fingerprint", - "RequestInfo", - # connector - "BaseConnector", - "TCPConnector", - "UnixConnector", - "NamedPipeConnector", - # client_ws - "ClientWebSocketResponse", - # client - "ClientSession", - "ClientTimeout", - "request", -) - - -try: - from ssl import SSLContext -except ImportError: # pragma: no cover - SSLContext = object # type: 
ignore[misc,assignment] - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ClientTimeout: - total: Optional[float] = None - connect: Optional[float] = None - sock_read: Optional[float] = None - sock_connect: Optional[float] = None - - # pool_queue_timeout: Optional[float] = None - # dns_resolution_timeout: Optional[float] = None - # socket_connect_timeout: Optional[float] = None - # connection_acquiring_timeout: Optional[float] = None - # new_connection_timeout: Optional[float] = None - # http_header_timeout: Optional[float] = None - # response_body_timeout: Optional[float] = None - - # to create a timeout specific for a single request, either - # - create a completely new one to overwrite the default - # - or use http://www.attrs.org/en/stable/api.html#attr.evolve - # to overwrite the defaults - - -# 5 Minute default read timeout -DEFAULT_TIMEOUT: Final[ClientTimeout] = ClientTimeout(total=5 * 60) - -_RetType = TypeVar("_RetType") - - -class ClientSession: - """First-class interface for making HTTP requests.""" - - ATTRS = frozenset( - [ - "_base_url", - "_source_traceback", - "_connector", - "requote_redirect_url", - "_loop", - "_cookie_jar", - "_connector_owner", - "_default_auth", - "_version", - "_json_serialize", - "_requote_redirect_url", - "_timeout", - "_raise_for_status", - "_auto_decompress", - "_trust_env", - "_default_headers", - "_skip_auto_headers", - "_request_class", - "_response_class", - "_ws_response_class", - "_trace_configs", - "_read_bufsize", - ] - ) - - _source_traceback = None # type: Optional[traceback.StackSummary] - _connector = None # type: Optional[BaseConnector] - - def __init__( - self, - base_url: Optional[StrOrURL] = None, - *, - connector: Optional[BaseConnector] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - cookies: Optional[LooseCookies] = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - json_serialize: JSONEncoder = json.dumps, - request_class: Type[ClientRequest] = ClientRequest, - response_class: Type[ClientResponse] = ClientResponse, - ws_response_class: Type[ClientWebSocketResponse] = ClientWebSocketResponse, - version: HttpVersion = http.HttpVersion11, - cookie_jar: Optional[AbstractCookieJar] = None, - connector_owner: bool = True, - raise_for_status: bool = False, - read_timeout: Union[float, object] = sentinel, - conn_timeout: Optional[float] = None, - timeout: Union[object, ClientTimeout] = sentinel, - auto_decompress: bool = True, - trust_env: bool = False, - requote_redirect_url: bool = True, - trace_configs: Optional[List[TraceConfig]] = None, - read_bufsize: int = 2**16, - ) -> None: - if loop is None: - if connector is not None: - loop = connector._loop - - loop = get_running_loop(loop) - - if base_url is None or isinstance(base_url, URL): - self._base_url: Optional[URL] = base_url - else: - self._base_url = URL(base_url) - assert ( - self._base_url.origin() == self._base_url - ), "Only absolute URLs without path part are supported" - - if connector is None: - connector = TCPConnector(loop=loop) - - if connector._loop is not loop: - raise RuntimeError("Session and connector has to use same event loop") - - self._loop = loop - - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - if cookie_jar is None: - cookie_jar = CookieJar(loop=loop) - self._cookie_jar = cookie_jar - - if cookies is not None: - self._cookie_jar.update_cookies(cookies) - - self._connector = connector - 
self._connector_owner = connector_owner - self._default_auth = auth - self._version = version - self._json_serialize = json_serialize - if timeout is sentinel: - self._timeout = DEFAULT_TIMEOUT - if read_timeout is not sentinel: - warnings.warn( - "read_timeout is deprecated, " "use timeout argument instead", - DeprecationWarning, - stacklevel=2, - ) - self._timeout = attr.evolve(self._timeout, total=read_timeout) - if conn_timeout is not None: - self._timeout = attr.evolve(self._timeout, connect=conn_timeout) - warnings.warn( - "conn_timeout is deprecated, " "use timeout argument instead", - DeprecationWarning, - stacklevel=2, - ) - else: - self._timeout = timeout # type: ignore[assignment] - if read_timeout is not sentinel: - raise ValueError( - "read_timeout and timeout parameters " - "conflict, please setup " - "timeout.read" - ) - if conn_timeout is not None: - raise ValueError( - "conn_timeout and timeout parameters " - "conflict, please setup " - "timeout.connect" - ) - self._raise_for_status = raise_for_status - self._auto_decompress = auto_decompress - self._trust_env = trust_env - self._requote_redirect_url = requote_redirect_url - self._read_bufsize = read_bufsize - - # Convert to list of tuples - if headers: - real_headers: CIMultiDict[str] = CIMultiDict(headers) - else: - real_headers = CIMultiDict() - self._default_headers: CIMultiDict[str] = real_headers - if skip_auto_headers is not None: - self._skip_auto_headers = frozenset(istr(i) for i in skip_auto_headers) - else: - self._skip_auto_headers = frozenset() - - self._request_class = request_class - self._response_class = response_class - self._ws_response_class = ws_response_class - - self._trace_configs = trace_configs or [] - for trace_config in self._trace_configs: - trace_config.freeze() - - def __init_subclass__(cls: Type["ClientSession"]) -> None: - warnings.warn( - "Inheritance class {} from ClientSession " - "is discouraged".format(cls.__name__), - DeprecationWarning, - stacklevel=2, - ) - - if DEBUG: - - def __setattr__(self, name: str, val: Any) -> None: - if name not in self.ATTRS: - warnings.warn( - "Setting custom ClientSession.{} attribute " - "is discouraged".format(name), - DeprecationWarning, - stacklevel=2, - ) - super().__setattr__(name, val) - - def __del__(self, _warnings: Any = warnings) -> None: - if not self.closed: - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn( - f"Unclosed client session {self!r}", ResourceWarning, **kwargs - ) - context = {"client_session": self, "message": "Unclosed client session"} - if self._source_traceback is not None: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - def request( - self, method: str, url: StrOrURL, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP request.""" - return _RequestContextManager(self._request(method, url, **kwargs)) - - def _build_url(self, str_or_url: StrOrURL) -> URL: - url = URL(str_or_url) - if self._base_url is None: - return url - else: - assert not url.is_absolute() and url.path.startswith("/") - return self._base_url.join(url) - - async def _request( - self, - method: str, - str_or_url: StrOrURL, - *, - params: Optional[Mapping[str, str]] = None, - data: Any = None, - json: Any = None, - cookies: Optional[LooseCookies] = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - allow_redirects: bool = True, - max_redirects: int = 10, - compress: 
Optional[str] = None, - chunked: Optional[bool] = None, - expect100: bool = False, - raise_for_status: Optional[bool] = None, - read_until_eof: bool = True, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - timeout: Union[ClientTimeout, object] = sentinel, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - ssl: Optional[Union[SSLContext, bool, Fingerprint]] = None, - proxy_headers: Optional[LooseHeaders] = None, - trace_request_ctx: Optional[SimpleNamespace] = None, - read_bufsize: Optional[int] = None, - ) -> ClientResponse: - - # NOTE: timeout clamps existing connect and read timeouts. We cannot - # set the default to None because we need to detect if the user wants - # to use the existing timeouts by setting timeout to None. - - if self.closed: - raise RuntimeError("Session is closed") - - ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - - if data is not None and json is not None: - raise ValueError( - "data and json parameters can not be used at the same time" - ) - elif json is not None: - data = payload.JsonPayload(json, dumps=self._json_serialize) - - if not isinstance(chunked, bool) and chunked is not None: - warnings.warn("Chunk size is deprecated #1615", DeprecationWarning) - - redirects = 0 - history = [] - version = self._version - - # Merge with default headers and transform to CIMultiDict - headers = self._prepare_headers(headers) - proxy_headers = self._prepare_headers(proxy_headers) - - try: - url = self._build_url(str_or_url) - except ValueError as e: - raise InvalidURL(str_or_url) from e - - skip_headers = set(self._skip_auto_headers) - if skip_auto_headers is not None: - for i in skip_auto_headers: - skip_headers.add(istr(i)) - - if proxy is not None: - try: - proxy = URL(proxy) - except ValueError as e: - raise InvalidURL(proxy) from e - - if timeout is sentinel: - real_timeout: ClientTimeout = self._timeout - else: - if not isinstance(timeout, ClientTimeout): - real_timeout = ClientTimeout(total=timeout) # type: ignore[arg-type] - else: - real_timeout = timeout - # timeout is cumulative for all request operations - # (request, redirects, responses, data consuming) - tm = TimeoutHandle(self._loop, real_timeout.total) - handle = tm.start() - - if read_bufsize is None: - read_bufsize = self._read_bufsize - - traces = [ - Trace( - self, - trace_config, - trace_config.trace_config_ctx(trace_request_ctx=trace_request_ctx), - ) - for trace_config in self._trace_configs - ] - - for trace in traces: - await trace.send_request_start(method, url.update_query(params), headers) - - timer = tm.timer() - try: - with timer: - while True: - url, auth_from_url = strip_auth_from_url(url) - if auth and auth_from_url: - raise ValueError( - "Cannot combine AUTH argument with " - "credentials encoded in URL" - ) - - if auth is None: - auth = auth_from_url - if auth is None: - auth = self._default_auth - # It would be confusing if we support explicit - # Authorization header with auth argument - if ( - headers is not None - and auth is not None - and hdrs.AUTHORIZATION in headers - ): - raise ValueError( - "Cannot combine AUTHORIZATION header " - "with AUTH argument or credentials " - "encoded in URL" - ) - - all_cookies = self._cookie_jar.filter_cookies(url) - - if cookies is not None: - tmp_cookie_jar = CookieJar() - tmp_cookie_jar.update_cookies(cookies) - req_cookies = tmp_cookie_jar.filter_cookies(url) - if req_cookies: - all_cookies.load(req_cookies) - - if proxy is 
not None: - proxy = URL(proxy) - elif self._trust_env: - with suppress(LookupError): - proxy, proxy_auth = get_env_proxy_for_url(url) - - req = self._request_class( - method, - url, - params=params, - headers=headers, - skip_auto_headers=skip_headers, - data=data, - cookies=all_cookies, - auth=auth, - version=version, - compress=compress, - chunked=chunked, - expect100=expect100, - loop=self._loop, - response_class=self._response_class, - proxy=proxy, - proxy_auth=proxy_auth, - timer=timer, - session=self, - ssl=ssl, - proxy_headers=proxy_headers, - traces=traces, - ) - - # connection timeout - try: - async with ceil_timeout(real_timeout.connect): - assert self._connector is not None - conn = await self._connector.connect( - req, traces=traces, timeout=real_timeout - ) - except asyncio.TimeoutError as exc: - raise ServerTimeoutError( - "Connection timeout " "to host {}".format(url) - ) from exc - - assert conn.transport is not None - - assert conn.protocol is not None - conn.protocol.set_response_params( - timer=timer, - skip_payload=method.upper() == "HEAD", - read_until_eof=read_until_eof, - auto_decompress=self._auto_decompress, - read_timeout=real_timeout.sock_read, - read_bufsize=read_bufsize, - ) - - try: - try: - resp = await req.send(conn) - try: - await resp.start(conn) - except BaseException: - resp.close() - raise - except BaseException: - conn.close() - raise - except ClientError: - raise - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise ClientOSError(*exc.args) from exc - - self._cookie_jar.update_cookies(resp.cookies, resp.url) - - # redirects - if resp.status in (301, 302, 303, 307, 308) and allow_redirects: - - for trace in traces: - await trace.send_request_redirect( - method, url.update_query(params), headers, resp - ) - - redirects += 1 - history.append(resp) - if max_redirects and redirects >= max_redirects: - resp.close() - raise TooManyRedirects( - history[0].request_info, tuple(history) - ) - - # For 301 and 302, mimic IE, now changed in RFC - # https://github.com/kennethreitz/requests/pull/269 - if (resp.status == 303 and resp.method != hdrs.METH_HEAD) or ( - resp.status in (301, 302) and resp.method == hdrs.METH_POST - ): - method = hdrs.METH_GET - data = None - if headers.get(hdrs.CONTENT_LENGTH): - headers.pop(hdrs.CONTENT_LENGTH) - - r_url = resp.headers.get(hdrs.LOCATION) or resp.headers.get( - hdrs.URI - ) - if r_url is None: - # see github.com/aio-libs/aiohttp/issues/2022 - break - else: - # reading from correct redirection - # response is forbidden - resp.release() - - try: - parsed_url = URL( - r_url, encoded=not self._requote_redirect_url - ) - - except ValueError as e: - raise InvalidURL(r_url) from e - - scheme = parsed_url.scheme - if scheme not in ("http", "https", ""): - resp.close() - raise ValueError("Can redirect only to http or https") - elif not scheme: - parsed_url = url.join(parsed_url) - - if url.origin() != parsed_url.origin(): - auth = None - headers.pop(hdrs.AUTHORIZATION, None) - - url = parsed_url - params = None - resp.release() - continue - - break - - # check response status - if raise_for_status is None: - raise_for_status = self._raise_for_status - if raise_for_status: - resp.raise_for_status() - - # register connection - if handle is not None: - if resp.connection is not None: - resp.connection.add_callback(handle.cancel) - else: - handle.cancel() - - resp._history = tuple(history) - - for trace in traces: - await trace.send_request_end( - method, url.update_query(params), 
headers, resp - ) - return resp - - except BaseException as e: - # cleanup timer - tm.close() - if handle: - handle.cancel() - handle = None - - for trace in traces: - await trace.send_request_exception( - method, url.update_query(params), headers, e - ) - raise - - def ws_connect( - self, - url: StrOrURL, - *, - method: str = hdrs.METH_GET, - protocols: Iterable[str] = (), - timeout: float = 10.0, - receive_timeout: Optional[float] = None, - autoclose: bool = True, - autoping: bool = True, - heartbeat: Optional[float] = None, - auth: Optional[BasicAuth] = None, - origin: Optional[str] = None, - params: Optional[Mapping[str, str]] = None, - headers: Optional[LooseHeaders] = None, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - ssl: Union[SSLContext, bool, None, Fingerprint] = None, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - proxy_headers: Optional[LooseHeaders] = None, - compress: int = 0, - max_msg_size: int = 4 * 1024 * 1024, - ) -> "_WSRequestContextManager": - """Initiate websocket connection.""" - return _WSRequestContextManager( - self._ws_connect( - url, - method=method, - protocols=protocols, - timeout=timeout, - receive_timeout=receive_timeout, - autoclose=autoclose, - autoping=autoping, - heartbeat=heartbeat, - auth=auth, - origin=origin, - params=params, - headers=headers, - proxy=proxy, - proxy_auth=proxy_auth, - ssl=ssl, - verify_ssl=verify_ssl, - fingerprint=fingerprint, - ssl_context=ssl_context, - proxy_headers=proxy_headers, - compress=compress, - max_msg_size=max_msg_size, - ) - ) - - async def _ws_connect( - self, - url: StrOrURL, - *, - method: str = hdrs.METH_GET, - protocols: Iterable[str] = (), - timeout: float = 10.0, - receive_timeout: Optional[float] = None, - autoclose: bool = True, - autoping: bool = True, - heartbeat: Optional[float] = None, - auth: Optional[BasicAuth] = None, - origin: Optional[str] = None, - params: Optional[Mapping[str, str]] = None, - headers: Optional[LooseHeaders] = None, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - ssl: Union[SSLContext, bool, None, Fingerprint] = None, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - proxy_headers: Optional[LooseHeaders] = None, - compress: int = 0, - max_msg_size: int = 4 * 1024 * 1024, - ) -> ClientWebSocketResponse: - - if headers is None: - real_headers: CIMultiDict[str] = CIMultiDict() - else: - real_headers = CIMultiDict(headers) - - default_headers = { - hdrs.UPGRADE: "websocket", - hdrs.CONNECTION: "upgrade", - hdrs.SEC_WEBSOCKET_VERSION: "13", - } - - for key, value in default_headers.items(): - real_headers.setdefault(key, value) - - sec_key = base64.b64encode(os.urandom(16)) - real_headers[hdrs.SEC_WEBSOCKET_KEY] = sec_key.decode() - - if protocols: - real_headers[hdrs.SEC_WEBSOCKET_PROTOCOL] = ",".join(protocols) - if origin is not None: - real_headers[hdrs.ORIGIN] = origin - if compress: - extstr = ws_ext_gen(compress=compress) - real_headers[hdrs.SEC_WEBSOCKET_EXTENSIONS] = extstr - - ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - - # send request - resp = await self.request( - method, - url, - params=params, - headers=real_headers, - read_until_eof=False, - auth=auth, - proxy=proxy, - proxy_auth=proxy_auth, - ssl=ssl, - proxy_headers=proxy_headers, - ) - - try: - # check handshake - if resp.status != 101: - raise WSServerHandshakeError( - 
resp.request_info, - resp.history, - message="Invalid response status", - status=resp.status, - headers=resp.headers, - ) - - if resp.headers.get(hdrs.UPGRADE, "").lower() != "websocket": - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid upgrade header", - status=resp.status, - headers=resp.headers, - ) - - if resp.headers.get(hdrs.CONNECTION, "").lower() != "upgrade": - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid connection header", - status=resp.status, - headers=resp.headers, - ) - - # key calculation - r_key = resp.headers.get(hdrs.SEC_WEBSOCKET_ACCEPT, "") - match = base64.b64encode(hashlib.sha1(sec_key + WS_KEY).digest()).decode() - if r_key != match: - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid challenge response", - status=resp.status, - headers=resp.headers, - ) - - # websocket protocol - protocol = None - if protocols and hdrs.SEC_WEBSOCKET_PROTOCOL in resp.headers: - resp_protocols = [ - proto.strip() - for proto in resp.headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",") - ] - - for proto in resp_protocols: - if proto in protocols: - protocol = proto - break - - # websocket compress - notakeover = False - if compress: - compress_hdrs = resp.headers.get(hdrs.SEC_WEBSOCKET_EXTENSIONS) - if compress_hdrs: - try: - compress, notakeover = ws_ext_parse(compress_hdrs) - except WSHandshakeError as exc: - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message=exc.args[0], - status=resp.status, - headers=resp.headers, - ) from exc - else: - compress = 0 - notakeover = False - - conn = resp.connection - assert conn is not None - conn_proto = conn.protocol - assert conn_proto is not None - transport = conn.transport - assert transport is not None - reader: FlowControlDataQueue[WSMessage] = FlowControlDataQueue( - conn_proto, 2**16, loop=self._loop - ) - conn_proto.set_parser(WebSocketReader(reader, max_msg_size), reader) - writer = WebSocketWriter( - conn_proto, - transport, - use_mask=True, - compress=compress, - notakeover=notakeover, - ) - except BaseException: - resp.close() - raise - else: - return self._ws_response_class( - reader, - writer, - protocol, - resp, - timeout, - autoclose, - autoping, - self._loop, - receive_timeout=receive_timeout, - heartbeat=heartbeat, - compress=compress, - client_notakeover=notakeover, - ) - - def _prepare_headers(self, headers: Optional[LooseHeaders]) -> "CIMultiDict[str]": - """Add default headers and transform it to CIMultiDict""" - # Convert headers to MultiDict - result = CIMultiDict(self._default_headers) - if headers: - if not isinstance(headers, (MultiDictProxy, MultiDict)): - headers = CIMultiDict(headers) - added_names: Set[str] = set() - for key, value in headers.items(): - if key in added_names: - result.add(key, value) - else: - result[key] = value - added_names.add(key) - return result - - def get( - self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP GET request.""" - return _RequestContextManager( - self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs) - ) - - def options( - self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP OPTIONS request.""" - return _RequestContextManager( - self._request( - hdrs.METH_OPTIONS, url, allow_redirects=allow_redirects, **kwargs - ) - ) - - def head( - self, url: StrOrURL, *, allow_redirects: bool = False, 
**kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP HEAD request.""" - return _RequestContextManager( - self._request( - hdrs.METH_HEAD, url, allow_redirects=allow_redirects, **kwargs - ) - ) - - def post( - self, url: StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP POST request.""" - return _RequestContextManager( - self._request(hdrs.METH_POST, url, data=data, **kwargs) - ) - - def put( - self, url: StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP PUT request.""" - return _RequestContextManager( - self._request(hdrs.METH_PUT, url, data=data, **kwargs) - ) - - def patch( - self, url: StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP PATCH request.""" - return _RequestContextManager( - self._request(hdrs.METH_PATCH, url, data=data, **kwargs) - ) - - def delete(self, url: StrOrURL, **kwargs: Any) -> "_RequestContextManager": - """Perform HTTP DELETE request.""" - return _RequestContextManager(self._request(hdrs.METH_DELETE, url, **kwargs)) - - async def close(self) -> None: - """Close underlying connector. - - Release all acquired resources. - """ - if not self.closed: - if self._connector is not None and self._connector_owner: - await self._connector.close() - self._connector = None - - @property - def closed(self) -> bool: - """Is client session closed. - - A readonly property. - """ - return self._connector is None or self._connector.closed - - @property - def connector(self) -> Optional[BaseConnector]: - """Connector instance used for the session.""" - return self._connector - - @property - def cookie_jar(self) -> AbstractCookieJar: - """The session cookies.""" - return self._cookie_jar - - @property - def version(self) -> Tuple[int, int]: - """The session HTTP protocol version.""" - return self._version - - @property - def requote_redirect_url(self) -> bool: - """Do URL requoting on redirection handling.""" - return self._requote_redirect_url - - @requote_redirect_url.setter - def requote_redirect_url(self, val: bool) -> None: - """Do URL requoting on redirection handling.""" - warnings.warn( - "session.requote_redirect_url modification " "is deprecated #2778", - DeprecationWarning, - stacklevel=2, - ) - self._requote_redirect_url = val - - @property - def loop(self) -> asyncio.AbstractEventLoop: - """Session's loop.""" - warnings.warn( - "client.loop property is deprecated", DeprecationWarning, stacklevel=2 - ) - return self._loop - - @property - def timeout(self) -> ClientTimeout: - """Timeout for the session.""" - return self._timeout - - @property - def headers(self) -> "CIMultiDict[str]": - """The default headers of the client session.""" - return self._default_headers - - @property - def skip_auto_headers(self) -> FrozenSet[istr]: - """Headers for which autogeneration should be skipped""" - return self._skip_auto_headers - - @property - def auth(self) -> Optional[BasicAuth]: - """An object that represents HTTP Basic Authorization""" - return self._default_auth - - @property - def json_serialize(self) -> JSONEncoder: - """Json serializer callable""" - return self._json_serialize - - @property - def connector_owner(self) -> bool: - """Should connector be closed on session closing""" - return self._connector_owner - - @property - def raise_for_status( - self, - ) -> Union[bool, Callable[[ClientResponse], Awaitable[None]]]: - """Should `ClientResponse.raise_for_status()` be called for each response.""" - return 
self._raise_for_status - - @property - def auto_decompress(self) -> bool: - """Should the body response be automatically decompressed.""" - return self._auto_decompress - - @property - def trust_env(self) -> bool: - """ - Should proxies information from environment or netrc be trusted. - - Information is from HTTP_PROXY / HTTPS_PROXY environment variables - or ~/.netrc file if present. - """ - return self._trust_env - - @property - def trace_configs(self) -> List[TraceConfig]: - """A list of TraceConfig instances used for client tracing""" - return self._trace_configs - - def detach(self) -> None: - """Detach connector from session without closing the former. - - Session is switched to closed state anyway. - """ - self._connector = None - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "ClientSession": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - await self.close() - - -class _BaseRequestContextManager(Coroutine[Any, Any, _RetType], Generic[_RetType]): - - __slots__ = ("_coro", "_resp") - - def __init__(self, coro: Coroutine["asyncio.Future[Any]", None, _RetType]) -> None: - self._coro = coro - - def send(self, arg: None) -> "asyncio.Future[Any]": - return self._coro.send(arg) - - def throw(self, arg: BaseException) -> None: # type: ignore[arg-type,override] - self._coro.throw(arg) - - def close(self) -> None: - return self._coro.close() - - def __await__(self) -> Generator[Any, None, _RetType]: - ret = self._coro.__await__() - return ret - - def __iter__(self) -> Generator[Any, None, _RetType]: - return self.__await__() - - async def __aenter__(self) -> _RetType: - self._resp = await self._coro - return self._resp - - -class _RequestContextManager(_BaseRequestContextManager[ClientResponse]): - __slots__ = () - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - # We're basing behavior on the exception as it can be caused by - # user code unrelated to the status of the connection. If you - # would like to close a connection you must do that - # explicitly. Otherwise connection error handling should kick in - # and close/recycle the connection as required. 
- self._resp.release() - - -class _WSRequestContextManager(_BaseRequestContextManager[ClientWebSocketResponse]): - __slots__ = () - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - await self._resp.close() - - -class _SessionRequestContextManager: - - __slots__ = ("_coro", "_resp", "_session") - - def __init__( - self, - coro: Coroutine["asyncio.Future[Any]", None, ClientResponse], - session: ClientSession, - ) -> None: - self._coro = coro - self._resp: Optional[ClientResponse] = None - self._session = session - - async def __aenter__(self) -> ClientResponse: - try: - self._resp = await self._coro - except BaseException: - await self._session.close() - raise - else: - return self._resp - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - assert self._resp is not None - self._resp.close() - await self._session.close() - - -def request( - method: str, - url: StrOrURL, - *, - params: Optional[Mapping[str, str]] = None, - data: Any = None, - json: Any = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - allow_redirects: bool = True, - max_redirects: int = 10, - compress: Optional[str] = None, - chunked: Optional[bool] = None, - expect100: bool = False, - raise_for_status: Optional[bool] = None, - read_until_eof: bool = True, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - timeout: Union[ClientTimeout, object] = sentinel, - cookies: Optional[LooseCookies] = None, - version: HttpVersion = http.HttpVersion11, - connector: Optional[BaseConnector] = None, - read_bufsize: Optional[int] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> _SessionRequestContextManager: - """Constructs and sends a request. - - Returns response object. - method - HTTP method - url - request url - params - (optional) Dictionary or bytes to be sent in the query - string of the new request - data - (optional) Dictionary, bytes, or file-like object to - send in the body of the request - json - (optional) Any json compatible python object - headers - (optional) Dictionary of HTTP Headers to send with - the request - cookies - (optional) Dict object to send with the request - auth - (optional) BasicAuth named tuple represent HTTP Basic Auth - auth - aiohttp.helpers.BasicAuth - allow_redirects - (optional) If set to False, do not follow - redirects - version - Request HTTP version. - compress - Set to True if request has to be compressed - with deflate encoding. - chunked - Set to chunk size for chunked transfer encoding. - expect100 - Expect 100-continue response from server. - connector - BaseConnector sub-class instance to support - connection pooling. - read_until_eof - Read response until eof if response - does not have Content-Length header. - loop - Optional event loop. - timeout - Optional ClientTimeout settings structure, 5min - total timeout by default. 
- Usage:: - >>> import aiohttp - >>> resp = await aiohttp.request('GET', 'http://python.org/') - >>> resp - <ClientResponse(python.org/) [200]> - >>> data = await resp.read() - """ - connector_owner = False - if connector is None: - connector_owner = True - connector = TCPConnector(loop=loop, force_close=True) - - session = ClientSession( - loop=loop, - cookies=cookies, - version=version, - timeout=timeout, - connector=connector, - connector_owner=connector_owner, - ) - - return _SessionRequestContextManager( - session._request( - method, - url, - params=params, - data=data, - json=json, - headers=headers, - skip_auto_headers=skip_auto_headers, - auth=auth, - allow_redirects=allow_redirects, - max_redirects=max_redirects, - compress=compress, - chunked=chunked, - expect100=expect100, - raise_for_status=raise_for_status, - read_until_eof=read_until_eof, - proxy=proxy, - proxy_auth=proxy_auth, - read_bufsize=read_bufsize, - ), - session, - )
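The redirect loop in `_request` above mirrors browser behaviour: a 303 (and, mimicking old IE, a 301/302 answering a POST) is retried as a GET with the body dropped, the `Location` target is resolved against the current URL, and `Authorization` is stripped when the redirect crosses origins. A minimal sketch of exercising that loop through the public API (the URL is a placeholder):

```python
import asyncio

import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # allow_redirects/max_redirects feed straight into the loop above;
        # resp.history keeps the intermediate redirect responses.
        async with session.get(
            "http://example.com/", allow_redirects=True, max_redirects=5
        ) as resp:
            print(resp.status, [str(r.url) for r in resp.history])


asyncio.run(main())
```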
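The WebSocket handshake check in `_ws_connect` above validates the server by recomputing `Sec-WebSocket-Accept`: per RFC 6455 it must be the base64-encoded SHA-1 of the client's `Sec-WebSocket-Key` concatenated with a fixed GUID (aiohttp's `WS_KEY`). A standalone sketch of that key calculation:

```python
import base64
import hashlib
import os

# RFC 6455 handshake GUID (aiohttp exposes it as WS_KEY).
WS_KEY = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

sec_key = base64.b64encode(os.urandom(16))  # value sent in Sec-WebSocket-Key
expected = base64.b64encode(hashlib.sha1(sec_key + WS_KEY).digest()).decode()
# A compliant server must echo `expected` back in Sec-WebSocket-Accept;
# _ws_connect raises WSServerHandshakeError on any mismatch.
```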
diff --git a/spaces/cihyFjudo/fairness-paper-search/Hetman File Repair 11 Crack A Simple and Effective Way to Repair Your Files.md b/spaces/cihyFjudo/fairness-paper-search/Hetman File Repair 11 Crack A Simple and Effective Way to Repair Your Files.md deleted file mode 100644 index 569588bea2399d630fdbe71cf660a0e0687fc0fc..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Hetman File Repair 11 Crack A Simple and Effective Way to Repair Your Files.md +++ /dev/null @@ -1,6 +0,0 @@
-hetmanfilerepair11crack
-
-Download File: https://tinurli.com/2uwjaX
-
-aaccfb2cb3
-
          diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py deleted file mode 100644 index 6e56fe84a9e0e63b918141bc27d708b2d915563f..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/cd.py +++ /dev/null @@ -1,390 +0,0 @@ -import importlib -from codecs import IncrementalDecoder -from collections import Counter -from functools import lru_cache -from typing import Counter as TypeCounter, Dict, List, Optional, Tuple - -from .assets import FREQUENCIES -from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES -from .md import is_suspiciously_successive_range -from .models import CoherenceMatches -from .utils import ( - is_accentuated, - is_latin, - is_multi_byte_encoding, - is_unicode_range_secondary, - unicode_range, -) - - -def encoding_unicode_range(iana_name: str) -> List[str]: - """ - Return associated unicode ranges in a single byte code page. - """ - if is_multi_byte_encoding(iana_name): - raise IOError("Function not supported on multi-byte code page") - - decoder = importlib.import_module( - "encodings.{}".format(iana_name) - ).IncrementalDecoder - - p: IncrementalDecoder = decoder(errors="ignore") - seen_ranges: Dict[str, int] = {} - character_count: int = 0 - - for i in range(0x40, 0xFF): - chunk: str = p.decode(bytes([i])) - - if chunk: - character_range: Optional[str] = unicode_range(chunk) - - if character_range is None: - continue - - if is_unicode_range_secondary(character_range) is False: - if character_range not in seen_ranges: - seen_ranges[character_range] = 0 - seen_ranges[character_range] += 1 - character_count += 1 - - return sorted( - [ - character_range - for character_range in seen_ranges - if seen_ranges[character_range] / character_count >= 0.15 - ] - ) - - -def unicode_range_languages(primary_range: str) -> List[str]: - """ - Return inferred languages used with a unicode range. - """ - languages: List[str] = [] - - for language, characters in FREQUENCIES.items(): - for character in characters: - if unicode_range(character) == primary_range: - languages.append(language) - break - - return languages - - -@lru_cache() -def encoding_languages(iana_name: str) -> List[str]: - """ - Single-byte encoding language association. Some code page are heavily linked to particular language(s). - This function does the correspondence. - """ - unicode_ranges: List[str] = encoding_unicode_range(iana_name) - primary_range: Optional[str] = None - - for specified_range in unicode_ranges: - if "Latin" not in specified_range: - primary_range = specified_range - break - - if primary_range is None: - return ["Latin Based"] - - return unicode_range_languages(primary_range) - - -@lru_cache() -def mb_encoding_languages(iana_name: str) -> List[str]: - """ - Multi-byte encoding language association. Some code page are heavily linked to particular language(s). - This function does the correspondence. 
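The name-based branch below is essentially a lookup table: well-known multi-byte code page name prefixes map straight to a language. A condensed, dependency-free sketch of that mapping (the `ZH_NAMES`/`KO_NAMES` alias sets from the real module are elided here):

```python
from typing import List


def guess_mb_languages(iana_name: str) -> List[str]:
    # Mirrors the prefix checks of mb_encoding_languages, minus the
    # ZH_NAMES/KO_NAMES alias lookups.
    if (
        iana_name.startswith(("shift_", "iso2022_jp", "euc_j"))
        or iana_name == "cp932"
    ):
        return ["Japanese"]
    if iana_name.startswith("gb"):
        return ["Chinese"]
    if iana_name.startswith("iso2022_kr"):
        return ["Korean"]
    return []


assert guess_mb_languages("shift_jis") == ["Japanese"]
assert guess_mb_languages("gb18030") == ["Chinese"]
```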
- """ - if ( - iana_name.startswith("shift_") - or iana_name.startswith("iso2022_jp") - or iana_name.startswith("euc_j") - or iana_name == "cp932" - ): - return ["Japanese"] - if iana_name.startswith("gb") or iana_name in ZH_NAMES: - return ["Chinese"] - if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES: - return ["Korean"] - - return [] - - -@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT) -def get_target_features(language: str) -> Tuple[bool, bool]: - """ - Determine main aspects from a supported language if it contains accents and if is pure Latin. - """ - target_have_accents: bool = False - target_pure_latin: bool = True - - for character in FREQUENCIES[language]: - if not target_have_accents and is_accentuated(character): - target_have_accents = True - if target_pure_latin and is_latin(character) is False: - target_pure_latin = False - - return target_have_accents, target_pure_latin - - -def alphabet_languages( - characters: List[str], ignore_non_latin: bool = False -) -> List[str]: - """ - Return associated languages associated to given characters. - """ - languages: List[Tuple[str, float]] = [] - - source_have_accents = any(is_accentuated(character) for character in characters) - - for language, language_characters in FREQUENCIES.items(): - target_have_accents, target_pure_latin = get_target_features(language) - - if ignore_non_latin and target_pure_latin is False: - continue - - if target_have_accents is False and source_have_accents: - continue - - character_count: int = len(language_characters) - - character_match_count: int = len( - [c for c in language_characters if c in characters] - ) - - ratio: float = character_match_count / character_count - - if ratio >= 0.2: - languages.append((language, ratio)) - - languages = sorted(languages, key=lambda x: x[1], reverse=True) - - return [compatible_language[0] for compatible_language in languages] - - -def characters_popularity_compare( - language: str, ordered_characters: List[str] -) -> float: - """ - Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language. - The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit). - Beware that is function is not strict on the match in order to ease the detection. (Meaning close match is 1.) 
- """ - if language not in FREQUENCIES: - raise ValueError("{} not available".format(language)) - - character_approved_count: int = 0 - FREQUENCIES_language_set = set(FREQUENCIES[language]) - - ordered_characters_count: int = len(ordered_characters) - target_language_characters_count: int = len(FREQUENCIES[language]) - - large_alphabet: bool = target_language_characters_count > 26 - - for character, character_rank in zip( - ordered_characters, range(0, ordered_characters_count) - ): - if character not in FREQUENCIES_language_set: - continue - - character_rank_in_language: int = FREQUENCIES[language].index(character) - expected_projection_ratio: float = ( - target_language_characters_count / ordered_characters_count - ) - character_rank_projection: int = int(character_rank * expected_projection_ratio) - - if ( - large_alphabet is False - and abs(character_rank_projection - character_rank_in_language) > 4 - ): - continue - - if ( - large_alphabet is True - and abs(character_rank_projection - character_rank_in_language) - < target_language_characters_count / 3 - ): - character_approved_count += 1 - continue - - characters_before_source: List[str] = FREQUENCIES[language][ - 0:character_rank_in_language - ] - characters_after_source: List[str] = FREQUENCIES[language][ - character_rank_in_language: - ] - characters_before: List[str] = ordered_characters[0:character_rank] - characters_after: List[str] = ordered_characters[character_rank:] - - before_match_count: int = len( - set(characters_before) & set(characters_before_source) - ) - - after_match_count: int = len( - set(characters_after) & set(characters_after_source) - ) - - if len(characters_before_source) == 0 and before_match_count <= 4: - character_approved_count += 1 - continue - - if len(characters_after_source) == 0 and after_match_count <= 4: - character_approved_count += 1 - continue - - if ( - before_match_count / len(characters_before_source) >= 0.4 - or after_match_count / len(characters_after_source) >= 0.4 - ): - character_approved_count += 1 - continue - - return character_approved_count / len(ordered_characters) - - -def alpha_unicode_split(decoded_sequence: str) -> List[str]: - """ - Given a decoded text sequence, return a list of str. Unicode range / alphabet separation. - Ex. a text containing English/Latin with a bit a Hebrew will return two items in the resulting list; - One containing the latin letters and the other hebrew. - """ - layers: Dict[str, str] = {} - - for character in decoded_sequence: - if character.isalpha() is False: - continue - - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - continue - - layer_target_range: Optional[str] = None - - for discovered_range in layers: - if ( - is_suspiciously_successive_range(discovered_range, character_range) - is False - ): - layer_target_range = discovered_range - break - - if layer_target_range is None: - layer_target_range = character_range - - if layer_target_range not in layers: - layers[layer_target_range] = character.lower() - continue - - layers[layer_target_range] += character.lower() - - return list(layers.values()) - - -def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches: - """ - This function merge results previously given by the function coherence_ratio. - The return type is the same as coherence_ratio. 
- """ - per_language_ratios: Dict[str, List[float]] = {} - for result in results: - for sub_result in result: - language, ratio = sub_result - if language not in per_language_ratios: - per_language_ratios[language] = [ratio] - continue - per_language_ratios[language].append(ratio) - - merge = [ - ( - language, - round( - sum(per_language_ratios[language]) / len(per_language_ratios[language]), - 4, - ), - ) - for language in per_language_ratios - ] - - return sorted(merge, key=lambda x: x[1], reverse=True) - - -def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches: - """ - We shall NOT return "English—" in CoherenceMatches because it is an alternative - of "English". This function only keeps the best match and remove the em-dash in it. - """ - index_results: Dict[str, List[float]] = dict() - - for result in results: - language, ratio = result - no_em_name: str = language.replace("—", "") - - if no_em_name not in index_results: - index_results[no_em_name] = [] - - index_results[no_em_name].append(ratio) - - if any(len(index_results[e]) > 1 for e in index_results): - filtered_results: CoherenceMatches = [] - - for language in index_results: - filtered_results.append((language, max(index_results[language]))) - - return filtered_results - - return results - - -@lru_cache(maxsize=2048) -def coherence_ratio( - decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None -) -> CoherenceMatches: - """ - Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers. - A layer = Character extraction by alphabets/ranges. - """ - - results: List[Tuple[str, float]] = [] - ignore_non_latin: bool = False - - sufficient_match_count: int = 0 - - lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else [] - if "Latin Based" in lg_inclusion_list: - ignore_non_latin = True - lg_inclusion_list.remove("Latin Based") - - for layer in alpha_unicode_split(decoded_sequence): - sequence_frequencies: TypeCounter[str] = Counter(layer) - most_common = sequence_frequencies.most_common() - - character_count: int = sum(o for c, o in most_common) - - if character_count <= TOO_SMALL_SEQUENCE: - continue - - popular_character_ordered: List[str] = [c for c, o in most_common] - - for language in lg_inclusion_list or alphabet_languages( - popular_character_ordered, ignore_non_latin - ): - ratio: float = characters_popularity_compare( - language, popular_character_ordered - ) - - if ratio < threshold: - continue - elif ratio >= 0.8: - sufficient_match_count += 1 - - results.append((language, round(ratio, 4))) - - if sufficient_match_count >= 3: - break - - return sorted( - filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True - ) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py deleted file mode 100644 index ef1d065ee1328728af04ab61525dad77a73e3d28..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/renderer.py +++ /dev/null @@ -1,106 +0,0 @@ -from __future__ import annotations - -from abc import ABC, abstractmethod -from typing import TYPE_CHECKING, Any - -import numpy as np - -if TYPE_CHECKING: - import io - - from numpy.typing import ArrayLike - - from contourpy._contourpy import CoordinateArray, FillReturn, FillType, LineReturn, LineType - - -class Renderer(ABC): - 
"""Abstract base class for renderers, defining the interface that they must implement.""" - - def _grid_as_2d(self, x: ArrayLike, y: ArrayLike) -> tuple[CoordinateArray, CoordinateArray]: - x = np.asarray(x) - y = np.asarray(y) - if x.ndim == 1: - x, y = np.meshgrid(x, y) - return x, y - - x = np.asarray(x) - y = np.asarray(y) - if x.ndim == 1: - x, y = np.meshgrid(x, y) - return x, y - - @abstractmethod - def filled( - self, - filled: FillReturn, - fill_type: FillType, - ax: Any = 0, - color: str = "C0", - alpha: float = 0.7, - ) -> None: - pass - - @abstractmethod - def grid( - self, - x: ArrayLike, - y: ArrayLike, - ax: Any = 0, - color: str = "black", - alpha: float = 0.1, - point_color: str | None = None, - quad_as_tri_alpha: float = 0, - ) -> None: - pass - - @abstractmethod - def lines( - self, - lines: LineReturn, - line_type: LineType, - ax: Any = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - ) -> None: - pass - - @abstractmethod - def mask( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike | np.ma.MaskedArray[Any, Any], - ax: Any = 0, - color: str = "black", - ) -> None: - pass - - @abstractmethod - def save(self, filename: str, transparent: bool = False) -> None: - pass - - @abstractmethod - def save_to_buffer(self) -> io.BytesIO: - pass - - @abstractmethod - def show(self) -> None: - pass - - @abstractmethod - def title(self, title: str, ax: Any = 0, color: str | None = None) -> None: - pass - - @abstractmethod - def z_values( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Any = 0, - color: str = "green", - fmt: str = ".1f", - quad_as_tri: bool = False, - ) -> None: - pass diff --git a/spaces/codedog-ai/codedog-demo/app.py b/spaces/codedog-ai/codedog-demo/app.py deleted file mode 100644 index 811358b453a012799139f37cf1ea1910fc4502f0..0000000000000000000000000000000000000000 --- a/spaces/codedog-ai/codedog-demo/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr - -from codedog_demo.callbacks import get_sample_choices, request_pr_review, show_sample - -sample_choices = get_sample_choices() - - -text = """# [codedog-ai/codedog #2 - feat(telemetry): :sparkles: collect gpt api cost](https://github.com/codedog-ai/codedog/pull/2) Pull Request Report - -*powered by GPT and codedog 0.8.2* - -## Execution -- Start at: 2023-09-07 07:18:18 -- Time usage: 12.72s -- Openai api tokens: 3506 -- Openai api costs: $0.0460 - - - - -## PR Summary - -### PR Overview -This PR is a new feature :sparkles: - -This PR aims to collect the cost of GPT API calls from the openai callback of langchain. It modifies several functions in the 'codedog/review.py' file to include an additional parameter 'cb.total_cost' in the '_meter_api_call_tokens' function call and updates the value of the 'cost' key in the '_telemetry' dictionary. It also modifies the 'examples/github/github_review.py' file to update the variables 'repository_name_or_id' and 'pull_request_number'. - - - -### Change Details - -| Major Changes | Description | -|---|---| -| **[review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-10471033f603ac7fae28b2c7c57040e8732947f0 "codedog/review.py")** | This diff contains the following changes in the file codedog/review.py: - Added a new key "cost" to the dictionary `_telemetry` in the `__init__` function. - Modified the `_single_file_summarize` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. 
- Modified the `_changelist_summarize` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. - Modified the `_feedback` function to include an additional parameter `cb.total_cost` in the `_meter_api_call_tokens` function call. - Modified the `_meter_api_call_tokens` function to include a new parameter `cost` and update the value of the "cost" key in the `_telemetry` dictionary. - No other changes were made in the file. | -| **[github_review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-78de2b9548d0316c55661aaf9b2222ad80a07012 "examples/github/github_review.py")** | This diff contains changes in the file `github_review.py`. The changes include: - Commenting out the lines that set the variables `repository_name_or_id` and `pull_request_number` to "ClickHouse/ClickHouse" and 49113 respectively. - Adding new lines that set the variables `repository_name_or_id` to "Arcadia822/codedog" and `pull_request_number` to 2. - The function `build_pull_request_event` is called with the updated `repository_name_or_id` and `pull_request_number` variables. | - - - - -
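Both rows boil down to one mechanic: every LLM call runs inside langchain's OpenAI callback, and the callback's accumulated `total_cost` is now forwarded into the telemetry dict next to the token count. A hypothetical minimal sketch of that pattern (the `_telemetry` keys and the `chain` object here are illustrative assumptions, not codedog's actual internals):

```python
from langchain.callbacks import get_openai_callback

_telemetry = {"tokens": 0, "cost": 0.0}  # assumed telemetry shape


def run_with_metering(chain, prompt: str) -> str:
    # get_openai_callback accumulates token usage and USD cost for every
    # OpenAI call made inside the context manager.
    with get_openai_callback() as cb:
        result = chain.run(prompt)
    _telemetry["tokens"] += cb.total_tokens
    _telemetry["cost"] += cb.total_cost
    return result
```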
-
-Change File List
-
-Modified files:
-- codedog/review.py
-- examples/github/github_review.py
-
          - - - -## Code Review (preview) - -*This feature is still under test. Suggestions are given by AI and might be incorrect.* - -**[codedog/review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-10471033f603ac7fae28b2c7c57040e8732947f0)** - -Based on the code diff, here are my observations and suggestions: - -1. Line 44: The code change to add a new key "cost" to the `_telemetry` dictionary seems correct. It allows tracking the cost associated with API calls. - -2. Line 113 and 130: The code changes to the `_meter_api_call_tokens` method seem correct. It now accepts an additional parameter `cb.total_cost` to track the cost associated with API calls. - -3. Line 144: The code change to pass `cb.total_cost` as the second argument to `_meter_api_call_tokens` method seems correct. It ensures that the cost is properly tracked for API calls made during the feedback process. - -4. Line 175: The code change to add the `cost` key to the `TEMPLATE.REPORT_HEADER` format seems correct. It allows displaying the total cost in the generated report. - -Overall, the code changes seem correct and aligned with the purpose of tracking API call costs. However, here are a few suggestions for the author: - -- It would be helpful to include comments or docstrings explaining the purpose and usage of the `_meter_api_call_tokens` method and its parameters. - -- Consider using more descriptive variable names instead of abbreviations like `cb` to improve code readability. - -- Ensure that the `cb.total_cost` value passed to `_meter_api_call_tokens` is calculated correctly and represents the actual cost of API calls. - -- Consider adding unit tests to verify the correctness of the code changes and to ensure that the cost tracking functionality works as expected. - -- Double-check if there are any other places in the codebase where the `cost` value needs to be updated or used. - -These suggestions will help improve the clarity, maintainability, and reliability of the code. - -**[examples/github/github_review.py](https://github.com/codedog-ai/codedog/pull/2/files#diff-78de2b9548d0316c55661aaf9b2222ad80a07012)** - -Based on the code diff, it seems that the author has made some changes to the `github_review.py` file. Here are my observations and suggestions: - -1. The author has commented out the lines that set the `repository_name_or_id` and `pull_request_number` variables for the "ClickHouse/ClickHouse" repository. It appears that the author wants to disable this repository for now. If this change is intentional, it is fine. - -2. The author has uncommented the lines that set the `repository_name_or_id` and `pull_request_number` variables for the "Arcadia822/codedog" repository and pull request number 2. If this change is intentional, it is fine. - -3. It is important to ensure that the correct repository and pull request number are set for the desired review. Please double-check that the "Arcadia822/codedog" repository and pull request number 2 are the intended targets for the review. - -Overall, the code change seems to be correct, assuming the author's intention is to disable the "ClickHouse/ClickHouse" repository and review the "Arcadia822/codedog" repository's pull request number 2. - - - - -""" - - -about = """Codedog is used for a while in my team (reviewed about 2000 PRs.). -Basically it's a service triggered by PR event and comment directly on the PR to give a pre human review. - -Comment report includes PR summary and code suggestions. Summary is great and time saving. 
-But suggestions are not very valuable now. - -CR practice learned from: https://google.github.io/eng-practices/review/reviewer -""" - - -with gr.Blocks(theme="xiaobaiyuan/theme_brief") as demo: - gr.Markdown("# Codedog - A pull request review tool") - - gr.Markdown( - """**Codedog is designed to save reviewer's time by providing useful information based on PR context.** - -- Github App (Rate limit is low): https://github.com/apps/codedog-assistant - Source Code: https://github.com/codedog-ai/codedog -- Deploy as a service: https://github.com/codedog-ai/codedog/tree/master/examples -- Feedback or showcase ❤️: https://github.com/codedog-ai/codedog/discussions -""" - ) - - with gr.Tab(label="Try Yourself"): - with gr.Row(): - custom_pr_url = gr.Textbox( - max_lines=1, - value="https://github.com/codedog-ai/codedog/pull/2", - placeholder="Paste Github PR URL here", - show_label=False, - ) - with gr.Row(): - custom_submit = gr.Button(value="Review It") - with gr.Row(): - with gr.Tab(label="Markdown"): - custom_content = gr.Markdown(value=text) - with gr.Tab(label="Raw"): - custom_content_raw = gr.Textbox( - value=text, show_label=False, lines=100, max_lines=500 - ) - custom_submit.click( - request_pr_review, - inputs=[custom_pr_url], - outputs=[custom_content, custom_content_raw], - ) - - # with gr.Tab(label="Samples"): - # sample_choice = gr.Radio(choices=sample_choices, type="index", show_label=False) - # sample_content = gr.Markdown(value="") - - # sample_choice.input( - # show_sample, inputs=[sample_choice], outputs=[sample_content] - # ) - - with gr.Tab(label="About"): - gr.Markdown(value=about) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c deleted file mode 100644 index 2d4b9547185c4e95fc920869472e48034f2c657f..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_h265_syntax_template.c +++ /dev/null @@ -1,2101 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -static int FUNC(rbsp_trailing_bits)(CodedBitstreamContext *ctx, RWContext *rw) -{ - int err; - - fixed(1, rbsp_stop_one_bit, 1); - while (byte_alignment(rw) != 0) - fixed(1, rbsp_alignment_zero_bit, 0); - - return 0; -} - -static int FUNC(nal_unit_header)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawNALUnitHeader *current, - int expected_nal_unit_type) -{ - int err; - - fixed(1, forbidden_zero_bit, 0); - - if (expected_nal_unit_type >= 0) - u(6, nal_unit_type, expected_nal_unit_type, - expected_nal_unit_type); - else - ub(6, nal_unit_type); - - u(6, nuh_layer_id, 0, 62); - u(3, nuh_temporal_id_plus1, 1, 7); - - return 0; -} - -static int FUNC(byte_alignment)(CodedBitstreamContext *ctx, RWContext *rw) -{ - int err; - - fixed(1, alignment_bit_equal_to_one, 1); - while (byte_alignment(rw) != 0) - fixed(1, alignment_bit_equal_to_zero, 0); - - return 0; -} - -static int FUNC(extension_data)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawExtensionData *current) -{ - int err; - size_t k; -#ifdef READ - GetBitContext start; - uint8_t bit; - start = *rw; - for (k = 0; cbs_h2645_read_more_rbsp_data(rw); k++) - skip_bits(rw, 1); - current->bit_length = k; - if (k > 0) { - *rw = start; - allocate(current->data, (current->bit_length + 7) / 8); - for (k = 0; k < current->bit_length; k++) { - xu(1, extension_data, bit, 0, 1, 0); - current->data[k / 8] |= bit << (7 - k % 8); - } - } -#else - for (k = 0; k < current->bit_length; k++) - xu(1, extension_data, current->data[k / 8] >> (7 - k % 8) & 1, 0, 1, 0); -#endif - return 0; -} - -static int FUNC(profile_tier_level)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawProfileTierLevel *current, - int profile_present_flag, - int max_num_sub_layers_minus1) -{ - int err, i, j; - - if (profile_present_flag) { - u(2, general_profile_space, 0, 0); - flag(general_tier_flag); - ub(5, general_profile_idc); - - for (j = 0; j < 32; j++) - flags(general_profile_compatibility_flag[j], 1, j); - - flag(general_progressive_source_flag); - flag(general_interlaced_source_flag); - flag(general_non_packed_constraint_flag); - flag(general_frame_only_constraint_flag); - -#define profile_compatible(x) (current->general_profile_idc == (x) || \ - current->general_profile_compatibility_flag[x]) - if (profile_compatible(4) || profile_compatible(5) || - profile_compatible(6) || profile_compatible(7) || - profile_compatible(8) || profile_compatible(9) || - profile_compatible(10) || profile_compatible(11)) { - flag(general_max_12bit_constraint_flag); - flag(general_max_10bit_constraint_flag); - flag(general_max_8bit_constraint_flag); - flag(general_max_422chroma_constraint_flag); - flag(general_max_420chroma_constraint_flag); - flag(general_max_monochrome_constraint_flag); - flag(general_intra_constraint_flag); - flag(general_one_picture_only_constraint_flag); - flag(general_lower_bit_rate_constraint_flag); - - if (profile_compatible(5) || profile_compatible(9) || - profile_compatible(10) || profile_compatible(11)) { - flag(general_max_14bit_constraint_flag); - fixed(24, general_reserved_zero_33bits, 0); - fixed( 9, general_reserved_zero_33bits, 0); - } else { - fixed(24, general_reserved_zero_34bits, 0); - fixed(10, general_reserved_zero_34bits, 0); - } - } else if (profile_compatible(2)) { - fixed(7, general_reserved_zero_7bits, 0); - 
flag(general_one_picture_only_constraint_flag); - fixed(24, general_reserved_zero_35bits, 0); - fixed(11, general_reserved_zero_35bits, 0); - } else { - fixed(24, general_reserved_zero_43bits, 0); - fixed(19, general_reserved_zero_43bits, 0); - } - - if (profile_compatible(1) || profile_compatible(2) || - profile_compatible(3) || profile_compatible(4) || - profile_compatible(5) || profile_compatible(9) || - profile_compatible(11)) { - flag(general_inbld_flag); - } else { - fixed(1, general_reserved_zero_bit, 0); - } -#undef profile_compatible - } - - ub(8, general_level_idc); - - for (i = 0; i < max_num_sub_layers_minus1; i++) { - flags(sub_layer_profile_present_flag[i], 1, i); - flags(sub_layer_level_present_flag[i], 1, i); - } - - if (max_num_sub_layers_minus1 > 0) { - for (i = max_num_sub_layers_minus1; i < 8; i++) - fixed(2, reserved_zero_2bits, 0); - } - - for (i = 0; i < max_num_sub_layers_minus1; i++) { - if (current->sub_layer_profile_present_flag[i]) { - us(2, sub_layer_profile_space[i], 0, 0, 1, i); - flags(sub_layer_tier_flag[i], 1, i); - ubs(5, sub_layer_profile_idc[i], 1, i); - - for (j = 0; j < 32; j++) - flags(sub_layer_profile_compatibility_flag[i][j], 2, i, j); - - flags(sub_layer_progressive_source_flag[i], 1, i); - flags(sub_layer_interlaced_source_flag[i], 1, i); - flags(sub_layer_non_packed_constraint_flag[i], 1, i); - flags(sub_layer_frame_only_constraint_flag[i], 1, i); - -#define profile_compatible(x) (current->sub_layer_profile_idc[i] == (x) || \ - current->sub_layer_profile_compatibility_flag[i][x]) - if (profile_compatible(4) || profile_compatible(5) || - profile_compatible(6) || profile_compatible(7) || - profile_compatible(8) || profile_compatible(9) || - profile_compatible(10) || profile_compatible(11)) { - flags(sub_layer_max_12bit_constraint_flag[i], 1, i); - flags(sub_layer_max_10bit_constraint_flag[i], 1, i); - flags(sub_layer_max_8bit_constraint_flag[i], 1, i); - flags(sub_layer_max_422chroma_constraint_flag[i], 1, i); - flags(sub_layer_max_420chroma_constraint_flag[i], 1, i); - flags(sub_layer_max_monochrome_constraint_flag[i], 1, i); - flags(sub_layer_intra_constraint_flag[i], 1, i); - flags(sub_layer_one_picture_only_constraint_flag[i], 1, i); - flags(sub_layer_lower_bit_rate_constraint_flag[i], 1, i); - - if (profile_compatible(5) || profile_compatible(9) || - profile_compatible(10) || profile_compatible(11)) { - flags(sub_layer_max_14bit_constraint_flag[i], 1, i); - fixed(24, sub_layer_reserved_zero_33bits, 0); - fixed( 9, sub_layer_reserved_zero_33bits, 0); - } else { - fixed(24, sub_layer_reserved_zero_34bits, 0); - fixed(10, sub_layer_reserved_zero_34bits, 0); - } - } else if (profile_compatible(2)) { - fixed(7, sub_layer_reserved_zero_7bits, 0); - flags(sub_layer_one_picture_only_constraint_flag[i], 1, i); - fixed(24, sub_layer_reserved_zero_43bits, 0); - fixed(11, sub_layer_reserved_zero_43bits, 0); - } else { - fixed(24, sub_layer_reserved_zero_43bits, 0); - fixed(19, sub_layer_reserved_zero_43bits, 0); - } - - if (profile_compatible(1) || profile_compatible(2) || - profile_compatible(3) || profile_compatible(4) || - profile_compatible(5) || profile_compatible(9) || - profile_compatible(11)) { - flags(sub_layer_inbld_flag[i], 1, i); - } else { - fixed(1, sub_layer_reserved_zero_bit, 0); - } -#undef profile_compatible - } - if (current->sub_layer_level_present_flag[i]) - ubs(8, sub_layer_level_idc[i], 1, i); - } - - return 0; -} - -static int FUNC(sub_layer_hrd_parameters)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawHRDParameters *hrd, - 
int nal, int sub_layer_id) -{ - H265RawSubLayerHRDParameters *current; - int err, i; - - if (nal) - current = &hrd->nal_sub_layer_hrd_parameters[sub_layer_id]; - else - current = &hrd->vcl_sub_layer_hrd_parameters[sub_layer_id]; - - for (i = 0; i <= hrd->cpb_cnt_minus1[sub_layer_id]; i++) { - ues(bit_rate_value_minus1[i], 0, UINT32_MAX - 1, 1, i); - ues(cpb_size_value_minus1[i], 0, UINT32_MAX - 1, 1, i); - if (hrd->sub_pic_hrd_params_present_flag) { - ues(cpb_size_du_value_minus1[i], 0, UINT32_MAX - 1, 1, i); - ues(bit_rate_du_value_minus1[i], 0, UINT32_MAX - 1, 1, i); - } - flags(cbr_flag[i], 1, i); - } - - return 0; -} - -static int FUNC(hrd_parameters)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawHRDParameters *current, int common_inf_present_flag, - int max_num_sub_layers_minus1) -{ - int err, i; - - if (common_inf_present_flag) { - flag(nal_hrd_parameters_present_flag); - flag(vcl_hrd_parameters_present_flag); - - if (current->nal_hrd_parameters_present_flag || - current->vcl_hrd_parameters_present_flag) { - flag(sub_pic_hrd_params_present_flag); - if (current->sub_pic_hrd_params_present_flag) { - ub(8, tick_divisor_minus2); - ub(5, du_cpb_removal_delay_increment_length_minus1); - flag(sub_pic_cpb_params_in_pic_timing_sei_flag); - ub(5, dpb_output_delay_du_length_minus1); - } - - ub(4, bit_rate_scale); - ub(4, cpb_size_scale); - if (current->sub_pic_hrd_params_present_flag) - ub(4, cpb_size_du_scale); - - ub(5, initial_cpb_removal_delay_length_minus1); - ub(5, au_cpb_removal_delay_length_minus1); - ub(5, dpb_output_delay_length_minus1); - } else { - infer(sub_pic_hrd_params_present_flag, 0); - - infer(initial_cpb_removal_delay_length_minus1, 23); - infer(au_cpb_removal_delay_length_minus1, 23); - infer(dpb_output_delay_length_minus1, 23); - } - } - - for (i = 0; i <= max_num_sub_layers_minus1; i++) { - flags(fixed_pic_rate_general_flag[i], 1, i); - - if (!current->fixed_pic_rate_general_flag[i]) - flags(fixed_pic_rate_within_cvs_flag[i], 1, i); - else - infer(fixed_pic_rate_within_cvs_flag[i], 1); - - if (current->fixed_pic_rate_within_cvs_flag[i]) { - ues(elemental_duration_in_tc_minus1[i], 0, 2047, 1, i); - infer(low_delay_hrd_flag[i], 0); - } else - flags(low_delay_hrd_flag[i], 1, i); - - if (!current->low_delay_hrd_flag[i]) - ues(cpb_cnt_minus1[i], 0, 31, 1, i); - else - infer(cpb_cnt_minus1[i], 0); - - if (current->nal_hrd_parameters_present_flag) - CHECK(FUNC(sub_layer_hrd_parameters)(ctx, rw, current, 0, i)); - if (current->vcl_hrd_parameters_present_flag) - CHECK(FUNC(sub_layer_hrd_parameters)(ctx, rw, current, 1, i)); - } - - return 0; -} - -static int FUNC(vui_parameters)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawVUI *current, const H265RawSPS *sps) -{ - int err; - - flag(aspect_ratio_info_present_flag); - if (current->aspect_ratio_info_present_flag) { - ub(8, aspect_ratio_idc); - if (current->aspect_ratio_idc == 255) { - ub(16, sar_width); - ub(16, sar_height); - } - } else { - infer(aspect_ratio_idc, 0); - } - - flag(overscan_info_present_flag); - if (current->overscan_info_present_flag) - flag(overscan_appropriate_flag); - - flag(video_signal_type_present_flag); - if (current->video_signal_type_present_flag) { - ub(3, video_format); - flag(video_full_range_flag); - flag(colour_description_present_flag); - if (current->colour_description_present_flag) { - ub(8, colour_primaries); - ub(8, transfer_characteristics); - ub(8, matrix_coefficients); - } else { - infer(colour_primaries, 2); - infer(transfer_characteristics, 2); - infer(matrix_coefficients, 2); - 
} - } else { - infer(video_format, 5); - infer(video_full_range_flag, 0); - infer(colour_primaries, 2); - infer(transfer_characteristics, 2); - infer(matrix_coefficients, 2); - } - - flag(chroma_loc_info_present_flag); - if (current->chroma_loc_info_present_flag) { - ue(chroma_sample_loc_type_top_field, 0, 5); - ue(chroma_sample_loc_type_bottom_field, 0, 5); - } else { - infer(chroma_sample_loc_type_top_field, 0); - infer(chroma_sample_loc_type_bottom_field, 0); - } - - flag(neutral_chroma_indication_flag); - flag(field_seq_flag); - flag(frame_field_info_present_flag); - - flag(default_display_window_flag); - if (current->default_display_window_flag) { - ue(def_disp_win_left_offset, 0, 16384); - ue(def_disp_win_right_offset, 0, 16384); - ue(def_disp_win_top_offset, 0, 16384); - ue(def_disp_win_bottom_offset, 0, 16384); - } - - flag(vui_timing_info_present_flag); - if (current->vui_timing_info_present_flag) { - u(32, vui_num_units_in_tick, 1, UINT32_MAX); - u(32, vui_time_scale, 1, UINT32_MAX); - flag(vui_poc_proportional_to_timing_flag); - if (current->vui_poc_proportional_to_timing_flag) - ue(vui_num_ticks_poc_diff_one_minus1, 0, UINT32_MAX - 1); - - flag(vui_hrd_parameters_present_flag); - if (current->vui_hrd_parameters_present_flag) { - CHECK(FUNC(hrd_parameters)(ctx, rw, ¤t->hrd_parameters, - 1, sps->sps_max_sub_layers_minus1)); - } - } - - flag(bitstream_restriction_flag); - if (current->bitstream_restriction_flag) { - flag(tiles_fixed_structure_flag); - flag(motion_vectors_over_pic_boundaries_flag); - flag(restricted_ref_pic_lists_flag); - ue(min_spatial_segmentation_idc, 0, 4095); - ue(max_bytes_per_pic_denom, 0, 16); - ue(max_bits_per_min_cu_denom, 0, 16); - ue(log2_max_mv_length_horizontal, 0, 16); - ue(log2_max_mv_length_vertical, 0, 16); - } else { - infer(tiles_fixed_structure_flag, 0); - infer(motion_vectors_over_pic_boundaries_flag, 1); - infer(min_spatial_segmentation_idc, 0); - infer(max_bytes_per_pic_denom, 2); - infer(max_bits_per_min_cu_denom, 1); - infer(log2_max_mv_length_horizontal, 15); - infer(log2_max_mv_length_vertical, 15); - } - - return 0; -} - -static int FUNC(vps)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawVPS *current) -{ - int err, i, j; - - HEADER("Video Parameter Set"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_VPS)); - - ub(4, vps_video_parameter_set_id); - - flag(vps_base_layer_internal_flag); - flag(vps_base_layer_available_flag); - u(6, vps_max_layers_minus1, 0, HEVC_MAX_LAYERS - 1); - u(3, vps_max_sub_layers_minus1, 0, HEVC_MAX_SUB_LAYERS - 1); - flag(vps_temporal_id_nesting_flag); - - if (current->vps_max_sub_layers_minus1 == 0 && - current->vps_temporal_id_nesting_flag != 1) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: " - "vps_temporal_id_nesting_flag must be 1 if " - "vps_max_sub_layers_minus1 is 0.\n"); - return AVERROR_INVALIDDATA; - } - - fixed(16, vps_reserved_0xffff_16bits, 0xffff); - - CHECK(FUNC(profile_tier_level)(ctx, rw, ¤t->profile_tier_level, - 1, current->vps_max_sub_layers_minus1)); - - flag(vps_sub_layer_ordering_info_present_flag); - for (i = (current->vps_sub_layer_ordering_info_present_flag ? 
- 0 : current->vps_max_sub_layers_minus1); - i <= current->vps_max_sub_layers_minus1; i++) { - ues(vps_max_dec_pic_buffering_minus1[i], - 0, HEVC_MAX_DPB_SIZE - 1, 1, i); - ues(vps_max_num_reorder_pics[i], - 0, current->vps_max_dec_pic_buffering_minus1[i], 1, i); - ues(vps_max_latency_increase_plus1[i], - 0, UINT32_MAX - 1, 1, i); - } - if (!current->vps_sub_layer_ordering_info_present_flag) { - for (i = 0; i < current->vps_max_sub_layers_minus1; i++) { - infer(vps_max_dec_pic_buffering_minus1[i], - current->vps_max_dec_pic_buffering_minus1[current->vps_max_sub_layers_minus1]); - infer(vps_max_num_reorder_pics[i], - current->vps_max_num_reorder_pics[current->vps_max_sub_layers_minus1]); - infer(vps_max_latency_increase_plus1[i], - current->vps_max_latency_increase_plus1[current->vps_max_sub_layers_minus1]); - } - } - - u(6, vps_max_layer_id, 0, HEVC_MAX_LAYERS - 1); - ue(vps_num_layer_sets_minus1, 0, HEVC_MAX_LAYER_SETS - 1); - for (i = 1; i <= current->vps_num_layer_sets_minus1; i++) { - for (j = 0; j <= current->vps_max_layer_id; j++) - flags(layer_id_included_flag[i][j], 2, i, j); - } - for (j = 0; j <= current->vps_max_layer_id; j++) - infer(layer_id_included_flag[0][j], j == 0); - - flag(vps_timing_info_present_flag); - if (current->vps_timing_info_present_flag) { - u(32, vps_num_units_in_tick, 1, UINT32_MAX); - u(32, vps_time_scale, 1, UINT32_MAX); - flag(vps_poc_proportional_to_timing_flag); - if (current->vps_poc_proportional_to_timing_flag) - ue(vps_num_ticks_poc_diff_one_minus1, 0, UINT32_MAX - 1); - ue(vps_num_hrd_parameters, 0, current->vps_num_layer_sets_minus1 + 1); - for (i = 0; i < current->vps_num_hrd_parameters; i++) { - ues(hrd_layer_set_idx[i], - current->vps_base_layer_internal_flag ? 0 : 1, - current->vps_num_layer_sets_minus1, 1, i); - if (i > 0) - flags(cprms_present_flag[i], 1, i); - else - infer(cprms_present_flag[0], 1); - - CHECK(FUNC(hrd_parameters)(ctx, rw, ¤t->hrd_parameters[i], - current->cprms_present_flag[i], - current->vps_max_sub_layers_minus1)); - } - } - - flag(vps_extension_flag); - if (current->vps_extension_flag) - CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data)); - - CHECK(FUNC(rbsp_trailing_bits)(ctx, rw)); - - return 0; -} - -static int FUNC(st_ref_pic_set)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSTRefPicSet *current, int st_rps_idx, - const H265RawSPS *sps) -{ - int err, i, j; - - if (st_rps_idx != 0) - flag(inter_ref_pic_set_prediction_flag); - else - infer(inter_ref_pic_set_prediction_flag, 0); - - if (current->inter_ref_pic_set_prediction_flag) { - unsigned int ref_rps_idx, num_delta_pocs, num_ref_pics; - const H265RawSTRefPicSet *ref; - int delta_rps, d_poc; - int ref_delta_poc_s0[HEVC_MAX_REFS], ref_delta_poc_s1[HEVC_MAX_REFS]; - int delta_poc_s0[HEVC_MAX_REFS], delta_poc_s1[HEVC_MAX_REFS]; - uint8_t used_by_curr_pic_s0[HEVC_MAX_REFS], - used_by_curr_pic_s1[HEVC_MAX_REFS]; - - if (st_rps_idx == sps->num_short_term_ref_pic_sets) - ue(delta_idx_minus1, 0, st_rps_idx - 1); - else - infer(delta_idx_minus1, 0); - - ref_rps_idx = st_rps_idx - (current->delta_idx_minus1 + 1); - ref = &sps->st_ref_pic_set[ref_rps_idx]; - num_delta_pocs = ref->num_negative_pics + ref->num_positive_pics; - av_assert0(num_delta_pocs < HEVC_MAX_DPB_SIZE); - - flag(delta_rps_sign); - ue(abs_delta_rps_minus1, 0, INT16_MAX); - delta_rps = (1 - 2 * current->delta_rps_sign) * - (current->abs_delta_rps_minus1 + 1); - - num_ref_pics = 0; - for (j = 0; j <= num_delta_pocs; j++) { - flags(used_by_curr_pic_flag[j], 1, j); - if 
(!current->used_by_curr_pic_flag[j]) - flags(use_delta_flag[j], 1, j); - else - infer(use_delta_flag[j], 1); - if (current->use_delta_flag[j]) - ++num_ref_pics; - } - if (num_ref_pics >= HEVC_MAX_DPB_SIZE) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: " - "short-term ref pic set %d " - "contains too many pictures.\n", st_rps_idx); - return AVERROR_INVALIDDATA; - } - - // Since the stored form of an RPS here is actually the delta-step - // form used when inter_ref_pic_set_prediction_flag is not set, we - // need to reconstruct that here in order to be able to refer to - // the RPS later (which is required for parsing, because we don't - // even know what syntax elements appear without it). Therefore, - // this code takes the delta-step form of the reference set, turns - // it into the delta-array form, applies the prediction process of - // 7.4.8, converts the result back to the delta-step form, and - // stores that as the current set for future use. Note that the - // inferences here mean that writers using prediction will need - // to fill in the delta-step values correctly as well - since the - // whole RPS prediction process is somewhat overly sophisticated, - // this hopefully forms a useful check for them to ensure their - // predicted form actually matches what was intended rather than - // an onerous additional requirement. - - d_poc = 0; - for (i = 0; i < ref->num_negative_pics; i++) { - d_poc -= ref->delta_poc_s0_minus1[i] + 1; - ref_delta_poc_s0[i] = d_poc; - } - d_poc = 0; - for (i = 0; i < ref->num_positive_pics; i++) { - d_poc += ref->delta_poc_s1_minus1[i] + 1; - ref_delta_poc_s1[i] = d_poc; - } - - i = 0; - for (j = ref->num_positive_pics - 1; j >= 0; j--) { - d_poc = ref_delta_poc_s1[j] + delta_rps; - if (d_poc < 0 && current->use_delta_flag[ref->num_negative_pics + j]) { - delta_poc_s0[i] = d_poc; - used_by_curr_pic_s0[i++] = - current->used_by_curr_pic_flag[ref->num_negative_pics + j]; - } - } - if (delta_rps < 0 && current->use_delta_flag[num_delta_pocs]) { - delta_poc_s0[i] = delta_rps; - used_by_curr_pic_s0[i++] = - current->used_by_curr_pic_flag[num_delta_pocs]; - } - for (j = 0; j < ref->num_negative_pics; j++) { - d_poc = ref_delta_poc_s0[j] + delta_rps; - if (d_poc < 0 && current->use_delta_flag[j]) { - delta_poc_s0[i] = d_poc; - used_by_curr_pic_s0[i++] = current->used_by_curr_pic_flag[j]; - } - } - - infer(num_negative_pics, i); - for (i = 0; i < current->num_negative_pics; i++) { - infer(delta_poc_s0_minus1[i], - -(delta_poc_s0[i] - (i == 0 ? 0 : delta_poc_s0[i - 1])) - 1); - infer(used_by_curr_pic_s0_flag[i], used_by_curr_pic_s0[i]); - } - - i = 0; - for (j = ref->num_negative_pics - 1; j >= 0; j--) { - d_poc = ref_delta_poc_s0[j] + delta_rps; - if (d_poc > 0 && current->use_delta_flag[j]) { - delta_poc_s1[i] = d_poc; - used_by_curr_pic_s1[i++] = current->used_by_curr_pic_flag[j]; - } - } - if (delta_rps > 0 && current->use_delta_flag[num_delta_pocs]) { - delta_poc_s1[i] = delta_rps; - used_by_curr_pic_s1[i++] = - current->used_by_curr_pic_flag[num_delta_pocs]; - } - for (j = 0; j < ref->num_positive_pics; j++) { - d_poc = ref_delta_poc_s1[j] + delta_rps; - if (d_poc > 0 && current->use_delta_flag[ref->num_negative_pics + j]) { - delta_poc_s1[i] = d_poc; - used_by_curr_pic_s1[i++] = - current->used_by_curr_pic_flag[ref->num_negative_pics + j]; - } - } - - infer(num_positive_pics, i); - for (i = 0; i < current->num_positive_pics; i++) { - infer(delta_poc_s1_minus1[i], - delta_poc_s1[i] - (i == 0 ? 
0 : delta_poc_s1[i - 1]) - 1); - infer(used_by_curr_pic_s1_flag[i], used_by_curr_pic_s1[i]); - } - - } else { - ue(num_negative_pics, 0, 15); - ue(num_positive_pics, 0, 15 - current->num_negative_pics); - - for (i = 0; i < current->num_negative_pics; i++) { - ues(delta_poc_s0_minus1[i], 0, INT16_MAX, 1, i); - flags(used_by_curr_pic_s0_flag[i], 1, i); - } - - for (i = 0; i < current->num_positive_pics; i++) { - ues(delta_poc_s1_minus1[i], 0, INT16_MAX, 1, i); - flags(used_by_curr_pic_s1_flag[i], 1, i); - } - } - - return 0; -} - -static int FUNC(scaling_list_data)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawScalingList *current) -{ - int sizeId, matrixId; - int err, n, i; - - for (sizeId = 0; sizeId < 4; sizeId++) { - for (matrixId = 0; matrixId < 6; matrixId += (sizeId == 3 ? 3 : 1)) { - flags(scaling_list_pred_mode_flag[sizeId][matrixId], - 2, sizeId, matrixId); - if (!current->scaling_list_pred_mode_flag[sizeId][matrixId]) { - ues(scaling_list_pred_matrix_id_delta[sizeId][matrixId], - 0, sizeId == 3 ? matrixId / 3 : matrixId, - 2, sizeId, matrixId); - } else { - n = FFMIN(64, 1 << (4 + (sizeId << 1))); - if (sizeId > 1) { - ses(scaling_list_dc_coef_minus8[sizeId - 2][matrixId], -7, +247, - 2, sizeId - 2, matrixId); - } - for (i = 0; i < n; i++) { - ses(scaling_list_delta_coeff[sizeId][matrixId][i], - -128, +127, 3, sizeId, matrixId, i); - } - } - } - } - - return 0; -} - -static int FUNC(sps_range_extension)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSPS *current) -{ - int err; - - flag(transform_skip_rotation_enabled_flag); - flag(transform_skip_context_enabled_flag); - flag(implicit_rdpcm_enabled_flag); - flag(explicit_rdpcm_enabled_flag); - flag(extended_precision_processing_flag); - flag(intra_smoothing_disabled_flag); - flag(high_precision_offsets_enabled_flag); - flag(persistent_rice_adaptation_enabled_flag); - flag(cabac_bypass_alignment_enabled_flag); - - return 0; -} - -static int FUNC(sps_scc_extension)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSPS *current) -{ - int err, comp, i; - - flag(sps_curr_pic_ref_enabled_flag); - - flag(palette_mode_enabled_flag); - if (current->palette_mode_enabled_flag) { - ue(palette_max_size, 0, 64); - ue(delta_palette_max_predictor_size, 0, 128); - - flag(sps_palette_predictor_initializer_present_flag); - if (current->sps_palette_predictor_initializer_present_flag) { - ue(sps_num_palette_predictor_initializer_minus1, 0, 127); - for (comp = 0; comp < (current->chroma_format_idc ? 3 : 1); comp++) { - int bit_depth = comp == 0 ? 
current->bit_depth_luma_minus8 + 8 - : current->bit_depth_chroma_minus8 + 8; - for (i = 0; i <= current->sps_num_palette_predictor_initializer_minus1; i++) - ubs(bit_depth, sps_palette_predictor_initializers[comp][i], 2, comp, i); - } - } - } - - u(2, motion_vector_resolution_control_idc, 0, 2); - flag(intra_boundary_filtering_disable_flag); - - return 0; -} - -static int FUNC(vui_parameters_default)(CodedBitstreamContext *ctx, - RWContext *rw, H265RawVUI *current, - H265RawSPS *sps) -{ - infer(aspect_ratio_idc, 0); - - infer(video_format, 5); - infer(video_full_range_flag, 0); - infer(colour_primaries, 2); - infer(transfer_characteristics, 2); - infer(matrix_coefficients, 2); - - infer(chroma_sample_loc_type_top_field, 0); - infer(chroma_sample_loc_type_bottom_field, 0); - - infer(tiles_fixed_structure_flag, 0); - infer(motion_vectors_over_pic_boundaries_flag, 1); - infer(min_spatial_segmentation_idc, 0); - infer(max_bytes_per_pic_denom, 2); - infer(max_bits_per_min_cu_denom, 1); - infer(log2_max_mv_length_horizontal, 15); - infer(log2_max_mv_length_vertical, 15); - - return 0; -} - -static int FUNC(sps)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSPS *current) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawVPS *vps; - int err, i; - unsigned int min_cb_log2_size_y, ctb_log2_size_y, - min_cb_size_y, min_tb_log2_size_y; - - HEADER("Sequence Parameter Set"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_SPS)); - - ub(4, sps_video_parameter_set_id); - h265->active_vps = vps = h265->vps[current->sps_video_parameter_set_id]; - - u(3, sps_max_sub_layers_minus1, 0, HEVC_MAX_SUB_LAYERS - 1); - flag(sps_temporal_id_nesting_flag); - if (vps) { - if (vps->vps_max_sub_layers_minus1 > current->sps_max_sub_layers_minus1) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: " - "sps_max_sub_layers_minus1 (%d) must be less than or equal to " - "vps_max_sub_layers_minus1 (%d).\n", - vps->vps_max_sub_layers_minus1, - current->sps_max_sub_layers_minus1); - return AVERROR_INVALIDDATA; - } - if (vps->vps_temporal_id_nesting_flag && - !current->sps_temporal_id_nesting_flag) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: " - "sps_temporal_id_nesting_flag must be 1 if " - "vps_temporal_id_nesting_flag is 1.\n"); - return AVERROR_INVALIDDATA; - } - } - - CHECK(FUNC(profile_tier_level)(ctx, rw, ¤t->profile_tier_level, - 1, current->sps_max_sub_layers_minus1)); - - ue(sps_seq_parameter_set_id, 0, 15); - - ue(chroma_format_idc, 0, 3); - if (current->chroma_format_idc == 3) - flag(separate_colour_plane_flag); - else - infer(separate_colour_plane_flag, 0); - - ue(pic_width_in_luma_samples, 1, HEVC_MAX_WIDTH); - ue(pic_height_in_luma_samples, 1, HEVC_MAX_HEIGHT); - - flag(conformance_window_flag); - if (current->conformance_window_flag) { - ue(conf_win_left_offset, 0, current->pic_width_in_luma_samples); - ue(conf_win_right_offset, 0, current->pic_width_in_luma_samples); - ue(conf_win_top_offset, 0, current->pic_height_in_luma_samples); - ue(conf_win_bottom_offset, 0, current->pic_height_in_luma_samples); - } else { - infer(conf_win_left_offset, 0); - infer(conf_win_right_offset, 0); - infer(conf_win_top_offset, 0); - infer(conf_win_bottom_offset, 0); - } - - ue(bit_depth_luma_minus8, 0, 8); - ue(bit_depth_chroma_minus8, 0, 8); - - ue(log2_max_pic_order_cnt_lsb_minus4, 0, 12); - - flag(sps_sub_layer_ordering_info_present_flag); - for (i = (current->sps_sub_layer_ordering_info_present_flag ? 
- 0 : current->sps_max_sub_layers_minus1); - i <= current->sps_max_sub_layers_minus1; i++) { - ues(sps_max_dec_pic_buffering_minus1[i], - 0, HEVC_MAX_DPB_SIZE - 1, 1, i); - ues(sps_max_num_reorder_pics[i], - 0, current->sps_max_dec_pic_buffering_minus1[i], 1, i); - ues(sps_max_latency_increase_plus1[i], - 0, UINT32_MAX - 1, 1, i); - } - if (!current->sps_sub_layer_ordering_info_present_flag) { - for (i = 0; i < current->sps_max_sub_layers_minus1; i++) { - infer(sps_max_dec_pic_buffering_minus1[i], - current->sps_max_dec_pic_buffering_minus1[current->sps_max_sub_layers_minus1]); - infer(sps_max_num_reorder_pics[i], - current->sps_max_num_reorder_pics[current->sps_max_sub_layers_minus1]); - infer(sps_max_latency_increase_plus1[i], - current->sps_max_latency_increase_plus1[current->sps_max_sub_layers_minus1]); - } - } - - ue(log2_min_luma_coding_block_size_minus3, 0, 3); - min_cb_log2_size_y = current->log2_min_luma_coding_block_size_minus3 + 3; - - ue(log2_diff_max_min_luma_coding_block_size, 0, 3); - ctb_log2_size_y = min_cb_log2_size_y + - current->log2_diff_max_min_luma_coding_block_size; - - min_cb_size_y = 1 << min_cb_log2_size_y; - if (current->pic_width_in_luma_samples % min_cb_size_y || - current->pic_height_in_luma_samples % min_cb_size_y) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid dimensions: %ux%u not divisible " - "by MinCbSizeY = %u.\n", current->pic_width_in_luma_samples, - current->pic_height_in_luma_samples, min_cb_size_y); - return AVERROR_INVALIDDATA; - } - - ue(log2_min_luma_transform_block_size_minus2, 0, min_cb_log2_size_y - 3); - min_tb_log2_size_y = current->log2_min_luma_transform_block_size_minus2 + 2; - - ue(log2_diff_max_min_luma_transform_block_size, - 0, FFMIN(ctb_log2_size_y, 5) - min_tb_log2_size_y); - - ue(max_transform_hierarchy_depth_inter, - 0, ctb_log2_size_y - min_tb_log2_size_y); - ue(max_transform_hierarchy_depth_intra, - 0, ctb_log2_size_y - min_tb_log2_size_y); - - flag(scaling_list_enabled_flag); - if (current->scaling_list_enabled_flag) { - flag(sps_scaling_list_data_present_flag); - if (current->sps_scaling_list_data_present_flag) - CHECK(FUNC(scaling_list_data)(ctx, rw, ¤t->scaling_list)); - } else { - infer(sps_scaling_list_data_present_flag, 0); - } - - flag(amp_enabled_flag); - flag(sample_adaptive_offset_enabled_flag); - - flag(pcm_enabled_flag); - if (current->pcm_enabled_flag) { - u(4, pcm_sample_bit_depth_luma_minus1, - 0, current->bit_depth_luma_minus8 + 8 - 1); - u(4, pcm_sample_bit_depth_chroma_minus1, - 0, current->bit_depth_chroma_minus8 + 8 - 1); - - ue(log2_min_pcm_luma_coding_block_size_minus3, - FFMIN(min_cb_log2_size_y, 5) - 3, FFMIN(ctb_log2_size_y, 5) - 3); - ue(log2_diff_max_min_pcm_luma_coding_block_size, - 0, FFMIN(ctb_log2_size_y, 5) - (current->log2_min_pcm_luma_coding_block_size_minus3 + 3)); - - flag(pcm_loop_filter_disabled_flag); - } - - ue(num_short_term_ref_pic_sets, 0, HEVC_MAX_SHORT_TERM_REF_PIC_SETS); - for (i = 0; i < current->num_short_term_ref_pic_sets; i++) - CHECK(FUNC(st_ref_pic_set)(ctx, rw, ¤t->st_ref_pic_set[i], i, current)); - - flag(long_term_ref_pics_present_flag); - if (current->long_term_ref_pics_present_flag) { - ue(num_long_term_ref_pics_sps, 0, HEVC_MAX_LONG_TERM_REF_PICS); - for (i = 0; i < current->num_long_term_ref_pics_sps; i++) { - ubs(current->log2_max_pic_order_cnt_lsb_minus4 + 4, - lt_ref_pic_poc_lsb_sps[i], 1, i); - flags(used_by_curr_pic_lt_sps_flag[i], 1, i); - } - } - - flag(sps_temporal_mvp_enabled_flag); - flag(strong_intra_smoothing_enabled_flag); - - 
flag(vui_parameters_present_flag); - if (current->vui_parameters_present_flag) - CHECK(FUNC(vui_parameters)(ctx, rw, ¤t->vui, current)); - else - CHECK(FUNC(vui_parameters_default)(ctx, rw, ¤t->vui, current)); - - flag(sps_extension_present_flag); - if (current->sps_extension_present_flag) { - flag(sps_range_extension_flag); - flag(sps_multilayer_extension_flag); - flag(sps_3d_extension_flag); - flag(sps_scc_extension_flag); - ub(4, sps_extension_4bits); - } - - if (current->sps_range_extension_flag) - CHECK(FUNC(sps_range_extension)(ctx, rw, current)); - if (current->sps_multilayer_extension_flag) - return AVERROR_PATCHWELCOME; - if (current->sps_3d_extension_flag) - return AVERROR_PATCHWELCOME; - if (current->sps_scc_extension_flag) - CHECK(FUNC(sps_scc_extension)(ctx, rw, current)); - if (current->sps_extension_4bits) - CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data)); - - CHECK(FUNC(rbsp_trailing_bits)(ctx, rw)); - - return 0; -} - -static int FUNC(pps_range_extension)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawPPS *current) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps = h265->active_sps; - int err, i; - - if (current->transform_skip_enabled_flag) - ue(log2_max_transform_skip_block_size_minus2, 0, 3); - flag(cross_component_prediction_enabled_flag); - - flag(chroma_qp_offset_list_enabled_flag); - if (current->chroma_qp_offset_list_enabled_flag) { - ue(diff_cu_chroma_qp_offset_depth, - 0, sps->log2_diff_max_min_luma_coding_block_size); - ue(chroma_qp_offset_list_len_minus1, 0, 5); - for (i = 0; i <= current->chroma_qp_offset_list_len_minus1; i++) { - ses(cb_qp_offset_list[i], -12, +12, 1, i); - ses(cr_qp_offset_list[i], -12, +12, 1, i); - } - } - - ue(log2_sao_offset_scale_luma, 0, FFMAX(0, sps->bit_depth_luma_minus8 - 2)); - ue(log2_sao_offset_scale_chroma, 0, FFMAX(0, sps->bit_depth_chroma_minus8 - 2)); - - return 0; -} - -static int FUNC(pps_scc_extension)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawPPS *current) -{ - int err, comp, i; - - flag(pps_curr_pic_ref_enabled_flag); - - flag(residual_adaptive_colour_transform_enabled_flag); - if (current->residual_adaptive_colour_transform_enabled_flag) { - flag(pps_slice_act_qp_offsets_present_flag); - se(pps_act_y_qp_offset_plus5, -7, +17); - se(pps_act_cb_qp_offset_plus5, -7, +17); - se(pps_act_cr_qp_offset_plus3, -9, +15); - } else { - infer(pps_slice_act_qp_offsets_present_flag, 0); - infer(pps_act_y_qp_offset_plus5, 0); - infer(pps_act_cb_qp_offset_plus5, 0); - infer(pps_act_cr_qp_offset_plus3, 0); - } - - flag(pps_palette_predictor_initializer_present_flag); - if (current->pps_palette_predictor_initializer_present_flag) { - ue(pps_num_palette_predictor_initializer, 0, 128); - if (current->pps_num_palette_predictor_initializer > 0) { - flag(monochrome_palette_flag); - ue(luma_bit_depth_entry_minus8, 0, 8); - if (!current->monochrome_palette_flag) - ue(chroma_bit_depth_entry_minus8, 0, 8); - for (comp = 0; comp < (current->monochrome_palette_flag ? 1 : 3); comp++) { - int bit_depth = comp == 0 ? 
current->luma_bit_depth_entry_minus8 + 8 - : current->chroma_bit_depth_entry_minus8 + 8; - for (i = 0; i < current->pps_num_palette_predictor_initializer; i++) - ubs(bit_depth, pps_palette_predictor_initializers[comp][i], 2, comp, i); - } - } - } - - return 0; -} - -static int FUNC(pps)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawPPS *current) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps; - int err, i; - - HEADER("Picture Parameter Set"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_PPS)); - - ue(pps_pic_parameter_set_id, 0, 63); - ue(pps_seq_parameter_set_id, 0, 15); - sps = h265->sps[current->pps_seq_parameter_set_id]; - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n", - current->pps_seq_parameter_set_id); - return AVERROR_INVALIDDATA; - } - h265->active_sps = sps; - - flag(dependent_slice_segments_enabled_flag); - flag(output_flag_present_flag); - ub(3, num_extra_slice_header_bits); - flag(sign_data_hiding_enabled_flag); - flag(cabac_init_present_flag); - - ue(num_ref_idx_l0_default_active_minus1, 0, 14); - ue(num_ref_idx_l1_default_active_minus1, 0, 14); - - se(init_qp_minus26, -(26 + 6 * sps->bit_depth_luma_minus8), +25); - - flag(constrained_intra_pred_flag); - flag(transform_skip_enabled_flag); - flag(cu_qp_delta_enabled_flag); - if (current->cu_qp_delta_enabled_flag) - ue(diff_cu_qp_delta_depth, - 0, sps->log2_diff_max_min_luma_coding_block_size); - else - infer(diff_cu_qp_delta_depth, 0); - - se(pps_cb_qp_offset, -12, +12); - se(pps_cr_qp_offset, -12, +12); - flag(pps_slice_chroma_qp_offsets_present_flag); - - flag(weighted_pred_flag); - flag(weighted_bipred_flag); - - flag(transquant_bypass_enabled_flag); - flag(tiles_enabled_flag); - flag(entropy_coding_sync_enabled_flag); - - if (current->tiles_enabled_flag) { - ue(num_tile_columns_minus1, 0, HEVC_MAX_TILE_COLUMNS); - ue(num_tile_rows_minus1, 0, HEVC_MAX_TILE_ROWS); - flag(uniform_spacing_flag); - if (!current->uniform_spacing_flag) { - for (i = 0; i < current->num_tile_columns_minus1; i++) - ues(column_width_minus1[i], 0, sps->pic_width_in_luma_samples, 1, i); - for (i = 0; i < current->num_tile_rows_minus1; i++) - ues(row_height_minus1[i], 0, sps->pic_height_in_luma_samples, 1, i); - } - flag(loop_filter_across_tiles_enabled_flag); - } else { - infer(num_tile_columns_minus1, 0); - infer(num_tile_rows_minus1, 0); - } - - flag(pps_loop_filter_across_slices_enabled_flag); - flag(deblocking_filter_control_present_flag); - if (current->deblocking_filter_control_present_flag) { - flag(deblocking_filter_override_enabled_flag); - flag(pps_deblocking_filter_disabled_flag); - if (!current->pps_deblocking_filter_disabled_flag) { - se(pps_beta_offset_div2, -6, +6); - se(pps_tc_offset_div2, -6, +6); - } else { - infer(pps_beta_offset_div2, 0); - infer(pps_tc_offset_div2, 0); - } - } else { - infer(deblocking_filter_override_enabled_flag, 0); - infer(pps_deblocking_filter_disabled_flag, 0); - infer(pps_beta_offset_div2, 0); - infer(pps_tc_offset_div2, 0); - } - - flag(pps_scaling_list_data_present_flag); - if (current->pps_scaling_list_data_present_flag) - CHECK(FUNC(scaling_list_data)(ctx, rw, ¤t->scaling_list)); - - flag(lists_modification_present_flag); - - ue(log2_parallel_merge_level_minus2, - 0, (sps->log2_min_luma_coding_block_size_minus3 + 3 + - sps->log2_diff_max_min_luma_coding_block_size - 2)); - - flag(slice_segment_header_extension_present_flag); - - flag(pps_extension_present_flag); - if (current->pps_extension_present_flag) { - 
flag(pps_range_extension_flag); - flag(pps_multilayer_extension_flag); - flag(pps_3d_extension_flag); - flag(pps_scc_extension_flag); - ub(4, pps_extension_4bits); - } - if (current->pps_range_extension_flag) - CHECK(FUNC(pps_range_extension)(ctx, rw, current)); - if (current->pps_multilayer_extension_flag) - return AVERROR_PATCHWELCOME; - if (current->pps_3d_extension_flag) - return AVERROR_PATCHWELCOME; - if (current->pps_scc_extension_flag) - CHECK(FUNC(pps_scc_extension)(ctx, rw, current)); - if (current->pps_extension_4bits) - CHECK(FUNC(extension_data)(ctx, rw, ¤t->extension_data)); - - CHECK(FUNC(rbsp_trailing_bits)(ctx, rw)); - - return 0; -} - -static int FUNC(aud)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawAUD *current) -{ - int err; - - HEADER("Access Unit Delimiter"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, HEVC_NAL_AUD)); - - u(3, pic_type, 0, 2); - - CHECK(FUNC(rbsp_trailing_bits)(ctx, rw)); - - return 0; -} - -static int FUNC(ref_pic_lists_modification)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSliceHeader *current, - unsigned int num_pic_total_curr) -{ - unsigned int entry_size; - int err, i; - - entry_size = av_log2(num_pic_total_curr - 1) + 1; - - flag(ref_pic_list_modification_flag_l0); - if (current->ref_pic_list_modification_flag_l0) { - for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) - us(entry_size, list_entry_l0[i], 0, num_pic_total_curr - 1, 1, i); - } - - if (current->slice_type == HEVC_SLICE_B) { - flag(ref_pic_list_modification_flag_l1); - if (current->ref_pic_list_modification_flag_l1) { - for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) - us(entry_size, list_entry_l1[i], 0, num_pic_total_curr - 1, 1, i); - } - } - - return 0; -} - -static int FUNC(pred_weight_table)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSliceHeader *current) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps = h265->active_sps; - int err, i, j; - int chroma = !sps->separate_colour_plane_flag && - sps->chroma_format_idc != 0; - - ue(luma_log2_weight_denom, 0, 7); - if (chroma) - se(delta_chroma_log2_weight_denom, -7, 7); - else - infer(delta_chroma_log2_weight_denom, 0); - - for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) { - if (1 /* is not same POC and same layer_id */) - flags(luma_weight_l0_flag[i], 1, i); - else - infer(luma_weight_l0_flag[i], 0); - } - if (chroma) { - for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) { - if (1 /* is not same POC and same layer_id */) - flags(chroma_weight_l0_flag[i], 1, i); - else - infer(chroma_weight_l0_flag[i], 0); - } - } - - for (i = 0; i <= current->num_ref_idx_l0_active_minus1; i++) { - if (current->luma_weight_l0_flag[i]) { - ses(delta_luma_weight_l0[i], -128, +127, 1, i); - ses(luma_offset_l0[i], - -(1 << (sps->bit_depth_luma_minus8 + 8 - 1)), - ((1 << (sps->bit_depth_luma_minus8 + 8 - 1)) - 1), 1, i); - } else { - infer(delta_luma_weight_l0[i], 0); - infer(luma_offset_l0[i], 0); - } - if (current->chroma_weight_l0_flag[i]) { - for (j = 0; j < 2; j++) { - ses(delta_chroma_weight_l0[i][j], -128, +127, 2, i, j); - ses(chroma_offset_l0[i][j], - -(4 << (sps->bit_depth_chroma_minus8 + 8 - 1)), - ((4 << (sps->bit_depth_chroma_minus8 + 8 - 1)) - 1), 2, i, j); - } - } else { - for (j = 0; j < 2; j++) { - infer(delta_chroma_weight_l0[i][j], 0); - infer(chroma_offset_l0[i][j], 0); - } - } - } - - if (current->slice_type == HEVC_SLICE_B) { - for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) { - if (1 /* 
RefPicList1[i] is not CurrPic, nor is it in a different layer */) - flags(luma_weight_l1_flag[i], 1, i); - else - infer(luma_weight_l1_flag[i], 0); - } - if (chroma) { - for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) { - if (1 /* RefPicList1[i] is not CurrPic, nor is it in a different layer */) - flags(chroma_weight_l1_flag[i], 1, i); - else - infer(chroma_weight_l1_flag[i], 0); - } - } - - for (i = 0; i <= current->num_ref_idx_l1_active_minus1; i++) { - if (current->luma_weight_l1_flag[i]) { - ses(delta_luma_weight_l1[i], -128, +127, 1, i); - ses(luma_offset_l1[i], - -(1 << (sps->bit_depth_luma_minus8 + 8 - 1)), - ((1 << (sps->bit_depth_luma_minus8 + 8 - 1)) - 1), 1, i); - } else { - infer(delta_luma_weight_l1[i], 0); - infer(luma_offset_l1[i], 0); - } - if (current->chroma_weight_l1_flag[i]) { - for (j = 0; j < 2; j++) { - ses(delta_chroma_weight_l1[i][j], -128, +127, 2, i, j); - ses(chroma_offset_l1[i][j], - -(4 << (sps->bit_depth_chroma_minus8 + 8 - 1)), - ((4 << (sps->bit_depth_chroma_minus8 + 8 - 1)) - 1), 2, i, j); - } - } else { - for (j = 0; j < 2; j++) { - infer(delta_chroma_weight_l1[i][j], 0); - infer(chroma_offset_l1[i][j], 0); - } - } - } - } - - return 0; -} - -static int FUNC(slice_segment_header)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSliceHeader *current) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps; - const H265RawPPS *pps; - unsigned int min_cb_log2_size_y, ctb_log2_size_y, ctb_size_y; - unsigned int pic_width_in_ctbs_y, pic_height_in_ctbs_y, pic_size_in_ctbs_y; - unsigned int num_pic_total_curr = 0; - int err, i; - - HEADER("Slice Segment Header"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, -1)); - - flag(first_slice_segment_in_pic_flag); - - if (current->nal_unit_header.nal_unit_type >= HEVC_NAL_BLA_W_LP && - current->nal_unit_header.nal_unit_type <= HEVC_NAL_RSV_IRAP_VCL23) - flag(no_output_of_prior_pics_flag); - - ue(slice_pic_parameter_set_id, 0, 63); - - pps = h265->pps[current->slice_pic_parameter_set_id]; - if (!pps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "PPS id %d not available.\n", - current->slice_pic_parameter_set_id); - return AVERROR_INVALIDDATA; - } - h265->active_pps = pps; - - sps = h265->sps[pps->pps_seq_parameter_set_id]; - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n", - pps->pps_seq_parameter_set_id); - return AVERROR_INVALIDDATA; - } - h265->active_sps = sps; - - min_cb_log2_size_y = sps->log2_min_luma_coding_block_size_minus3 + 3; - ctb_log2_size_y = min_cb_log2_size_y + sps->log2_diff_max_min_luma_coding_block_size; - ctb_size_y = 1 << ctb_log2_size_y; - pic_width_in_ctbs_y = - (sps->pic_width_in_luma_samples + ctb_size_y - 1) / ctb_size_y; - pic_height_in_ctbs_y = - (sps->pic_height_in_luma_samples + ctb_size_y - 1) / ctb_size_y; - pic_size_in_ctbs_y = pic_width_in_ctbs_y * pic_height_in_ctbs_y; - - if (!current->first_slice_segment_in_pic_flag) { - unsigned int address_size = av_log2(pic_size_in_ctbs_y - 1) + 1; - if (pps->dependent_slice_segments_enabled_flag) - flag(dependent_slice_segment_flag); - else - infer(dependent_slice_segment_flag, 0); - u(address_size, slice_segment_address, 0, pic_size_in_ctbs_y - 1); - } else { - infer(dependent_slice_segment_flag, 0); - } - - if (!current->dependent_slice_segment_flag) { - for (i = 0; i < pps->num_extra_slice_header_bits; i++) - flags(slice_reserved_flag[i], 1, i); - - ue(slice_type, 0, 2); - - if (pps->output_flag_present_flag) - flag(pic_output_flag); - - if 
(sps->separate_colour_plane_flag) - u(2, colour_plane_id, 0, 2); - - if (current->nal_unit_header.nal_unit_type != HEVC_NAL_IDR_W_RADL && - current->nal_unit_header.nal_unit_type != HEVC_NAL_IDR_N_LP) { - const H265RawSTRefPicSet *rps; - int dpb_slots_remaining; - - ub(sps->log2_max_pic_order_cnt_lsb_minus4 + 4, slice_pic_order_cnt_lsb); - - flag(short_term_ref_pic_set_sps_flag); - if (!current->short_term_ref_pic_set_sps_flag) { - CHECK(FUNC(st_ref_pic_set)(ctx, rw, ¤t->short_term_ref_pic_set, - sps->num_short_term_ref_pic_sets, sps)); - rps = ¤t->short_term_ref_pic_set; - } else if (sps->num_short_term_ref_pic_sets > 1) { - unsigned int idx_size = av_log2(sps->num_short_term_ref_pic_sets - 1) + 1; - u(idx_size, short_term_ref_pic_set_idx, - 0, sps->num_short_term_ref_pic_sets - 1); - rps = &sps->st_ref_pic_set[current->short_term_ref_pic_set_idx]; - } else { - infer(short_term_ref_pic_set_idx, 0); - rps = &sps->st_ref_pic_set[0]; - } - - dpb_slots_remaining = HEVC_MAX_DPB_SIZE - 1 - - rps->num_negative_pics - rps->num_positive_pics; - if (pps->pps_curr_pic_ref_enabled_flag && - (sps->sample_adaptive_offset_enabled_flag || - !pps->pps_deblocking_filter_disabled_flag || - pps->deblocking_filter_override_enabled_flag)) { - // This picture will occupy two DPB slots. - if (dpb_slots_remaining == 0) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid stream: " - "short-term ref pic set contains too many pictures " - "to use with current picture reference enabled.\n"); - return AVERROR_INVALIDDATA; - } - --dpb_slots_remaining; - } - - num_pic_total_curr = 0; - for (i = 0; i < rps->num_negative_pics; i++) - if (rps->used_by_curr_pic_s0_flag[i]) - ++num_pic_total_curr; - for (i = 0; i < rps->num_positive_pics; i++) - if (rps->used_by_curr_pic_s1_flag[i]) - ++num_pic_total_curr; - - if (sps->long_term_ref_pics_present_flag) { - unsigned int idx_size; - - if (sps->num_long_term_ref_pics_sps > 0) { - ue(num_long_term_sps, 0, FFMIN(sps->num_long_term_ref_pics_sps, - dpb_slots_remaining)); - idx_size = av_log2(sps->num_long_term_ref_pics_sps - 1) + 1; - dpb_slots_remaining -= current->num_long_term_sps; - } else { - infer(num_long_term_sps, 0); - idx_size = 0; - } - ue(num_long_term_pics, 0, dpb_slots_remaining); - - for (i = 0; i < current->num_long_term_sps + - current->num_long_term_pics; i++) { - if (i < current->num_long_term_sps) { - if (sps->num_long_term_ref_pics_sps > 1) - us(idx_size, lt_idx_sps[i], - 0, sps->num_long_term_ref_pics_sps - 1, 1, i); - if (sps->used_by_curr_pic_lt_sps_flag[current->lt_idx_sps[i]]) - ++num_pic_total_curr; - } else { - ubs(sps->log2_max_pic_order_cnt_lsb_minus4 + 4, poc_lsb_lt[i], 1, i); - flags(used_by_curr_pic_lt_flag[i], 1, i); - if (current->used_by_curr_pic_lt_flag[i]) - ++num_pic_total_curr; - } - flags(delta_poc_msb_present_flag[i], 1, i); - if (current->delta_poc_msb_present_flag[i]) - ues(delta_poc_msb_cycle_lt[i], 0, UINT32_MAX - 1, 1, i); - else - infer(delta_poc_msb_cycle_lt[i], 0); - } - } - - if (sps->sps_temporal_mvp_enabled_flag) - flag(slice_temporal_mvp_enabled_flag); - else - infer(slice_temporal_mvp_enabled_flag, 0); - - if (pps->pps_curr_pic_ref_enabled_flag) - ++num_pic_total_curr; - } - - if (sps->sample_adaptive_offset_enabled_flag) { - flag(slice_sao_luma_flag); - if (!sps->separate_colour_plane_flag && sps->chroma_format_idc != 0) - flag(slice_sao_chroma_flag); - else - infer(slice_sao_chroma_flag, 0); - } else { - infer(slice_sao_luma_flag, 0); - infer(slice_sao_chroma_flag, 0); - } - - if (current->slice_type == HEVC_SLICE_P || - 
current->slice_type == HEVC_SLICE_B) { - flag(num_ref_idx_active_override_flag); - if (current->num_ref_idx_active_override_flag) { - ue(num_ref_idx_l0_active_minus1, 0, 14); - if (current->slice_type == HEVC_SLICE_B) - ue(num_ref_idx_l1_active_minus1, 0, 14); - else - infer(num_ref_idx_l1_active_minus1, pps->num_ref_idx_l1_default_active_minus1); - } else { - infer(num_ref_idx_l0_active_minus1, pps->num_ref_idx_l0_default_active_minus1); - infer(num_ref_idx_l1_active_minus1, pps->num_ref_idx_l1_default_active_minus1); - } - - if (pps->lists_modification_present_flag && num_pic_total_curr > 1) - CHECK(FUNC(ref_pic_lists_modification)(ctx, rw, current, - num_pic_total_curr)); - - if (current->slice_type == HEVC_SLICE_B) - flag(mvd_l1_zero_flag); - if (pps->cabac_init_present_flag) - flag(cabac_init_flag); - else - infer(cabac_init_flag, 0); - if (current->slice_temporal_mvp_enabled_flag) { - if (current->slice_type == HEVC_SLICE_B) - flag(collocated_from_l0_flag); - else - infer(collocated_from_l0_flag, 1); - if (current->collocated_from_l0_flag) { - if (current->num_ref_idx_l0_active_minus1 > 0) - ue(collocated_ref_idx, 0, current->num_ref_idx_l0_active_minus1); - else - infer(collocated_ref_idx, 0); - } else { - if (current->num_ref_idx_l1_active_minus1 > 0) - ue(collocated_ref_idx, 0, current->num_ref_idx_l1_active_minus1); - else - infer(collocated_ref_idx, 0); - } - } - - if ((pps->weighted_pred_flag && current->slice_type == HEVC_SLICE_P) || - (pps->weighted_bipred_flag && current->slice_type == HEVC_SLICE_B)) - CHECK(FUNC(pred_weight_table)(ctx, rw, current)); - - ue(five_minus_max_num_merge_cand, 0, 4); - if (sps->motion_vector_resolution_control_idc == 2) - flag(use_integer_mv_flag); - else - infer(use_integer_mv_flag, sps->motion_vector_resolution_control_idc); - } - - se(slice_qp_delta, - - 6 * sps->bit_depth_luma_minus8 - (pps->init_qp_minus26 + 26), - + 51 - (pps->init_qp_minus26 + 26)); - if (pps->pps_slice_chroma_qp_offsets_present_flag) { - se(slice_cb_qp_offset, -12, +12); - se(slice_cr_qp_offset, -12, +12); - } else { - infer(slice_cb_qp_offset, 0); - infer(slice_cr_qp_offset, 0); - } - if (pps->pps_slice_act_qp_offsets_present_flag) { - se(slice_act_y_qp_offset, - -12 - (pps->pps_act_y_qp_offset_plus5 - 5), - +12 - (pps->pps_act_y_qp_offset_plus5 - 5)); - se(slice_act_cb_qp_offset, - -12 - (pps->pps_act_cb_qp_offset_plus5 - 5), - +12 - (pps->pps_act_cb_qp_offset_plus5 - 5)); - se(slice_act_cr_qp_offset, - -12 - (pps->pps_act_cr_qp_offset_plus3 - 3), - +12 - (pps->pps_act_cr_qp_offset_plus3 - 3)); - } else { - infer(slice_act_y_qp_offset, 0); - infer(slice_act_cb_qp_offset, 0); - infer(slice_act_cr_qp_offset, 0); - } - if (pps->chroma_qp_offset_list_enabled_flag) - flag(cu_chroma_qp_offset_enabled_flag); - else - infer(cu_chroma_qp_offset_enabled_flag, 0); - - if (pps->deblocking_filter_override_enabled_flag) - flag(deblocking_filter_override_flag); - else - infer(deblocking_filter_override_flag, 0); - if (current->deblocking_filter_override_flag) { - flag(slice_deblocking_filter_disabled_flag); - if (!current->slice_deblocking_filter_disabled_flag) { - se(slice_beta_offset_div2, -6, +6); - se(slice_tc_offset_div2, -6, +6); - } else { - infer(slice_beta_offset_div2, pps->pps_beta_offset_div2); - infer(slice_tc_offset_div2, pps->pps_tc_offset_div2); - } - } else { - infer(slice_deblocking_filter_disabled_flag, - pps->pps_deblocking_filter_disabled_flag); - infer(slice_beta_offset_div2, pps->pps_beta_offset_div2); - infer(slice_tc_offset_div2, pps->pps_tc_offset_div2); - } - 
if (pps->pps_loop_filter_across_slices_enabled_flag && - (current->slice_sao_luma_flag || current->slice_sao_chroma_flag || - !current->slice_deblocking_filter_disabled_flag)) - flag(slice_loop_filter_across_slices_enabled_flag); - else - infer(slice_loop_filter_across_slices_enabled_flag, - pps->pps_loop_filter_across_slices_enabled_flag); - } - - if (pps->tiles_enabled_flag || pps->entropy_coding_sync_enabled_flag) { - unsigned int num_entry_point_offsets_limit; - if (!pps->tiles_enabled_flag && pps->entropy_coding_sync_enabled_flag) - num_entry_point_offsets_limit = pic_height_in_ctbs_y - 1; - else if (pps->tiles_enabled_flag && !pps->entropy_coding_sync_enabled_flag) - num_entry_point_offsets_limit = - (pps->num_tile_columns_minus1 + 1) * (pps->num_tile_rows_minus1 + 1); - else - num_entry_point_offsets_limit = - (pps->num_tile_columns_minus1 + 1) * pic_height_in_ctbs_y - 1; - ue(num_entry_point_offsets, 0, num_entry_point_offsets_limit); - - if (current->num_entry_point_offsets > HEVC_MAX_ENTRY_POINT_OFFSETS) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Too many entry points: " - "%"PRIu16".\n", current->num_entry_point_offsets); - return AVERROR_PATCHWELCOME; - } - - if (current->num_entry_point_offsets > 0) { - ue(offset_len_minus1, 0, 31); - for (i = 0; i < current->num_entry_point_offsets; i++) - ubs(current->offset_len_minus1 + 1, entry_point_offset_minus1[i], 1, i); - } - } - - if (pps->slice_segment_header_extension_present_flag) { - ue(slice_segment_header_extension_length, 0, 256); - for (i = 0; i < current->slice_segment_header_extension_length; i++) - us(8, slice_segment_header_extension_data_byte[i], 0x00, 0xff, 1, i); - } - - CHECK(FUNC(byte_alignment)(ctx, rw)); - - return 0; -} - -static int FUNC(sei_buffering_period) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIBufferingPeriod *current, SEIMessageState *sei) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps; - const H265RawHRDParameters *hrd; - int err, i, length; - -#ifdef READ - int start_pos, end_pos; - start_pos = get_bits_count(rw); -#endif - - HEADER("Buffering Period"); - - ue(bp_seq_parameter_set_id, 0, HEVC_MAX_SPS_COUNT - 1); - - sps = h265->sps[current->bp_seq_parameter_set_id]; - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "SPS id %d not available.\n", - current->bp_seq_parameter_set_id); - return AVERROR_INVALIDDATA; - } - h265->active_sps = sps; - - if (!sps->vui_parameters_present_flag || - !sps->vui.vui_hrd_parameters_present_flag) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Buffering period SEI requires " - "HRD parameters to be present in SPS.\n"); - return AVERROR_INVALIDDATA; - } - hrd = &sps->vui.hrd_parameters; - if (!hrd->nal_hrd_parameters_present_flag && - !hrd->vcl_hrd_parameters_present_flag) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Buffering period SEI requires " - "NAL or VCL HRD parameters to be present.\n"); - return AVERROR_INVALIDDATA; - } - - if (!hrd->sub_pic_hrd_params_present_flag) - flag(irap_cpb_params_present_flag); - else - infer(irap_cpb_params_present_flag, 0); - if (current->irap_cpb_params_present_flag) { - length = hrd->au_cpb_removal_delay_length_minus1 + 1; - ub(length, cpb_delay_offset); - length = hrd->dpb_output_delay_length_minus1 + 1; - ub(length, dpb_delay_offset); - } else { - infer(cpb_delay_offset, 0); - infer(dpb_delay_offset, 0); - } - - flag(concatenation_flag); - - length = hrd->au_cpb_removal_delay_length_minus1 + 1; - ub(length, au_cpb_removal_delay_delta_minus1); - - if (hrd->nal_hrd_parameters_present_flag) { - for (i 
= 0; i <= hrd->cpb_cnt_minus1[0]; i++) { - length = hrd->initial_cpb_removal_delay_length_minus1 + 1; - - ubs(length, nal_initial_cpb_removal_delay[i], 1, i); - ubs(length, nal_initial_cpb_removal_offset[i], 1, i); - - if (hrd->sub_pic_hrd_params_present_flag || - current->irap_cpb_params_present_flag) { - ubs(length, nal_initial_alt_cpb_removal_delay[i], 1, i); - ubs(length, nal_initial_alt_cpb_removal_offset[i], 1, i); - } - } - } - if (hrd->vcl_hrd_parameters_present_flag) { - for (i = 0; i <= hrd->cpb_cnt_minus1[0]; i++) { - length = hrd->initial_cpb_removal_delay_length_minus1 + 1; - - ubs(length, vcl_initial_cpb_removal_delay[i], 1, i); - ubs(length, vcl_initial_cpb_removal_offset[i], 1, i); - - if (hrd->sub_pic_hrd_params_present_flag || - current->irap_cpb_params_present_flag) { - ubs(length, vcl_initial_alt_cpb_removal_delay[i], 1, i); - ubs(length, vcl_initial_alt_cpb_removal_offset[i], 1, i); - } - } - } - -#ifdef READ - end_pos = get_bits_count(rw); - if (cbs_h265_payload_extension_present(rw, sei->payload_size, - end_pos - start_pos)) - flag(use_alt_cpb_params_flag); - else - infer(use_alt_cpb_params_flag, 0); -#else - // If unknown extension data exists, then use_alt_cpb_params_flag is - // coded in the bitstream and must be written even if it's 0. - if (current->use_alt_cpb_params_flag || sei->extension_present) { - flag(use_alt_cpb_params_flag); - // Ensure this bit is not the last in the payload by making the - // more_data_in_payload() check evaluate to true, so it may not - // be mistaken as something else by decoders. - sei->extension_present = 1; - } -#endif - - return 0; -} - -static int FUNC(sei_pic_timing) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIPicTiming *current, SEIMessageState *sei) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps; - const H265RawHRDParameters *hrd; - int err, expected_source_scan_type, i, length; - - HEADER("Picture Timing"); - - sps = h265->active_sps; - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "No active SPS for pic_timing.\n"); - return AVERROR_INVALIDDATA; - } - - expected_source_scan_type = 2 - - 2 * sps->profile_tier_level.general_interlaced_source_flag - - sps->profile_tier_level.general_progressive_source_flag; - - if (sps->vui.frame_field_info_present_flag) { - u(4, pic_struct, 0, 12); - u(2, source_scan_type, - expected_source_scan_type >= 0 ? expected_source_scan_type : 0, - expected_source_scan_type >= 0 ? expected_source_scan_type : 2); - flag(duplicate_flag); - } else { - infer(pic_struct, 0); - infer(source_scan_type, - expected_source_scan_type >= 0 ? expected_source_scan_type : 2); - infer(duplicate_flag, 0); - } - - if (sps->vui_parameters_present_flag && - sps->vui.vui_hrd_parameters_present_flag) - hrd = &sps->vui.hrd_parameters; - else - hrd = NULL; - if (hrd && (hrd->nal_hrd_parameters_present_flag || - hrd->vcl_hrd_parameters_present_flag)) { - length = hrd->au_cpb_removal_delay_length_minus1 + 1; - ub(length, au_cpb_removal_delay_minus1); - - length = hrd->dpb_output_delay_length_minus1 + 1; - ub(length, pic_dpb_output_delay); - - if (hrd->sub_pic_hrd_params_present_flag) { - length = hrd->dpb_output_delay_du_length_minus1 + 1; - ub(length, pic_dpb_output_du_delay); - } - - if (hrd->sub_pic_hrd_params_present_flag && - hrd->sub_pic_cpb_params_in_pic_timing_sei_flag) { - // Each decoding unit must contain at least one slice segment. 
- ue(num_decoding_units_minus1, 0, HEVC_MAX_SLICE_SEGMENTS); - flag(du_common_cpb_removal_delay_flag); - - length = hrd->du_cpb_removal_delay_increment_length_minus1 + 1; - if (current->du_common_cpb_removal_delay_flag) - ub(length, du_common_cpb_removal_delay_increment_minus1); - - for (i = 0; i <= current->num_decoding_units_minus1; i++) { - ues(num_nalus_in_du_minus1[i], - 0, HEVC_MAX_SLICE_SEGMENTS, 1, i); - if (!current->du_common_cpb_removal_delay_flag && - i < current->num_decoding_units_minus1) - ubs(length, du_cpb_removal_delay_increment_minus1[i], 1, i); - } - } - } - - return 0; -} - -static int FUNC(sei_pan_scan_rect) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIPanScanRect *current, SEIMessageState *sei) -{ - int err, i; - - HEADER("Pan-Scan Rectangle"); - - ue(pan_scan_rect_id, 0, UINT32_MAX - 1); - flag(pan_scan_rect_cancel_flag); - - if (!current->pan_scan_rect_cancel_flag) { - ue(pan_scan_cnt_minus1, 0, 2); - - for (i = 0; i <= current->pan_scan_cnt_minus1; i++) { - ses(pan_scan_rect_left_offset[i], INT32_MIN + 1, INT32_MAX, 1, i); - ses(pan_scan_rect_right_offset[i], INT32_MIN + 1, INT32_MAX, 1, i); - ses(pan_scan_rect_top_offset[i], INT32_MIN + 1, INT32_MAX, 1, i); - ses(pan_scan_rect_bottom_offset[i], INT32_MIN + 1, INT32_MAX, 1, i); - } - - flag(pan_scan_rect_persistence_flag); - } - - return 0; -} - -static int FUNC(sei_recovery_point) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIRecoveryPoint *current, SEIMessageState *sei) -{ - int err; - - HEADER("Recovery Point"); - - se(recovery_poc_cnt, -32768, 32767); - - flag(exact_match_flag); - flag(broken_link_flag); - - return 0; -} - -static int FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawFilmGrainCharacteristics *current, - SEIMessageState *state) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps = h265->active_sps; - int err, c, i, j; - - HEADER("Film Grain Characteristics"); - - flag(film_grain_characteristics_cancel_flag); - if (!current->film_grain_characteristics_cancel_flag) { - int filmGrainBitDepth[3]; - - u(2, film_grain_model_id, 0, 1); - flag(separate_colour_description_present_flag); - if (current->separate_colour_description_present_flag) { - ub(3, film_grain_bit_depth_luma_minus8); - ub(3, film_grain_bit_depth_chroma_minus8); - flag(film_grain_full_range_flag); - ub(8, film_grain_colour_primaries); - ub(8, film_grain_transfer_characteristics); - ub(8, film_grain_matrix_coeffs); - } else { - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "No active SPS for film_grain_characteristics.\n"); - return AVERROR_INVALIDDATA; - } - infer(film_grain_bit_depth_luma_minus8, sps->bit_depth_luma_minus8); - infer(film_grain_bit_depth_chroma_minus8, sps->bit_depth_chroma_minus8); - infer(film_grain_full_range_flag, sps->vui.video_full_range_flag); - infer(film_grain_colour_primaries, sps->vui.colour_primaries); - infer(film_grain_transfer_characteristics, sps->vui.transfer_characteristics); - infer(film_grain_matrix_coeffs, sps->vui.matrix_coefficients); - } - - filmGrainBitDepth[0] = current->film_grain_bit_depth_luma_minus8 + 8; - filmGrainBitDepth[1] = - filmGrainBitDepth[2] = current->film_grain_bit_depth_chroma_minus8 + 8; - - u(2, blending_mode_id, 0, 1); - ub(4, log2_scale_factor); - for (c = 0; c < 3; c++) - flags(comp_model_present_flag[c], 1, c); - for (c = 0; c < 3; c++) { - if (current->comp_model_present_flag[c]) { - ubs(8, num_intensity_intervals_minus1[c], 1, c); - us(3, num_model_values_minus1[c], 0, 5, 1, 
c); - for (i = 0; i <= current->num_intensity_intervals_minus1[c]; i++) { - ubs(8, intensity_interval_lower_bound[c][i], 2, c, i); - ubs(8, intensity_interval_upper_bound[c][i], 2, c, i); - for (j = 0; j <= current->num_model_values_minus1[c]; j++) - ses(comp_model_value[c][i][j], 0 - current->film_grain_model_id * (1 << (filmGrainBitDepth[c] - 1)), - ((1 << filmGrainBitDepth[c]) - 1) - current->film_grain_model_id * (1 << (filmGrainBitDepth[c] - 1)), - 3, c, i, j); - } - } - } - flag(film_grain_characteristics_persistence_flag); - } - - return 0; -} - -static int FUNC(sei_display_orientation) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIDisplayOrientation *current, SEIMessageState *sei) -{ - int err; - - HEADER("Display Orientation"); - - flag(display_orientation_cancel_flag); - if (!current->display_orientation_cancel_flag) { - flag(hor_flip); - flag(ver_flip); - ub(16, anticlockwise_rotation); - flag(display_orientation_persistence_flag); - } - - return 0; -} - -static int FUNC(sei_active_parameter_sets) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIActiveParameterSets *current, SEIMessageState *sei) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawVPS *vps; - int err, i; - - HEADER("Active Parameter Sets"); - - u(4, active_video_parameter_set_id, 0, HEVC_MAX_VPS_COUNT); - vps = h265->vps[current->active_video_parameter_set_id]; - if (!vps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "VPS id %d not available for active " - "parameter sets.\n", current->active_video_parameter_set_id); - return AVERROR_INVALIDDATA; - } - h265->active_vps = vps; - - flag(self_contained_cvs_flag); - flag(no_parameter_set_update_flag); - - ue(num_sps_ids_minus1, 0, HEVC_MAX_SPS_COUNT - 1); - for (i = 0; i <= current->num_sps_ids_minus1; i++) - ues(active_seq_parameter_set_id[i], 0, HEVC_MAX_SPS_COUNT - 1, 1, i); - - for (i = vps->vps_base_layer_internal_flag; - i <= FFMIN(62, vps->vps_max_layers_minus1); i++) { - ues(layer_sps_idx[i], 0, current->num_sps_ids_minus1, 1, i); - - if (i == 0) - h265->active_sps = h265->sps[current->active_seq_parameter_set_id[current->layer_sps_idx[0]]]; - } - - return 0; -} - -static int FUNC(sei_decoded_picture_hash) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIDecodedPictureHash *current, SEIMessageState *sei) -{ - CodedBitstreamH265Context *h265 = ctx->priv_data; - const H265RawSPS *sps = h265->active_sps; - int err, c, i; - - HEADER("Decoded Picture Hash"); - - if (!sps) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "No active SPS for decoded picture hash.\n"); - return AVERROR_INVALIDDATA; - } - - u(8, hash_type, 0, 2); - - for (c = 0; c < (sps->chroma_format_idc == 0 ? 
1 : 3); c++) { - if (current->hash_type == 0) { - for (i = 0; i < 16; i++) - us(8, picture_md5[c][i], 0x00, 0xff, 2, c, i); - } else if (current->hash_type == 1) { - us(16, picture_crc[c], 0x0000, 0xffff, 1, c); - } else if (current->hash_type == 2) { - us(32, picture_checksum[c], 0x00000000, 0xffffffff, 1, c); - } - } - - return 0; -} - -static int FUNC(sei_time_code) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEITimeCode *current, SEIMessageState *sei) -{ - int err, i; - - HEADER("Time Code"); - - u(2, num_clock_ts, 1, 3); - - for (i = 0; i < current->num_clock_ts; i++) { - flags(clock_timestamp_flag[i], 1, i); - - if (current->clock_timestamp_flag[i]) { - flags(units_field_based_flag[i], 1, i); - us(5, counting_type[i], 0, 6, 1, i); - flags(full_timestamp_flag[i], 1, i); - flags(discontinuity_flag[i], 1, i); - flags(cnt_dropped_flag[i], 1, i); - - ubs(9, n_frames[i], 1, i); - - if (current->full_timestamp_flag[i]) { - us(6, seconds_value[i], 0, 59, 1, i); - us(6, minutes_value[i], 0, 59, 1, i); - us(5, hours_value[i], 0, 23, 1, i); - } else { - flags(seconds_flag[i], 1, i); - if (current->seconds_flag[i]) { - us(6, seconds_value[i], 0, 59, 1, i); - flags(minutes_flag[i], 1, i); - if (current->minutes_flag[i]) { - us(6, minutes_value[i], 0, 59, 1, i); - flags(hours_flag[i], 1, i); - if (current->hours_flag[i]) - us(5, hours_value[i], 0, 23, 1, i); - } - } - } - - ubs(5, time_offset_length[i], 1, i); - if (current->time_offset_length[i] > 0) - ibs(current->time_offset_length[i], time_offset_value[i], 1, i); - else - infer(time_offset_value[i], 0); - } - } - - return 0; -} - -static int FUNC(sei_alpha_channel_info) - (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIAlphaChannelInfo *current, SEIMessageState *sei) -{ - int err, length; - - HEADER("Alpha Channel Information"); - - flag(alpha_channel_cancel_flag); - if (!current->alpha_channel_cancel_flag) { - ub(3, alpha_channel_use_idc); - ub(3, alpha_channel_bit_depth_minus8); - length = current->alpha_channel_bit_depth_minus8 + 9; - ub(length, alpha_transparent_value); - ub(length, alpha_opaque_value); - flag(alpha_channel_incr_flag); - flag(alpha_channel_clip_flag); - if (current->alpha_channel_clip_flag) - flag(alpha_channel_clip_type_flag); - } else { - infer(alpha_channel_use_idc, 2); - infer(alpha_channel_incr_flag, 0); - infer(alpha_channel_clip_flag, 0); - } - - return 0; -} - -static int FUNC(sei)(CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEI *current, int prefix) -{ - int err; - - if (prefix) - HEADER("Prefix Supplemental Enhancement Information"); - else - HEADER("Suffix Supplemental Enhancement Information"); - - CHECK(FUNC(nal_unit_header)(ctx, rw, ¤t->nal_unit_header, - prefix ? HEVC_NAL_SEI_PREFIX - : HEVC_NAL_SEI_SUFFIX)); - - CHECK(FUNC_SEI(message_list)(ctx, rw, ¤t->message_list, prefix)); - - CHECK(FUNC(rbsp_trailing_bits)(ctx, rw)); - - return 0; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md b/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md deleted file mode 100644 index f8e0b3f7062538163c95c077e16cfc76887a984d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download GTA 5 APK OBB for Android in 2023 The best way to enjoy the epic game on your mobile device.md +++ /dev/null @@ -1,150 +0,0 @@ -
          -

          GTA 5 APK Download 2023: How to Play GTA 5 on Android Devices

          -

          If you are a fan of Grand Theft Auto V, or GTA 5 for short, you might be wondering if you can play this amazing game on your Android device. After all, GTA 5 is one of the most popular and successful video games of all time, with millions of players around the world. In this article, we will tell you everything you need to know about GTA 5 APK download for Android in 2023, including the official and unofficial ways to enjoy this game on your mobile device. Let's get started!

          -

          gta 5 apk download 2023


Download: https://urlca.com/2uO7FJ



          -

          Introduction

          -

          What is GTA 5?

          -

          GTA 5 is an open-world action-adventure game developed by Rockstar Games and released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. The game is set in the fictional state of San Andreas, which is based on Southern California, and follows the lives of three protagonists: Michael, a retired bank robber; Trevor, a psychopathic criminal; and Franklin, a young street hustler. The game allows the player to switch between these characters at any time and explore the vast and diverse world of San Andreas, which includes urban areas, rural landscapes, mountains, deserts, beaches, and more. The game also features a variety of activities, such as driving, shooting, fighting, stealth, racing, flying, parachuting, swimming, diving, hunting, golfing, tennis, yoga, and more. The game also has an online multiplayer mode called GTA Online, where up to 30 players can cooperate or compete in various missions and events.

          -

          Why is GTA 5 so popular?

          -

          GTA 5 is widely regarded as one of the best video games ever made, and has received critical acclaim and numerous awards for its gameplay, story, graphics, sound, music, and online features. The game has also broken several records in the gaming industry, such as being the fastest-selling entertainment product in history, earning $800 million in its first day and $1 billion in its first three days. As of December 2020, the game has sold over 140 million copies worldwide and is still one of the most played games online. Some of the reasons why GTA 5 is so popular are:

          -
-
• It offers a huge and immersive open world that can be explored freely and dynamically.
• It has a compelling and humorous story that features three different protagonists with their own personalities and backgrounds.
• It has rich and diverse gameplay that caters to different tastes and preferences.
• It has stunning graphics and realistic physics that create a lifelike and cinematic experience.
• It has a vibrant and lively online community that provides endless content and fun.
-

          Can you play GTA 5 on Android devices?

          -

          The short answer is yes, but not directly. GTA 5 is not officially available on Android or iOS devices, as Rockstar Games has never ported this game to mobile platforms. However, there are some ways to play GTA 5 on your Android device in 2023, either by using a cloud gaming service or by using a fan-made port or emulator. We will explain these methods in detail in the next section.

          -

          How to download GTA 5 APK for Android in 2023

          -

The official way: Use a cloud gaming service

          -

          Cloud gaming is a technology that allows you to stream games from remote servers to your device, without having to download or install anything. This means that you can play games that are not compatible with your device, as long as you have a stable and fast internet connection. Cloud gaming is also convenient, as you can access your games from anywhere and switch between devices easily.

          -

          Some of the benefits of cloud gaming are:

          -
-
• You don't need a powerful device or a lot of storage space to play high-end games.
• You can play the latest games without having to wait for updates or patches.
• You can save money on buying new hardware or software.
• You can enjoy a smooth and lag-free gaming experience, as long as your internet connection is good (a rough latency check like the sketch below can tell you where you stand).
-
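Because streamed frames and your inputs make a round trip over the network, latency matters more than raw device power. The Kotlin sketch below is a rough, stand-alone check you can adapt, not part of any service's app: the host name is a hypothetical placeholder, port 443 is just a commonly open port, and the 80 ms threshold is indicative rather than an official requirement of any service mentioned here.

```kotlin
import java.net.InetSocketAddress
import java.net.Socket
import kotlin.system.measureTimeMillis

// Time a plain TCP connection as a rough stand-in for streaming latency.
// A real cloud gaming client measures this continuously against its own servers.
fun roundTripMillis(host: String, port: Int = 443, timeoutMs: Int = 2000): Long =
    measureTimeMillis {
        Socket().use { socket ->
            socket.connect(InetSocketAddress(host, port), timeoutMs)
        }
    }

fun main() {
    // Hypothetical endpoint; substitute the host your service actually uses.
    // (Throws if the host is unreachable within the timeout.)
    val rtt = roundTripMillis("gaming.example.com")
    println(
        if (rtt < 80) "About $rtt ms round trip: should be fine for streaming"
        else "About $rtt ms round trip: expect noticeable input lag"
    )
}
```

A plain TCP connect is only a proxy for what a real streaming client measures, but it is usually enough to tell a fiber connection from a congested mobile link.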

          Some of the best cloud gaming services for GTA 5 are:

| Service | Price | Features |
| --- | --- | --- |
| Xbox Cloud Gaming (Beta) | $14.99/month (included in Xbox Game Pass Ultimate) | Stream hundreds of high-quality games from the Xbox Game Pass catalog, including GTA 5; play on Xbox consoles, PC, and mobile devices via supported browsers or apps; use an Xbox Wireless Controller, Sony DualShock 4, or touch controls; play with friends online across devices; play next-gen games like Microsoft Flight Simulator on your Xbox One and other devices |
| NVIDIA GeForce NOW | $9.99/month (Priority membership) or free (Standard membership) | Stream games you already own from platforms like Steam, Epic Games Store, Ubisoft Connect, and more, including GTA 5; play on PC, Mac, Chromebook, NVIDIA Shield TV, Android, and iOS devices via supported browsers or apps; use a compatible gamepad, mouse and keyboard, or touch controls; enjoy ray tracing and DLSS features on supported games; Priority members get extended session lengths and priority access to servers |
| Loudplay | $9.99/month (Basic plan) or $19.99/month (Pro plan) | Stream games you already own from platforms like Steam, Epic Games Store, Ubisoft Connect, and more, including GTA 5; play on PC, Mac, or Android devices via supported browsers or apps; use a compatible gamepad, mouse and keyboard, or touch controls; get access to a powerful Windows desktop with full control over settings and apps; the Pro plan offers higher performance and more storage space |
-

To play GTA 5 on your Android device through any of these cloud gaming services in 2023, you will need to:

          -

          -
-
1. Sign up for the service of your choice and choose a subscription plan.
2. Download the app or open the browser on your Android device and log in to your account.
3. Launch GTA 5 from the service's library or from your own game library.
4. Enjoy playing GTA 5 on your Android device!
-

          The unofficial way: Use a fan-made port or emulator

          -

          If you don't want to use a cloud gaming service, you can also try to use a fan-made port or emulator to play GTA 5 on your Android device. However, these methods are not recommended, as they are not authorized by Rockstar Games and may have legal, technical, or ethical issues.

          -

          A fan-made port is a modification of the original game that allows it to run on a different platform than it was intended for. For example, some fans have tried to create PC ports for GTA games that were only released on consoles or handheld devices, such as GTA Advance, Liberty City Stories, Vice City Stories, and Chinatown Wars. However, these projects are often incomplete, unstable, buggy, or incompatible with some devices. They may also violate the intellectual property rights of Rockstar Games and be subject to legal action.


          An emulator is software that mimics the hardware and software of another device on your own. For example, some emulators let you play PlayStation or Xbox games on your PC or Android device. However, emulators are problematic for several reasons. First, they may not be able to run GTA 5 smoothly or accurately, as it is a very demanding game that requires a lot of processing power and memory. Second, they may require you to download pirated copies of GTA 5, which is illegal and may expose your device to viruses or malware. Third, they may infringe on the intellectual property rights of Rockstar Games and the console manufacturers and be subject to legal action.


          Some of the fan-made ports or emulators that claim to run GTA 5 on Android devices are:

          • GTA 5 Mobile by Rockstar Games: a fake app that pretends to be an official port of GTA 5 for Android devices, but is actually a scam that asks for money or personal information from unsuspecting users.
          • GTA 5 APK by GTA5Mobile.Club: a fan-made port of GTA 5 for Android devices that claims to offer the full game experience, but is actually a modified version of GTA San Andreas with some GTA 5 assets and features.
          • GTA 5 APK by GTA5APK.Net: a fan-made port of GTA 5 for Android devices that claims to offer the full game experience, but is actually a modified version of GTA Vice City with some GTA 5 assets and features.
          • PPSSPP Emulator: an emulator that allows you to play PSP games on your Android device, including GTA Liberty City Stories and GTA Vice City Stories. However, it cannot run GTA 5, as GTA 5 was never released on PSP.
          • DamonPS2 Emulator: an emulator that allows you to play PS2 games on your Android device, including GTA III, GTA Vice City, and GTA San Andreas. However, it cannot run GTA 5, as GTA 5 was never released on PS2.

          To play GTA 5 through any of these fan-made ports or emulators, you will need to:

          1. Download the app or the emulator from the source link and install it on your Android device.
          2. Download the game file or the ROM file from the source link or from another website and copy it to your device's storage. A checksum comparison, as sketched after this list, can catch a corrupted or tampered download.
          3. Launch the app or the emulator and select the game file or the ROM file to start playing.
          4. Be aware of the risks and drawbacks of these methods and proceed at your own discretion.
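
          One practical precaution before opening any file obtained this way is to compare its SHA-256 checksum against the one published by the download site, when a checksum is published at all. Here is a small Python sketch; the file name and the expected digest are placeholders you would replace.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so even multi-gigabyte files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0000...replace-with-the-published-checksum..."  # placeholder value
actual = sha256_of("downloaded_game_file.bin")              # placeholder name
print("Checksum OK" if actual == expected else f"Mismatch: {actual}")
```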

          Conclusion


          Summary of the main points


          In this article, we have discussed how to play GTA 5 on Android devices in 2023. We have explained what GTA 5 is and why it is so popular. We have also shown you two ways to get GTA 5 running on Android in 2023: the official way and the unofficial way. The official way is to use a cloud gaming service, such as Xbox Cloud Gaming, NVIDIA GeForce NOW, or Loudplay. The unofficial way is to use a fan-made port or emulator; as we have seen, though, the apps that claim to do this are fake or repackaged, and emulators such as PPSSPP and DamonPS2 cannot run GTA 5 at all. We have also warned you about the risks and drawbacks of these methods, such as legal issues, technical issues, or ethical issues.


          Call to action and final thoughts


          We hope that this article has helped you understand how to play GTA 5 on Android devices in 2023. If you are interested in trying out any of these methods, we recommend that you do your own research and follow the instructions carefully. However, we also advise that you respect the intellectual property rights of Rockstar Games and support their work by buying their games legally. If you have any questions or feedback about this article, please feel free to leave a comment below. Thank you for reading and happy gaming!


          FAQs


          Is GTA 5 available on Google Play Store?


          No, GTA 5 is not available on Google Play Store or any other official app store for Android devices. The only way to play GTA 5 on Android devices is to use a cloud gaming service or a fan-made port or emulator.


          Is GTA 5 free to play on Android devices?


          No, GTA 5 is not free to play on Android devices. If you use a cloud gaming service, you will need to pay a monthly or yearly subscription fee to access the service and the game. If you use a fan-made port or emulator, you will need to own a legal copy of the game on another platform and download it to your device, which may also incur additional costs.


          Is GTA 5 safe to play on Android devices?


          It depends on the method you use. If you use a cloud gaming service, you can play GTA 5 safely on your Android device, as long as you use a reputable and secure service and a reliable internet connection. However, if you use a fan-made port or emulator, you may expose your device and your data to various risks, such as viruses, malware, spyware, phishing, hacking, or legal action. Therefore, we advise that you be cautious with these methods and protect your device and your data with antivirus software and a VPN.


          Is GTA 5 compatible with all Android devices?


          No, GTA 5 is not compatible with all Android devices. If you use a cloud gaming service, you will need to have a device that meets the minimum requirements of the service, such as operating system, browser, app, screen size, resolution, memory, battery, etc. You will also need to have a compatible controller or touch controls to play the game. If you use a fan-made port or emulator, you will need to have a device that can run the app or the emulator smoothly and support the game file or the ROM file. You may also encounter some compatibility issues or errors depending on your device model, brand, or version.


          Can I play GTA 5 offline on Android devices?


          No, you cannot play GTA 5 offline on Android devices. If you use a cloud gaming service, you will need to have a constant and stable internet connection to stream the game from the server to your device. If you use a fan-made port or emulator, you will still need to have an internet connection to verify the game file or the ROM file and access some online features of the game.

          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md b/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md deleted file mode 100644 index fe73fe61fd4c9cbf70f1bbce30fab3f121c6f98e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Shadow Fight 2 Special Edition APK for Free - No Mod Required.md +++ /dev/null @@ -1,98 +0,0 @@ -

          Shadow Fight 2 Special Edition: A Premium Fighting Game for Android


          If you are a fan of fighting games, you might have heard of Shadow Fight 2, one of the most popular and successful fighting series on mobile. But did you know that there is a special edition of Shadow Fight 2 that offers more features and benefits than the original version? In this article, we will tell you everything you need to know about Shadow Fight 2 Special Edition, and how you can download it for free on your Android device.



          What is Shadow Fight 2 Special Edition?


          Shadow Fight 2 Special Edition is the special paid version of Shadow Fight 2. It was released on 17 August 2017 on Android and on 22 August 2017 on iOS. This version allows players to get gems easily, with most of the game modes rewarding the player an amount of them upon victory. Gems are the premium currency in the game, which can be used to buy and upgrade weapons, armor, skills, and more.


          Shadow Fight 2 Special Edition also has some exclusive features that are not available in the free version of the game. Here are some of them:


          Features of Shadow Fight 2 Special Edition


          No ads


          One of the most annoying things about the free version of Shadow Fight 2 is the constant ads that pop up every time you finish a fight or open a menu. These ads can interrupt your gameplay and ruin your immersion. In Shadow Fight 2 Special Edition, there are no ads at all. You can enjoy the game without any distractions or interruptions.


          No energy restoring


          Another frustrating thing about the free version of Shadow Fight 2 is the energy system. Every time you fight, you lose some energy points. When your energy points run out, you have to wait for them to restore over time, or pay gems to refill them instantly. This can limit your playing time and force you to spend money or watch ads to continue playing. In Shadow Fight 2 Special Edition, there is no energy system at all. You can fight as much as you want, anytime and anywhere you want.


          New story chapter


          Shadow Fight 2 has a captivating story that follows the journey of a shadow warrior who seeks to defeat the evil Titan and his army of shadow demons. The free version of the game has six acts, each with a different boss and environment. In Shadow Fight 2 Special Edition, there is a seventh act that reveals the truth behind Sensei's past and his connection to Titan. This act has new enemies, weapons, locations, and challenges that will test your skills and keep you hooked.




          Huge arsenal of weapons and armor


          Shadow Fight 2 has a variety of weapons and armor that you can use to customize your character and enhance your fighting abilities. You can choose from swords, axes, daggers, nunchaku, shuriken, kusarigama, sai, and more. You can also equip yourself with helmets, vests, gloves, boots, rings, amulets, and more. Each weapon and armor has different stats and effects that can affect your speed, damage, defense, critical chance, etc. In Shadow Fight 2 Special Edition, you can get a lot of gems through battles and use them to buy and upgrade your weapons and armor easily. You can also unlock some special weapons and armor that are only available in this version.


          Simple controls and stunning animations


          Shadow Fight 2 has simple controls that are designed for touch screen devices. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to perform attacks, kicks, jumps, and special moves. You can also combine different moves to create combos and unleash powerful attacks. Shadow Fight 2 has stunning animations that make the fights look realistic and fluid. The game uses a physics-based combat system that simulates the movements and reactions of real fighters. You can see the shadows of your character and your opponents react to every hit and movement. You can also see the effects of your weapons and skills on the environment, such as sparks, blood, dust, etc.


          How to download Shadow Fight 2 Special Edition for free?


          Shadow Fight 2 Special Edition is a premium game that costs $4.99 on Google Play Store and $3.99 on App Store. However, there are some ways to download it for free on your Android device. Here are some of them:


          Download from Google Play Store


          The easiest and safest way to download Shadow Fight 2 Special Edition for free is to use Google Play Store. Google Play Store often offers discounts and promotions for various apps and games, including Shadow Fight 2 Special Edition. You can check the store regularly to see if the game is on sale or free for a limited time. You can also use Google Play Points, which are rewards that you can earn by buying or using apps and games on Google Play Store. You can use these points to redeem Shadow Fight 2 Special Edition for free or at a lower price.


          Download from App Store


          If you have an iOS device, you can also download Shadow Fight 2 Special Edition for free from App Store. App Store also has discounts and promotions for various apps and games, including Shadow Fight 2 Special Edition. You can check the store regularly to see if the game is on sale or free for a limited time. You can also use Apple Gift Cards, which are cards that you can buy or receive as gifts that contain a specific amount of money that you can use to buy apps and games on App Store. You can use these gift cards to buy Shadow Fight 2 Special Edition for free or at a lower price.


          Download from third-party websites


          Another way to download Shadow Fight 2 Special Edition for free is to use third-party websites that offer apk files of the game. Apk files are files that contain the installation package of an app or game. You can download these files from various websites that host them, such as APKPure, APKMirror, APKMonk, etc. However, there are some advantages and disadvantages of using this method.


          Advantages and disadvantages of third-party websites


          The main advantage of using third-party websites is that you can get Shadow Fight 2 Special Edition for free without paying anything or waiting for a sale or promotion. You can also get access to modded versions of the game with unlimited gems, coins, weapons, armor, and so on. However, there are also some serious drawbacks. The main one is that you may expose your device to malware, viruses, or spyware that can harm your device or steal your personal information. Some of these websites host fake or outdated apk files that may not work properly or cause errors on your device. You will also not get updates or support from the official developers of the game, and you may face legal issues if you download a pirated version of the game.


          How to install apk files from third-party websites


          If you decide to use third-party websites to download Shadow Fight 2 Special Edition for free, you need to follow some steps to install the apk files on your device. Here are the steps:

          1. Go to the settings of your device and enable the option to install apps from unknown sources. This will allow you to install apps that are not from Google Play Store or App Store.
          2. Go to the website that offers the apk file of Shadow Fight 2 Special Edition and download it on your device.
          3. Locate the downloaded file on your device and tap on it to start the installation process. (You can inspect the file first, as sketched after this list.)
          4. Follow the instructions on the screen and wait for the installation to finish.
          5. Launch the game and enjoy it.
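
          Because every APK is just a ZIP archive, you can peek inside one before tapping Install to confirm it is at least structurally sane. The sketch below uses Python's standard zipfile module on a PC; the file name is hypothetical, and a passing check only proves the container is well formed, not that the app is safe.

```python
import zipfile

APK = "shadow_fight_2_special_edition.apk"  # hypothetical file name

with zipfile.ZipFile(APK) as zf:
    names = zf.namelist()
    # A well-formed APK contains a manifest and (almost always) compiled code.
    for entry in ("AndroidManifest.xml", "classes.dex"):
        print(entry, "present" if entry in names else "MISSING")
    # Signature files normally live under META-INF/; none at all is a red flag.
    signed = any(n.startswith("META-INF/") for n in names)
    print("META-INF entries:", "found" if signed else "none")
```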

          Conclusion


          Shadow Fight 2 Special Edition is a premium fighting game for Android that offers more features and benefits than the original version of Shadow Fight 2. It has no ads, no energy restoring, a new story chapter, a huge arsenal of weapons and armor, simple controls and stunning animations. It costs $4.99 on Google Play Store and $3.99 on App Store, but you can also download it for free using Google Play Store, App Store, or third-party websites. However, you should be careful of the risks and consequences of using third-party websites, such as malware, viruses, legal issues, etc. Shadow Fight 2 Special Edition is a great game for fighting fans who want to enjoy a premium and immersive experience on their Android device.


          FAQs


          Here are some frequently asked questions about Shadow Fight 2 Special Edition:

          • Q: What is the difference between Shadow Fight 2 and Shadow Fight 2 Special Edition?
            A: Shadow Fight 2 is the free version of the game, with ads, an energy system, and six acts. Shadow Fight 2 Special Edition is the paid version, with no ads, no energy system, and seven acts.
          • Q: How can I get gems in Shadow Fight 2 Special Edition?
            A: You can get gems by winning battles, completing achievements, watching videos, or buying them with real money.
          • Q: How can I unlock new weapons and armor in Shadow Fight 2 Special Edition?
            A: You can unlock new weapons and armor by buying them with gems or coins, or by defeating bosses and completing challenges.
          • Q: How can I upgrade my weapons and armor in Shadow Fight 2 Special Edition?
            A: You can upgrade your weapons and armor by spending gems or coins, or by using enchantments and ascension.
          • Q: How can I learn new skills and moves in Shadow Fight 2 Special Edition?
            A: You can learn new skills and moves by spending skill points, or by using perks and magic.

          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md deleted file mode 100644 index 72298243994bee64644a3a25e2df13ed5f30be0a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Naruto Shinobi Striker PPSSPP on Android - Easy and Fast.md +++ /dev/null @@ -1,90 +0,0 @@ - -

          Naruto to Boruto Shinobi Striker PPSSPP Download Android: How to Play the Ultimate Ninja Game on Your Phone


          If you are a fan of Naruto, you might have heard of Naruto to Boruto Shinobi Striker, a multiplayer online game that lets you fight with your favorite characters and discover a new gameplay style in the Naruto universe. But did you know that you can also play this game on your Android device using an emulator called PPSSPP? In this article, we will show you how to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android and give you some tips and tricks for playing it.


          What is Naruto to Boruto Shinobi Striker?


          Naruto to Boruto Shinobi Striker is a game that was released in 2018 for PlayStation 4, Xbox One, and PC. It is based on the popular manga and anime series Naruto and its sequel Boruto. The game features a new gameplay style that combines action and strategy in thrilling 3D environments. You can choose from over 40 characters from the Naruto series, each with their own unique skills and abilities. You can also create your own custom character and customize their appearance, weapons, and jutsu.



          The game has four different modes that you can play solo or with your friends online. The first mode is Flag Battle, where you have to capture the enemy's flag while defending your own. The second mode is Barrier Battle, where you have to break through the enemy's barrier and defeat their boss. The third mode is Base Battle, where you have to capture and hold three bases on the map. The fourth mode is Combat Battle, where you have to defeat as many enemies as possible within a time limit.


          Naruto to Boruto Shinobi Striker is a game that will test your teamwork, strategy, and skills as a shinobi. You can cooperate with your friends to form a four-member team and compete against other teams from around the world. You can also learn from famous ninja masters like Naruto, Sasuke, Kakashi, and more. You can unlock new skills, weapons, costumes, and accessories as you progress in the game. You can also participate in special events and missions that will challenge your abilities.




          What is PPSSPP?


          PPSSPP is a free and open-source emulator for PSP games on Android devices. It is a way to enjoy PSP games with enhanced graphics, sound, and performance on your phone or tablet. It is compatible with many PSP games, including Naruto to Boruto Shinobi Striker.


          PPSSPP allows you to play PSP games in full HD resolution, with anti-aliasing, anisotropic filtering, and texture scaling. You can also customize the controls, save and load states, and use cheats. You can also connect your device to a TV or a monitor and use a gamepad or a controller for a better gaming experience. You can also play online with other PPSSPP users using the built-in ad hoc network feature.


          PPSSPP is a great app for Naruto fans who want to play Naruto to Boruto Shinobi Striker on their Android devices. It is easy to use, fast, and reliable. You can download it from the Google Play Store or the official website. You can also check the compatibility list of PPSSPP to see which games work well with the app.


          How to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android?


          Now that you know what Naruto to Boruto Shinobi Striker and PPSSPP are, you might be wondering how to download and install them on your Android device. Well, don't worry, we have got you covered. Just follow these simple steps and you will be playing the game in no time.


          Step 1: Download the PPSSPP app from the Google Play Store or the official website


          The first step is to download the PPSSPP app on your Android device. You can do this by going to the Google Play Store and searching for PPSSPP or by visiting the official website and downloading the APK file. Once you have downloaded the app, install it on your device by tapping on it and following the instructions.


          Step 2: Download the Naruto to Boruto Shinobi Striker ISO file from a reliable source


          The next step is to download the Naruto to Boruto Shinobi Striker ISO file on your device. This is the file that contains the game data and allows you to play it on PPSSPP. You can download it from a reliable source that offers safe and fast downloads. Make sure you download the correct version of the game that matches your region and language. The file size of the game is about 5 GB, so make sure you have enough space on your device before downloading it.
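
          Since the compressed download and the extracted ISO briefly coexist, it is worth checking that you have roughly twice the quoted size free. A quick way to check from Python is shutil.disk_usage; the /sdcard path is a typical Android shared-storage location and an assumption here, so use "." or a drive letter when running this on a PC.

```python
import shutil

path = "/sdcard"         # typical Android shared storage (an assumption); use "." on a PC
game_gb = 5              # approximate download size quoted above
needed_gb = game_gb * 2  # the archive and the extracted ISO coexist during extraction

free_gb = shutil.disk_usage(path).free / 1024**3
print(f"Free on {path}: {free_gb:.1f} GB (need about {needed_gb} GB)")
print("Enough space." if free_gb >= needed_gb else "Free up some space first.")
```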


          Step 3: Extract the ISO file using a file manager app or a zip extractor app


          After downloading the ISO file, you need to extract it using a file manager app or a zip extractor app. You can use any app that can handle zip files, such as ZArchiver, RAR, or ES File Explorer. To extract the ISO file, locate it in your device storage and tap on it. Then, choose an option to extract it to a folder of your choice. Remember the folder where you extracted the ISO file as you will need it later.
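
          If you would rather prepare the file on a PC and copy the result to your device, the extraction step is a few lines with Python's standard zipfile module. The archive and destination names below are hypothetical; point them at your actual download and at whatever folder you keep ISOs in.

```python
import zipfile

ARCHIVE = "naruto_shinobi_striker_iso.zip"  # hypothetical name of the download
DEST = "psp_games"                          # any folder you can browse to from PPSSPP

with zipfile.ZipFile(ARCHIVE) as zf:
    print("Contents:", zf.namelist())
    zf.extractall(DEST)
print("Done; copy the extracted .iso to your device's storage.")
```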


          Step 4: Launch the PPSSPP app and locate the extracted ISO file in your device storage


          Now that you have extracted the ISO file, you are ready to play Naruto to Boruto Shinobi Striker on your Android device. To do this, launch the PPSSPP app and tap on the "Games" tab. Then, navigate to the folder where you extracted the ISO file and tap on it. The game will start loading and you will see the title screen.


          Step 5: Tap on the ISO file and enjoy playing Naruto to Boruto Shinobi Striker on your Android device


          Congratulations! You have successfully downloaded and installed Naruto to Boruto Shinobi Striker PPSSPP on Android. Now you can enjoy playing this amazing game on your phone or tablet. You can choose from different game modes, characters, and skills in Naruto to Boruto Shinobi Striker. You can also play online with your friends or other players from around the world. Have fun!


          Tips and tricks for playing Naruto to Boruto Shinobi Striker PPSSPP on Android


          To make your gaming experience even better, here are some tips and tricks for playing Naruto to Boruto Shinobi Striker PPSSPP on Android.


          Adjust the settings of the PPSSPP app to optimize the game performance and graphics


          If you want to improve the game performance and graphics, you can adjust some settings of the PPSSPP app. To do this, tap on the "Settings" tab and then tap on "Graphics". Here you can change the rendering mode, resolution, frame rate, texture filtering, and more. You can also enable or disable some features like immersive mode, buffer rendering, and post-processing shaders. Experiment with different settings until you find the best balance between quality and performance for your device.


          Use a gamepad or a controller for a better gaming experience


          If you want to have a more comfortable and precise gaming experience, you can use a gamepad or a controller to play Naruto to Boruto Shinobi Striker PPSSPP on Android. You can connect your device to a gamepad or a controller via Bluetooth or USB. You can also use an app like Octopus to map the buttons and joysticks of your gamepad or controller to the touch screen controls of PPSSPP. You can also customize the layout and size of the touch screen controls in the PPSSPP app.


          Connect your device to a Wi-Fi network or use mobile data for online multiplayer mode


          If you want to play online with your friends or other players from around the world, you need to connect your device to a Wi-Fi network or use mobile data. You also need to have a stable and fast internet connection to avoid lag and disconnects. To play online, tap on the "Online" tab in the PPSSPP app and then tap on "Ad hoc server". Here you can create or join a room with other PPSSPP users who are playing Naruto to Boruto Shinobi Striker. You can also chat with them using the built-in chat feature.


          Explore the different game modes, characters, and skills in Naruto to Boruto Shinobi Striker


          Naruto to Boruto Shinobi Striker is a game that offers a lot of variety and fun. You can explore the different game modes, characters, and skills in the game. You can also unlock new items and rewards as you progress in the game. Here are some of the things you can do in Naruto to Boruto Shinobi Striker:

          • Play Flag Battle, Barrier Battle, Base Battle, or Combat Battle with your friends or other players online
          • Choose from over 40 characters from the Naruto series, each with their own unique skills and abilities
          • Create your own custom character and customize their appearance, weapons, and jutsu
          • Learn from famous ninja masters like Naruto, Sasuke, Kakashi, and more
          • Unlock new skills, weapons, costumes, and accessories as you progress in the game
          • Participate in special events and missions that will challenge your abilities

          Conclusion


          Naruto to Boruto Shinobi Striker is a game that will appeal to any Naruto fan who wants to experience a new gameplay style in the Naruto universe. It is a game that will test your teamwork, strategy, and skills as a shinobi. It is also a game that you can play on your Android device using an emulator called PPSSPP. In this article, we showed you how to download and install Naruto to Boruto Shinobi Striker PPSSPP on Android and gave you some tips and tricks for playing it. We hope you found this article helpful and informative. Now go ahead and enjoy playing Naruto to Boruto Shinobi Striker on your Android device!


          FAQs


          Here are some frequently asked questions about Naruto to Boruto Shinobi Striker PPSSPP on Android:


          Q: Is Naruto to Boruto Shinobi Striker PPSSPP legal?


          A: Yes, as long as you own the original copy of the game and use it for personal use only. However, downloading the ISO file from an unauthorized source may be illegal in some countries. We do not condone piracy or any illegal activity.


          Q: Is Naruto to Boruto Shinobi Striker PPSSPP safe?


          A: Yes, as long as you download the PPSSPP app from the Google Play Store or the official website and download the ISO file from a reliable source. However, be careful of malware or viruses that may harm your device or steal your data.


          Q: Is Naruto to Boruto Shinobi Striker PPSSPP free?


          A: Yes, both the PPSSPP app and the ISO file are free to download and use. However, you may need to pay for some in-game items or features if you want to access them.


          Q: How much space does Naruto to Boruto Shinobi Striker PPSSPP take on my device?


          A: The PPSSPP app takes about 30 MB of space on your device, while the ISO file takes about 5 GB of space. You may need to free up some space on your device before downloading and installing them.


          Q: Can I play Naruto to Boruto Shinobi Striker PPSSPP offline?


          A: Yes, you can play Naruto to Boruto Shinobi Striker PPSSPP offline if you want to play solo or with your friends using the local multiplayer feature. However, you will need an internet connection if you want to play online with other players from around the world.

          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md deleted file mode 100644 index 9de8aa069fb286b4294bdfec964f2bee702e981b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy State Connect The No Ads MOD APK for Traffic Control.md +++ /dev/null @@ -1,89 +0,0 @@ -

          State Connect: Traffic Control Mod APK No Ads


          Do you love casual puzzle games that challenge your brain and test your logic skills? If yes, then you might want to try State Connect: Traffic Control, a fun and addictive game for Android devices. In this game, you have to build roads and highways to connect different states and cities in the USA. Sounds easy, right? Well, not so fast. You have to avoid traffic jams, collisions, and other obstacles that might ruin your plans. You also have to manage your budget and resources wisely, as you only have a limited amount of money and materials to work with. Can you complete all the levels and become the ultimate traffic controller?


          What is State Connect: Traffic Control?


          State Connect: Traffic Control is a casual puzzle game developed by Game Studio North - 37. It was released in 2020 and has received positive reviews from players and critics alike. The game has over 100 levels of increasing difficulty, each with a unique map and objective. You have to use your finger to draw roads and bridges between different states and cities, making sure that they are connected and that there are no gaps or overlaps. You also have to watch out for traffic lights, cars, trucks, trains, planes, boats, and other vehicles that might cause accidents or delays. You have to complete each level within a given time limit and budget, otherwise you will fail and have to start over.



          How to play State Connect: Traffic Control


          The gameplay of State Connect: Traffic Control is simple and intuitive. You just have to tap and drag your finger on the screen to draw roads and bridges between different points on the map. You can also undo or erase your moves if you make a mistake or want to change something. You can zoom in or out of the map by pinching the screen with two fingers. You can also pause the game by tapping the menu button on the top right corner of the screen.


          You have to follow some rules when drawing roads and bridges. First, you have to make sure that all the states and cities on the map are connected by at least one road or bridge. Second, you have to avoid crossing or overlapping your roads and bridges with each other or with other objects on the map. Third, you have to respect the traffic signs and signals on the map, such as stop signs, yield signs, traffic lights, etc. Fourth, you have to avoid causing traffic jams or collisions by regulating the flow of vehicles on your roads and bridges. Fifth, you have to complete each level within a given time limit and budget.


          Why download State Connect: Traffic Control mod apk no ads?


          State Connect: Traffic Control is a free game that you can download from Google Play Store or other app stores. However, there are some drawbacks that might affect your gaming experience. For example, the game has ads that pop up every now and then, interrupting your gameplay and annoying you. The game also has in-app purchases that require real money if you want to buy more hints or coins. These hints and coins are useful if you want to skip a level or get some help when you are stuck.


          If you want to enjoy State Connect: Traffic Control without these limitations, then you might want to download State Connect: Traffic Control mod apk no ads. This is a modified version of the game that has some features that are not available in the original version. Here are some of the benefits of downloading State Connect: Traffic Control mod apk no ads:


          No annoying ads


          One of the main advantages of downloading State Connect: Traffic Control mod apk no ads is that it removes all the ads from the game. This means that you will not be bothered by any ads that might distract you or waste your time. You can focus on the game and enjoy it without any interruptions.



          Unlimited hints and coins


          Another benefit of downloading State Connect: Traffic Control mod apk no ads is that it gives you unlimited hints and coins. Hints are useful if you need some guidance or clues on how to complete a level. Coins are useful if you want to unlock more levels or buy more materials for your roads and bridges. Normally, you have to watch ads or pay real money to get more hints or coins. But with State Connect: Traffic Control mod apk no ads, you can get as many hints or coins as you want for free. You can use them anytime you want without any restrictions.


          Easy installation and compatibility


          The last benefit of downloading State Connect: Traffic Control mod apk no ads is that it is easy to install and compatible with most Android devices. You do not need to root your device or do any complicated steps to install the mod apk file. You just have to follow some simple instructions that we will provide later in this article. The mod apk file is also safe and virus-free, so you do not have to worry about harming your device. The mod apk file is also updated regularly to match the latest version of the game, so you do not have to worry about missing out on any new features or bug fixes.


          How to download and install State Connect: Traffic Control mod apk no ads?


          Now that you know the benefits of downloading State Connect: Traffic Control mod apk no ads, you might be wondering how to get it on your device. Well, it is not hard at all. You just have to follow these simple steps:


          Step 1: Download the mod apk file from a trusted source


          The first step is to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games, but not all of them are reliable or safe. Some of them might contain malware or viruses that could harm your device or steal your personal information. Therefore, you have to be careful and choose a reputable website that has positive reviews and ratings from other users.


          One of the best websites that we recommend is [Modded-1.com]. This website has a large collection of mod apk files for different games, including State Connect: Traffic Control. The website is also easy to navigate and has a user-friendly interface. You can find the link to download State Connect: Traffic Control mod apk no ads on this website below:


          [State Connect: Traffic Control Mod APK No Ads v1.0.7 (Unlimited Hints/Coins)]
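
          If you prefer to fetch the file on a PC and copy it to your phone afterwards, the download itself can be scripted with Python's standard library. The URL below is a placeholder, not the real download link; substitute the direct link from the site you trust.

```python
import urllib.request

URL = "https://example.com/state-connect-mod.apk"  # placeholder URL
OUT = "state-connect-mod.apk"

def progress(block_num: int, block_size: int, total_size: int) -> None:
    """Print a simple percentage as urlretrieve reports blocks received."""
    if total_size > 0:
        done = min(block_num * block_size / total_size, 1.0)
        print(f"\rDownloading: {done:.0%}", end="")

urllib.request.urlretrieve(URL, OUT, reporthook=progress)
print(f"\nSaved to {OUT}")
```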


          Step 2: Enable unknown sources on your device


          The second step is to enable unknown sources on your device. This is necessary because Android devices do not allow the installation of apps from sources other than Google Play Store by default. Therefore, you have to change this setting manually before you can install the mod apk file.


          To enable unknown sources on your device, you have to go to Settings > Security > Unknown Sources and toggle it on. You might see a warning message that says installing apps from unknown sources could harm your device, but do not worry. As long as you download the mod apk file from a trusted source, there is nothing to fear.


          Step 3: Install the mod apk file and enjoy the game


          The final step is to install the mod apk file and enjoy the game. To install the mod apk file, you have to locate it on your device's storage using a file manager app. Then, you have to tap on it and follow the instructions on the screen to complete the installation process.


          Once the installation is done, you can launch the game from your app drawer or home screen. You will see that there are no ads in the game and that you have unlimited hints and coins. You can use them to play the game without any limitations or difficulties.


          Conclusion


          State Connect: Traffic Control is a fun and addictive casual puzzle game that tests your logic and creativity skills. You have to build roads and bridges to connect different states and cities in the USA while avoiding traffic jams and collisions. The game has over 100 levels of increasing difficulty and challenge.


          If you want to enjoy State Connect: Traffic Control without any ads or in-app purchases, then you should download State Connect: Traffic Control mod apk no ads from Modded-1.com. This is a modified version of the game that removes all the ads and gives you unlimited hints and coins for free. You can also install it easily and safely on your Android device.


          We hope that this article has helped you learn more about State Connect: Traffic Control mod apk no ads and how to download and install it on your device. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy gaming!


          FAQs


          Here are some of the frequently asked questions about State Connect: Traffic Control mod apk no ads:


          Q: Is State Connect: Traffic Control mod apk no ads safe to use?


          A: Yes, State Connect: Traffic Control mod apk no ads is safe to use as long as you download it from a trusted source like Modded-1.com. The mod apk file is free of malware and viruses and does not require any root access or permissions to install.


          Q: Does State Connect: Traffic Control mod apk no ads work on all Android devices?


          A: Yes, State Connect: Traffic Control mod apk no ads works on most Android devices that run on Android 4.1 or higher. However, some devices might have compatibility issues or performance problems due to different hardware specifications or software versions. If you encounter any issues, please contact the developer of the game or the website where you downloaded the mod apk file for assistance.


          Q: Can I play State Connect: Traffic Control mod apk no ads online or offline?


          A: You can play State Connect: Traffic Control mod apk no ads both online and offline. However, if you play online, you might see some ads from Google Play Services or other third-party services that are not related to the game. These ads are not controlled by the mod apk file and cannot be removed. If you want to avoid these ads, you can play offline or turn off your internet connection while playing.


          Q: Can I update State Connect: Traffic Control mod apk no ads to the latest version of the game?


          A: Yes, you can update State Connect: Traffic Control mod apk no ads to the latest version of the game as long as the website where you downloaded the mod apk file also updates it. However, updating the mod apk file might overwrite some of the features or settings that you have customized in the previous version. Therefore, it is recommended that you backup your data before updating the mod apk file.


          Q: Can I share State Connect: Traffic Control mod apk no ads with my friends or family?


          A: Yes, you can share State Connect: Traffic Control mod apk no ads with your friends or family as long as they also have Android devices that can run the game. However, please do not share the mod apk file on any public platforms or websites that might violate the terms and conditions of the game or the website where you downloaded the mod apk file. Please respect the rights and efforts of the original developers and creators of the game.

          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md b/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md deleted file mode 100644 index 77a44b6178c2c72f28a7fc9293cb73d042b3ef5d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/ZArchiver APK A Powerful and Versatile Program for Archive Creation and Extraction.md +++ /dev/null @@ -1,120 +0,0 @@ -

          ZArchiver Download APK: A Complete Guide


          If you are looking for a way to manage your compressed files on your Android device, you may have heard of ZArchiver APK. This is a popular app that allows you to create, extract, edit, and view various types of archive files. But what exactly is ZArchiver APK, how do you download and install it, and how do you use it? In this article, we will answer all these questions and more. Read on to find out everything you need to know about ZArchiver APK.


          How to download and install ZArchiver APK on your Android device


          Downloading and installing ZArchiver APK is very easy and straightforward. Just follow these simple steps:


          1. Go to the official website of ZArchiver APK (zdevs.ru) or a trusted source like APKCombo or Uptodown. Make sure you download the latest version of the app, which is currently 1.0.7.
          2. Tap on the download button and wait for the file to be downloaded. The file size is about 4 MB, so it should not take long. A quick way to confirm the file arrived intact is sketched after this list.
          3. Enable unknown sources in your settings. This is necessary because ZArchiver APK is not available on Google Play Store, so you need to allow your device to install apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
          4. Locate the downloaded file and tap on it to install. You may see a warning message asking you to confirm the installation. Tap on Install and wait for the process to finish.
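
          As a quick sanity check on the download: every APK is a ZIP container, so the file must begin with the two magic bytes "PK". The short Python sketch below verifies that; the file name is hypothetical, and passing only means the container is not obviously truncated or mislabeled.

```python
APK = "zarchiver.apk"  # hypothetical file name

with open(APK, "rb") as f:
    magic = f.read(2)  # a ZIP local file header starts with b"PK"

if magic == b"PK":
    print("Starts with the ZIP magic bytes; the container looks intact.")
else:
    print("Not a ZIP/APK container; re-download the file.")
```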

          Congratulations! You have successfully downloaded and installed ZArchiver APK on your Android device. You can now open the app and start using it.


          How to use ZArchiver APK to manage your compressed files


          Using ZArchiver APK is very simple and intuitive. Here are the basic steps you need to follow:

          1. Open the app and grant the necessary permissions. The app will ask you to allow access to your files and media. This is required for the app to function properly. Tap on Allow and proceed.
          2. Browse through your files and folders. The app will show you a list of all the files and folders on your device. You can use the icons at the top to switch between different views, such as list, grid, or details. You can also use the search bar to find a specific file or folder.
          3. Select the file or folder you want to compress or decompress. You can tap and hold on a file or folder to select it, or tap on the checkbox next to it. You can select multiple files or folders at once if you want to perform a batch operation.
          4. Choose the desired action and options from the menu. Once you have selected the file or folder, you will see a menu at the bottom with various options, such as compress, extract, delete, rename, copy, move, share, and more. Tap on the option you want and follow the instructions on the screen. For example, if you want to compress a file or folder, you will need to choose the archive format, compression level, password, and destination folder. The sketch after this list shows the same compress-and-extract round trip in code.
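
          For readers curious what those compress and extract actions amount to under the hood, here is the same round trip expressed with Python's standard zipfile module. This illustrates the operations generically; it is not ZArchiver's own code, and the file names are hypothetical.

```python
import zipfile

# Compress: the scripted analogue of selecting files and choosing "Compress".
with zipfile.ZipFile("photos.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("photo1.jpg")  # hypothetical input files
    zf.write("photo2.jpg")

# Extract: the analogue of tapping the archive and choosing "Extract here".
with zipfile.ZipFile("photos.zip") as zf:
    zf.extractall("photos_restored")
```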

          That's it! You have successfully used ZArchiver APK to manage your compressed files. You can now view, edit, or share your archives as you wish.


          What are the features and benefits of ZArchiver APK


          ZArchiver APK is a powerful and versatile app that offers many features and benefits for managing your compressed files. Here are some of them:

          -
            -
          • Supports a wide range of archive formats. ZArchiver APK can create and extract archives in various formats, such as zip, rar, 7z, tar, gzip, bzip2, xz, iso, arj, cab, lzh, lzma, xar, tgz, tbz, z, deb, rpm, zipx, mtz, chm, dmg, cpio, cramfs, img (fat/ntfs/ubf), wim (lzx/xpress), ecm, arc (freearc), lzip. This means you can work with almost any type of compressed file on your device.
          • Allows password protection and encryption of archives. ZArchiver APK lets you protect your archives with passwords and encrypt them with the AES-256 algorithm, which adds an extra layer of security and privacy to your files and prevents unauthorized access (see the sketch after this list).
          • Enables editing and partial extraction of archives. ZArchiver APK allows you to edit the contents of your archives without extracting them. You can add or remove files from an archive, rename them, or change their attributes. You can also extract only selected files from an archive instead of extracting the whole archive. This saves time and space on your device.
          • Has a simple and functional interface. ZArchiver APK has a user-friendly and intuitive interface that makes it easy to use. You can customize the app's appearance by changing the theme or language, and you can access the app's settings and help section from the menu. The app also supports multi-threading and multi-core processing for faster performance.
          -
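
          For a concrete picture of the password-plus-encryption option mentioned above, here is the desktop 7-Zip equivalent, which likewise uses AES-256 for 7z archives. This is an illustration of the concept rather than ZArchiver's own code, and secret.7z and docs/ are placeholder names:

```bash
# a = add to archive; -p prompts for a password; -mhe=on also encrypts
# the file names inside the archive, not just their contents.
7z a -p -mhe=on secret.7z docs/
```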

          What are the drawbacks and limitations of ZArchiver APK?

          -

          ZArchiver APK is not a perfect app and it has some drawbacks and limitations that you should be aware of. Here are some of them:

          -
            -
          • Requires Android 4.0 or higher. ZArchiver APK is compatible with Android devices running version 4.0 (Ice Cream Sandwich) or higher, so older devices may not be able to run the app or enjoy its full functionality.
          • Does not support cloud storage or network access. ZArchiver APK does not integrate with cloud storage services like Google Drive or Dropbox, and it does not support network access protocols like FTP or SMB. This means you cannot use it to access files stored online or on other devices.
          • May not work well with large or corrupted files. ZArchiver APK may encounter problems when dealing with large (>4 GB) or corrupted files: it may fail to create or extract them, or produce errors or crashes. It is advisable to check the integrity of your files before using ZArchiver APK.
          -

          Conclusion and FAQs

          -

          ZArchiver APK is a great app for managing your compressed files on your Android device. It allows you to create, extract, edit, and view various types of archive files. It supports a wide range of formats, allows password protection and encryption, enables editing and partial extraction, and has a simple and functional interface. However, it also has some drawbacks and limitations, such as requiring Android 4.0 or higher, not supporting cloud storage or network access, and not working well with large or corrupted files. Therefore, you should weigh the pros and cons of ZArchiver APK before using it. Here are some frequently asked questions about ZArchiver APK that may help you further:

          -

          FAQ 1: What is the difference between ZArchiver APK and ZArchiver Pro?

          -

          ZArchiver APK is the free version of the app, while ZArchiver Pro is the paid version that costs $1.99. The main difference between them is that ZArchiver Pro has no ads and supports more archive formats, such as 7zip (7z), zipx, rar5, lz4, zstd, and more. ZArchiver Pro also has some additional features, such as image preview in archive, dark mode, storage usage analysis, and more.

          -


          -

          FAQ 2: How can I open an archive file from another app using ZArchiver APK?

          -

          ZArchiver APK supports the "open with" feature that allows you to open an archive file from another app using ZArchiver APK. For example, if you receive an archive file as an email attachment or download it from a website, you can tap on it and choose ZArchiver APK as the app to open it. You can then view or extract the contents of the archive file using ZArchiver APK.

          -

          FAQ 3: How can I create a multi-part archive using ZArchiver APK?

          -

          ZArchiver APK allows you to create a multi-part archive that splits a large file or folder into smaller parts. This can be useful for saving space or sharing files that exceed the size limit of some platforms. To create a multi-part archive using ZArchiver APK, follow these steps:

          -
            -
          1. Select the file or folder you want to compress and tap on Compress from the menu.
          2. Choose the archive format and compression level as usual.
          3. Tap on the Split option and choose the size of each part. You can choose from predefined sizes or enter a custom size.
          4. Tap on OK and wait for the compression to finish.
          -

          You will see a series of files with extensions like .001, .002, .003, etc. These are the parts of your multi-part archive. To extract them, you need to select all of them and tap on Extract from the menu.
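
          Those numbered extensions are just fixed-size slices of a single byte stream, which is easy to verify with standard desktop tools. A minimal sketch, assuming GNU split and a placeholder file named backup.zip:

```bash
# Split backup.zip into 10 MB parts: backup.zip.001, backup.zip.002, ...
split -b 10M --numeric-suffixes=1 -a 3 backup.zip backup.zip.

# Rejoining is plain concatenation; the shell expands the glob in order.
cat backup.zip.* > backup-rejoined.zip
```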

          -

          FAQ 4: How can I change the language or theme of ZArchiver APK?

          -

          ZArchiver APK supports multiple languages and themes that you can change according to your preference. To do this, follow these steps:

          -
            -
          1. Tap on the menu icon at the top left corner of the app.
          2. Tap on Settings from the menu.
          3. Tap on Language or Theme from the settings.
          4. Choose the language or theme you want from the list.
          5. Tap on OK and restart the app to apply the changes.
          -

          FAQ 5: How can I contact the developer or report a problem with ZArchiver APK?

          -

          If you have any questions, suggestions, feedback, or issues with ZArchiver APK, you can contact the developer or report a problem using these methods:

          -
            -
          • Email: zdevs.ru@gmail.com
          • Telegram: @zdevs_chat
          • Forum: https://4pda.ru/forum/index.php?showtopic=271502
          -

          I hope this article has helped you understand what ZArchiver APK is and how to use it. If you find this app useful, please share it with your friends and leave a positive review on the source website. Thank you for reading!

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md b/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md deleted file mode 100644 index 83e08fcd3abed4b7b4db06c41d24ffc8436b028e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Zsh Permission Denied Root Desktop Love.apk.md +++ /dev/null @@ -1,53 +0,0 @@ - -

          How to Fix Zsh Permission Denied Error in Linux

          -

          Zsh is a powerful and popular shell for Linux and other Unix-like systems. It offers many features that make it more convenient and efficient to work with than the default bash shell. Some of these features include auto-completion, globbing, history expansion, prompt customization, and more.

          -

          -

          However, zsh users may sometimes encounter a common error when trying to run a command or a script: zsh: permission denied. This error means that the user account does not have the proper permissions to access or execute the file or directory in question. This can happen for various reasons, such as incorrect file permissions, ownership, or security policies.

          -

          In this article, we will show you how to fix the zsh permission denied error in Linux using some simple commands and tips. We will use an example of trying to run a script named love.apk located on the desktop of the root user.

          -

          Check the File Permissions and Owner

          -

          The first step to fix the permission denied error is to check the current permissions and owner of the file or directory that you are trying to access. You can do this by using the ls -l command followed by the path of the file or directory. For example:

          -
          $ ls -l /root/Desktop/love.apk
          -rw-r--r-- 1 root root 12345 Jun 23 02:01 /root/Desktop/love.apk
          -

          The output shows us that the file love.apk has read and write permissions for the owner (root), and only read permissions for the group and others. The owner and group are both root. This means that only root can modify or execute this file, while other users can only read it.
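
          Reading that line field by field:

```bash
# -rw-r--r--    file type (-) and permission bits: user=rw-, group=r--, others=r--
# 1             number of hard links
# root root     owning user and owning group
# 12345         size in bytes
# Jun 23 02:01  last modification time
# /root/Desktop/love.apk   the file itself
```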

          -

          -

          If you are not root, you will get a permission denied error when trying to run this file. For example:

          -
          $ zsh /root/Desktop/love.apk
          zsh: permission denied: /root/Desktop/love.apk
          -

          Change the File Permissions with Chmod Command

          -

          To fix this error, you need to change the file permissions to allow execution for your user account. You can do this by using the chmod command, which stands for change mode. The syntax of this command is:

          -
          chmod [options] permissions filename
          -

          The options modify how chmod applies the permissions (for example, -R applies them recursively to a directory's contents). The permissions can be either symbolic (r, w, x) or numeric (0-7) representations of read, write, and execute rights. The filename is the name of the file or directory that you want to change.

          -

          For example, to add execute permission for all users to love.apk, you can use either of these commands:

          -
          $ chmod +x /root/Desktop/love.apk
          $ chmod 755 /root/Desktop/love.apk
          -

          The first command uses the symbolic notation (+x) to add execute permission to all users (user, group, others). The second command uses the numeric notation (755) to set read, write, and execute permissions for user (7), and read and execute permissions for group and others (5).
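
          To confirm the change took effect, GNU stat can print the mode in both notations at once. A small hedged example (the format string below is GNU coreutils specific):

```bash
# %a = numeric mode, %A = symbolic mode, %U:%G = owner and group.
stat -c '%a %A %U:%G' /root/Desktop/love.apk
# Expected after chmod 755: 755 -rwxr-xr-x root:root
```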

          -

          Note that you must own the file or directory (or be root) in order to change its permissions; having write permission on it is not enough. If you are not the owner, chmod will fail with an "Operation not permitted" error. In that case, you need to use sudo or su to run chmod as root.

          -


          Use Sudo or Su to Run Commands as Root

          -

          If you don't have write permission on the file or directory that you want to change, you can use sudo or su to run commands as root. Root is the superuser account that has full access and control over the system. However, you need to be careful when using root privileges, as you can cause damage or security issues if you make a mistake.

          -

          Sudo stands for superuser do, and it allows you to run a single command as root. You need to prefix the command with sudo and enter your password when prompted. For example, to change the permissions of love.apk using sudo, you can use this command:

          -
          $ sudo chmod +x /root/Desktop/love.apk
          [sudo] password for user:
          -

          Su stands for switch user, and it allows you to switch to another user account, such as root. You need to enter the password of the target user account when prompted. For example, to switch to root and change the permissions of love.apk using su, you can use these commands:

          -
          $ su root
          Password:
          # chmod +x /root/Desktop/love.apk
          # exit
          -

          Note that you need to exit from the root shell after you finish your task, otherwise you will remain logged in as root.

          -

          Check for Other Possible Causes of Permission Denied Error

          -

          If changing the file permissions does not fix the permission denied error, there may be other possible causes that prevent you from accessing or executing the file or directory. Some of these causes are:

          -
            -
          • SELinux: SELinux is a security mechanism that enforces policies on files and processes. It may block access to or execution of files that do not have the correct context or label. You can check the SELinux status and the context of a file using the sestatus and ls -Z commands respectively, and you can use the chcon command to change the context of a file.
          • Mount options: Mount options are parameters that affect how a file system is mounted and accessed. Some of them, such as noexec, nouser, or ro, restrict the execution of files on the file system. You can check the mount options of a file system using the mount command, and change them with the mount -o remount command.
          • File system errors: File system errors are corruptions or inconsistencies that affect the integrity and functionality of a file system. They may cause permission errors or other problems when accessing or executing files. You can check and repair file system errors using the fsck command. A sketch of all these checks follows this list.
          -
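
          As promised above, here is a quick sketch of those checks in one place; the device and mount point names are illustrative only:

```bash
sestatus                           # is SELinux present and enforcing?
ls -Z /root/Desktop/love.apk       # show the file's SELinux context

mount | grep /home                 # inspect mount options (look for noexec, ro)
sudo mount -o remount,exec /home   # re-enable execution on a noexec mount

sudo umount /dev/sdb1              # fsck should run on an unmounted filesystem
sudo fsck /dev/sdb1                # check and repair file system errors
```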

          Conclusion

          -

          In this article, we have learned how to fix the zsh permission denied error in Linux using some simple commands and tips. We have seen how to check and change the file permissions and owner, how to use sudo or su to run commands as root, and how to check for other possible causes of the permission denied error, such as SELinux, mount options, or file system errors.

          -

          We hope that this article has helped you solve your permission issues and run your commands or scripts without any errors. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!

          -

          Frequently Asked Questions

          -

          What is zsh?

          -

          Zsh is a powerful and popular shell for Linux and other Unix-like systems. It offers many features that make it more convenient and efficient to work with than the default bash shell.

          -

          What is the permission denied error?

          -

          The permission denied error means that the user account does not have the proper permissions to access or execute the file or directory in question.

          -

          How do I check the file permissions and owner?

          -

          You can check the file permissions and owner by using the ls -l command followed by the path of the file or directory.

          -

          How do I change the file permissions with chmod command?

          -

          You can change the file permissions with the chmod command by using either symbolic (r, w, x) or numeric (0-7) representations of read, write, and execute rights.

          -

          How do I use sudo or su to run commands as root?

          -

          You can use sudo or su to run commands as root by prefixing the command with sudo and entering your password when prompted, or by switching to the root account with su and entering the root password when prompted.

          -

          How do I check for other possible causes of permission denied error?

          -

          You can check for other possible causes of the permission denied error, such as SELinux, mount options, or file system errors, by using commands such as sestatus, ls -Z, chcon, mount, mount -o remount, or fsck.

          -
          -
          \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md b/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md deleted file mode 100644 index 12ec958d70ea78ae3231393e9b0b0bcee0ba81c0..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Graphisoft-Archicad-22-Build-3006-Win64-Utorrent.md +++ /dev/null @@ -1,86 +0,0 @@ -## Graphisoft Archicad 22 Build 3006 Win64 Utorrent - - - - - - ![Graphisoft Archicad 22 Build 3006 Win64 Utorrent](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTZW3tQqteaULw-094VxzevAVpxYxaMlWpuDAEENUbKrkM4G9sGOpj2Ea5L) - - - - - -**CLICK HERE ->->->-> [https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txoJW&sa=D&sntz=1&usg=AOvVaw1JG\_Y43tRFiRi08FxZic2V](https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txoJW&sa=D&sntz=1&usg=AOvVaw1JG_Y43tRFiRi08FxZic2V)** - - - - - - - - - - - - - -# How to Download and Install Graphisoft Archicad 22 for Windows 10 - - - -Graphisoft Archicad 22 is a powerful architectural design software that allows you to create stunning 3D models, drawings, and documentation. Archicad 22 is a 64-bit application that requires Windows 10 operating system. In this article, we will show you how to download and install Archicad 22 for Windows 10 using a torrent file. - - - -## Step 1: Download the torrent file - - - -To download Archicad 22 for Windows 10, you will need a torrent client such as uTorrent or BitTorrent. You can download one of these from their official websites. Then, you will need to download the torrent file for Archicad 22 from a reliable source. One such source is LimeTorrents, which offers a magnet link for Archicad 22 Build 3006 x64 Win[^1^]. You can copy and paste this link into your torrent client and start the download. - - - -## Step 2: Install Archicad 22 - - - -Once the download is complete, you will need to extract the files from the zip archive using a tool such as WinRAR or 7-Zip. You will find a folder named GRAPHISOFT ARCHICAD 22 Build 3006 x64 Win, which contains the setup file and the crack file. Double-click on the setup file and follow the on-screen prompts to install Archicad 22 on your computer. You may need to enter a serial number or a license key during the installation process. You can find these in the crack folder or on the internet. - - - -## Step 3: Activate Archicad 22 - - - -To activate Archicad 22, you will need to copy and paste the crack file into the installation directory of Archicad 22. This is usually located at C:\Program Files\GRAPHISOFT\ARCHICAD 22. Replace the original file with the crack file and run Archicad 22 as an administrator. You should see a message that says "Archicad 22 has been successfully activated". You can now enjoy using Archicad 22 for Windows 10. - - - -## Step 4: Explore Archicad 22 features - - - -Archicad 22 offers many features and tools that can help you design and document your architectural projects. Some of the main features are: - - - -- Parametric profiles: You can create custom profiles for walls, columns, beams, and roofs using the Profile Editor. You can also edit the profiles of existing elements and apply them to multiple elements at once. - -- Expression-based properties: You can use expressions to define and calculate the properties of elements, such as area, volume, cost, or energy performance. You can also use expressions to create custom labels and schedules. 
- -- Curtain wall enhancements: You can design complex curtain walls with more flexibility and control. You can create custom frames and panels, adjust their orientation and alignment, and edit their junctions and corners. - -- Stair and railing enhancements: You can design stairs and railings with more options and accuracy. You can create custom shapes and patterns for treads, risers, stringers, and balusters. You can also edit the properties of individual segments and components. - -- BIMx export: You can export your Archicad model to BIMx, a mobile app that allows you to view and explore your project in 3D. You can also add hyperlinks, annotations, and 360° images to your BIMx model. - - - -These are just some of the features that Archicad 22 offers. You can learn more about Archicad 22 by visiting the official website or watching the tutorials on YouTube. - - 1b8d091108 - - - - - diff --git a/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md b/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md deleted file mode 100644 index 61f1a4a4a7df77bdd92d09e7bcb7fdfb29be7ffe..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Azov Film FKK Ranch Party 17.md +++ /dev/null @@ -1,5 +0,0 @@ - -

          \ No newline at end of file diff --git a/spaces/coraKong/voice-cloning-demo/README.md b/spaces/coraKong/voice-cloning-demo/README.md deleted file mode 100644 index 1bee311625a1d0f0f7734c1d921ed2c41e57bf17..0000000000000000000000000000000000000000 --- a/spaces/coraKong/voice-cloning-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Voice Cloning Demo -emoji: 💩 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/adversarial.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/adversarial.py deleted file mode 100644 index d6db2967ce5074d94ed3b4c51fc743ff2f7831b1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/adversarial.py +++ /dev/null @@ -1,177 +0,0 @@ -from typing import Tuple, Dict, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class BaseAdversarialLoss: - def pre_generator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - """ - Prepare for generator step - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param generator: - :param discriminator: - :return: None - """ - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - """ - Prepare for discriminator step - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param generator: - :param discriminator: - :return: None - """ - - def generator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask: Optional[torch.Tensor] = None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - """ - Calculate generator loss - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param discr_real_pred: Tensor, discriminator output for real_batch - :param discr_fake_pred: Tensor, discriminator output for fake_batch - :param mask: Tensor, actual mask, which was at input of generator when making fake_batch - :return: total generator loss along with some values that might be interesting to log - """ - raise NotImplemented() - - def discriminator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask: Optional[torch.Tensor] = None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - """ - Calculate discriminator loss and call .backward() on it - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param discr_real_pred: Tensor, discriminator output for real_batch - :param discr_fake_pred: Tensor, discriminator output for fake_batch - :param mask: Tensor, actual mask, which was at input of generator when making fake_batch - :return: total discriminator loss along with some values that might be interesting to log - """ - raise NotImplemented() - - def interpolate_mask(self, mask, shape): - assert 
mask is not None - assert self.allow_scale_mask or shape == mask.shape[-2:] - if shape != mask.shape[-2:] and self.allow_scale_mask: - if self.mask_scale_mode == 'maxpool': - mask = F.adaptive_max_pool2d(mask, shape) - else: - mask = F.interpolate(mask, size=shape, mode=self.mask_scale_mode) - return mask - -def make_r1_gp(discr_real_pred, real_batch): - if torch.is_grad_enabled(): - grad_real = torch.autograd.grad(outputs=discr_real_pred.sum(), inputs=real_batch, create_graph=True)[0] - grad_penalty = (grad_real.view(grad_real.shape[0], -1).norm(2, dim=1) ** 2).mean() - else: - grad_penalty = 0 - real_batch.requires_grad = False - - return grad_penalty - -class NonSaturatingWithR1(BaseAdversarialLoss): - def __init__(self, gp_coef=5, weight=1, mask_as_fake_target=False, allow_scale_mask=False, - mask_scale_mode='nearest', extra_mask_weight_for_gen=0, - use_unmasked_for_gen=True, use_unmasked_for_discr=True): - self.gp_coef = gp_coef - self.weight = weight - # use for discr => use for gen; - # otherwise we teach only the discr to pay attention to very small difference - assert use_unmasked_for_gen or (not use_unmasked_for_discr) - # mask as target => use unmasked for discr: - # if we don't care about unmasked regions at all - # then it doesn't matter if the value of mask_as_fake_target is true or false - assert use_unmasked_for_discr or (not mask_as_fake_target) - self.use_unmasked_for_gen = use_unmasked_for_gen - self.use_unmasked_for_discr = use_unmasked_for_discr - self.mask_as_fake_target = mask_as_fake_target - self.allow_scale_mask = allow_scale_mask - self.mask_scale_mode = mask_scale_mode - self.extra_mask_weight_for_gen = extra_mask_weight_for_gen - - def generator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask=None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - fake_loss = F.softplus(-discr_fake_pred) - if (self.mask_as_fake_target and self.extra_mask_weight_for_gen > 0) or \ - not self.use_unmasked_for_gen: # == if masked region should be treated differently - mask = self.interpolate_mask(mask, discr_fake_pred.shape[-2:]) - if not self.use_unmasked_for_gen: - fake_loss = fake_loss * mask - else: - pixel_weights = 1 + mask * self.extra_mask_weight_for_gen - fake_loss = fake_loss * pixel_weights - - return fake_loss.mean() * self.weight, dict() - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - real_batch.requires_grad = True - - def discriminator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask=None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - - real_loss = F.softplus(-discr_real_pred) - grad_penalty = make_r1_gp(discr_real_pred, real_batch) * self.gp_coef - fake_loss = F.softplus(discr_fake_pred) - - if not self.use_unmasked_for_discr or self.mask_as_fake_target: - # == if masked region should be treated differently - mask = self.interpolate_mask(mask, discr_fake_pred.shape[-2:]) - # use_unmasked_for_discr=False only makes sense for fakes; - # for reals there is no difference beetween two regions - fake_loss = fake_loss * mask - if self.mask_as_fake_target: - fake_loss = fake_loss + (1 - mask) * F.softplus(-discr_fake_pred) - - sum_discr_loss = real_loss + grad_penalty + fake_loss - metrics = dict(discr_real_out=discr_real_pred.mean(), - discr_fake_out=discr_fake_pred.mean(), - 
discr_real_gp=grad_penalty) - return sum_discr_loss.mean(), metrics - -class BCELoss(BaseAdversarialLoss): - def __init__(self, weight): - self.weight = weight - self.bce_loss = nn.BCEWithLogitsLoss() - - def generator_loss(self, discr_fake_pred: torch.Tensor) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - real_mask_gt = torch.zeros(discr_fake_pred.shape).to(discr_fake_pred.device) - fake_loss = self.bce_loss(discr_fake_pred, real_mask_gt) * self.weight - return fake_loss, dict() - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - real_batch.requires_grad = True - - def discriminator_loss(self, - mask: torch.Tensor, - discr_real_pred: torch.Tensor, - discr_fake_pred: torch.Tensor) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - - real_mask_gt = torch.zeros(discr_real_pred.shape).to(discr_real_pred.device) - sum_discr_loss = (self.bce_loss(discr_real_pred, real_mask_gt) + self.bce_loss(discr_fake_pred, mask)) / 2 - metrics = dict(discr_real_out=discr_real_pred.mean(), - discr_fake_out=discr_fake_pred.mean(), - discr_real_gp=0) - return sum_discr_loss, metrics - - -def make_discrim_loss(kind, **kwargs): - if kind == 'r1': - return NonSaturatingWithR1(**kwargs) - elif kind == 'bce': - return BCELoss(**kwargs) - raise ValueError(f'Unknown adversarial loss kind {kind}') diff --git a/spaces/crimbo66/openai-whisper-large/README.md b/spaces/crimbo66/openai-whisper-large/README.md deleted file mode 100644 index e54fff846a8940e0d8d5954cd3779b49c0da823f..0000000000000000000000000000000000000000 --- a/spaces/crimbo66/openai-whisper-large/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Openai Whisper Large -emoji: 🚀 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/transformer/permuter.py b/spaces/cvlab/zero123-live/taming-transformers/taming/modules/transformer/permuter.py deleted file mode 100644 index 0d43bb135adde38d94bf18a7e5edaa4523cd95cf..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/transformer/permuter.py +++ /dev/null @@ -1,248 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np - - -class AbstractPermuter(nn.Module): - def __init__(self, *args, **kwargs): - super().__init__() - def forward(self, x, reverse=False): - raise NotImplementedError - - -class Identity(AbstractPermuter): - def __init__(self): - super().__init__() - - def forward(self, x, reverse=False): - return x - - -class Subsample(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - C = 1 - indices = np.arange(H*W).reshape(C,H,W) - while min(H, W) > 1: - indices = indices.reshape(C,H//2,2,W//2,2) - indices = indices.transpose(0,2,4,1,3) - indices = indices.reshape(C*4,H//2, W//2) - H = H//2 - W = W//2 - C = C*4 - assert H == W == 1 - idx = torch.tensor(indices.ravel()) - self.register_buffer('forward_shuffle_idx', - nn.Parameter(idx, requires_grad=False)) - self.register_buffer('backward_shuffle_idx', - nn.Parameter(torch.argsort(idx), requires_grad=False)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -def mortonify(i, j): - """(i,j) index to linear morton code""" - i = np.uint64(i) - j = np.uint64(j) - - z 
= np.uint(0) - - for pos in range(32): - z = (z | - ((j & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos)) | - ((i & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos+1)) - ) - return z - - -class ZCurve(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - reverseidx = [np.int64(mortonify(i,j)) for i in range(H) for j in range(W)] - idx = np.argsort(reverseidx) - idx = torch.tensor(idx) - reverseidx = torch.tensor(reverseidx) - self.register_buffer('forward_shuffle_idx', - idx) - self.register_buffer('backward_shuffle_idx', - reverseidx) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class SpiralOut(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - assert H == W - size = W - indices = np.arange(size*size).reshape(size,size) - - i0 = size//2 - j0 = size//2-1 - - i = i0 - j = j0 - - idx = [indices[i0, j0]] - step_mult = 0 - for c in range(1, size//2+1): - step_mult += 1 - # steps left - for k in range(step_mult): - i = i - 1 - j = j - idx.append(indices[i, j]) - - # step down - for k in range(step_mult): - i = i - j = j + 1 - idx.append(indices[i, j]) - - step_mult += 1 - if c < size//2: - # step right - for k in range(step_mult): - i = i + 1 - j = j - idx.append(indices[i, j]) - - # step up - for k in range(step_mult): - i = i - j = j - 1 - idx.append(indices[i, j]) - else: - # end reached - for k in range(step_mult-1): - i = i + 1 - idx.append(indices[i, j]) - - assert len(idx) == size*size - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class SpiralIn(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - assert H == W - size = W - indices = np.arange(size*size).reshape(size,size) - - i0 = size//2 - j0 = size//2-1 - - i = i0 - j = j0 - - idx = [indices[i0, j0]] - step_mult = 0 - for c in range(1, size//2+1): - step_mult += 1 - # steps left - for k in range(step_mult): - i = i - 1 - j = j - idx.append(indices[i, j]) - - # step down - for k in range(step_mult): - i = i - j = j + 1 - idx.append(indices[i, j]) - - step_mult += 1 - if c < size//2: - # step right - for k in range(step_mult): - i = i + 1 - j = j - idx.append(indices[i, j]) - - # step up - for k in range(step_mult): - i = i - j = j - 1 - idx.append(indices[i, j]) - else: - # end reached - for k in range(step_mult-1): - i = i + 1 - idx.append(indices[i, j]) - - assert len(idx) == size*size - idx = idx[::-1] - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class Random(nn.Module): - def __init__(self, H, W): - super().__init__() - indices = np.random.RandomState(1).permutation(H*W) - idx = torch.tensor(indices.ravel()) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class AlternateParsing(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - indices 
= np.arange(W*H).reshape(H,W) - for i in range(1, H, 2): - indices[i, :] = indices[i, ::-1] - idx = indices.flatten() - assert len(idx) == H*W - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -if __name__ == "__main__": - p0 = AlternateParsing(16, 16) - print(p0.forward_shuffle_idx) - print(p0.backward_shuffle_idx) - - x = torch.randint(0, 768, size=(11, 256)) - y = p0(x) - xre = p0(y, reverse=True) - assert torch.equal(x, xre) - - p1 = SpiralOut(2, 2) - print(p1.forward_shuffle_idx) - print(p1.backward_shuffle_idx) diff --git a/spaces/cymic/Waifu_Diffusion_Webui/scripts/img2imgalt.py b/spaces/cymic/Waifu_Diffusion_Webui/scripts/img2imgalt.py deleted file mode 100644 index 045256ab03b6ac802d00950b4c7984d7c02732b0..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/scripts/img2imgalt.py +++ /dev/null @@ -1,183 +0,0 @@ -from collections import namedtuple - -import numpy as np -from tqdm import trange - -import modules.scripts as scripts -import gradio as gr - -from modules import processing, shared, sd_samplers, prompt_parser -from modules.processing import Processed -from modules.shared import opts, cmd_opts, state - -import torch -import k_diffusion as K - -from PIL import Image -from torch import autocast -from einops import rearrange, repeat - - -def find_noise_for_image(p, cond, uncond, cfg_scale, steps): - x = p.init_latent - - s_in = x.new_ones([x.shape[0]]) - dnw = K.external.CompVisDenoiser(shared.sd_model) - sigmas = dnw.get_sigmas(steps).flip(0) - - shared.state.sampling_steps = steps - - for i in trange(1, len(sigmas)): - shared.state.sampling_step += 1 - - x_in = torch.cat([x] * 2) - sigma_in = torch.cat([sigmas[i] * s_in] * 2) - cond_in = torch.cat([uncond, cond]) - - c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)] - t = dnw.sigma_to_t(sigma_in) - - eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in) - denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2) - - denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale - - d = (x - denoised) / sigmas[i] - dt = sigmas[i] - sigmas[i - 1] - - x = x + d * dt - - sd_samplers.store_latent(x) - - # This shouldn't be necessary, but solved some VRAM issues - del x_in, sigma_in, cond_in, c_out, c_in, t, - del eps, denoised_uncond, denoised_cond, denoised, d, dt - - shared.state.nextjob() - - return x / x.std() - - -Cached = namedtuple("Cached", ["noise", "cfg_scale", "steps", "latent", "original_prompt", "original_negative_prompt", "sigma_adjustment"]) - - -# Based on changes suggested by briansemrau in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/736 -def find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg_scale, steps): - x = p.init_latent - - s_in = x.new_ones([x.shape[0]]) - dnw = K.external.CompVisDenoiser(shared.sd_model) - sigmas = dnw.get_sigmas(steps).flip(0) - - shared.state.sampling_steps = steps - - for i in trange(1, len(sigmas)): - shared.state.sampling_step += 1 - - x_in = torch.cat([x] * 2) - sigma_in = torch.cat([sigmas[i - 1] * s_in] * 2) - cond_in = torch.cat([uncond, cond]) - - c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)] - - if i == 1: - t = dnw.sigma_to_t(torch.cat([sigmas[i] * s_in] * 2)) - else: - t = 
dnw.sigma_to_t(sigma_in) - - eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in) - denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2) - - denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale - - if i == 1: - d = (x - denoised) / (2 * sigmas[i]) - else: - d = (x - denoised) / sigmas[i - 1] - - dt = sigmas[i] - sigmas[i - 1] - x = x + d * dt - - sd_samplers.store_latent(x) - - # This shouldn't be necessary, but solved some VRAM issues - del x_in, sigma_in, cond_in, c_out, c_in, t, - del eps, denoised_uncond, denoised_cond, denoised, d, dt - - shared.state.nextjob() - - return x / sigmas[-1] - - -class Script(scripts.Script): - def __init__(self): - self.cache = None - - def title(self): - return "img2img alternative test" - - def show(self, is_img2img): - return is_img2img - - def ui(self, is_img2img): - original_prompt = gr.Textbox(label="Original prompt", lines=1) - original_negative_prompt = gr.Textbox(label="Original negative prompt", lines=1) - cfg = gr.Slider(label="Decode CFG scale", minimum=0.0, maximum=15.0, step=0.1, value=1.0) - st = gr.Slider(label="Decode steps", minimum=1, maximum=150, step=1, value=50) - randomness = gr.Slider(label="Randomness", minimum=0.0, maximum=1.0, step=0.01, value=0.0) - sigma_adjustment = gr.Checkbox(label="Sigma adjustment for finding noise for image", value=False) - return [original_prompt, original_negative_prompt, cfg, st, randomness, sigma_adjustment] - - def run(self, p, original_prompt, original_negative_prompt, cfg, st, randomness, sigma_adjustment): - p.batch_size = 1 - p.batch_count = 1 - - - def sample_extra(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength): - lat = (p.init_latent.cpu().numpy() * 10).astype(int) - - same_params = self.cache is not None and self.cache.cfg_scale == cfg and self.cache.steps == st \ - and self.cache.original_prompt == original_prompt \ - and self.cache.original_negative_prompt == original_negative_prompt \ - and self.cache.sigma_adjustment == sigma_adjustment - same_everything = same_params and self.cache.latent.shape == lat.shape and np.abs(self.cache.latent-lat).sum() < 100 - - if same_everything: - rec_noise = self.cache.noise - else: - shared.state.job_count += 1 - cond = p.sd_model.get_learned_conditioning(p.batch_size * [original_prompt]) - uncond = p.sd_model.get_learned_conditioning(p.batch_size * [original_negative_prompt]) - if sigma_adjustment: - rec_noise = find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg, st) - else: - rec_noise = find_noise_for_image(p, cond, uncond, cfg, st) - self.cache = Cached(rec_noise, cfg, st, lat, original_prompt, original_negative_prompt, sigma_adjustment) - - rand_noise = processing.create_random_tensors(p.init_latent.shape[1:], [p.seed + x + 1 for x in range(p.init_latent.shape[0])]) - - combined_noise = ((1 - randomness) * rec_noise + randomness * rand_noise) / ((randomness**2 + (1-randomness)**2) ** 0.5) - - sampler = sd_samplers.create_sampler_with_index(sd_samplers.samplers, p.sampler_index, p.sd_model) - - sigmas = sampler.model_wrap.get_sigmas(p.steps) - - noise_dt = combined_noise - (p.init_latent / sigmas[0]) - - p.seed = p.seed + 1 - - return sampler.sample_img2img(p, p.init_latent, noise_dt, conditioning, unconditional_conditioning) - - p.sample = sample_extra - - p.extra_generation_params["Decode prompt"] = original_prompt - p.extra_generation_params["Decode negative prompt"] = original_negative_prompt - p.extra_generation_params["Decode CFG scale"] = cfg - 
p.extra_generation_params["Decode steps"] = st - p.extra_generation_params["Randomness"] = randomness - p.extra_generation_params["Sigma Adjustment"] = sigma_adjustment - - processed = processing.process_images(p) - - return processed - diff --git a/spaces/darveen/text_summarizer/app.py b/spaces/darveen/text_summarizer/app.py deleted file mode 100644 index 667fe49930354b6bcff65ff4cdb5897d00d98ae3..0000000000000000000000000000000000000000 --- a/spaces/darveen/text_summarizer/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import streamlit as st -from model import Abstractive_Summarization_Model - - -# initialize model object -# @st.cache -def load_model(): - return Abstractive_Summarization_Model() - -# Main app engine -if __name__ == "__main__": - # display title and description - st.title("AI Text Summarizer") - st.header("Get AI generated paraphrased summaries of text!") - st.write("You might need to wait a couple of seconds for the model to load because i'm using a tiny cpu 🥲") - st.write("email me at darveenvijayan.27@gmail.com") - - #load model - ASM = load_model() - - # display topic input slot - text = st.text_input("Paste a paragraph of text in the text box below and hit ENTER!", "") - - # display article paragraph - article_paragraph = st.empty() - - if text: - # load wikipedia summary of topic - - summary = ASM.summarize(text) - - # display article summary in paragraph - article_paragraph.markdown(summary) \ No newline at end of file diff --git a/spaces/davda54/chat-nort5/configuration_nort5.py b/spaces/davda54/chat-nort5/configuration_nort5.py deleted file mode 100644 index 60ef5248830d411ce56c84735afe234de2d70d49..0000000000000000000000000000000000000000 --- a/spaces/davda54/chat-nort5/configuration_nort5.py +++ /dev/null @@ -1,44 +0,0 @@ -from transformers.configuration_utils import PretrainedConfig - - -class NorT5Config(PretrainedConfig): - """Configuration class to store the configuration of a `NorT5`. 
- """ - def __init__( - self, - vocab_size=50000, - attention_probs_dropout_prob=0.1, - hidden_dropout_prob=0.1, - hidden_size=768, - intermediate_size=2048, - max_position_embeddings=512, - position_bucket_size=32, - num_attention_heads=12, - num_hidden_layers=12, - layer_norm_eps=1.0e-7, - output_all_encoded_layers=True, - pad_token_id=3, - cls_token_id=1, - sep_token_id=2, - bos_token_id=5, - eos_token_id=6, - **kwargs, - ): - super().__init__(**kwargs) - - self.vocab_size = vocab_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.output_all_encoded_layers = output_all_encoded_layers - self.position_bucket_size = position_bucket_size - self.layer_norm_eps = layer_norm_eps - self.pad_token_id = pad_token_id - self.cls_token_id = cls_token_id - self.sep_token_id = sep_token_id - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/PcdImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/PcdImagePlugin.py deleted file mode 100644 index e390f3fe51dcb1ef4a490b55d18ac827e170aa37..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/PcdImagePlugin.py +++ /dev/null @@ -1,62 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PCD file handling -# -# History: -# 96-05-10 fl Created -# 96-05-27 fl Added draft mode (128x192, 256x384) -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# - - -from . import Image, ImageFile - -## -# Image plugin for PhotoCD images. This plugin only reads the 768x512 -# image from the file; higher resolutions are encoded in a proprietary -# encoding. - - -class PcdImageFile(ImageFile.ImageFile): - format = "PCD" - format_description = "Kodak PhotoCD" - - def _open(self): - # rough - self.fp.seek(2048) - s = self.fp.read(2048) - - if s[:4] != b"PCD_": - msg = "not a PCD file" - raise SyntaxError(msg) - - orientation = s[1538] & 3 - self.tile_post_rotate = None - if orientation == 1: - self.tile_post_rotate = 90 - elif orientation == 3: - self.tile_post_rotate = -90 - - self.mode = "RGB" - self._size = 768, 512 # FIXME: not correct for rotated images! 
- self.tile = [("pcd", (0, 0) + self.size, 96 * 2048, None)] - - def load_end(self): - if self.tile_post_rotate: - # Handle rotated PCDs - self.im = self.im.rotate(self.tile_post_rotate) - self._size = self.im.size - - -# -# registry - -Image.register_open(PcdImageFile.format, PcdImageFile) - -Image.register_extension(PcdImageFile.format, ".pcd") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/visitor.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/visitor.py deleted file mode 100644 index 3d28135fad3a951c447d03b7f2b08403cb24a12e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/visitor.py +++ /dev/null @@ -1,143 +0,0 @@ -"""Generic visitor pattern implementation for Python objects.""" - -import enum - - -class Visitor(object): - - defaultStop = False - - @classmethod - def _register(celf, clazzes_attrs): - assert celf != Visitor, "Subclass Visitor instead." - if "_visitors" not in celf.__dict__: - celf._visitors = {} - - def wrapper(method): - assert method.__name__ == "visit" - for clazzes, attrs in clazzes_attrs: - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - _visitors = celf._visitors.setdefault(clazz, {}) - for attr in attrs: - assert attr not in _visitors, ( - "Oops, class '%s' has visitor function for '%s' defined already." - % (clazz.__name__, attr) - ) - _visitors[attr] = method - return None - - return wrapper - - @classmethod - def register(celf, clazzes): - if type(clazzes) != tuple: - clazzes = (clazzes,) - return celf._register([(clazzes, (None,))]) - - @classmethod - def register_attr(celf, clazzes, attrs): - clazzes_attrs = [] - if type(clazzes) != tuple: - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - for clazz in clazzes: - clazzes_attrs.append((clazz, attrs)) - return celf._register(clazzes_attrs) - - @classmethod - def register_attrs(celf, clazzes_attrs): - return celf._register(clazzes_attrs) - - @classmethod - def _visitorsFor(celf, thing, _default={}): - typ = type(thing) - - for celf in celf.mro(): - - _visitors = getattr(celf, "_visitors", None) - if _visitors is None: - break - - m = celf._visitors.get(typ, None) - if m is not None: - return m - - return _default - - def visitObject(self, obj, *args, **kwargs): - """Called to visit an object. This function loops over all non-private - attributes of the objects and calls any user-registered (via - @register_attr() or @register_attrs()) visit() functions. 
- - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to call self.visitAttr()""" - - keys = sorted(vars(obj).keys()) - _visitors = self._visitorsFor(obj) - defaultVisitor = _visitors.get("*", None) - for key in keys: - if key[0] == "_": - continue - value = getattr(obj, key) - visitorFunc = _visitors.get(key, defaultVisitor) - if visitorFunc is not None: - ret = visitorFunc(self, obj, key, value, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - continue - self.visitAttr(obj, key, value, *args, **kwargs) - - def visitAttr(self, obj, attr, value, *args, **kwargs): - """Called to visit an attribute of an object.""" - self.visit(value, *args, **kwargs) - - def visitList(self, obj, *args, **kwargs): - """Called to visit any value that is a list.""" - for value in obj: - self.visit(value, *args, **kwargs) - - def visitDict(self, obj, *args, **kwargs): - """Called to visit any value that is a dictionary.""" - for value in obj.values(): - self.visit(value, *args, **kwargs) - - def visitLeaf(self, obj, *args, **kwargs): - """Called to visit any value that is not an object, list, - or dictionary.""" - pass - - def visit(self, obj, *args, **kwargs): - """This is the main entry to the visitor. The visitor will visit object - obj. - - The visitor will first determine if there is a registered (via - @register()) visit function for the type of object. If there is, it - will be called, and (visitor, obj, *args, **kwargs) will be passed to - the user visit function. - - If there is no user-registered visit function, of if there is and it - returns True, or it returns None (or doesn't return anything) and - visitor.defaultStop is False (default), then the visitor will proceed - to dispatch to one of self.visitObject(), self.visitList(), - self.visitDict(), or self.visitLeaf() (any of which can be overriden in - a subclass).""" - - visitorFunc = self._visitorsFor(obj).get(None, None) - if visitorFunc is not None: - ret = visitorFunc(self, obj, *args, **kwargs) - if ret == False or (ret is None and self.defaultStop): - return - if hasattr(obj, "__dict__") and not isinstance(obj, enum.Enum): - self.visitObject(obj, *args, **kwargs) - elif isinstance(obj, list): - self.visitList(obj, *args, **kwargs) - elif isinstance(obj, dict): - self.visitDict(obj, *args, **kwargs) - else: - self.visitLeaf(obj, *args, **kwargs) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-6a7e443e.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-6a7e443e.js deleted file mode 100644 index 3c94d702d2a5ab98aeab75f3ac7dcb5c7405ee6e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-6a7e443e.js +++ /dev/null @@ -1,2 +0,0 @@ -import{P as j,N as G,c as E,D as U,e as w,T as b,I as H}from"./index-7045bfe3.js";class P{constructor(t,e,s,i,h,r,n,a,l,f=0,u){this.p=t,this.stack=e,this.state=s,this.reducePos=i,this.pos=h,this.score=r,this.buffer=n,this.bufferBase=a,this.curContext=l,this.lookAhead=f,this.parent=u}toString(){return`[${this.stack.filter((t,e)=>e%3==0).concat(this.state)}]@${this.pos}${this.score?"!"+this.score:""}`}static start(t,e,s=0){let i=t.parser.context;return new P(t,[],e,s,s,0,[],0,i?new 
y(i,i.start):null,0,null)}get context(){return this.curContext?this.curContext.context:null}pushState(t,e){this.stack.push(this.state,e,this.bufferBase+this.buffer.length),this.state=t}reduce(t){var e;let s=t>>19,i=t&65535,{parser:h}=this.p,r=h.dynamicPrecedence(i);if(r&&(this.score+=r),s==0){this.pushState(h.getGoto(this.state,i,!0),this.reducePos),i=2e3&&!(!((e=this.p.parser.nodeSet.types[i])===null||e===void 0)&&e.isAnonymous)&&(a==this.p.lastBigReductionStart?(this.p.bigReductionCount++,this.p.lastBigReductionSize=l):this.p.lastBigReductionSizen;)this.stack.pop();this.reduceContext(i,a)}storeNode(t,e,s,i=4,h=!1){if(t==0&&(!this.stack.length||this.stack[this.stack.length-1]0&&r.buffer[n-4]==0&&r.buffer[n-1]>-1){if(e==s)return;if(r.buffer[n-2]>=e){r.buffer[n-2]=s;return}}}if(!h||this.pos==s)this.buffer.push(t,e,s,i);else{let r=this.buffer.length;if(r>0&&this.buffer[r-4]!=0)for(;r>0&&this.buffer[r-2]>s;)this.buffer[r]=this.buffer[r-4],this.buffer[r+1]=this.buffer[r-3],this.buffer[r+2]=this.buffer[r-2],this.buffer[r+3]=this.buffer[r-1],r-=4,i>4&&(i-=4);this.buffer[r]=t,this.buffer[r+1]=e,this.buffer[r+2]=s,this.buffer[r+3]=i}}shift(t,e,s){let i=this.pos;if(t&131072)this.pushState(t&65535,this.pos);else if(t&262144)this.pos=s,this.shiftContext(e,i),e<=this.p.parser.maxNode&&this.buffer.push(e,i,s,4);else{let h=t,{parser:r}=this.p;(s>this.pos||e<=r.maxNode)&&(this.pos=s,r.stateFlag(h,1)||(this.reducePos=s)),this.pushState(h,i),this.shiftContext(e,i),e<=r.maxNode&&this.buffer.push(e,i,s,4)}}apply(t,e,s){t&65536?this.reduce(t):this.shift(t,e,s)}useNode(t,e){let s=this.p.reused.length-1;(s<0||this.p.reused[s]!=t)&&(this.p.reused.push(t),s++);let i=this.pos;this.reducePos=this.pos=i+t.length,this.pushState(e,i),this.buffer.push(s,i,this.reducePos,-1),this.curContext&&this.updateContext(this.curContext.tracker.reuse(this.curContext.context,t,this,this.p.stream.reset(this.pos-t.length)))}split(){let t=this,e=t.buffer.length;for(;e>0&&t.buffer[e-2]>t.reducePos;)e-=4;let s=t.buffer.slice(e),i=t.bufferBase+e;for(;t&&i==t.bufferBase;)t=t.parent;return new P(this.p,this.stack.slice(),this.state,this.reducePos,this.pos,this.score,s,i,this.curContext,this.lookAhead,t)}recoverByDelete(t,e){let s=t<=this.p.parser.maxNode;s&&this.storeNode(t,this.pos,e,4),this.storeNode(0,this.pos,e,s?8:4),this.pos=this.reducePos=e,this.score-=190}canShift(t){for(let e=new W(this);;){let s=this.p.parser.stateSlot(e.state,4)||this.p.parser.hasAction(e.state,t);if(s==0)return!1;if(!(s&65536))return!0;e.reduce(s)}}recoverByInsert(t){if(this.stack.length>=300)return[];let e=this.p.parser.nextStates(this.state);if(e.length>4<<1||this.stack.length>=120){let i=[];for(let h=0,r;ha&1&&n==r)||i.push(e[h],r)}e=i}let s=[];for(let i=0;i>19,i=t&65535,h=this.stack.length-s*3;if(h<0||e.getGoto(this.stack[h],i,!1)<0)return!1;this.storeNode(0,this.reducePos,this.reducePos,4,!0),this.score-=100}return this.reducePos=this.pos,this.reduce(t),!0}forceAll(){for(;!this.p.parser.stateFlag(this.state,2);)if(!this.forceReduce()){this.storeNode(0,this.pos,this.pos,4,!0);break}return this}get deadEnd(){if(this.stack.length!=3)return!1;let{parser:t}=this.p;return t.data[t.stateSlot(this.state,1)]==65535&&!t.stateSlot(this.state,4)}restart(){this.state=this.stack[0],this.stack.length=0}sameState(t){if(this.state!=t.state||this.stack.length!=t.stack.length)return!1;for(let e=0;ethis.lookAhead&&(this.emitLookAhead(),this.lookAhead=t)}close(){this.curContext&&this.curContext.tracker.strict&&this.emitContext(),this.lookAhead>0&&this.emitLookAhead()}}class 
y{constructor(t,e){this.tracker=t,this.context=e,this.hash=t.strict?t.hash(e):0}}var N;(function(o){o[o.Insert=200]="Insert",o[o.Delete=190]="Delete",o[o.Reduce=100]="Reduce",o[o.MaxNext=4]="MaxNext",o[o.MaxInsertStackDepth=300]="MaxInsertStackDepth",o[o.DampenInsertStackDepth=120]="DampenInsertStackDepth",o[o.MinBigReduction=2e3]="MinBigReduction"})(N||(N={}));class W{constructor(t){this.start=t,this.state=t.state,this.stack=t.stack,this.base=this.stack.length}reduce(t){let e=t&65535,s=t>>19;s==0?(this.stack==this.start.stack&&(this.stack=this.stack.slice()),this.stack.push(this.state,0,0),this.base+=3):this.base-=(s-1)*3;let i=this.start.p.parser.getGoto(this.stack[this.base-3],e,!0);this.state=i}}class C{constructor(t,e,s){this.stack=t,this.pos=e,this.index=s,this.buffer=t.buffer,this.index==0&&this.maybeNext()}static create(t,e=t.bufferBase+t.buffer.length){return new C(t,e,e-t.bufferBase)}maybeNext(){let t=this.stack.parent;t!=null&&(this.index=this.stack.bufferBase-t.bufferBase,this.stack=t,this.buffer=t.buffer)}get id(){return this.buffer[this.index-4]}get start(){return this.buffer[this.index-3]}get end(){return this.buffer[this.index-2]}get size(){return this.buffer[this.index-1]}next(){this.index-=4,this.pos-=4,this.index==0&&this.maybeNext()}fork(){return new C(this.stack,this.pos,this.index)}}function x(o,t=Uint16Array){if(typeof o!="string")return o;let e=null;for(let s=0,i=0;s=92&&r--,r>=34&&r--;let a=r-32;if(a>=46&&(a-=46,n=!0),h+=a,n)break;h*=46}e?e[i++]=h:e=new t(h)}return e}class S{constructor(){this.start=-1,this.value=-1,this.end=-1,this.extended=-1,this.lookAhead=0,this.mask=0,this.context=0}}const D=new S;class q{constructor(t,e){this.input=t,this.ranges=e,this.chunk="",this.chunkOff=0,this.chunk2="",this.chunk2Pos=0,this.next=-1,this.token=D,this.rangeIndex=0,this.pos=this.chunkPos=e[0].from,this.range=e[0],this.end=e[e.length-1].to,this.readNext()}resolveOffset(t,e){let s=this.range,i=this.rangeIndex,h=this.pos+t;for(;hs.to:h>=s.to;){if(i==this.ranges.length-1)return null;let r=this.ranges[++i];h+=r.from-s.to,s=r}return h}clipPos(t){if(t>=this.range.from&&tt)return Math.max(t,e.from);return this.end}peek(t){let e=this.chunkOff+t,s,i;if(e>=0&&e=this.chunk2Pos&&sn.to&&(this.chunk2=this.chunk2.slice(0,n.to-s)),i=this.chunk2.charCodeAt(0)}}return s>=this.token.lookAhead&&(this.token.lookAhead=s+1),i}acceptToken(t,e=0){let s=e?this.resolveOffset(e,-1):this.pos;if(s==null||s=this.chunk2Pos&&this.posthis.range.to?t.slice(0,this.range.to-this.pos):t,this.chunkPos=this.pos,this.chunkOff=0}}readNext(){return this.chunkOff>=this.chunk.length&&(this.getChunk(),this.chunkOff==this.chunk.length)?this.next=-1:this.next=this.chunk.charCodeAt(this.chunkOff)}advance(t=1){for(this.chunkOff+=t;this.pos+t>=this.range.to;){if(this.rangeIndex==this.ranges.length-1)return this.setDone();t-=this.range.to-this.pos,this.range=this.ranges[++this.rangeIndex],this.pos=this.range.from}return this.pos+=t,this.pos>=this.token.lookAhead&&(this.token.lookAhead=this.pos+1),this.readNext()}setDone(){return this.pos=this.chunkPos=this.end,this.range=this.ranges[this.rangeIndex=this.ranges.length-1],this.chunk="",this.next=-1}reset(t,e){if(e?(this.token=e,e.start=t,e.lookAhead=t+1,e.value=e.extended=-1):this.token=D,this.pos!=t){if(this.pos=t,t==this.end)return this.setDone(),this;for(;t=this.range.to;)this.range=this.ranges[++this.rangeIndex];t>=this.chunkPos&&t=this.chunkPos&&e<=this.chunkPos+this.chunk.length)return 
this.chunk.slice(t-this.chunkPos,e-this.chunkPos);if(t>=this.chunk2Pos&&e<=this.chunk2Pos+this.chunk2.length)return this.chunk2.slice(t-this.chunk2Pos,e-this.chunk2Pos);if(t>=this.range.from&&e<=this.range.to)return this.input.read(t,e);let s="";for(let i of this.ranges){if(i.from>=e)break;i.to>t&&(s+=this.input.read(Math.max(i.from,t),Math.min(i.to,e)))}return s}}class m{constructor(t,e){this.data=t,this.id=e}token(t,e){let{parser:s}=e.p;F(this.data,t,e,this.id,s.data,s.tokenPrecTable)}}m.prototype.contextual=m.prototype.fallback=m.prototype.extend=!1;class J{constructor(t,e,s){this.precTable=e,this.elseToken=s,this.data=typeof t=="string"?x(t):t}token(t,e){let s=t.pos,i;for(;i=t.pos,F(this.data,t,e,0,this.data,this.precTable),!(t.token.value>-1);){if(this.elseToken==null)return;if(t.next<0)break;t.advance(),t.reset(i+1,t.token)}i>s&&(t.reset(s,t.token),t.acceptToken(this.elseToken,i-s))}}J.prototype.contextual=m.prototype.fallback=m.prototype.extend=!1;class tt{constructor(t,e={}){this.token=t,this.contextual=!!e.contextual,this.fallback=!!e.fallback,this.extend=!!e.extend}}function F(o,t,e,s,i,h){let r=0,n=1<0){let d=o[p];if(a.allows(d)&&(t.token.value==-1||t.token.value==d||K(d,t.token.value,i,h))){t.acceptToken(d);break}}let f=t.next,u=0,c=o[r+2];if(t.next<0&&c>u&&o[l+c*3-3]==65535&&o[l+c*3-3]==65535){r=o[l+c*3-1];continue t}for(;u>1,d=l+p+(p<<1),L=o[d],$=o[d+1]||65536;if(f=$)u=p+1;else{r=o[d+2],t.advance();continue t}}break}}function I(o,t,e){for(let s=t,i;(i=o[s])!=65535;s++)if(i==e)return s-t;return-1}function K(o,t,e,s){let i=I(e,s,t);return i<0||I(e,s,o)t)&&!s.type.isError)return e<0?Math.max(0,Math.min(s.to-1,t-25)):Math.min(o.length,Math.max(s.from+1,t+25));if(e<0?s.prevSibling():s.nextSibling())break;if(!s.parent())return e<0?0:o.length}}class Q{constructor(t,e){this.fragments=t,this.nodeSet=e,this.i=0,this.fragment=null,this.safeFrom=-1,this.safeTo=-1,this.trees=[],this.start=[],this.index=[],this.nextFragment()}nextFragment(){let t=this.fragment=this.i==this.fragments.length?null:this.fragments[this.i++];if(t){for(this.safeFrom=t.openStart?B(t.tree,t.from+t.offset,1)-t.offset:t.from,this.safeTo=t.openEnd?B(t.tree,t.to+t.offset,-1)-t.offset:t.to;this.trees.length;)this.trees.pop(),this.start.pop(),this.index.pop();this.trees.push(t.tree),this.start.push(-t.offset),this.index.push(0),this.nextStart=this.safeFrom}else this.nextStart=1e9}nodeAt(t){if(tt)return this.nextStart=r,null;if(h instanceof b){if(r==t){if(r=Math.max(this.safeFrom,t)&&(this.trees.push(h),this.start.push(r),this.index.push(0))}else this.index[e]++,this.nextStart=r+h.length}}}class V{constructor(t,e){this.stream=e,this.tokens=[],this.mainToken=null,this.actions=[],this.tokens=t.tokenizers.map(s=>new S)}getActions(t){let e=0,s=null,{parser:i}=t.p,{tokenizers:h}=i,r=i.stateSlot(t.state,3),n=t.curContext?t.curContext.hash:0,a=0;for(let l=0;lu.end+25&&(a=Math.max(u.lookAhead,a)),u.value!=0)){let c=e;if(u.extended>-1&&(e=this.addActions(t,u.extended,u.end,e)),e=this.addActions(t,u.value,u.end,e),!f.extend&&(s=u,e>c))break}}for(;this.actions.length>e;)this.actions.pop();return a&&t.setLookAhead(a),!s&&t.pos==this.stream.end&&(s=new S,s.value=t.p.parser.eofTerm,s.start=s.end=t.pos,e=this.addActions(t,s.value,s.end,e)),this.mainToken=s,this.actions}getMainToken(t){if(this.mainToken)return this.mainToken;let e=new S,{pos:s,p:i}=t;return e.start=s,e.end=Math.min(s+1,i.stream.end),e.value=s==i.stream.end?i.parser.eofTerm:0,e}updateCachedToken(t,e,s){let 
i=this.stream.clipPos(s.pos);if(e.token(this.stream.reset(i,t),s),t.value>-1){let{parser:h}=s.p;for(let r=0;r=0&&s.p.parser.dialect.allows(n>>1)){n&1?t.extended=n>>1:t.value=n>>1;break}}}else t.value=0,t.end=this.stream.clipPos(i+1)}putAction(t,e,s,i){for(let h=0;ht.bufferLength*4?new Q(s,t.nodeSet):null}get parsedPos(){return this.minStackPos}advance(){let t=this.stacks,e=this.minStackPos,s=this.stacks=[],i,h;if(this.bigReductionCount>300&&t.length==1){let[r]=t;for(;r.forceReduce()&&r.stack.length&&r.stack[r.stack.length-2]>=this.lastBigReductionStart;);this.bigReductionCount=this.lastBigReductionSize=0}for(let r=0;re)s.push(n);else{if(this.advanceStack(n,s,t))continue;{i||(i=[],h=[]),i.push(n);let a=this.tokens.getMainToken(n);h.push(a.value,a.end)}}break}}if(!s.length){let r=i&&Z(i);if(r)return this.stackToTree(r);if(this.parser.strict)throw g&&i&&console.log("Stuck with token "+(this.tokens.mainToken?this.parser.getName(this.tokens.mainToken.value):"none")),new SyntaxError("No parse at "+e);this.recovering||(this.recovering=5)}if(this.recovering&&i){let r=this.stoppedAt!=null&&i[0].pos>this.stoppedAt?i[0]:this.runRecovery(i,h,s);if(r)return this.stackToTree(r.forceAll())}if(this.recovering){let r=this.recovering==1?1:this.recovering*3;if(s.length>r)for(s.sort((n,a)=>a.score-n.score);s.length>r;)s.pop();s.some(n=>n.reducePos>e)&&this.recovering--}else if(s.length>1){t:for(let r=0;r500&&l.buffer.length>500)if((n.score-l.score||n.buffer.length-l.buffer.length)>0)s.splice(a--,1);else{s.splice(r--,1);continue t}}}s.length>12&&s.splice(12,s.length-12)}this.minStackPos=s[0].pos;for(let r=1;r ":"";if(this.stoppedAt!=null&&i>this.stoppedAt)return t.forceReduce()?t:null;if(this.fragments){let l=t.curContext&&t.curContext.tracker.strict,f=l?t.curContext.hash:0;for(let u=this.fragments.nodeAt(i);u;){let c=this.parser.nodeSet.types[u.type.id]==u.type?h.getGoto(t.state,u.type.id):-1;if(c>-1&&u.length&&(!l||(u.prop(w.contextHash)||0)==f))return t.useNode(u,c),g&&console.log(r+this.stackID(t)+` (via reuse of ${h.getName(u.type.id)})`),!0;if(!(u instanceof b)||u.children.length==0||u.positions[0]>0)break;let p=u.children[0];if(p instanceof b&&u.positions[0]==0)u=p;else break}}let n=h.stateSlot(t.state,4);if(n>0)return t.reduce(n),g&&console.log(r+this.stackID(t)+` (via always-reduce ${h.getName(n&65535)})`),!0;if(t.stack.length>=15e3)for(;t.stack.length>9e3&&t.forceReduce(););let a=this.tokens.getActions(t);for(let l=0;li?e.push(d):s.push(d)}return!1}advanceFully(t,e){let s=t.pos;for(;;){if(!this.advanceStack(t,null,null))return!1;if(t.pos>s)return R(t,e),!0}}runRecovery(t,e,s){let i=null,h=!1;for(let r=0;r ":"";if(n.deadEnd&&(h||(h=!0,n.restart(),g&&console.log(f+this.stackID(n)+" (restarted)"),this.advanceFully(n,s))))continue;let u=n.split(),c=f;for(let p=0;u.forceReduce()&&p<10&&(g&&console.log(c+this.stackID(u)+" (via force-reduce)"),!this.advanceFully(u,s));p++)g&&(c=this.stackID(u)+" -> ");for(let p of n.recoverByInsert(a))g&&console.log(f+this.stackID(p)+" (via recover-insert)"),this.advanceFully(p,s);this.stream.end>n.pos?(l==n.pos&&(l++,a=0),n.recoverByDelete(a,l),g&&console.log(f+this.stackID(n)+` (via recover-delete ${this.parser.getName(a)})`),R(n,s)):(!i||i.scoreo;class et{constructor(t){this.start=t.start,this.shift=t.shift||T,this.reduce=t.reduce||T,this.reuse=t.reuse||T,this.hash=t.hash||(()=>0),this.strict=t.strict!==!1}}class v extends j{constructor(t){if(super(),this.wrappers=[],t.version!=14)throw new RangeError(`Parser version (${t.version}) doesn't match runtime version 
(14)`);let e=t.nodeNames.split(" ");this.minRepeatTerm=e.length;for(let n=0;nt.topRules[n][1]),i=[];for(let n=0;n=0)h(f,a,n[l++]);else{let u=n[l+-f];for(let c=-f;c>0;c--)h(n[l++],a,u);l++}}}this.nodeSet=new G(e.map((n,a)=>E.define({name:a>=this.minRepeatTerm?void 0:n,id:a,props:i[a],top:s.indexOf(a)>-1,error:a==0,skipped:t.skippedNodes&&t.skippedNodes.indexOf(a)>-1}))),t.propSources&&(this.nodeSet=this.nodeSet.extend(...t.propSources)),this.strict=!1,this.bufferLength=U;let r=x(t.tokenData);this.context=t.context,this.specializerSpecs=t.specialized||[],this.specialized=new Uint16Array(this.specializerSpecs.length);for(let n=0;ntypeof n=="number"?new m(r,n):n),this.topRules=t.topRules,this.dialects=t.dialects||{},this.dynamicPrecedences=t.dynamicPrecedences||null,this.tokenPrecTable=t.tokenPrec,this.termNames=t.termNames||null,this.maxNode=this.nodeSet.types.length-1,this.dialect=this.parseDialect(),this.top=this.topRules[Object.keys(this.topRules)[0]]}createParse(t,e,s){let i=new X(this,t,e,s);for(let h of this.wrappers)i=h(i,t,e,s);return i}getGoto(t,e,s=!1){let i=this.goto;if(e>=i[0])return-1;for(let h=i[e+1];;){let r=i[h++],n=r&1,a=i[h++];if(n&&s)return a;for(let l=h+(r>>1);h0}validAction(t,e){if(e==this.stateSlot(t,4))return!0;for(let s=this.stateSlot(t,1);;s+=3){if(this.data[s]==65535)if(this.data[s+1]==1)s=k(this.data,s+2);else return!1;if(e==k(this.data,s+1))return!0}}nextStates(t){let e=[];for(let s=this.stateSlot(t,1);;s+=3){if(this.data[s]==65535)if(this.data[s+1]==1)s=k(this.data,s+2);else break;if(!(this.data[s+2]&1)){let i=this.data[s+1];e.some((h,r)=>r&1&&h==i)||e.push(this.data[s],i)}}return e}configure(t){let e=Object.assign(Object.create(v.prototype),this);if(t.props&&(e.nodeSet=this.nodeSet.extend(...t.props)),t.top){let s=this.topRules[t.top];if(!s)throw new RangeError(`Invalid top rule name ${t.top}`);e.top=s}return t.tokenizers&&(e.tokenizers=this.tokenizers.map(s=>{let i=t.tokenizers.find(h=>h.from==s);return i?i.to:s})),t.specializers&&(e.specializers=this.specializers.slice(),e.specializerSpecs=this.specializerSpecs.map((s,i)=>{let h=t.specializers.find(n=>n.from==s.external);if(!h)return s;let r=Object.assign(Object.assign({},s),{external:h.to});return e.specializers[i]=O(r),r})),t.contextTracker&&(e.context=t.contextTracker),t.dialect&&(e.dialect=this.parseDialect(t.dialect)),t.strict!=null&&(e.strict=t.strict),t.wrap&&(e.wrappers=e.wrappers.concat(t.wrap)),t.bufferLength!=null&&(e.bufferLength=t.bufferLength),e}hasWrappers(){return this.wrappers.length>0}getName(t){return this.termNames?this.termNames[t]:String(t<=this.maxNode&&this.nodeSet.types[t].name||t)}get eofTerm(){return this.maxNode+1}get topNode(){return this.nodeSet.types[this.top[1]]}dynamicPrecedence(t){let e=this.dynamicPrecedences;return e==null?0:e[t]||0}parseDialect(t){let e=Object.keys(this.dialects),s=e.map(()=>!1);if(t)for(let h of t.split(" ")){let r=e.indexOf(h);r>=0&&(s[r]=!0)}let i=null;for(let h=0;hs)&&e.p.parser.stateFlag(e.state,2)&&(!t||t.scoreo.external(e,s)<<1|t}return o.get}export{et as C,tt as E,v as L,J as a}; -//# sourceMappingURL=index-6a7e443e.js.map diff --git a/spaces/diacanFperku/AutoGPT/((INSTALL)) Download Film Mahabharata Bahasa Indonesia Di Antv.md b/spaces/diacanFperku/AutoGPT/((INSTALL)) Download Film Mahabharata Bahasa Indonesia Di Antv.md deleted file mode 100644 index bdd89840911c91454dd7b491862a82c9f534bed0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/((INSTALL)) Download Film Mahabharata Bahasa Indonesia Di Antv.md +++ /dev/null @@ 
-1,14 +0,0 @@ -

          download film mahabharata bahasa indonesia di antv


DOWNLOAD >>>>> https://gohhs.com/2uFVvZ



          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md b/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md deleted file mode 100644 index 3d9d58fbc59a596f2b61990a5db6fdee1e810cf0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adolix Split Merge Professional Edition Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Adolix Split Merge Professional Edition Crack


          DOWNLOAD ○○○ https://gohhs.com/2uFVn7



          -
-PDF Split & Merge PRO is designed to help you split and merge PDF files. A preview of each added PDF file ensures accurate results. The program lets you select multiple PDF files or online documents (one at a time or several at once), split them into parts, or change their size and position on the page. In addition, the program can merge several PDF files into a single document or split one file into multiple documents. PDF Split & Merge PRO works with both 32-bit and 64-bit Windows systems. You can download the program via a direct link (from the cloud) at the bottom of the page. 8a78ff9644
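For readers who would rather script this workflow than use a GUI, the split and merge operations described above can be reproduced in a few lines of Python. The sketch below is only an illustration: it uses the open-source pypdf library rather than Adolix's own (closed-source) code, and all file names are placeholders.

```python
# Minimal split/merge sketch using the open-source pypdf library.
# File names ("input.pdf", "a.pdf", etc.) are hypothetical examples.
from pypdf import PdfReader, PdfWriter

def split_pdf(path: str) -> None:
    """Write each page of the input PDF to its own single-page file."""
    reader = PdfReader(path)
    for i, page in enumerate(reader.pages, start=1):
        writer = PdfWriter()
        writer.add_page(page)
        with open(f"page_{i}.pdf", "wb") as f:
            writer.write(f)

def merge_pdfs(paths: list[str], out_path: str) -> None:
    """Concatenate several PDFs into one, in the order given."""
    writer = PdfWriter()
    for path in paths:
        for page in PdfReader(path).pages:
            writer.add_page(page)
    with open(out_path, "wb") as f:
        writer.write(f)

if __name__ == "__main__":
    split_pdf("input.pdf")                        # input.pdf -> page_1.pdf, page_2.pdf, ...
    merge_pdfs(["a.pdf", "b.pdf"], "merged.pdf")  # a.pdf + b.pdf -> merged.pdf
```

Note that pypdf holds pages in memory, which is fine for typical documents; very large files may call for a streaming tool instead.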
          -
          -
          -

          diff --git a/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md b/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md deleted file mode 100644 index 42f48bc2df6a1c831611cbbc6b7fb9235cf66e79..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Drama Keong Mas Dalam Bahasa Jawa104.md +++ /dev/null @@ -1,23 +0,0 @@ - -

Drama Keong Mas Dalam Bahasa Jawa104: What Is It and How Can You Enjoy It?

          - -

Drama Keong Mas Dalam Bahasa Jawa104 is a kind of folk drama originating from Central Java. It tells the love story of a princess named Dewi Sekartaji and a prince named Raden Inu Kertapati. Because of her beauty, Dewi Sekartaji is pursued by an evil king named Prabu Menakjingga, who wants to make her his wife. However, Dewi Sekartaji rejects Prabu Menakjingga's proposal and chooses to flee with Raden Inu Kertapati. During their escape they are helped by a powerful hermit, who transforms them into two keong mas (golden snails) so that Prabu Menakjingga's troops cannot find them.

          -

          Drama Keong Mas Dalam Bahasa Jawa104


          Download File ››››› https://gohhs.com/2uFVCt



          - -

Drama Keong Mas Dalam Bahasa Jawa104 carries many moral messages and cultural values we can learn from. It teaches us about sincere love, loyalty, courage, sacrifice, and kindness. It also showcases various elements of Javanese art and culture, such as language, music, dance, costumes, and accessories. The drama is usually staged outdoors, accompanied by a gamelan ensemble, and its performers must speak Javanese well and have captivating dancing and acting skills.

          - -

How Can You Enjoy Drama Keong Mas Dalam Bahasa Jawa104?

          - -

If you are interested in watching Drama Keong Mas Dalam Bahasa Jawa104, there are several ways to do so. One is to visit venues that regularly host performances of this drama, such as Taman Mini Indonesia Indah (TMII), Taman Budaya Yogyakarta (TBY), or Taman Budaya Surakarta (TBS). There, you can watch the drama live in an authentic and festive atmosphere.

          - -

Another way is to listen to recordings of Drama Keong Mas Dalam Bahasa Jawa104 available on the internet. Various websites and apps offer recordings of this drama for free or for a fee. One popular site is SoundCloud, where you can find playlists containing Drama Keong Mas Dalam Bahasa Jawa104 in different versions and lengths. You can listen to the drama anytime and anywhere on your electronic devices.

          -

          - -

A third way is to read the script of Drama Keong Mas Dalam Bahasa Jawa104, which can also be found online. Reading the script lets you study the plot, the characters, the dialogue, and the meaning and messages it contains in more depth. You can also use the script as material for learning Javanese or as inspiration for writing a play of your own.

          - -

Conclusion

          - -

Drama Keong Mas Dalam Bahasa Jawa104 is a folk drama from Central Java that tells the love story of Dewi Sekartaji and Raden Inu Kertapati, who are transformed into keong mas by a powerful hermit. It carries many moral messages and cultural values we can learn from, and it showcases many beautiful and fascinating elements of Javanese art and culture. You can enjoy it by watching it live at certain venues, listening to online recordings, or reading the script online.

          -


          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/README.md b/spaces/diacanFperku/AutoGPT/README.md deleted file mode 100644 index ec1e50b2d2213f58c0bfacd11ec8bf012312c50f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AutoGPT -emoji: 🦾 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: ui/app.py -pinned: false -license: mit -duplicated_from: Kevin676/AutoGPT ---- - diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/core.c b/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/core.c deleted file mode 100644 index 5f8af54d32474f821e9d1f4d2679d78128722596..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/core.c +++ /dev/null @@ -1,26530 +0,0 @@ -/* Generated by Cython 3.0.0 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#define CYTHON_ABI "3_0_0" -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." -#define CYTHON_HEX_VERSION 0x030000F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. 
- The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #if PY_VERSION_HEX < 0x03090000 - #undef 
CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(CYTHON_LIMITED_API) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_CLINE_IN_TRACEBACK - #define CYTHON_CLINE_IN_TRACEBACK 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - 
#define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL (PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000 && PY_VERSION_HEX < 0x030C00A6) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX < 0x030C00A5) - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && 
PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(maybe_unused) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(maybe_unused) - #define CYTHON_UNUSED [[maybe_unused]] - #endif - #endif - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(fallthrough) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__) && 
defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif -#ifdef __cplusplus - template - struct __PYX_IS_UNSIGNED_IMPL {static const bool value = T(0) < T(-1);}; - #define __PYX_IS_UNSIGNED(type) (__PYX_IS_UNSIGNED_IMPL::value) -#else - #define __PYX_IS_UNSIGNED(type) (((type)-1) > 0) -#endif -#if CYTHON_COMPILING_IN_PYPY == 1 - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x030A0000) -#else - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000) -#endif -#define __PYX_REINTERPRET_FUNCION(func_pointer, other_pointer) ((func_pointer)(void(*)(void))(other_pointer)) - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject *co=NULL, *result=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(p))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if 
(!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto end; - if (!(empty = PyTuple_New(0))) goto end; - result = (PyCodeObject*) PyObject_Call(replace, empty, kwds); - end: - Py_XDECREF((PyObject*) co); - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return result; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_Is) - #define __Pyx_Py_Is(x, y) Py_Is(x, y) -#else - #define __Pyx_Py_Is(x, y) ((x) == (y)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsNone) - #define __Pyx_Py_IsNone(ob) Py_IsNone(ob) -#else - #define __Pyx_Py_IsNone(ob) __Pyx_Py_Is((ob), Py_None) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsTrue) - #define __Pyx_Py_IsTrue(ob) Py_IsTrue(ob) -#else - #define __Pyx_Py_IsTrue(ob) __Pyx_Py_Is((ob), Py_True) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsFalse) - #define __Pyx_Py_IsFalse(ob) Py_IsFalse(ob) -#else - #define __Pyx_Py_IsFalse(ob) __Pyx_Py_Is((ob), Py_False) -#endif -#define __Pyx_NoneAsNull(obj) (__Pyx_Py_IsNone(obj) ? 
NULL : (obj)) -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef CO_COROUTINE - #define CO_COROUTINE 0x80 -#endif -#ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x200 -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef Py_TPFLAGS_SEQUENCE - #define Py_TPFLAGS_SEQUENCE 0 -#endif -#ifndef Py_TPFLAGS_MAPPING - #define Py_TPFLAGS_MAPPING 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current 
PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) 
PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, (Py_UCS4) ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535U : 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = (Py_UNICODE) ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? __Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define __Pyx_Py3Int_Check(op) PyLong_Check(op) - #define __Pyx_Py3Int_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#else - #define __Pyx_Py3Int_Check(op) (PyLong_Check(op) || PyInt_Check(op)) - #define __Pyx_Py3Int_CheckExact(op) (PyLong_CheckExact(op) || PyInt_CheckExact(op)) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) 
((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifdef CYTHON_EXTERN_C - #undef __PYX_EXTERN_C - #define __PYX_EXTERN_C CYTHON_EXTERN_C -#elif defined(__PYX_EXTERN_C) - #ifdef _MSC_VER - #pragma message ("Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.") - #else - #warning Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead. - #endif -#else - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include -#include -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) -
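/* [Illustrative sketch, standalone demo] The trick behind
   __Pyx_is_valid_index above: casting a negative Py_ssize_t to an unsigned
   type wraps it to a huge value, so a single unsigned comparison rejects
   both i < 0 and i >= limit at once. */
#include <stdio.h>
int main(void) {
    long limit = 10;
    long idx[] = { -1, 0, 9, 10 };
    int k;
    for (k = 0; k < 4; k++) {
        int ok = (unsigned long)idx[k] < (unsigned long)limit;  /* one compare */
        printf("%ld -> %s\n", idx[k], ok ? "valid" : "invalid");
    }
    return 0;
}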
#define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromOrdinal(o) PyUnicode_FromOrdinal((int)o) -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? 
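/* [Illustrative sketch, hypothetical helper] __Pyx_NewRef above wraps
   Py_INCREF in a comma expression so a borrowed reference can be turned
   into an owned one inside an expression, e.g. when returning an item
   fetched with a borrowing macro. Assumes a non-empty tuple. */
static PyObject *first_item_owned(PyObject *tuple) {
    PyObject *item = PyTuple_GET_ITEM(tuple, 0);  /* borrowed reference */
    return __Pyx_NewRef(item);                    /* caller now owns it */
}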
__Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_VERSION_HEX >= 0x030C00A7 - #ifndef _PyLong_SIGN_MASK - #define _PyLong_SIGN_MASK 3 - #endif - #ifndef _PyLong_NON_SIZE_BITS - #define _PyLong_NON_SIZE_BITS 3 - #endif - #define __Pyx_PyLong_Sign(x) (((PyLongObject*)x)->long_value.lv_tag & _PyLong_SIGN_MASK) - #define __Pyx_PyLong_IsNeg(x) ((__Pyx_PyLong_Sign(x) & 2) != 0) - #define __Pyx_PyLong_IsNonNeg(x) (!__Pyx_PyLong_IsNeg(x)) - #define __Pyx_PyLong_IsZero(x) (__Pyx_PyLong_Sign(x) & 1) - #define __Pyx_PyLong_IsPos(x) (__Pyx_PyLong_Sign(x) == 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) (__Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) ((Py_ssize_t) (((PyLongObject*)x)->long_value.lv_tag >> _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_SignedDigitCount(x)\ - ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * __Pyx_PyLong_DigitCount(x)) - #if defined(PyUnstable_Long_IsCompact) && defined(PyUnstable_Long_CompactValue) - #define __Pyx_PyLong_IsCompact(x) PyUnstable_Long_IsCompact((PyLongObject*) x) - #define __Pyx_PyLong_CompactValue(x) PyUnstable_Long_CompactValue((PyLongObject*) x) - #else - #define __Pyx_PyLong_IsCompact(x) (((PyLongObject*)x)->long_value.lv_tag < (2 << _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_CompactValue(x) ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * (Py_ssize_t) __Pyx_PyLong_Digits(x)[0]) - #endif - typedef Py_ssize_t __Pyx_compact_pylong; - typedef size_t __Pyx_compact_upylong; - #else // Py < 3.12 - #define __Pyx_PyLong_IsNeg(x) (Py_SIZE(x) < 0) - #define __Pyx_PyLong_IsNonNeg(x) (Py_SIZE(x) >= 0) - #define __Pyx_PyLong_IsZero(x) (Py_SIZE(x) == 0) - #define __Pyx_PyLong_IsPos(x) (Py_SIZE(x) > 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) ((Py_SIZE(x) == 0) ? 0 : __Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) __Pyx_sst_abs(Py_SIZE(x)) - #define __Pyx_PyLong_SignedDigitCount(x) Py_SIZE(x) - #define __Pyx_PyLong_IsCompact(x) (Py_SIZE(x) == 0 || Py_SIZE(x) == 1 || Py_SIZE(x) == -1) - #define __Pyx_PyLong_CompactValue(x)\ - ((Py_SIZE(x) == 0) ? (sdigit) 0 : ((Py_SIZE(x) < 0) ? 
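/* [Illustrative sketch, standalone; the tag value is made up] Decoding the
   CPython >= 3.12 lv_tag word the way the macros above do: the low bits
   hold the sign (0 = positive, 1 = zero, 2 = negative) and the bits above
   _PyLong_NON_SIZE_BITS hold the digit count. */
#include <assert.h>
int main(void) {
    unsigned long lv_tag = (3UL << 3) | 2UL;         /* 3 digits, negative */
    unsigned long sign = lv_tag & 3UL;               /* _PyLong_SIGN_MASK */
    long ndigits = (long)(lv_tag >> 3);              /* _PyLong_NON_SIZE_BITS */
    long signed_count = (1 - (long)sign) * ndigits;  /* as SignedDigitCount */
    assert(sign == 2 && ndigits == 3 && signed_count == -3);
    return 0;
}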
-(sdigit)__Pyx_PyLong_Digits(x)[0] : (sdigit)__Pyx_PyLong_Digits(x)[0])) - typedef sdigit __Pyx_compact_pylong; - typedef digit __Pyx_compact_upylong; - #endif - #if PY_VERSION_HEX >= 0x030C00A5 - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->long_value.ob_digit) - #else - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->ob_digit) - #endif -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = (char) c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject 
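/* [Illustrative sketch, hypothetical helper] likely()/unlikely() defined
   above compile to __builtin_expect on GCC-compatible compilers and to a
   plain expression elsewhere; tagging the error branch as unlikely lets
   the compiler lay out the hot path as straight-line code. */
static int checked_div(int a, int b, int *out) {
    if (unlikely(b == 0)) return -1;  /* cold error path */
    *out = a / b;                     /* hot path falls through */
    return 0;
}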
*__pyx_m = NULL; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "core.pyx", - "", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* BufferFormatStructs.proto */ -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS -#define __pyx_atomic_int_type int -#define __pyx_nonatomic_int_type int -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__)) - #include <stdatomic.h> -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ - (defined(_MSC_VER) && _MSC_VER >= 1700))) - #include <atomic> -#endif -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type atomic_int - #define __pyx_atomic_incr_aligned(value) atomic_fetch_add_explicit(value, 1, memory_order_relaxed) - #define __pyx_atomic_decr_aligned(value) atomic_fetch_sub_explicit(value, 1, memory_order_acq_rel) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C atomics" - #endif -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ -\ - (defined(_MSC_VER) && _MSC_VER >= 1700)) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type std::atomic_int - #define __pyx_atomic_incr_aligned(value) std::atomic_fetch_add_explicit(value, 1, std::memory_order_relaxed) - #define __pyx_atomic_decr_aligned(value) std::atomic_fetch_sub_explicit(value, 1, std::memory_order_acq_rel) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C++ atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C++ atomics" - #endif -#elif CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\ - (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2)))) - #define __pyx_atomic_incr_aligned(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value) __sync_fetch_and_sub(value, 1) - #ifdef
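/* [Illustrative sketch, standalone C11 demo] The branch selected above for
   C11 compilers: taking a reference only needs a relaxed increment, while
   the decrement uses acquire-release so whichever thread drops the last
   count observes all writes made before the release. */
#include <stdatomic.h>
static atomic_int acquisition_count;
static void slice_acquire(void) {
    atomic_fetch_add_explicit(&acquisition_count, 1, memory_order_relaxed);
}
static int slice_release_was_last(void) {
    /* fetch_sub returns the previous value; 1 means we dropped the last count */
    return atomic_fetch_sub_explicit(&acquisition_count, 1,
                                     memory_order_acq_rel) == 1;
}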
__PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type long - #define __pyx_nonatomic_int_type long - #pragma intrinsic (_InterlockedExchangeAdd) - #define __pyx_atomic_incr_aligned(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_decr_aligned(value) _InterlockedExchangeAdd(value, -1) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview)) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview)) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":114 - * @cython.collection_type("sequence") - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":302 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":337 - * - * @cname('__pyx_memoryview') - * cdef class memoryview: # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int_type acquisition_count; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":952 - * @cython.collection_type("sequence") - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): #
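/* [Illustrative sketch, hypothetical helper] Indexing through the
   __Pyx_memviewslice declared above: an element address is the base data
   pointer plus index times stride per dimension. For the int[:, ::1]
   buffers this module uses, strides[1] is sizeof(int), but the general
   strided formula is the same. */
static int get2d(const __Pyx_memviewslice *mv, Py_ssize_t i, Py_ssize_t j) {
    const char *p = mv->data + i * mv->strides[0] + j * mv->strides[1];
    return *(const int *)p;
}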
<<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":114 - * @cython.collection_type("sequence") - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":337 - * - * @cname('__pyx_memoryview') - * cdef class memoryview: # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); - PyObject *(*_get_base)(struct __pyx_memoryview_obj *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":952 - * @cython.collection_type("sequence") - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() -#endif - #define __Pyx_RefNannyFinishContextNogil() {\ - 
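/* [Illustrative sketch, standalone; all names are made up] The vtable
   pattern used by the __pyx_vtabstruct_* definitions above: each extension
   type carries a struct of C function pointers, so cdef methods dispatch
   as direct C calls instead of Python attribute lookups. */
typedef struct {
    double (*area)(void *self);
} shape_vtab_t;
typedef struct {
    shape_vtab_t *vtab;
    double w, h;
} rect_t;
static double rect_area(void *self) {
    rect_t *r = (rect_t *)self;
    return r->w * r->h;
}
static shape_vtab_t rect_vtab = { rect_area };
static double call_area(rect_t *r) {
    return r->vtab->area(r);   /* mirrors obj->__pyx_vtab->method(...) */
}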
PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#if PY_VERSION_HEX >= 0x030C00A6 -#define __Pyx_PyErr_Occurred() (__pyx_tstate->current_exception != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->current_exception ? 
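/* [Illustrative sketch, hypothetical helper] Why __Pyx_XDECREF_SET above
   swaps through a temporary: the slot is updated before the old value is
   released, so a destructor triggered by the decref can never observe the
   slot in a half-updated state. `value` must already be an owned
   reference, matching the macro's contract. */
static void set_slot(PyObject **slot, PyObject *value) {
    PyObject *tmp = *slot;   /* save the old value */
    *slot = value;           /* install the new value first */
    Py_XDECREF(tmp);         /* only then run the old value's destructor */
}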
(PyObject*) Py_TYPE(__pyx_tstate->current_exception) : (PyObject*) NULL) -#else -#define __Pyx_PyErr_Occurred() (__pyx_tstate->curexc_type != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->curexc_type) -#endif -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() (PyErr_Occurred() != NULL) -#define __Pyx_PyErr_CurrentExceptionType() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A6 -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) PyTuple_GET_SIZE(kwds) - #define
__Pyx_KwValues_FASTCALL(args, nargs) ((args) + (nargs)) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) -#else - #define __Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely(__Pyx_IS_TYPE(obj, type) | (none_allowed && (obj == Py_None)))) ? 1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03080000 - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets() - #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus) -#else - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif -#endif -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, 
PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* RaiseUnexpectedTypeError.proto */ -static int __Pyx_RaiseUnexpectedTypeError(const char *expected, PyObject *obj); - -/* GCCDiagnostics.proto */ -#if !defined(__INTEL_COMPILER) && defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* BuildPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char); - -/* CIntToPyUnicode.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_int(int value, Py_ssize_t width, char padding_char, char format_char); - -/* CIntToPyUnicode.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_Py_ssize_t(Py_ssize_t value, Py_ssize_t width, char padding_char, char format_char); - -/* JoinPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* PyObjectFormatSimple.proto */ -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#elif PY_MAJOR_VERSION < 3 - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") :\ - PyObject_Format(s, f)) -#elif CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_repr(s) :\ - likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_repr(s) :\ - PyObject_Format(s, f)) -#else - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#endif - -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
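/* [Illustrative sketch, hypothetical helper] Calling through
   __Pyx_PyObject_FastCall declared above: positional arguments are passed
   as a plain C array, avoiding the construction of an argument tuple. */
static PyObject *call_two(PyObject *func, PyObject *x, PyObject *y) {
    PyObject *args[2];
    args[0] = x;
    args[1] = y;
    return __Pyx_PyObject_FastCall(func, args, 2);  /* no keyword dict */
}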
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* KeywordStringCheck.proto */ -static int __Pyx_CheckKeywordStrings(PyObject *kw, const char* function_name, int kw_allowed); - -/* DivInt[Py_ssize_t].proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define __Pyx_UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == 
__PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* AssertionsEnabled.proto */ -#define __Pyx_init_assertions_enabled() -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define __pyx_assertions_enabled() (1) -#elif PY_VERSION_HEX < 0x03080000 || CYTHON_COMPILING_IN_PYPY || defined(Py_LIMITED_API) - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#elif CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030900A6 - static int __pyx_assertions_enabled_flag; - #define __pyx_assertions_enabled() (__pyx_assertions_enabled_flag) - #undef __Pyx_init_assertions_enabled - static void __Pyx_init_assertions_enabled(void) { - __pyx_assertions_enabled_flag = ! _PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level; - } -#else - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void 
__Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple); -#endif - -/* ssize_strlen.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_CurrentExceptionType(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PySequenceMultiply.proto */ -#define __Pyx_PySequence_Multiply_Left(mul, seq) __Pyx_PySequence_Multiply(seq, mul) -static CYTHON_INLINE PyObject* __Pyx_PySequence_Multiply(PyObject *seq, Py_ssize_t mul); - -/* SetItemInt.proto */ -#define __Pyx_SetItemInt(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_SetItemInt_Fast(o, (Py_ssize_t)i, v, is_list, wraparound, boundscheck) :\ - (is_list ? 
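/* [Illustrative sketch, hypothetical helper] Using __Pyx_ListComp_Append
   above to build a list of unknown final size: while the list still has
   spare allocated capacity the item is written in place; otherwise it
   falls back to PyList_Append. */
static PyObject *squares(long n) {
    PyObject *list = PyList_New(0);
    long i;
    if (!list) return NULL;
    for (i = 0; i < n; i++) {
        PyObject *v = PyLong_FromLong(i * i);
        if (!v || __Pyx_ListComp_Append(list, v) < 0) {
            Py_XDECREF(v);
            Py_DECREF(list);
            return NULL;
        }
        Py_DECREF(v);  /* the list holds its own reference */
    }
    return list;
}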
(PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) :\ - __Pyx_SetItemInt_Generic(o, to_py_func(i), v))) -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v); -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, - int is_list, int wraparound, int boundscheck); - -/* RaiseUnboundLocalError.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* DivInt[long].proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); -} - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* ErrOccurredWithGIL.proto */ -static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* IncludeStructmemberH.proto */ -#include - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -CYTHON_UNUSED static int __Pyx_PyType_Ready(PyTypeObject *t); - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyTypeObject* typeptr , void* vtable); - -/* GetVTable.proto */ -static void* __Pyx_GetVtable(PyTypeObject *type); - -/* MergeVTables.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_MergeVtables(PyTypeObject *type); -#endif - -/* SetupReduce.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_setup_reduce(PyObject* type_obj); -#endif - -/* FetchSharedCythonModule.proto */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void); - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* PyMethodNew.proto */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, 
self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - __pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_IsCyOrPyCFunction(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, 
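/* [Illustrative sketch, hypothetical helper] The vectorcall convention the
   __Pyx_CyFunction_Vectorcall_* handlers above implement: arguments arrive
   as a C array plus a count, with keyword names in a separate tuple.
   PyObject_Vectorcall is the public CPython 3.9+ entry point. */
static PyObject *vcall_one(PyObject *func, PyObject *arg) {
    PyObject *args[1];
    args[0] = arg;
    return PyObject_Vectorcall(func, args, 1, NULL);  /* no keyword names */
}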
size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const 
__Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int_type *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int_type *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (&memview->acquisition_count) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XCLEAR_MEMVIEW(slice, have_gil) __Pyx_XCLEAR_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XCLEAR_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -/* #### Code section: module_declarations ### */ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject 
*__pyx_memoryview__get_base(struct __pyx_memoryview_obj *__pyx_v_self); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice__get_base(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto*/ - -/* Module declarations from "cython.view" */ - -/* Module declarations from "cython.dataclasses" */ - -/* Module declarations from "cython" */ - -/* Module declarations from "monotonic_align.core" */ -static PyObject *__pyx_collections_abc_Sequence = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static int __pyx_array_allocate_buffer(struct __pyx_array_obj *); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static int assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void 
*__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, PyObject *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, PyObject *); /*proto*/ -static int __pyx_memoryview_err_no_memory(void); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -/* #### Code section: typeinfo ### */ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, __PYX_IS_UNSIGNED(int) ? 'U' : 'I', __PYX_IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of "monotonic_align.core" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin___import__; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_AssertionError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -/* #### Code section: string_decls ### */ -static const char __pyx_k_[] = ": "; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k__2[] = "."; -static const char __pyx_k__3[] = "*"; -static const char __pyx_k__6[] = "'"; -static const char __pyx_k__7[] = ")"; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k__23[] = "?"; -static const char __pyx_k_abc[] = "abc"; -static const char __pyx_k_and[] = " and "; -static const char __pyx_k_got[] = " (got "; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_sys[] = "sys"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const 
char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_count[] = "count"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_index[] = "index"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_Sequence[] = "Sequence"; -static const char __pyx_k_core_pyx[] = "core.pyx"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_register[] = "register"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_collections[] = "collections"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "<stringsource>"; -static const char __pyx_k_version_info[] = "version_info"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_AssertionError[] = "AssertionError"; -static const char __pyx_k_maximum_path_c[] = "maximum_path_c"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_collections_abc[] = "collections.abc"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char 
__pyx_k_monotonic_align_core[] = "monotonic_align.core"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_Invalid_shape_in_axis[] = "Invalid shape in axis "; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_Cannot_index_with_type[] = "Cannot index with type '"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Dimension_d_is_not_direct[] = "Dimension %d is not direct"; -static const char __pyx_k_Index_out_of_bounds_axis_d[] = "Index out of bounds (axis %d)"; -static const char __pyx_k_Step_may_not_be_zero_axis_d[] = "Step may not be zero (axis %d)"; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_All_dimensions_preceding_dimensi[] = "All dimensions preceding dimension %d must be indexed and not sliced"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Cannot_transpose_memoryview_with[] = "Cannot transpose memoryview with indirect dimensions"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_0x_x_vs_0[] = "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got "; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis "; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension "; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -/* #### Code section: decls ### */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj 
*__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - #ifdef __Pyx_Generator_USED - PyTypeObject *__pyx_GeneratorType; - #endif - #ifdef __Pyx_IterableCoroutine_USED - PyTypeObject *__pyx_IterableCoroutineType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineAwaitType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineType; - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if 
CYTHON_USE_MODULE_STATE - PyObject *__pyx_type___pyx_array; - PyObject *__pyx_type___pyx_MemviewEnum; - PyObject *__pyx_type___pyx_memoryview; - PyObject *__pyx_type___pyx_memoryviewslice; - #endif - PyTypeObject *__pyx_array_type; - PyTypeObject *__pyx_MemviewEnum_type; - PyTypeObject *__pyx_memoryview_type; - PyTypeObject *__pyx_memoryviewslice_type; - PyObject *__pyx_kp_u_; - PyObject *__pyx_n_s_ASCII; - PyObject *__pyx_kp_s_All_dimensions_preceding_dimensi; - PyObject *__pyx_n_s_AssertionError; - PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; - PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; - PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; - PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; - PyObject *__pyx_kp_u_Cannot_index_with_type; - PyObject *__pyx_kp_s_Cannot_transpose_memoryview_with; - PyObject *__pyx_kp_s_Dimension_d_is_not_direct; - PyObject *__pyx_n_s_Ellipsis; - PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; - PyObject *__pyx_kp_s_Incompatible_checksums_0x_x_vs_0; - PyObject *__pyx_n_s_IndexError; - PyObject *__pyx_kp_s_Index_out_of_bounds_axis_d; - PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; - PyObject *__pyx_kp_u_Invalid_mode_expected_c_or_fortr; - PyObject *__pyx_kp_u_Invalid_shape_in_axis; - PyObject *__pyx_n_s_MemoryError; - PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; - PyObject *__pyx_kp_s_MemoryView_of_r_object; - PyObject *__pyx_n_b_O; - PyObject *__pyx_kp_u_Out_of_bounds_on_buffer_access_a; - PyObject *__pyx_n_s_PickleError; - PyObject *__pyx_n_s_Sequence; - PyObject *__pyx_kp_s_Step_may_not_be_zero_axis_d; - PyObject *__pyx_n_s_TypeError; - PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; - PyObject *__pyx_n_s_ValueError; - PyObject *__pyx_n_s_View_MemoryView; - PyObject *__pyx_kp_u__2; - PyObject *__pyx_n_s__23; - PyObject *__pyx_n_s__3; - PyObject *__pyx_kp_u__6; - PyObject *__pyx_kp_u__7; - PyObject *__pyx_n_s_abc; - PyObject *__pyx_n_s_allocate_buffer; - PyObject *__pyx_kp_u_and; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject *__pyx_n_s_base; - PyObject *__pyx_n_s_c; - PyObject *__pyx_n_u_c; - PyObject *__pyx_n_s_class; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_s_cline_in_traceback; - PyObject *__pyx_n_s_collections; - PyObject *__pyx_kp_s_collections_abc; - PyObject *__pyx_kp_s_contiguous_and_direct; - PyObject *__pyx_kp_s_contiguous_and_indirect; - PyObject *__pyx_kp_s_core_pyx; - PyObject *__pyx_n_s_count; - PyObject *__pyx_n_s_dict; - PyObject *__pyx_kp_u_disable; - PyObject *__pyx_n_s_dtype_is_object; - PyObject *__pyx_kp_u_enable; - PyObject *__pyx_n_s_encode; - PyObject *__pyx_n_s_enumerate; - PyObject *__pyx_n_s_error; - PyObject *__pyx_n_s_flags; - PyObject *__pyx_n_s_format; - PyObject *__pyx_n_s_fortran; - PyObject *__pyx_n_u_fortran; - PyObject *__pyx_kp_u_gc; - PyObject *__pyx_n_s_getstate; - PyObject *__pyx_kp_u_got; - PyObject *__pyx_kp_u_got_differing_extents_in_dimensi; - PyObject *__pyx_n_s_id; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_s_index; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_kp_u_isenabled; - PyObject *__pyx_n_s_itemsize; - PyObject *__pyx_kp_s_itemsize_0_for_cython_array; - PyObject *__pyx_n_s_main; - PyObject *__pyx_n_s_maximum_path_c; - PyObject *__pyx_n_s_memview; - PyObject *__pyx_n_s_mode; - PyObject *__pyx_n_s_monotonic_align_core; - PyObject *__pyx_n_s_name; - PyObject *__pyx_n_s_name_2; - PyObject *__pyx_n_s_ndim; - PyObject *__pyx_n_s_new; - PyObject 
*__pyx_kp_s_no_default___reduce___due_to_non; - PyObject *__pyx_n_s_obj; - PyObject *__pyx_n_s_pack; - PyObject *__pyx_n_s_paths; - PyObject *__pyx_n_s_pickle; - PyObject *__pyx_n_s_pyx_PickleError; - PyObject *__pyx_n_s_pyx_checksum; - PyObject *__pyx_n_s_pyx_result; - PyObject *__pyx_n_s_pyx_state; - PyObject *__pyx_n_s_pyx_type; - PyObject *__pyx_n_s_pyx_unpickle_Enum; - PyObject *__pyx_n_s_pyx_vtable; - PyObject *__pyx_n_s_range; - PyObject *__pyx_n_s_reduce; - PyObject *__pyx_n_s_reduce_cython; - PyObject *__pyx_n_s_reduce_ex; - PyObject *__pyx_n_s_register; - PyObject *__pyx_n_s_setstate; - PyObject *__pyx_n_s_setstate_cython; - PyObject *__pyx_n_s_shape; - PyObject *__pyx_n_s_size; - PyObject *__pyx_n_s_spec; - PyObject *__pyx_n_s_start; - PyObject *__pyx_n_s_step; - PyObject *__pyx_n_s_stop; - PyObject *__pyx_kp_s_strided_and_direct; - PyObject *__pyx_kp_s_strided_and_direct_or_indirect; - PyObject *__pyx_kp_s_strided_and_indirect; - PyObject *__pyx_kp_s_stringsource; - PyObject *__pyx_n_s_struct; - PyObject *__pyx_n_s_sys; - PyObject *__pyx_n_s_t_xs; - PyObject *__pyx_n_s_t_ys; - PyObject *__pyx_n_s_test; - PyObject *__pyx_kp_s_unable_to_allocate_array_data; - PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; - PyObject *__pyx_n_s_unpack; - PyObject *__pyx_n_s_update; - PyObject *__pyx_n_s_values; - PyObject *__pyx_n_s_version_info; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_3; - PyObject *__pyx_int_112105877; - PyObject *__pyx_int_136983863; - PyObject *__pyx_int_184977713; - PyObject *__pyx_int_neg_1; - float __pyx_k__9; - PyObject *__pyx_slice__5; - PyObject *__pyx_tuple__4; - PyObject *__pyx_tuple__8; - PyObject *__pyx_tuple__10; - PyObject *__pyx_tuple__11; - PyObject *__pyx_tuple__12; - PyObject *__pyx_tuple__13; - PyObject *__pyx_tuple__14; - PyObject *__pyx_tuple__15; - PyObject *__pyx_tuple__16; - PyObject *__pyx_tuple__17; - PyObject *__pyx_tuple__18; - PyObject *__pyx_tuple__19; - PyObject *__pyx_tuple__21; - PyObject *__pyx_codeobj__20; - PyObject *__pyx_codeobj__22; -} __pyx_mstate; - -#if CYTHON_USE_MODULE_STATE -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#else -static __pyx_mstate __pyx_mstate_global_static = -#ifdef __cplusplus - {}; -#else - {0}; -#endif -static __pyx_mstate *__pyx_mstate_global = &__pyx_mstate_global_static; -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_array_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_array); - Py_CLEAR(clear_module_state->__pyx_MemviewEnum_type); - 
Py_CLEAR(clear_module_state->__pyx_type___pyx_MemviewEnum); - Py_CLEAR(clear_module_state->__pyx_memoryview_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_memoryview); - Py_CLEAR(clear_module_state->__pyx_memoryviewslice_type); - Py_CLEAR(clear_module_state->__pyx_type___pyx_memoryviewslice); - Py_CLEAR(clear_module_state->__pyx_kp_u_); - Py_CLEAR(clear_module_state->__pyx_n_s_ASCII); - Py_CLEAR(clear_module_state->__pyx_kp_s_All_dimensions_preceding_dimensi); - Py_CLEAR(clear_module_state->__pyx_n_s_AssertionError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Buffer_view_does_not_expose_stri); - Py_CLEAR(clear_module_state->__pyx_kp_s_Can_only_create_a_buffer_that_is); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_assign_to_read_only_memor); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_create_writable_memory_vi); - Py_CLEAR(clear_module_state->__pyx_kp_u_Cannot_index_with_type); - Py_CLEAR(clear_module_state->__pyx_kp_s_Cannot_transpose_memoryview_with); - Py_CLEAR(clear_module_state->__pyx_kp_s_Dimension_d_is_not_direct); - Py_CLEAR(clear_module_state->__pyx_n_s_Ellipsis); - Py_CLEAR(clear_module_state->__pyx_kp_s_Empty_shape_tuple_for_cython_arr); - Py_CLEAR(clear_module_state->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0); - Py_CLEAR(clear_module_state->__pyx_n_s_IndexError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Index_out_of_bounds_axis_d); - Py_CLEAR(clear_module_state->__pyx_kp_s_Indirect_dimensions_not_supporte); - Py_CLEAR(clear_module_state->__pyx_kp_u_Invalid_mode_expected_c_or_fortr); - Py_CLEAR(clear_module_state->__pyx_kp_u_Invalid_shape_in_axis); - Py_CLEAR(clear_module_state->__pyx_n_s_MemoryError); - Py_CLEAR(clear_module_state->__pyx_kp_s_MemoryView_of_r_at_0x_x); - Py_CLEAR(clear_module_state->__pyx_kp_s_MemoryView_of_r_object); - Py_CLEAR(clear_module_state->__pyx_n_b_O); - Py_CLEAR(clear_module_state->__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - Py_CLEAR(clear_module_state->__pyx_n_s_PickleError); - Py_CLEAR(clear_module_state->__pyx_n_s_Sequence); - Py_CLEAR(clear_module_state->__pyx_kp_s_Step_may_not_be_zero_axis_d); - Py_CLEAR(clear_module_state->__pyx_n_s_TypeError); - Py_CLEAR(clear_module_state->__pyx_kp_s_Unable_to_convert_item_to_object); - Py_CLEAR(clear_module_state->__pyx_n_s_ValueError); - Py_CLEAR(clear_module_state->__pyx_n_s_View_MemoryView); - Py_CLEAR(clear_module_state->__pyx_kp_u__2); - Py_CLEAR(clear_module_state->__pyx_n_s__23); - Py_CLEAR(clear_module_state->__pyx_n_s__3); - Py_CLEAR(clear_module_state->__pyx_kp_u__6); - Py_CLEAR(clear_module_state->__pyx_kp_u__7); - Py_CLEAR(clear_module_state->__pyx_n_s_abc); - Py_CLEAR(clear_module_state->__pyx_n_s_allocate_buffer); - Py_CLEAR(clear_module_state->__pyx_kp_u_and); - Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines); - Py_CLEAR(clear_module_state->__pyx_n_s_base); - Py_CLEAR(clear_module_state->__pyx_n_s_c); - Py_CLEAR(clear_module_state->__pyx_n_u_c); - Py_CLEAR(clear_module_state->__pyx_n_s_class); - Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem); - Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback); - Py_CLEAR(clear_module_state->__pyx_n_s_collections); - Py_CLEAR(clear_module_state->__pyx_kp_s_collections_abc); - Py_CLEAR(clear_module_state->__pyx_kp_s_contiguous_and_direct); - Py_CLEAR(clear_module_state->__pyx_kp_s_contiguous_and_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_core_pyx); - Py_CLEAR(clear_module_state->__pyx_n_s_count); - Py_CLEAR(clear_module_state->__pyx_n_s_dict); - Py_CLEAR(clear_module_state->__pyx_kp_u_disable); - 
Py_CLEAR(clear_module_state->__pyx_n_s_dtype_is_object); - Py_CLEAR(clear_module_state->__pyx_kp_u_enable); - Py_CLEAR(clear_module_state->__pyx_n_s_encode); - Py_CLEAR(clear_module_state->__pyx_n_s_enumerate); - Py_CLEAR(clear_module_state->__pyx_n_s_error); - Py_CLEAR(clear_module_state->__pyx_n_s_flags); - Py_CLEAR(clear_module_state->__pyx_n_s_format); - Py_CLEAR(clear_module_state->__pyx_n_s_fortran); - Py_CLEAR(clear_module_state->__pyx_n_u_fortran); - Py_CLEAR(clear_module_state->__pyx_kp_u_gc); - Py_CLEAR(clear_module_state->__pyx_n_s_getstate); - Py_CLEAR(clear_module_state->__pyx_kp_u_got); - Py_CLEAR(clear_module_state->__pyx_kp_u_got_differing_extents_in_dimensi); - Py_CLEAR(clear_module_state->__pyx_n_s_id); - Py_CLEAR(clear_module_state->__pyx_n_s_import); - Py_CLEAR(clear_module_state->__pyx_n_s_index); - Py_CLEAR(clear_module_state->__pyx_n_s_initializing); - Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine); - Py_CLEAR(clear_module_state->__pyx_kp_u_isenabled); - Py_CLEAR(clear_module_state->__pyx_n_s_itemsize); - Py_CLEAR(clear_module_state->__pyx_kp_s_itemsize_0_for_cython_array); - Py_CLEAR(clear_module_state->__pyx_n_s_main); - Py_CLEAR(clear_module_state->__pyx_n_s_maximum_path_c); - Py_CLEAR(clear_module_state->__pyx_n_s_memview); - Py_CLEAR(clear_module_state->__pyx_n_s_mode); - Py_CLEAR(clear_module_state->__pyx_n_s_monotonic_align_core); - Py_CLEAR(clear_module_state->__pyx_n_s_name); - Py_CLEAR(clear_module_state->__pyx_n_s_name_2); - Py_CLEAR(clear_module_state->__pyx_n_s_ndim); - Py_CLEAR(clear_module_state->__pyx_n_s_new); - Py_CLEAR(clear_module_state->__pyx_kp_s_no_default___reduce___due_to_non); - Py_CLEAR(clear_module_state->__pyx_n_s_obj); - Py_CLEAR(clear_module_state->__pyx_n_s_pack); - Py_CLEAR(clear_module_state->__pyx_n_s_paths); - Py_CLEAR(clear_module_state->__pyx_n_s_pickle); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_PickleError); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_checksum); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_result); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_state); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_type); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_unpickle_Enum); - Py_CLEAR(clear_module_state->__pyx_n_s_pyx_vtable); - Py_CLEAR(clear_module_state->__pyx_n_s_range); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce_cython); - Py_CLEAR(clear_module_state->__pyx_n_s_reduce_ex); - Py_CLEAR(clear_module_state->__pyx_n_s_register); - Py_CLEAR(clear_module_state->__pyx_n_s_setstate); - Py_CLEAR(clear_module_state->__pyx_n_s_setstate_cython); - Py_CLEAR(clear_module_state->__pyx_n_s_shape); - Py_CLEAR(clear_module_state->__pyx_n_s_size); - Py_CLEAR(clear_module_state->__pyx_n_s_spec); - Py_CLEAR(clear_module_state->__pyx_n_s_start); - Py_CLEAR(clear_module_state->__pyx_n_s_step); - Py_CLEAR(clear_module_state->__pyx_n_s_stop); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_direct); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_direct_or_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_strided_and_indirect); - Py_CLEAR(clear_module_state->__pyx_kp_s_stringsource); - Py_CLEAR(clear_module_state->__pyx_n_s_struct); - Py_CLEAR(clear_module_state->__pyx_n_s_sys); - Py_CLEAR(clear_module_state->__pyx_n_s_t_xs); - Py_CLEAR(clear_module_state->__pyx_n_s_t_ys); - Py_CLEAR(clear_module_state->__pyx_n_s_test); - Py_CLEAR(clear_module_state->__pyx_kp_s_unable_to_allocate_array_data); - 
Py_CLEAR(clear_module_state->__pyx_kp_s_unable_to_allocate_shape_and_str); - Py_CLEAR(clear_module_state->__pyx_n_s_unpack); - Py_CLEAR(clear_module_state->__pyx_n_s_update); - Py_CLEAR(clear_module_state->__pyx_n_s_values); - Py_CLEAR(clear_module_state->__pyx_n_s_version_info); - Py_CLEAR(clear_module_state->__pyx_int_0); - Py_CLEAR(clear_module_state->__pyx_int_1); - Py_CLEAR(clear_module_state->__pyx_int_3); - Py_CLEAR(clear_module_state->__pyx_int_112105877); - Py_CLEAR(clear_module_state->__pyx_int_136983863); - Py_CLEAR(clear_module_state->__pyx_int_184977713); - Py_CLEAR(clear_module_state->__pyx_int_neg_1); - Py_CLEAR(clear_module_state->__pyx_slice__5); - Py_CLEAR(clear_module_state->__pyx_tuple__4); - Py_CLEAR(clear_module_state->__pyx_tuple__8); - Py_CLEAR(clear_module_state->__pyx_tuple__10); - Py_CLEAR(clear_module_state->__pyx_tuple__11); - Py_CLEAR(clear_module_state->__pyx_tuple__12); - Py_CLEAR(clear_module_state->__pyx_tuple__13); - Py_CLEAR(clear_module_state->__pyx_tuple__14); - Py_CLEAR(clear_module_state->__pyx_tuple__15); - Py_CLEAR(clear_module_state->__pyx_tuple__16); - Py_CLEAR(clear_module_state->__pyx_tuple__17); - Py_CLEAR(clear_module_state->__pyx_tuple__18); - Py_CLEAR(clear_module_state->__pyx_tuple__19); - Py_CLEAR(clear_module_state->__pyx_tuple__21); - Py_CLEAR(clear_module_state->__pyx_codeobj__20); - Py_CLEAR(clear_module_state->__pyx_codeobj__22); - return 0; -} -#endif -/* #### Code section: module_state_traverse ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) { - __pyx_mstate *traverse_module_state = __pyx_mstate(m); - if (!traverse_module_state) return 0; - Py_VISIT(traverse_module_state->__pyx_d); - Py_VISIT(traverse_module_state->__pyx_b); - Py_VISIT(traverse_module_state->__pyx_cython_runtime); - Py_VISIT(traverse_module_state->__pyx_empty_tuple); - Py_VISIT(traverse_module_state->__pyx_empty_bytes); - Py_VISIT(traverse_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_VISIT(traverse_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_VISIT(traverse_module_state->__pyx_FusedFunctionType); - #endif - Py_VISIT(traverse_module_state->__pyx_array_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_array); - Py_VISIT(traverse_module_state->__pyx_MemviewEnum_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_MemviewEnum); - Py_VISIT(traverse_module_state->__pyx_memoryview_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_memoryview); - Py_VISIT(traverse_module_state->__pyx_memoryviewslice_type); - Py_VISIT(traverse_module_state->__pyx_type___pyx_memoryviewslice); - Py_VISIT(traverse_module_state->__pyx_kp_u_); - Py_VISIT(traverse_module_state->__pyx_n_s_ASCII); - Py_VISIT(traverse_module_state->__pyx_kp_s_All_dimensions_preceding_dimensi); - Py_VISIT(traverse_module_state->__pyx_n_s_AssertionError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Buffer_view_does_not_expose_stri); - Py_VISIT(traverse_module_state->__pyx_kp_s_Can_only_create_a_buffer_that_is); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_assign_to_read_only_memor); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_create_writable_memory_vi); - Py_VISIT(traverse_module_state->__pyx_kp_u_Cannot_index_with_type); - Py_VISIT(traverse_module_state->__pyx_kp_s_Cannot_transpose_memoryview_with); - Py_VISIT(traverse_module_state->__pyx_kp_s_Dimension_d_is_not_direct); - Py_VISIT(traverse_module_state->__pyx_n_s_Ellipsis); - 
Py_VISIT(traverse_module_state->__pyx_kp_s_Empty_shape_tuple_for_cython_arr); - Py_VISIT(traverse_module_state->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0); - Py_VISIT(traverse_module_state->__pyx_n_s_IndexError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Index_out_of_bounds_axis_d); - Py_VISIT(traverse_module_state->__pyx_kp_s_Indirect_dimensions_not_supporte); - Py_VISIT(traverse_module_state->__pyx_kp_u_Invalid_mode_expected_c_or_fortr); - Py_VISIT(traverse_module_state->__pyx_kp_u_Invalid_shape_in_axis); - Py_VISIT(traverse_module_state->__pyx_n_s_MemoryError); - Py_VISIT(traverse_module_state->__pyx_kp_s_MemoryView_of_r_at_0x_x); - Py_VISIT(traverse_module_state->__pyx_kp_s_MemoryView_of_r_object); - Py_VISIT(traverse_module_state->__pyx_n_b_O); - Py_VISIT(traverse_module_state->__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - Py_VISIT(traverse_module_state->__pyx_n_s_PickleError); - Py_VISIT(traverse_module_state->__pyx_n_s_Sequence); - Py_VISIT(traverse_module_state->__pyx_kp_s_Step_may_not_be_zero_axis_d); - Py_VISIT(traverse_module_state->__pyx_n_s_TypeError); - Py_VISIT(traverse_module_state->__pyx_kp_s_Unable_to_convert_item_to_object); - Py_VISIT(traverse_module_state->__pyx_n_s_ValueError); - Py_VISIT(traverse_module_state->__pyx_n_s_View_MemoryView); - Py_VISIT(traverse_module_state->__pyx_kp_u__2); - Py_VISIT(traverse_module_state->__pyx_n_s__23); - Py_VISIT(traverse_module_state->__pyx_n_s__3); - Py_VISIT(traverse_module_state->__pyx_kp_u__6); - Py_VISIT(traverse_module_state->__pyx_kp_u__7); - Py_VISIT(traverse_module_state->__pyx_n_s_abc); - Py_VISIT(traverse_module_state->__pyx_n_s_allocate_buffer); - Py_VISIT(traverse_module_state->__pyx_kp_u_and); - Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines); - Py_VISIT(traverse_module_state->__pyx_n_s_base); - Py_VISIT(traverse_module_state->__pyx_n_s_c); - Py_VISIT(traverse_module_state->__pyx_n_u_c); - Py_VISIT(traverse_module_state->__pyx_n_s_class); - Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem); - Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback); - Py_VISIT(traverse_module_state->__pyx_n_s_collections); - Py_VISIT(traverse_module_state->__pyx_kp_s_collections_abc); - Py_VISIT(traverse_module_state->__pyx_kp_s_contiguous_and_direct); - Py_VISIT(traverse_module_state->__pyx_kp_s_contiguous_and_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_core_pyx); - Py_VISIT(traverse_module_state->__pyx_n_s_count); - Py_VISIT(traverse_module_state->__pyx_n_s_dict); - Py_VISIT(traverse_module_state->__pyx_kp_u_disable); - Py_VISIT(traverse_module_state->__pyx_n_s_dtype_is_object); - Py_VISIT(traverse_module_state->__pyx_kp_u_enable); - Py_VISIT(traverse_module_state->__pyx_n_s_encode); - Py_VISIT(traverse_module_state->__pyx_n_s_enumerate); - Py_VISIT(traverse_module_state->__pyx_n_s_error); - Py_VISIT(traverse_module_state->__pyx_n_s_flags); - Py_VISIT(traverse_module_state->__pyx_n_s_format); - Py_VISIT(traverse_module_state->__pyx_n_s_fortran); - Py_VISIT(traverse_module_state->__pyx_n_u_fortran); - Py_VISIT(traverse_module_state->__pyx_kp_u_gc); - Py_VISIT(traverse_module_state->__pyx_n_s_getstate); - Py_VISIT(traverse_module_state->__pyx_kp_u_got); - Py_VISIT(traverse_module_state->__pyx_kp_u_got_differing_extents_in_dimensi); - Py_VISIT(traverse_module_state->__pyx_n_s_id); - Py_VISIT(traverse_module_state->__pyx_n_s_import); - Py_VISIT(traverse_module_state->__pyx_n_s_index); - Py_VISIT(traverse_module_state->__pyx_n_s_initializing); - 
Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine); - Py_VISIT(traverse_module_state->__pyx_kp_u_isenabled); - Py_VISIT(traverse_module_state->__pyx_n_s_itemsize); - Py_VISIT(traverse_module_state->__pyx_kp_s_itemsize_0_for_cython_array); - Py_VISIT(traverse_module_state->__pyx_n_s_main); - Py_VISIT(traverse_module_state->__pyx_n_s_maximum_path_c); - Py_VISIT(traverse_module_state->__pyx_n_s_memview); - Py_VISIT(traverse_module_state->__pyx_n_s_mode); - Py_VISIT(traverse_module_state->__pyx_n_s_monotonic_align_core); - Py_VISIT(traverse_module_state->__pyx_n_s_name); - Py_VISIT(traverse_module_state->__pyx_n_s_name_2); - Py_VISIT(traverse_module_state->__pyx_n_s_ndim); - Py_VISIT(traverse_module_state->__pyx_n_s_new); - Py_VISIT(traverse_module_state->__pyx_kp_s_no_default___reduce___due_to_non); - Py_VISIT(traverse_module_state->__pyx_n_s_obj); - Py_VISIT(traverse_module_state->__pyx_n_s_pack); - Py_VISIT(traverse_module_state->__pyx_n_s_paths); - Py_VISIT(traverse_module_state->__pyx_n_s_pickle); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_PickleError); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_checksum); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_result); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_state); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_type); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_unpickle_Enum); - Py_VISIT(traverse_module_state->__pyx_n_s_pyx_vtable); - Py_VISIT(traverse_module_state->__pyx_n_s_range); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce_cython); - Py_VISIT(traverse_module_state->__pyx_n_s_reduce_ex); - Py_VISIT(traverse_module_state->__pyx_n_s_register); - Py_VISIT(traverse_module_state->__pyx_n_s_setstate); - Py_VISIT(traverse_module_state->__pyx_n_s_setstate_cython); - Py_VISIT(traverse_module_state->__pyx_n_s_shape); - Py_VISIT(traverse_module_state->__pyx_n_s_size); - Py_VISIT(traverse_module_state->__pyx_n_s_spec); - Py_VISIT(traverse_module_state->__pyx_n_s_start); - Py_VISIT(traverse_module_state->__pyx_n_s_step); - Py_VISIT(traverse_module_state->__pyx_n_s_stop); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_direct); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_direct_or_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_strided_and_indirect); - Py_VISIT(traverse_module_state->__pyx_kp_s_stringsource); - Py_VISIT(traverse_module_state->__pyx_n_s_struct); - Py_VISIT(traverse_module_state->__pyx_n_s_sys); - Py_VISIT(traverse_module_state->__pyx_n_s_t_xs); - Py_VISIT(traverse_module_state->__pyx_n_s_t_ys); - Py_VISIT(traverse_module_state->__pyx_n_s_test); - Py_VISIT(traverse_module_state->__pyx_kp_s_unable_to_allocate_array_data); - Py_VISIT(traverse_module_state->__pyx_kp_s_unable_to_allocate_shape_and_str); - Py_VISIT(traverse_module_state->__pyx_n_s_unpack); - Py_VISIT(traverse_module_state->__pyx_n_s_update); - Py_VISIT(traverse_module_state->__pyx_n_s_values); - Py_VISIT(traverse_module_state->__pyx_n_s_version_info); - Py_VISIT(traverse_module_state->__pyx_int_0); - Py_VISIT(traverse_module_state->__pyx_int_1); - Py_VISIT(traverse_module_state->__pyx_int_3); - Py_VISIT(traverse_module_state->__pyx_int_112105877); - Py_VISIT(traverse_module_state->__pyx_int_136983863); - Py_VISIT(traverse_module_state->__pyx_int_184977713); - Py_VISIT(traverse_module_state->__pyx_int_neg_1); - Py_VISIT(traverse_module_state->__pyx_slice__5); - Py_VISIT(traverse_module_state->__pyx_tuple__4); - Py_VISIT(traverse_module_state->__pyx_tuple__8); - 
Py_VISIT(traverse_module_state->__pyx_tuple__10);
- Py_VISIT(traverse_module_state->__pyx_tuple__11);
- Py_VISIT(traverse_module_state->__pyx_tuple__12);
- Py_VISIT(traverse_module_state->__pyx_tuple__13);
- Py_VISIT(traverse_module_state->__pyx_tuple__14);
- Py_VISIT(traverse_module_state->__pyx_tuple__15);
- Py_VISIT(traverse_module_state->__pyx_tuple__16);
- Py_VISIT(traverse_module_state->__pyx_tuple__17);
- Py_VISIT(traverse_module_state->__pyx_tuple__18);
- Py_VISIT(traverse_module_state->__pyx_tuple__19);
- Py_VISIT(traverse_module_state->__pyx_tuple__21);
- Py_VISIT(traverse_module_state->__pyx_codeobj__20);
- Py_VISIT(traverse_module_state->__pyx_codeobj__22);
- return 0;
-}
-#endif
-/* #### Code section: module_state_defines ### */
-#define __pyx_d __pyx_mstate_global->__pyx_d
-#define __pyx_b __pyx_mstate_global->__pyx_b
-#define __pyx_cython_runtime __pyx_mstate_global->__pyx_cython_runtime
-#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple
-#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes
-#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode
-#ifdef __Pyx_CyFunction_USED
-#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType
-#endif
-#ifdef __Pyx_FusedFunction_USED
-#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType
-#endif
-#ifdef __Pyx_Generator_USED
-#define __pyx_GeneratorType __pyx_mstate_global->__pyx_GeneratorType
-#endif
-#ifdef __Pyx_IterableCoroutine_USED
-#define __pyx_IterableCoroutineType __pyx_mstate_global->__pyx_IterableCoroutineType
-#endif
-#ifdef __Pyx_Coroutine_USED
-#define __pyx_CoroutineAwaitType __pyx_mstate_global->__pyx_CoroutineAwaitType
-#endif
-#ifdef __Pyx_Coroutine_USED
-#define __pyx_CoroutineType __pyx_mstate_global->__pyx_CoroutineType
-#endif
-#if CYTHON_USE_MODULE_STATE
-#endif
-#if CYTHON_USE_MODULE_STATE
-#endif
-#if CYTHON_USE_MODULE_STATE
-#endif
-#if CYTHON_USE_MODULE_STATE
-#define __pyx_type___pyx_array __pyx_mstate_global->__pyx_type___pyx_array
-#define __pyx_type___pyx_MemviewEnum __pyx_mstate_global->__pyx_type___pyx_MemviewEnum
-#define __pyx_type___pyx_memoryview __pyx_mstate_global->__pyx_type___pyx_memoryview
-#define __pyx_type___pyx_memoryviewslice __pyx_mstate_global->__pyx_type___pyx_memoryviewslice
-#endif
-#define __pyx_array_type __pyx_mstate_global->__pyx_array_type
-#define __pyx_MemviewEnum_type __pyx_mstate_global->__pyx_MemviewEnum_type
-#define __pyx_memoryview_type __pyx_mstate_global->__pyx_memoryview_type
-#define __pyx_memoryviewslice_type __pyx_mstate_global->__pyx_memoryviewslice_type
-#define __pyx_kp_u_ __pyx_mstate_global->__pyx_kp_u_
-#define __pyx_n_s_ASCII __pyx_mstate_global->__pyx_n_s_ASCII
-#define __pyx_kp_s_All_dimensions_preceding_dimensi __pyx_mstate_global->__pyx_kp_s_All_dimensions_preceding_dimensi
-#define __pyx_n_s_AssertionError __pyx_mstate_global->__pyx_n_s_AssertionError
-#define __pyx_kp_s_Buffer_view_does_not_expose_stri __pyx_mstate_global->__pyx_kp_s_Buffer_view_does_not_expose_stri
-#define __pyx_kp_s_Can_only_create_a_buffer_that_is __pyx_mstate_global->__pyx_kp_s_Can_only_create_a_buffer_that_is
-#define __pyx_kp_s_Cannot_assign_to_read_only_memor __pyx_mstate_global->__pyx_kp_s_Cannot_assign_to_read_only_memor
-#define __pyx_kp_s_Cannot_create_writable_memory_vi __pyx_mstate_global->__pyx_kp_s_Cannot_create_writable_memory_vi
-#define __pyx_kp_u_Cannot_index_with_type __pyx_mstate_global->__pyx_kp_u_Cannot_index_with_type
-#define __pyx_kp_s_Cannot_transpose_memoryview_with __pyx_mstate_global->__pyx_kp_s_Cannot_transpose_memoryview_with
-#define __pyx_kp_s_Dimension_d_is_not_direct __pyx_mstate_global->__pyx_kp_s_Dimension_d_is_not_direct
-#define __pyx_n_s_Ellipsis __pyx_mstate_global->__pyx_n_s_Ellipsis
-#define __pyx_kp_s_Empty_shape_tuple_for_cython_arr __pyx_mstate_global->__pyx_kp_s_Empty_shape_tuple_for_cython_arr
-#define __pyx_kp_s_Incompatible_checksums_0x_x_vs_0 __pyx_mstate_global->__pyx_kp_s_Incompatible_checksums_0x_x_vs_0
-#define __pyx_n_s_IndexError __pyx_mstate_global->__pyx_n_s_IndexError
-#define __pyx_kp_s_Index_out_of_bounds_axis_d __pyx_mstate_global->__pyx_kp_s_Index_out_of_bounds_axis_d
-#define __pyx_kp_s_Indirect_dimensions_not_supporte __pyx_mstate_global->__pyx_kp_s_Indirect_dimensions_not_supporte
-#define __pyx_kp_u_Invalid_mode_expected_c_or_fortr __pyx_mstate_global->__pyx_kp_u_Invalid_mode_expected_c_or_fortr
-#define __pyx_kp_u_Invalid_shape_in_axis __pyx_mstate_global->__pyx_kp_u_Invalid_shape_in_axis
-#define __pyx_n_s_MemoryError __pyx_mstate_global->__pyx_n_s_MemoryError
-#define __pyx_kp_s_MemoryView_of_r_at_0x_x __pyx_mstate_global->__pyx_kp_s_MemoryView_of_r_at_0x_x
-#define __pyx_kp_s_MemoryView_of_r_object __pyx_mstate_global->__pyx_kp_s_MemoryView_of_r_object
-#define __pyx_n_b_O __pyx_mstate_global->__pyx_n_b_O
-#define __pyx_kp_u_Out_of_bounds_on_buffer_access_a __pyx_mstate_global->__pyx_kp_u_Out_of_bounds_on_buffer_access_a
-#define __pyx_n_s_PickleError __pyx_mstate_global->__pyx_n_s_PickleError
-#define __pyx_n_s_Sequence __pyx_mstate_global->__pyx_n_s_Sequence
-#define __pyx_kp_s_Step_may_not_be_zero_axis_d __pyx_mstate_global->__pyx_kp_s_Step_may_not_be_zero_axis_d
-#define __pyx_n_s_TypeError __pyx_mstate_global->__pyx_n_s_TypeError
-#define __pyx_kp_s_Unable_to_convert_item_to_object __pyx_mstate_global->__pyx_kp_s_Unable_to_convert_item_to_object
-#define __pyx_n_s_ValueError __pyx_mstate_global->__pyx_n_s_ValueError
-#define __pyx_n_s_View_MemoryView __pyx_mstate_global->__pyx_n_s_View_MemoryView
-#define __pyx_kp_u__2 __pyx_mstate_global->__pyx_kp_u__2
-#define __pyx_n_s__23 __pyx_mstate_global->__pyx_n_s__23
-#define __pyx_n_s__3 __pyx_mstate_global->__pyx_n_s__3
-#define __pyx_kp_u__6 __pyx_mstate_global->__pyx_kp_u__6
-#define __pyx_kp_u__7 __pyx_mstate_global->__pyx_kp_u__7
-#define __pyx_n_s_abc __pyx_mstate_global->__pyx_n_s_abc
-#define __pyx_n_s_allocate_buffer __pyx_mstate_global->__pyx_n_s_allocate_buffer
-#define __pyx_kp_u_and __pyx_mstate_global->__pyx_kp_u_and
-#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines
-#define __pyx_n_s_base __pyx_mstate_global->__pyx_n_s_base
-#define __pyx_n_s_c __pyx_mstate_global->__pyx_n_s_c
-#define __pyx_n_u_c __pyx_mstate_global->__pyx_n_u_c
-#define __pyx_n_s_class __pyx_mstate_global->__pyx_n_s_class
-#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem
-#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback
-#define __pyx_n_s_collections __pyx_mstate_global->__pyx_n_s_collections
-#define __pyx_kp_s_collections_abc __pyx_mstate_global->__pyx_kp_s_collections_abc
-#define __pyx_kp_s_contiguous_and_direct __pyx_mstate_global->__pyx_kp_s_contiguous_and_direct
-#define __pyx_kp_s_contiguous_and_indirect __pyx_mstate_global->__pyx_kp_s_contiguous_and_indirect
-#define __pyx_kp_s_core_pyx __pyx_mstate_global->__pyx_kp_s_core_pyx
-#define __pyx_n_s_count __pyx_mstate_global->__pyx_n_s_count
-#define __pyx_n_s_dict __pyx_mstate_global->__pyx_n_s_dict
-#define __pyx_kp_u_disable __pyx_mstate_global->__pyx_kp_u_disable
-#define __pyx_n_s_dtype_is_object __pyx_mstate_global->__pyx_n_s_dtype_is_object
-#define __pyx_kp_u_enable __pyx_mstate_global->__pyx_kp_u_enable
-#define __pyx_n_s_encode __pyx_mstate_global->__pyx_n_s_encode
-#define __pyx_n_s_enumerate __pyx_mstate_global->__pyx_n_s_enumerate
-#define __pyx_n_s_error __pyx_mstate_global->__pyx_n_s_error
-#define __pyx_n_s_flags __pyx_mstate_global->__pyx_n_s_flags
-#define __pyx_n_s_format __pyx_mstate_global->__pyx_n_s_format
-#define __pyx_n_s_fortran __pyx_mstate_global->__pyx_n_s_fortran
-#define __pyx_n_u_fortran __pyx_mstate_global->__pyx_n_u_fortran
-#define __pyx_kp_u_gc __pyx_mstate_global->__pyx_kp_u_gc
-#define __pyx_n_s_getstate __pyx_mstate_global->__pyx_n_s_getstate
-#define __pyx_kp_u_got __pyx_mstate_global->__pyx_kp_u_got
-#define __pyx_kp_u_got_differing_extents_in_dimensi __pyx_mstate_global->__pyx_kp_u_got_differing_extents_in_dimensi
-#define __pyx_n_s_id __pyx_mstate_global->__pyx_n_s_id
-#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import
-#define __pyx_n_s_index __pyx_mstate_global->__pyx_n_s_index
-#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing
-#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine
-#define __pyx_kp_u_isenabled __pyx_mstate_global->__pyx_kp_u_isenabled
-#define __pyx_n_s_itemsize __pyx_mstate_global->__pyx_n_s_itemsize
-#define __pyx_kp_s_itemsize_0_for_cython_array __pyx_mstate_global->__pyx_kp_s_itemsize_0_for_cython_array
-#define __pyx_n_s_main __pyx_mstate_global->__pyx_n_s_main
-#define __pyx_n_s_maximum_path_c __pyx_mstate_global->__pyx_n_s_maximum_path_c
-#define __pyx_n_s_memview __pyx_mstate_global->__pyx_n_s_memview
-#define __pyx_n_s_mode __pyx_mstate_global->__pyx_n_s_mode
-#define __pyx_n_s_monotonic_align_core __pyx_mstate_global->__pyx_n_s_monotonic_align_core
-#define __pyx_n_s_name __pyx_mstate_global->__pyx_n_s_name
-#define __pyx_n_s_name_2 __pyx_mstate_global->__pyx_n_s_name_2
-#define __pyx_n_s_ndim __pyx_mstate_global->__pyx_n_s_ndim
-#define __pyx_n_s_new __pyx_mstate_global->__pyx_n_s_new
-#define __pyx_kp_s_no_default___reduce___due_to_non __pyx_mstate_global->__pyx_kp_s_no_default___reduce___due_to_non
-#define __pyx_n_s_obj __pyx_mstate_global->__pyx_n_s_obj
-#define __pyx_n_s_pack __pyx_mstate_global->__pyx_n_s_pack
-#define __pyx_n_s_paths __pyx_mstate_global->__pyx_n_s_paths
-#define __pyx_n_s_pickle __pyx_mstate_global->__pyx_n_s_pickle
-#define __pyx_n_s_pyx_PickleError __pyx_mstate_global->__pyx_n_s_pyx_PickleError
-#define __pyx_n_s_pyx_checksum __pyx_mstate_global->__pyx_n_s_pyx_checksum
-#define __pyx_n_s_pyx_result __pyx_mstate_global->__pyx_n_s_pyx_result
-#define __pyx_n_s_pyx_state __pyx_mstate_global->__pyx_n_s_pyx_state
-#define __pyx_n_s_pyx_type __pyx_mstate_global->__pyx_n_s_pyx_type
-#define __pyx_n_s_pyx_unpickle_Enum __pyx_mstate_global->__pyx_n_s_pyx_unpickle_Enum
-#define __pyx_n_s_pyx_vtable __pyx_mstate_global->__pyx_n_s_pyx_vtable
-#define __pyx_n_s_range __pyx_mstate_global->__pyx_n_s_range
-#define __pyx_n_s_reduce __pyx_mstate_global->__pyx_n_s_reduce
-#define __pyx_n_s_reduce_cython __pyx_mstate_global->__pyx_n_s_reduce_cython
-#define __pyx_n_s_reduce_ex __pyx_mstate_global->__pyx_n_s_reduce_ex
-#define __pyx_n_s_register __pyx_mstate_global->__pyx_n_s_register
-#define __pyx_n_s_setstate __pyx_mstate_global->__pyx_n_s_setstate
-#define __pyx_n_s_setstate_cython __pyx_mstate_global->__pyx_n_s_setstate_cython
-#define __pyx_n_s_shape __pyx_mstate_global->__pyx_n_s_shape
-#define __pyx_n_s_size __pyx_mstate_global->__pyx_n_s_size
-#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec
-#define __pyx_n_s_start __pyx_mstate_global->__pyx_n_s_start
-#define __pyx_n_s_step __pyx_mstate_global->__pyx_n_s_step
-#define __pyx_n_s_stop __pyx_mstate_global->__pyx_n_s_stop
-#define __pyx_kp_s_strided_and_direct __pyx_mstate_global->__pyx_kp_s_strided_and_direct
-#define __pyx_kp_s_strided_and_direct_or_indirect __pyx_mstate_global->__pyx_kp_s_strided_and_direct_or_indirect
-#define __pyx_kp_s_strided_and_indirect __pyx_mstate_global->__pyx_kp_s_strided_and_indirect
-#define __pyx_kp_s_stringsource __pyx_mstate_global->__pyx_kp_s_stringsource
-#define __pyx_n_s_struct __pyx_mstate_global->__pyx_n_s_struct
-#define __pyx_n_s_sys __pyx_mstate_global->__pyx_n_s_sys
-#define __pyx_n_s_t_xs __pyx_mstate_global->__pyx_n_s_t_xs
-#define __pyx_n_s_t_ys __pyx_mstate_global->__pyx_n_s_t_ys
-#define __pyx_n_s_test __pyx_mstate_global->__pyx_n_s_test
-#define __pyx_kp_s_unable_to_allocate_array_data __pyx_mstate_global->__pyx_kp_s_unable_to_allocate_array_data
-#define __pyx_kp_s_unable_to_allocate_shape_and_str __pyx_mstate_global->__pyx_kp_s_unable_to_allocate_shape_and_str
-#define __pyx_n_s_unpack __pyx_mstate_global->__pyx_n_s_unpack
-#define __pyx_n_s_update __pyx_mstate_global->__pyx_n_s_update
-#define __pyx_n_s_values __pyx_mstate_global->__pyx_n_s_values
-#define __pyx_n_s_version_info __pyx_mstate_global->__pyx_n_s_version_info
-#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0
-#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1
-#define __pyx_int_3 __pyx_mstate_global->__pyx_int_3
-#define __pyx_int_112105877 __pyx_mstate_global->__pyx_int_112105877
-#define __pyx_int_136983863 __pyx_mstate_global->__pyx_int_136983863
-#define __pyx_int_184977713 __pyx_mstate_global->__pyx_int_184977713
-#define __pyx_int_neg_1 __pyx_mstate_global->__pyx_int_neg_1
-#define __pyx_k__9 __pyx_mstate_global->__pyx_k__9
-#define __pyx_slice__5 __pyx_mstate_global->__pyx_slice__5
-#define __pyx_tuple__4 __pyx_mstate_global->__pyx_tuple__4
-#define __pyx_tuple__8 __pyx_mstate_global->__pyx_tuple__8
-#define __pyx_tuple__10 __pyx_mstate_global->__pyx_tuple__10
-#define __pyx_tuple__11 __pyx_mstate_global->__pyx_tuple__11
-#define __pyx_tuple__12 __pyx_mstate_global->__pyx_tuple__12
-#define __pyx_tuple__13 __pyx_mstate_global->__pyx_tuple__13
-#define __pyx_tuple__14 __pyx_mstate_global->__pyx_tuple__14
-#define __pyx_tuple__15 __pyx_mstate_global->__pyx_tuple__15
-#define __pyx_tuple__16 __pyx_mstate_global->__pyx_tuple__16
-#define __pyx_tuple__17 __pyx_mstate_global->__pyx_tuple__17
-#define __pyx_tuple__18 __pyx_mstate_global->__pyx_tuple__18
-#define __pyx_tuple__19 __pyx_mstate_global->__pyx_tuple__19
-#define __pyx_tuple__21 __pyx_mstate_global->__pyx_tuple__21
-#define __pyx_codeobj__20 __pyx_mstate_global->__pyx_codeobj__20
-#define __pyx_codeobj__22 __pyx_mstate_global->__pyx_codeobj__22
-/* #### Code section: module_code ### */
-
-/* "View.MemoryView":131
- * cdef bint dtype_is_object
- *
- * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<<
- * mode="c", bint allocate_buffer=True):
- *
- */
-
-/* Python wrapper */
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_shape = 0;
- Py_ssize_t __pyx_v_itemsize;
-
PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_VARARGS(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_VARARGS(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_shape)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_itemsize)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_format)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 131, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__cinit__") < 0)) __PYX_ERR(1, 131, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_VARARGS(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_VARARGS(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 131, 
__pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 132, __pyx_L3_error) - } else { - - /* "View.MemoryView":132 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, __pyx_nargs); __PYX_ERR(1, 131, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 131, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 131, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":131 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_dim; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - char *__pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_UCS4 __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":137 - * cdef Py_ssize_t dim - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 137, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 137, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":138 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":140 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError, "Empty shape tuple for cython.array" - * - */ - __pyx_t_2 = (!(__pyx_v_self->ndim != 0)); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":141 - * - * if not self.ndim: - * raise ValueError, "Empty shape tuple for cython.array" # <<<<<<<<<<<<<< - * - * if 
itemsize <= 0: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Empty_shape_tuple_for_cython_arr, 0, 0); - __PYX_ERR(1, 141, __pyx_L1_error) - - /* "View.MemoryView":140 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError, "Empty shape tuple for cython.array" - * - */ - } - - /* "View.MemoryView":143 - * raise ValueError, "Empty shape tuple for cython.array" - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError, "itemsize <= 0 for cython.array" - * - */ - __pyx_t_2 = (__pyx_v_itemsize <= 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":144 - * - * if itemsize <= 0: - * raise ValueError, "itemsize <= 0 for cython.array" # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_itemsize_0_for_cython_array, 0, 0); - __PYX_ERR(1, 144, __pyx_L1_error) - - /* "View.MemoryView":143 - * raise ValueError, "Empty shape tuple for cython.array" - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError, "itemsize <= 0 for cython.array" - * - */ - } - - /* "View.MemoryView":146 - * raise ValueError, "itemsize <= 0 for cython.array" - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_3 = (!__pyx_t_2); - if (__pyx_t_3) { - - /* "View.MemoryView":147 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_n_s_ASCII}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":146 - * raise ValueError, "itemsize <= 0 for cython.array" - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":148 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_v_format))) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_t_4 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":149 - * format = format.encode('ASCII') - * self._format = format # keep a reference to 
the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 149, __pyx_L1_error) - } - __pyx_t_8 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_8) && PyErr_Occurred())) __PYX_ERR(1, 149, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_8; - - /* "View.MemoryView":152 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":153 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":155 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate shape and strides." - * - */ - __pyx_t_3 = (!(__pyx_v_self->_shape != 0)); - if (unlikely(__pyx_t_3)) { - - /* "View.MemoryView":156 - * - * if not self._shape: - * raise MemoryError, "unable to allocate shape and strides." # <<<<<<<<<<<<<< - * - * - */ - __Pyx_Raise(__pyx_builtin_MemoryError, __pyx_kp_s_unable_to_allocate_shape_and_str, 0, 0); - __PYX_ERR(1, 156, __pyx_L1_error) - - /* "View.MemoryView":155 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate shape and strides." - * - */ - } - - /* "View.MemoryView":159 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - */ - __pyx_t_7 = 0; - __pyx_t_4 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_4); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely((0 < 0))) __PYX_ERR(1, 159, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_4, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 159, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_7; - __pyx_t_7 = (__pyx_t_7 + 1); - - /* "View.MemoryView":160 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim - */ - __pyx_t_3 = (__pyx_v_dim <= 0); - if (unlikely(__pyx_t_3)) { - - /* "View.MemoryView":161 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." 
# <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = PyTuple_New(5); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = 0; - __pyx_t_10 = 127; - __Pyx_INCREF(__pyx_kp_u_Invalid_shape_in_axis); - __pyx_t_9 += 22; - __Pyx_GIVEREF(__pyx_kp_u_Invalid_shape_in_axis); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_kp_u_Invalid_shape_in_axis); - __pyx_t_6 = __Pyx_PyUnicode_From_int(__pyx_v_idx, 0, ' ', 'd'); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_kp_u_); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u_); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_kp_u_); - __pyx_t_6 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_kp_u__2); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__2); - PyTuple_SET_ITEM(__pyx_t_5, 4, __pyx_kp_u__2); - __pyx_t_6 = __Pyx_PyUnicode_Join(__pyx_t_5, 5, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_6, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 161, __pyx_L1_error) - - /* "View.MemoryView":160 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":162 - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":159 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError, f"Invalid shape in axis {idx}: {dim}." 
- */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":165 - * - * cdef char order - * if mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_3 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(1, 165, __pyx_L1_error) - if (__pyx_t_3) { - - /* "View.MemoryView":166 - * cdef char order - * if mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * elif mode == 'fortran': - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":167 - * if mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * elif mode == 'fortran': - * order = b'F' - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":165 - * - * cdef char order - * if mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L11; - } - - /* "View.MemoryView":168 - * order = b'C' - * self.mode = u'c' - * elif mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_3 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(1, 168, __pyx_L1_error) - if (likely(__pyx_t_3)) { - - /* "View.MemoryView":169 - * self.mode = u'c' - * elif mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * else: - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":170 - * elif mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * else: - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":168 - * order = b'C' - * self.mode = u'c' - * elif mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L11; - } - - /* "View.MemoryView":172 - * self.mode = u'fortran' - * else: - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_FormatSimple(__pyx_v_mode, __pyx_empty_unicode); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Invalid_mode_expected_c_or_fortr, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_6, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 172, __pyx_L1_error) - } - __pyx_L11:; - - /* "View.MemoryView":174 - * raise ValueError, f"Invalid mode, expected 'c' or 'fortran', got {mode}" - * - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) # <<<<<<<<<<<<<< - * - * self.free_data = allocate_buffer - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":176 - * self.len = fill_contig_strides_array(self._shape, self._strides, itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' 
- * - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":177 - * - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * - * if allocate_buffer: - */ - __pyx_t_6 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 177, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_3; - - /* "View.MemoryView":179 - * self.dtype_is_object = format == b'O' - * - * if allocate_buffer: # <<<<<<<<<<<<<< - * _allocate_buffer(self) - * - */ - if (__pyx_v_allocate_buffer) { - - /* "View.MemoryView":180 - * - * if allocate_buffer: - * _allocate_buffer(self) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_t_7 = __pyx_array_allocate_buffer(__pyx_v_self); if (unlikely(__pyx_t_7 == ((int)-1))) __PYX_ERR(1, 180, __pyx_L1_error) - - /* "View.MemoryView":179 - * self.dtype_is_object = format == b'O' - * - * if allocate_buffer: # <<<<<<<<<<<<<< - * _allocate_buffer(self) - * - */ - } - - /* "View.MemoryView":131 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":182 - * _allocate_buffer(self) - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - */ - -/* Python wrapper */ -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -CYTHON_UNUSED static int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - char *__pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - Py_ssize_t *__pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (unlikely(__pyx_v_info == NULL)) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":184 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int 
bufmode = -1 # <<<<<<<<<<<<<< - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":185 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_t_1 = ((__pyx_v_flags & ((PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS) | PyBUF_ANY_CONTIGUOUS)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":186 - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 186, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":187 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":186 - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L4; - } - - /* "View.MemoryView":188 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 188, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":189 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":188 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L4:; - - /* "View.MemoryView":190 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - */ - __pyx_t_1 = (!((__pyx_v_flags & __pyx_v_bufmode) != 0)); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":191 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." 
# <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Can_only_create_a_buffer_that_is, 0, 0); - __PYX_ERR(1, 191, __pyx_L1_error) - - /* "View.MemoryView":190 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - */ - } - - /* "View.MemoryView":185 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if flags & (PyBUF_C_CONTIGUOUS | PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS): # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - } - - /* "View.MemoryView":192 - * if not (flags & bufmode): - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * - */ - __pyx_t_2 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_2; - - /* "View.MemoryView":193 - * raise ValueError, "Can only create a buffer that is contiguous in memory." - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - __pyx_t_3 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_3; - - /* "View.MemoryView":195 - * info.len = self.len - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":196 - * - * if flags & PyBUF_STRIDES: - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_4 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_4; - - /* "View.MemoryView":197 - * if flags & PyBUF_STRIDES: - * info.ndim = self.ndim - * info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * else: - */ - __pyx_t_5 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_5; - - /* "View.MemoryView":198 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * else: - * info.ndim = 1 - */ - __pyx_t_5 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_5; - - /* "View.MemoryView":195 - * info.len = self.len - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - goto __pyx_L6; - } - - /* "View.MemoryView":200 - * info.strides = self._strides - * else: - * info.ndim = 1 # <<<<<<<<<<<<<< - * info.shape = &self.len if flags & PyBUF_ND else NULL - * info.strides = NULL - */ - /*else*/ { - __pyx_v_info->ndim = 1; - - /* "View.MemoryView":201 - * else: - * info.ndim = 1 - * info.shape = &self.len if flags & PyBUF_ND else NULL # <<<<<<<<<<<<<< - * info.strides = NULL - * - */ - if (((__pyx_v_flags & PyBUF_ND) != 0)) { - __pyx_t_5 = (&__pyx_v_self->len); - } else { - __pyx_t_5 = NULL; - } - __pyx_v_info->shape = __pyx_t_5; - - /* "View.MemoryView":202 - * info.ndim = 1 - * info.shape = &self.len if flags & PyBUF_ND else NULL - * info.strides = NULL # <<<<<<<<<<<<<< - * - * info.suboffsets = NULL - */ - __pyx_v_info->strides = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":204 - * info.strides = NULL - * - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":205 - * - * info.suboffsets = NULL - * info.itemsize = 
self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL - */ - __pyx_t_3 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_3; - - /* "View.MemoryView":206 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * info.format = self.format if flags & PyBUF_FORMAT else NULL - * info.obj = self - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":207 - * info.itemsize = self.itemsize - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL # <<<<<<<<<<<<<< - * info.obj = self - * - */ - if (((__pyx_v_flags & PyBUF_FORMAT) != 0)) { - __pyx_t_2 = __pyx_v_self->format; - } else { - __pyx_t_2 = NULL; - } - __pyx_v_info->format = __pyx_t_2; - - /* "View.MemoryView":208 - * info.readonly = 0 - * info.format = self.format if flags & PyBUF_FORMAT else NULL - * info.obj = self # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":182 - * _allocate_buffer(self) - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":210 - * info.obj = self - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":211 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - */ - __pyx_t_1 = (__pyx_v_self->callback_free_data != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":212 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":211 - * - * def __dealloc__(array self): - * if 
self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":213 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - */ - if (__pyx_v_self->free_data) { - } else { - __pyx_t_1 = __pyx_v_self->free_data; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->data != NULL); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":214 - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":215 - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) # <<<<<<<<<<<<<< - * free(self.data) - * PyObject_Free(self._shape) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":214 - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - */ - } - - /* "View.MemoryView":216 - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":213 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data and self.data is not NULL: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - */ - } - __pyx_L3:; - - /* "View.MemoryView":217 - * refcount_objects_in_slice(self.data, self._shape, self._strides, self.ndim, inc=False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":210 - * info.obj = self - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":219 - * PyObject_Free(self._shape) - * - * @property # <<<<<<<<<<<<<< - * def memview(self): - * return self.get_memview() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); 
- return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":221 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":219 - * PyObject_Free(self._shape) - * - * @property # <<<<<<<<<<<<<< - * def memview(self): - * return self.get_memview() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":224 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":225 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":226 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":224 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = 
PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":228 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":229 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":228 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":231 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":232 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 232, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":231 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":234 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":235 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":234 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":237 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = 
__pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":238 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely((PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0))) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":237 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, 
__pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ 
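- /* The call above dispatches to the __setstate_cython__ implementation below; like __reduce_cython__ earlier in this section, it unconditionally raises TypeError ("no default __reduce__ due to non-trivial __cinit__"): Cython emits these stubs to disable default pickling for extension types whose __cinit__ is non-trivial, as is the case for `array`. */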
- __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":248 - * - * @cname("__pyx_array_allocate_buffer") - * cdef int _allocate_buffer(array self) except -1: # <<<<<<<<<<<<<< - * - * - */ - -static int __pyx_array_allocate_buffer(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_i; - PyObject **__pyx_v_p; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_allocate_buffer", 0); - - /* "View.MemoryView":254 - * cdef PyObject **p - * - * self.free_data = True # <<<<<<<<<<<<<< - * self.data = malloc(self.len) - * if not self.data: - */ - __pyx_v_self->free_data = 1; - - /* "View.MemoryView":255 - * - * self.free_data = True - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError, "unable to allocate array data." - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":256 - * self.free_data = True - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate array data." - * - */ - __pyx_t_1 = (!(__pyx_v_self->data != 0)); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":257 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError, "unable to allocate array data." # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __Pyx_Raise(__pyx_builtin_MemoryError, __pyx_kp_s_unable_to_allocate_array_data, 0, 0); - __PYX_ERR(1, 257, __pyx_L1_error) - - /* "View.MemoryView":256 - * self.free_data = True - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError, "unable to allocate array data." - * - */ - } - - /* "View.MemoryView":259 - * raise MemoryError, "unable to allocate array data." 
- * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len // self.itemsize): - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":260 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len // self.itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":261 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len // self.itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_self->itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 261, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_self->itemsize == (Py_ssize_t)-1) && unlikely(__Pyx_UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 261, __pyx_L1_error) - } - __pyx_t_2 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_self->itemsize); - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":262 - * p = self.data - * for i in range(self.len // self.itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * return 0 - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":263 - * for i in range(self.len // self.itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * return 0 - * - */ - Py_INCREF(Py_None); - } - - /* "View.MemoryView":259 - * raise MemoryError, "unable to allocate array data." - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len // self.itemsize): - */ - } - - /* "View.MemoryView":264 - * p[i] = Py_None - * Py_INCREF(Py_None) - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":248 - * - * @cname("__pyx_array_allocate_buffer") - * cdef int _allocate_buffer(array self) except -1: # <<<<<<<<<<<<<< - * - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._allocate_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":268 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): # <<<<<<<<<<<<<< - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_c_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - PyObject *__pyx_v_mode = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":270 - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
# <<<<<<<<<<<<<< - * - * if buf is NULL: - */ - if (((__pyx_v_c_mode[0]) == 'f')) { - __Pyx_INCREF(__pyx_n_s_fortran); - __pyx_t_1 = __pyx_n_s_fortran; - } else { - __Pyx_INCREF(__pyx_n_s_c); - __pyx_t_1 = __pyx_n_s_c; - } - __pyx_v_mode = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":272 - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. - * - * if buf is NULL: # <<<<<<<<<<<<<< - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - */ - __pyx_t_2 = (__pyx_v_buf == NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":273 - * - * if buf is NULL: - * result = array.__new__(array, shape, itemsize, format, mode) # <<<<<<<<<<<<<< - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) - */ - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __Pyx_INCREF(__pyx_v_mode); - __Pyx_GIVEREF(__pyx_v_mode); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_v_mode); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = ((PyObject *)__pyx_tp_new_array(((PyTypeObject *)__pyx_array_type), __pyx_t_4, NULL)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 273, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":272 - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
- * - * if buf is NULL: # <<<<<<<<<<<<<< - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":275 - * result = array.__new__(array, shape, itemsize, format, mode) - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - /*else*/ { - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(4); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_4); - __Pyx_INCREF(__pyx_v_mode); - __Pyx_GIVEREF(__pyx_v_mode); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_v_mode); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 275, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_tp_new_array(((PyTypeObject *)__pyx_array_type), __pyx_t_1, __pyx_t_4)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 275, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":276 - * else: - * result = array.__new__(array, shape, itemsize, format, mode, allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":278 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF((PyObject *)__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":268 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, char *c_mode, char *buf): # <<<<<<<<<<<<<< - * cdef array result - * cdef str mode = "fortran" if c_mode[0] == b'f' else "c" # this often comes from a constant C string. 
- */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_mode); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":304 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_name)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 304, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(1, 304, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 304, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":305 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":304 - * cdef class Enum(object): - * cdef object name - * def __init__(self, 
name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":306 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":307 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":306 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char 
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - if (__pyx_t_2) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_3)); - __pyx_t_3 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_2; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - */ - if (__pyx_v_use_setstate) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject 
*)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_136983863); - __Pyx_GIVEREF(__pyx_int_136983863); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_136983863); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_state); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_136983863); - __Pyx_GIVEREF(__pyx_int_136983863); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_136983863); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject 
*__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 16, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 16, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 16, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None) || __Pyx_RaiseUnexpectedTypeError("tuple", __pyx_v___pyx_state))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0x82a3537, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * 
__pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":349 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_VARARGS(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_obj)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_flags)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 349, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_VARARGS(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__cinit__") < 0)) __PYX_ERR(1, 349, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_VARARGS(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_VARARGS(__pyx_args, 1); - values[0] = __Pyx_Arg_VARARGS(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object 
== (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 349, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, __pyx_nargs); __PYX_ERR(1, 349, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_intptr_t __pyx_t_4; - size_t __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":350 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":351 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":352 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - if (!__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_obj != Py_None); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":353 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_3 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 353, __pyx_L1_error) - - /* "View.MemoryView":354 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = (((PyObject *)__pyx_v_self->view.obj) == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":355 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":356 - * if self.view.obj == NULL: - * (<__pyx_buffer *> 
&self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":354 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":352 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":358 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - */ - __pyx_t_1 = (!__PYX_CYTHON_ATOMICS_ENABLED()); - if (__pyx_t_1) { - - /* "View.MemoryView":360 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = (__pyx_memoryview_thread_locks_used < 8); - if (__pyx_t_1) { - - /* "View.MemoryView":361 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":362 - * if __pyx_memoryview_thread_locks_used < 8: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":360 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":363 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = (__pyx_v_self->lock == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":365 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = (__pyx_v_self->lock == NULL); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":366 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 366, __pyx_L1_error) - - /* "View.MemoryView":365 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # 
<<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":363 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":358 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < 8: - */ - } - - /* "View.MemoryView":368 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":369 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = ((__pyx_v_self->view.format[0]) == 'O'); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L12_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_self->view.format[1]) == '\x00'); - __pyx_t_1 = __pyx_t_2; - __pyx_L12_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":368 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":371 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L11:; - - /* "View.MemoryView":373 - * self.dtype_is_object = dtype_is_object - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 # <<<<<<<<<<<<<< - * self.typeinfo = NULL - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_4 = ((Py_intptr_t)((void *)(&__pyx_v_self->acquisition_count))); - __pyx_t_5 = (sizeof(__pyx_atomic_int_type)); - if (unlikely(__pyx_t_5 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 373, __pyx_L1_error) - } - __pyx_t_1 = ((__pyx_t_4 % __pyx_t_5) == 0); - if (unlikely(!__pyx_t_1)) { - __Pyx_Raise(__pyx_builtin_AssertionError, 0, 0, 0); - __PYX_ERR(1, 373, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(1, 373, __pyx_L1_error) - #endif - - /* "View.MemoryView":374 - * - * assert (&self.acquisition_count) % sizeof(__pyx_atomic_int_type) == 0 - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":349 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":376 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview 
self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - PyThread_type_lock __pyx_t_5; - PyThread_type_lock __pyx_t_6; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":377 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":378 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":377 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":379 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_1 = (((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":381 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":382 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":379 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":386 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_1 = (__pyx_v_self->lock != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":387 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_2 = __pyx_memoryview_thread_locks_used; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":388 - * if self.lock != NULL: - * for i in 
range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock); - if (__pyx_t_1) { - - /* "View.MemoryView":389 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":390 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_1 = (__pyx_v_i != __pyx_memoryview_thread_locks_used); - if (__pyx_t_1) { - - /* "View.MemoryView":392 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_5 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":391 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_5; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_6; - - /* "View.MemoryView":390 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":393 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":388 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":395 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* 
"View.MemoryView":386 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":376 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":397 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":399 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":401 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 401, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(1, 401, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(1, 401, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 401, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - 
__Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":402 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 402, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 402, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":401 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":404 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":397 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":407 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - char *__pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":408 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - if (__pyx_t_1) { - - /* "View.MemoryView":409 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __pyx_r 
= ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":408 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":411 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 411, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 411, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_indices = __pyx_t_4; - __pyx_t_4 = 0; - - /* "View.MemoryView":414 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 414, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":415 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 415, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":414 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":417 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_5 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_5 == ((char *)NULL))) __PYX_ERR(1, 417, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_5; - - /* "View.MemoryView":418 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":407 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit 
code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":420 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":421 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError, "Cannot assign to read-only memoryview" - * - */ - if (unlikely(__pyx_v_self->view.readonly)) { - - /* "View.MemoryView":422 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_Cannot_assign_to_read_only_memor, 0, 0); - __PYX_ERR(1, 422, __pyx_L1_error) - - /* "View.MemoryView":421 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError, "Cannot assign to read-only memoryview" - * - */ - } - - /* "View.MemoryView":424 - * raise TypeError, "Cannot assign to read-only memoryview" - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_1 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(__pyx_t_1 != Py_None)) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 424, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 424, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_2; - __pyx_t_2 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":426 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(1, 426, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":427 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_obj = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":428 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(1, 428, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":429 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(<memoryview> self[index], value) - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_1, __pyx_v_obj); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":428 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":431 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(<memoryview> self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 431, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 431, __pyx_L1_error) - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_3), __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 431, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_L5:; - - /* 
"View.MemoryView":426 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":433 - * self.setitem_slice_assign_scalar(<memoryview> self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 433, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":420 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError, "Cannot assign to read-only memoryview" - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":435 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":436 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = (!__pyx_t_1); - if (__pyx_t_2) { - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":438 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":439 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = 
__Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 439, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":438 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 438, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":440 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 440, __pyx_L6_except_error) - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_6); - - /* "View.MemoryView":441 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - - /* "View.MemoryView":437 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __pyx_L6_except_error:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":436 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":443 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, 
dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":435 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":445 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - __Pyx_memviewslice __pyx_v_msrc; - __Pyx_memviewslice __pyx_v_mdst; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":448 - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - * cdef __Pyx_memviewslice msrc = get_slice_from_memview(src, &src_slice)[0] # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] - * - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_v_msrc = (__pyx_t_1[0]); - - /* "View.MemoryView":449 - * cdef __Pyx_memviewslice src_slice - * cdef __Pyx_memviewslice msrc = get_slice_from_memview(src, &src_slice)[0] - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] # <<<<<<<<<<<<<< - * - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 449, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 449, __pyx_L1_error) - __pyx_v_mdst = (__pyx_t_1[0]); - - /* "View.MemoryView":451 - * cdef __Pyx_memviewslice mdst = get_slice_from_memview(dst, &dst_slice)[0] - * - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __pyx_memoryview_copy_contents(__pyx_v_msrc, __pyx_v_mdst, __pyx_t_3, __pyx_t_4, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 451, __pyx_L1_error) - - /* "View.MemoryView":445 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":453 - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":455 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":460 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if <size_t>self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 460, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":462 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = (((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))); - if (__pyx_t_2) { - - /* "View.MemoryView":463 - * - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":464 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # 
<<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = (__pyx_v_tmp == NULL); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":465 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 465, __pyx_L1_error) - - /* "View.MemoryView":464 - * if <size_t>self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":466 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = <void *> array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":462 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if <size_t>self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":468 - * item = tmp - * else: - * item = <void *> array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":470 - * item = <void *> array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value - */ - /*try:*/ { - - /* "View.MemoryView":471 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - if (__pyx_v_self->dtype_is_object) { - - /* "View.MemoryView":472 - * try: - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":471 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":474 - * (<PyObject **> item)[0] = <PyObject *> value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 474, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":478 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = (__pyx_v_self->view.suboffsets != NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":479 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * <char *> item, self.dtype_is_object) - */ - __pyx_t_4 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 479, __pyx_L6_error) - - /* "View.MemoryView":478 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":480 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * <char *> item, self.dtype_is_object) - * finally: - */ - 
__pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":483 - * <char *> item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":453 - * memoryview_copy_contents(msrc, mdst, src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":486 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 486, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":487 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - 
* self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 487, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":485 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":489 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":492 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_ImportDottedModule(__pyx_n_s_struct, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 492, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":495 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":497 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError, "Unable to convert item to object" - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 
497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_8, 2+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 497, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":501 - * raise ValueError, "Unable to convert item to object" - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_9 = __Pyx_ssize_strlen(__pyx_v_self->view.format); if (unlikely(__pyx_t_9 == ((Py_ssize_t)-1))) __PYX_ERR(1, 501, __pyx_L5_except_error) - __pyx_t_10 = (__pyx_t_9 == 1); - if (__pyx_t_10) { - - /* "View.MemoryView":502 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 502, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":501 - * raise ValueError, "Unable to convert item to object" - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":503 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":498 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError, "Unable to convert item to object" - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_7); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_6); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_6 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_6, &__pyx_t_5, 
&__pyx_t_1) < 0) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_XGOTREF(__pyx_t_6); - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_1); - - /* "View.MemoryView":499 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError, "Unable to convert item to object" # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Unable_to_convert_item_to_object, 0, 0); - __PYX_ERR(1, 499, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - - /* "View.MemoryView":496 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __pyx_L5_except_error:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":489 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":505 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - char *__pyx_t_9; - char *__pyx_t_10; - char *__pyx_t_11; - char *__pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":508 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_ImportDottedModule(__pyx_n_s_struct, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 508, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":513 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - if (__pyx_t_2) { - - /* 
"View.MemoryView":514 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyNumber_Add(__pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_t_3))) __PYX_ERR(1, 514, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":513 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_4, __pyx_t_1, __pyx_v_value}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 516, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (!(likely(PyBytes_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None) || __Pyx_RaiseUnexpectedTypeError("bytes", __pyx_t_3))) __PYX_ERR(1, 516, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":518 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_7 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not 
iterable"); - __PYX_ERR(1, 518, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_8 = __pyx_v_bytesvalue; - __pyx_t_10 = PyBytes_AS_STRING(__pyx_t_8); - __pyx_t_11 = (__pyx_t_10 + PyBytes_GET_SIZE(__pyx_t_8)); - for (__pyx_t_12 = __pyx_t_10; __pyx_t_12 < __pyx_t_11; __pyx_t_12++) { - __pyx_t_9 = __pyx_t_12; - __pyx_v_c = (__pyx_t_9[0]); - - /* "View.MemoryView":519 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_7; - - /* "View.MemoryView":518 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_7 = (__pyx_t_7 + 1); - - /* "View.MemoryView":519 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":505 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":521 - * itemp[i] = c - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - */ - -/* Python wrapper */ -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -CYTHON_UNUSED static int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - char *__pyx_t_4; - void *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (unlikely(__pyx_v_info == NULL)) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":523 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - 
* if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":524 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError, "Cannot create writable memory view from read-only memoryview" # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Cannot_create_writable_memory_vi, 0, 0); - __PYX_ERR(1, 524, __pyx_L1_error) - - /* "View.MemoryView":523 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - */ - } - - /* "View.MemoryView":526 - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":527 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_3 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_3; - - /* "View.MemoryView":526 - * raise ValueError, "Cannot create writable memory view from read-only memoryview" - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":529 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":531 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":532 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_3 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_3; - - /* "View.MemoryView":531 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":534 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":536 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":537 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_3 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_3; - - /* "View.MemoryView":536 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - 
} - - /* "View.MemoryView":539 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":541 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":542 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":541 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":544 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":546 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_5 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_5; - - /* "View.MemoryView":547 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_6 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":548 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_7 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_7; - - /* "View.MemoryView":549 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_7 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_7; - - /* "View.MemoryView":550 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":551 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * - */ - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_GIVEREF((PyObject *)__pyx_v_self); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":521 - * itemp[i] = c - * - * @cname('getbuffer') # <<<<<<<<<<<<<< - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} 
- -/* "View.MemoryView":554 - * - * - * @property # <<<<<<<<<<<<<< - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":556 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 556, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 556, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":557 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)-1))) __PYX_ERR(1, 557, __pyx_L1_error) - - /* "View.MemoryView":558 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":554 - * - * - * @property # <<<<<<<<<<<<<< - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":560 - * return result - * - * @property # <<<<<<<<<<<<<< - * def base(self): - * return self._get_base() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = 
__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":562 - * @property - * def base(self): - * return self._get_base() # <<<<<<<<<<<<<< - * - * cdef _get_base(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->_get_base(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 562, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":560 - * return result - * - * @property # <<<<<<<<<<<<<< - * def base(self): - * return self._get_base() - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.base.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":564 - * return self._get_base() - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -static PyObject *__pyx_memoryview__get_base(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_get_base", 0); - - /* "View.MemoryView":565 - * - * cdef _get_base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":564 - * return self._get_base() - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * return self.obj - * - * @property # <<<<<<<<<<<<<< - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_7genexpr__pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":569 - * 
@property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_7genexpr__pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_7genexpr__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - } /* exit inner scope */ - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 569, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * return self.obj - * - * @property # <<<<<<<<<<<<<< - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":571 - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def strides(self): - * if self.view.strides == NULL: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_8genexpr1__pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":573 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError, "Buffer view does not expose strides" - */ - __pyx_t_1 = (__pyx_v_self->view.strides == NULL); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":575 - * if self.view.strides == NULL: - * - * raise ValueError, "Buffer view does not expose strides" # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __Pyx_Raise(__pyx_builtin_ValueError, 
__pyx_kp_s_Buffer_view_does_not_expose_stri, 0, 0); - __PYX_ERR(1, 575, __pyx_L1_error) - - /* "View.MemoryView":573 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError, "Buffer view does not expose strides" - */ - } - - /* "View.MemoryView":577 - * raise ValueError, "Buffer view does not expose strides" - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_8genexpr1__pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_8genexpr1__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - } /* exit inner scope */ - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":571 - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def strides(self): - * if self.view.strides == NULL: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":579 - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def suboffsets(self): - * if self.view.suboffsets == NULL: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_8genexpr2__pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":581 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 
= (__pyx_v_self->view.suboffsets == NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":582 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PySequence_Multiply(__pyx_tuple__4, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 582, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":581 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":584 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.suboffsets; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_8genexpr2__pyx_v_suboffset = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_8genexpr2__pyx_v_suboffset); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - } /* exit inner scope */ - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 584, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":579 - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def suboffsets(self): - * if self.view.suboffsets == NULL: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def ndim(self): - * return self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - 
int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":588 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 588, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - * - * @property # <<<<<<<<<<<<<< - * def ndim(self): - * return self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * return self.view.ndim - * - * @property # <<<<<<<<<<<<<< - * def itemsize(self): - * return self.view.itemsize - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":592 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 592, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * return self.view.ndim - * - * @property # <<<<<<<<<<<<<< - * def itemsize(self): - * return self.view.itemsize - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * return self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def nbytes(self): - * return self.size * self.view.itemsize - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = 
__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":596 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 596, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":594 - * return self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def nbytes(self): - * return self.size * self.view.itemsize - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":598 - * return self.size * self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def size(self): - * if self._size is None: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":600 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":601 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in 
self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":603 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_t_5 = PyInt_FromSsize_t((__pyx_t_2[0])); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 603, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":604 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_5 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 604, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_5); - __pyx_t_5 = 0; - } - - /* "View.MemoryView":606 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":600 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":608 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":598 - * return self.size * self.view.itemsize - * - * @property # <<<<<<<<<<<<<< - * def size(self): - * if self._size is None: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":610 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":611 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = (__pyx_v_self->view.ndim >= 1); - if (__pyx_t_1) { - - /* "View.MemoryView":612 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 
0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":611 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":614 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":610 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":616 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":617 - * - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":618 - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":617 - * - * def __repr__(self): - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 617, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 
0; - goto __pyx_L0; - - /* "View.MemoryView":616 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":620 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":621 - * - * def __str__(self): - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":620 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":624 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject 
*__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("is_c_contig", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "is_c_contig", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":627 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 627, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 628, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":624 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":630 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject 
*__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("is_f_contig", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "is_f_contig", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":633 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 633, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":634 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":630 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":636 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, -#if 
CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("copy", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "copy", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":638 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":640 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":641 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 641, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":646 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 646, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":636 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":648 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = 
self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("copy_fortran", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "copy_fortran", 0))) return NULL; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":650 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":652 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":653 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 653, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":658 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":648 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - 
__pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); 
/*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":662 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":663 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 663, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":664 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":665 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":662 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":668 - * - * 
@cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":669 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":668 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o) noexcept: # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":671 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_idx; - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_UCS4 __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":677 - * """ - * cdef Py_ssize_t idx - * tup = index if isinstance(index, tuple) else (index,) # <<<<<<<<<<<<<< - * - * result = [slice(None)] * ndim - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_index); - if (__pyx_t_2) { - __Pyx_INCREF(((PyObject*)__pyx_v_index)); - __pyx_t_1 = __pyx_v_index; - } else { - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 677, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_t_1 = __pyx_t_3; - __pyx_t_3 = 0; - } - __pyx_v_tup = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":679 - * tup = index if isinstance(index, tuple) else (index,) - * - * result = [slice(None)] * ndim # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_1 = PyList_New(1 * ((__pyx_v_ndim<0) ? 
0:__pyx_v_ndim)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_ndim; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__5); - __Pyx_GIVEREF(__pyx_slice__5); - PyList_SET_ITEM(__pyx_t_1, __pyx_temp, __pyx_slice__5); - } - } - __pyx_v_result = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":680 - * - * result = [slice(None)] * ndim - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * idx = 0 - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":681 - * result = [slice(None)] * ndim - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * idx = 0 - * for item in tup: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":682 - * have_slices = False - * seen_ellipsis = False - * idx = 0 # <<<<<<<<<<<<<< - * for item in tup: - * if item is Ellipsis: - */ - __pyx_v_idx = 0; - - /* "View.MemoryView":683 - * seen_ellipsis = False - * idx = 0 - * for item in tup: # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - if (unlikely(__pyx_v_tup == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); - __PYX_ERR(1, 683, __pyx_L1_error) - } - __pyx_t_1 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_1); __pyx_t_4 = 0; - for (;;) { - if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_3); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(1, 683, __pyx_L1_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 683, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":684 - * idx = 0 - * for item in tup: - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * idx += ndim - len(tup) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - if (__pyx_t_2) { - - /* "View.MemoryView":685 - * for item in tup: - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * idx += ndim - len(tup) - * seen_ellipsis = True - */ - __pyx_t_2 = (!__pyx_v_seen_ellipsis); - if (__pyx_t_2) { - - /* "View.MemoryView":686 - * if item is Ellipsis: - * if not seen_ellipsis: - * idx += ndim - len(tup) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * have_slices = True - */ - if (unlikely(__pyx_v_tup == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 686, __pyx_L1_error) - } - __pyx_t_5 = PyTuple_GET_SIZE(__pyx_v_tup); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 686, __pyx_L1_error) - __pyx_v_idx = (__pyx_v_idx + (__pyx_v_ndim - __pyx_t_5)); - - /* "View.MemoryView":687 - * if not seen_ellipsis: - * idx += ndim - len(tup) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":685 - * for item in tup: - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * idx += ndim - len(tup) - * seen_ellipsis = True - */ - } - - /* "View.MemoryView":688 - * idx += ndim - len(tup) - * seen_ellipsis = True - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if isinstance(item, slice): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":684 - * idx = 0 - * for item in tup: - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * idx += ndim - len(tup) - */ 
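-/* Note: the first Ellipsis advances idx by ndim - len(tup), so any items - * after it land at the tail of result. For example, with ndim == 4 and - * tup == (0, Ellipsis, 1): item 0 fills result[0] and idx becomes 1; the - * Ellipsis bumps idx by 4 - 3 == 1 and the shared idx += 1 moves it to 3; - * item 1 then fills result[3], yielding (0, slice(None), slice(None), 1). - */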
- goto __pyx_L5; - } - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if isinstance(item, slice): # <<<<<<<<<<<<<< - * have_slices = True - * elif not PyIndex_Check(item): - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - if (__pyx_t_2) { - - /* "View.MemoryView":691 - * else: - * if isinstance(item, slice): - * have_slices = True # <<<<<<<<<<<<<< - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if isinstance(item, slice): # <<<<<<<<<<<<<< - * have_slices = True - * elif not PyIndex_Check(item): - */ - goto __pyx_L7; - } - - /* "View.MemoryView":692 - * if isinstance(item, slice): - * have_slices = True - * elif not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - */ - __pyx_t_2 = (!(PyIndex_Check(__pyx_v_item) != 0)); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":693 - * have_slices = True - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" # <<<<<<<<<<<<<< - * result[idx] = item - * idx += 1 - */ - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = 0; - __pyx_t_6 = 127; - __Pyx_INCREF(__pyx_kp_u_Cannot_index_with_type); - __pyx_t_5 += 24; - __Pyx_GIVEREF(__pyx_kp_u_Cannot_index_with_type); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Cannot_index_with_type); - __pyx_t_7 = __Pyx_PyObject_FormatSimple(((PyObject *)Py_TYPE(__pyx_v_item)), __pyx_empty_unicode); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_6 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) > __pyx_t_6) ? 
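-/* Note: this block is the generated form of the f-string in the TypeError - * above: __pyx_t_5 accumulates the total length and __pyx_t_6 the maximum - * character width of the three parts, so __Pyx_PyUnicode_Join can allocate - * the result string in a single pass. - */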
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_7) : __pyx_t_6; - __pyx_t_5 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_kp_u__6); - __pyx_t_5 += 1; - __Pyx_GIVEREF(__pyx_kp_u__6); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u__6); - __pyx_t_7 = __Pyx_PyUnicode_Join(__pyx_t_3, 3, __pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 693, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_t_7, 0, 0); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __PYX_ERR(1, 693, __pyx_L1_error) - - /* "View.MemoryView":692 - * if isinstance(item, slice): - * have_slices = True - * elif not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - */ - } - __pyx_L7:; - - /* "View.MemoryView":694 - * elif not PyIndex_Check(item): - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item # <<<<<<<<<<<<<< - * idx += 1 - * - */ - if (unlikely((__Pyx_SetItemInt(__pyx_v_result, __pyx_v_idx, __pyx_v_item, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1) < 0))) __PYX_ERR(1, 694, __pyx_L1_error) - } - __pyx_L5:; - - /* "View.MemoryView":695 - * raise TypeError, f"Cannot index with type '{type(item)}'" - * result[idx] = item - * idx += 1 # <<<<<<<<<<<<<< - * - * nslices = ndim - idx - */ - __pyx_v_idx = (__pyx_v_idx + 1); - - /* "View.MemoryView":683 - * seen_ellipsis = False - * idx = 0 - * for item in tup: # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":697 - * idx += 1 - * - * nslices = ndim - idx # <<<<<<<<<<<<<< - * return have_slices or nslices, tuple(result) - * - */ - __pyx_v_nslices = (__pyx_v_ndim - __pyx_v_idx); - - /* "View.MemoryView":698 - * - * nslices = ndim - idx - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __pyx_t_7; - __pyx_t_7 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_7 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __pyx_t_7; - __pyx_t_7 = 0; - __pyx_L9_bool_binop_done:; - __pyx_t_7 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_7); - __pyx_t_1 = 0; - __pyx_t_7 = 0; - __pyx_r = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":671 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, 
__pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static int assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" - */ - __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag - */ - __pyx_t_4 = (__pyx_v_suboffset >= 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" # <<<<<<<<<<<<<< - * return 0 # return type just used as an error flag - * - */ - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_kp_s_Indirect_dimensions_not_supporte, 0, 0); - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag - */ - } - } - - /* "View.MemoryView":704 - * if suboffset >= 0: - * raise ValueError, "Indirect dimensions not supported" - * return 0 # return type just used as an error flag # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef int assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim) except -1: # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":711 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - 
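-/* Note: memview_slice walks the index tuple once, building dst in place: an - * integer index drops a dimension (bounds-checked in slice_memviewslice), - * None inserts a new length-1 axis with stride 0, and a slice object is - * normalised and clamped by slice_memviewslice. - */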
__Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - Py_ssize_t __pyx_v_cindex; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - struct __pyx_memoryview_obj *__pyx_t_3; - char *__pyx_t_4; - int __pyx_t_5; - Py_ssize_t __pyx_t_6; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - int __pyx_t_10; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":712 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* "View.MemoryView":719 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":723 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_1 = (__pyx_v_memview->view.ndim > 0); - if (unlikely(!__pyx_t_1)) { - __Pyx_Raise(__pyx_builtin_AssertionError, 0, 0, 0); - __PYX_ERR(1, 723, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(1, 723, __pyx_L1_error) - #endif - - /* "View.MemoryView":725 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":726 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 726, __pyx_L1_error) - __pyx_t_2 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_2); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":727 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":725 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":729 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* 
"View.MemoryView":730 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":736 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_3 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_3; - - /* "View.MemoryView":737 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_4; - - /* "View.MemoryView":742 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step, cindex - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":743 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step, cindex - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":747 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * cindex = index - */ - __pyx_t_5 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_2 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_2); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - } else { - __pyx_t_6 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 747, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_7)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_8); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(1, 747, __pyx_L1_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - } else { - if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_8); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(1, 747, __pyx_L1_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 747, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - } - } else { - __pyx_t_8 = __pyx_t_7(__pyx_t_2); - if (unlikely(!__pyx_t_8)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 747, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_8); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_v_dim = __pyx_t_5; - __pyx_t_5 = (__pyx_t_5 + 1); - - /* "View.MemoryView":748 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * cindex = index - * slice_memviewslice( - */ - __pyx_t_1 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":749 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * cindex = index # <<<<<<<<<<<<<< - * slice_memviewslice( - * 
p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 749, __pyx_L1_error) - __pyx_v_cindex = __pyx_t_9; - - /* "View.MemoryView":750 - * if PyIndex_Check(index): - * cindex = index - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_10 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_cindex, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(1, 750, __pyx_L1_error) - - /* "View.MemoryView":748 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * cindex = index - * slice_memviewslice( - */ - goto __pyx_L6; - } - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_1 = (__pyx_v_index == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":757 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":758 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":759 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":760 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":762 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_9; - - /* "View.MemoryView":763 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 763, __pyx_L1_error) - 
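-/* Note: start = index.start or 0 (and stop/step below) relies on - * truthiness, so both None and an explicit 0 collapse to 0; the separate - * have_start/have_stop/have_step flags computed from "is not None" are what - * preserve the distinction for slice_memviewslice. - */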
__Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 763, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 763, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_9; - - /* "View.MemoryView":764 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(1, 764, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - __pyx_t_11 = __Pyx_PyIndex_AsSsize_t(__pyx_t_8); if (unlikely((__pyx_t_11 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 764, __pyx_L1_error) - __pyx_t_9 = __pyx_t_11; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_9 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_9; - - /* "View.MemoryView":766 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":767 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 767, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":768 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 768, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = (__pyx_t_8 != Py_None); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":770 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_10 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(1, 770, __pyx_L1_error) - - /* "View.MemoryView":776 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - 
* if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":747 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * cindex = index - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF((PyObject *)__pyx_r); - - /* "View.MemoryView":780 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 780, __pyx_L1_error) } - - /* "View.MemoryView":781 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 781, __pyx_L1_error) } - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_2 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 779, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_memoryview_type))))) __PYX_ERR(1, 779, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF((PyObject *)__pyx_r); - - /* "View.MemoryView":785 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 784, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_memoryview_type))))) __PYX_ERR(1, 784, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - goto 
__pyx_L0; - } - - /* "View.MemoryView":711 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":793 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":813 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = (!__pyx_v_is_slice); - if (__pyx_t_1) { - - /* "View.MemoryView":815 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = (__pyx_v_start < 0); - if (__pyx_t_1) { - - /* "View.MemoryView":816 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":815 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":817 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = (!__pyx_t_1); - if (__pyx_t_2) { - - /* "View.MemoryView":818 - * start += shape - * if not 0 <= start < shape: - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_IndexError, __pyx_kp_s_Index_out_of_bounds_axis_d, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 818, __pyx_L1_error) - - /* "View.MemoryView":817 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(PyExc_IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":813 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":821 - * else: - * - * if have_step: # <<<<<<<<<<<<<< - * negative_step = step < 0 - * if step == 0: - */ - /*else*/ 
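-/* Note: this branch normalises a slice the way Python does: a missing step - * defaults to 1 (step == 0 is rejected), and start/stop are shifted by - * shape when negative, then clamped to the range that is valid for the - * sign of the step before the new extent is computed. - */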
{ - __pyx_t_2 = (__pyx_v_have_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":822 - * - * if have_step: - * negative_step = step < 0 # <<<<<<<<<<<<<< - * if step == 0: - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - */ - __pyx_v_negative_step = (__pyx_v_step < 0); - - /* "View.MemoryView":823 - * if have_step: - * negative_step = step < 0 - * if step == 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - */ - __pyx_t_2 = (__pyx_v_step == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":824 - * negative_step = step < 0 - * if step == 0: - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * negative_step = False - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_ValueError, __pyx_kp_s_Step_may_not_be_zero_axis_d, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 824, __pyx_L1_error) - - /* "View.MemoryView":823 - * if have_step: - * negative_step = step < 0 - * if step == 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":821 - * else: - * - * if have_step: # <<<<<<<<<<<<<< - * negative_step = step < 0 - * if step == 0: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":826 - * _err_dim(PyExc_ValueError, "Step may not be zero (axis %d)", dim) - * else: - * negative_step = False # <<<<<<<<<<<<<< - * step = 1 - * - */ - /*else*/ { - __pyx_v_negative_step = 0; - - /* "View.MemoryView":827 - * else: - * negative_step = False - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - } - __pyx_L6:; - - /* "View.MemoryView":830 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":831 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = (__pyx_v_start < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = (__pyx_v_start < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":834 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":831 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":835 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = (__pyx_v_start >= __pyx_v_shape); - if (__pyx_t_2) { - - /* "View.MemoryView":836 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - if (__pyx_v_negative_step) { - - /* "View.MemoryView":837 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":836 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * 
start = shape - 1 - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":839 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L11:; - - /* "View.MemoryView":835 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L9:; - - /* "View.MemoryView":830 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L8; - } - - /* "View.MemoryView":841 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - if (__pyx_v_negative_step) { - - /* "View.MemoryView":842 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":841 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":844 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L12:; - } - __pyx_L8:; - - /* "View.MemoryView":846 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = (__pyx_v_stop < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":849 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = (__pyx_v_stop < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":850 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":849 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":847 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":851 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = (__pyx_v_stop > __pyx_v_shape); - if (__pyx_t_2) { - - /* "View.MemoryView":852 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":851 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L14:; - - /* "View.MemoryView":846 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L13; - } - - /* "View.MemoryView":854 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - if (__pyx_v_negative_step) { - - /* "View.MemoryView":855 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":854 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L16; - } - - /* 
"View.MemoryView":857 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L16:; - } - __pyx_L13:; - - /* "View.MemoryView":861 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":863 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":864 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":863 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - } - - /* "View.MemoryView":866 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = (__pyx_v_new_shape < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":867 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":866 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":870 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":871 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":872 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":875 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = ((__pyx_v_suboffset_dim[0]) < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":876 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":875 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":878 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L19:; - - /* "View.MemoryView":880 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = (__pyx_v_suboffset >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * 
dst.data = (<char **> dst.data)[0] + suboffset - */ - __pyx_t_2 = (!__pyx_v_is_slice); - if (__pyx_t_2) { - - /* "View.MemoryView":882 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = (__pyx_v_new_ndim == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":883 - * if not is_slice: - * if new_ndim == 0: - * dst.data = (<char **> dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":882 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - */ - goto __pyx_L22; - } - - /* "View.MemoryView":885 - * dst.data = (<char **> dst.data)[0] + suboffset - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * "must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":886 - * else: - * _err_dim(PyExc_IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(PyExc_IndexError, __pyx_kp_s_All_dimensions_preceding_dimensi, __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 885, __pyx_L1_error) - } - __pyx_L22:; - - /* "View.MemoryView":881 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = (<char **> dst.data)[0] + suboffset - */ - goto __pyx_L21; - } - - /* "View.MemoryView":888 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L21:; - - /* "View.MemoryView":880 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":890 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":793 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":896 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_UCS4 __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char 
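-/* Note: pybuffer_index resolves one dimension of a Py_buffer access: - * resultp = bufp + index * stride after Python-style negative-index - * wrap-around and bounds checks; a non-negative suboffset marks an indirect - * dimension whose slot holds a pointer that must be dereferenced before the - * suboffset is added. - */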
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":898 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":899 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":902 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len // itemsize - * stride = itemsize - */ - __pyx_t_2 = (__pyx_v_view->ndim == 0); - if (__pyx_t_2) { - - /* "View.MemoryView":903 - * - * if view.ndim == 0: - * shape = view.len // itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 903, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(__Pyx_UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 903, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":904 - * if view.ndim == 0: - * shape = view.len // itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":902 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len // itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":906 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":907 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":908 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = (__pyx_v_view->suboffsets != NULL); - if (__pyx_t_2) { - - /* "View.MemoryView":909 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":908 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":911 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = (__pyx_v_index < 0); - if (__pyx_t_2) { - - /* "View.MemoryView":912 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - */ - 
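-/* Note: the wrap-around below matches Python indexing: with shape 5, an - * index of -2 becomes 3, while -7 is still negative after the shift and is - * rejected by the IndexError check that follows. - */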
__pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":913 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - __pyx_t_2 = (__pyx_v_index < 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":914 - * index += view.shape[dim] - * if index < 0: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_4 = 127; - __Pyx_INCREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_1 += 37; - __Pyx_GIVEREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_5 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_1 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u__7); - __pyx_t_5 = __Pyx_PyUnicode_Join(__pyx_t_3, 3, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_builtin_IndexError, __pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 914, __pyx_L1_error) - - /* "View.MemoryView":913 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - } - - /* "View.MemoryView":911 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":916 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - __pyx_t_2 = (__pyx_v_index >= __pyx_v_shape); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":917 - * - * if index >= shape: - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = 0; - __pyx_t_4 = 127; - __Pyx_INCREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_1 += 37; - __Pyx_GIVEREF(__pyx_kp_u_Out_of_bounds_on_buffer_access_a); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_kp_u_Out_of_bounds_on_buffer_access_a); - __pyx_t_3 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_dim, 0, ' ', 'd'); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_1 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_kp_u__7); - __pyx_t_3 = __Pyx_PyUnicode_Join(__pyx_t_5, 3, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_builtin_IndexError, 
__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 917, __pyx_L1_error) - - /* "View.MemoryView":916 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - */ - } - - /* "View.MemoryView":919 - * raise IndexError, f"Out of bounds on buffer access (axis {dim})" - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":920 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = (__pyx_v_suboffset >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":921 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":920 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":923 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":896 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":929 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":930 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":932 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":933 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - 
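-/* Illustrative sketch, not part of the generated module: transposing a
- * memoryview slice needs no data movement at all. Because element
- * (i0, i1, ...) is located purely by bufp + sum(ik * strides[k]),
- * reversing the shape and strides arrays in place is sufficient:
- *
- *     for (int i = 0; i < ndim / 2; i++) {
- *         int j = ndim - 1 - i;
- *         Py_ssize_t t;
- *         t = strides[i]; strides[i] = strides[j]; strides[j] = t;
- *         t = shape[i];   shape[i]   = shape[j];   shape[j]   = t;
- *     }
- *
- * Indirect (suboffset) dimensions are the one case this trick cannot
- * express, which is why the loop below raises ValueError on them.
- */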
/* "View.MemoryView":937 - * - * cdef int i, j - * for i in range(ndim // 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":938 - * cdef int i, j - * for i in range(ndim // 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":939 - * for i in range(ndim // 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":940 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":942 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = ((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = ((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":943 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_t_9 = __pyx_memoryview_err(PyExc_ValueError, __pyx_kp_s_Cannot_transpose_memoryview_with); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 943, __pyx_L1_error) - - /* "View.MemoryView":942 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":945 - * _err(PyExc_ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":929 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) except -1 nogil: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":963 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - */ - -/* 
Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_VARARGS(__pyx_args, __pyx_nargs); - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":964 - * - * def __dealloc__(self): - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XCLEAR_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":963 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":966 - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":967 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = (__pyx_v_self->to_object_func != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":968 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 968, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":967 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":970 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 970, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":966 - * __PYX_XCLEAR_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return 
self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":972 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":973 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = (__pyx_v_self->to_dtype_func != NULL); - if (__pyx_t_1) { - - /* "View.MemoryView":974 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 974, __pyx_L1_error) - - /* "View.MemoryView":973 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":976 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * cdef _get_base(self): - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 976, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":972 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":978 - * memoryview.assign_item_from_object(self, itemp, value) - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -static PyObject *__pyx_memoryviewslice__get_base(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_get_base", 0); - - /* "View.MemoryView":979 - * - * cdef _get_base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - 
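-/* Illustrative sketch, not part of the generated module: both conversion
- * hooks of _memoryviewslice follow the same dispatch pattern -- use the
- * dtype-specific function pointer when the slice was created with one,
- * otherwise fall back to the generic struct-format path of the memoryview
- * base class:
- *
- *     if (self->to_object_func != NULL)
- *         return self->to_object_func(itemp);           // typed fast path
- *     return __pyx_memoryview_convert_item_to_object(   // generic fallback
- *         (struct __pyx_memoryview_obj *)self, itemp);
- */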
__Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":978 - * memoryview.assign_item_from_object(self, itemp, value) - * - * cdef _get_base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - if (unlikely(__pyx_nargs > 0)) { - __Pyx_RaiseArgtupleInvalid("__reduce_cython__", 1, 0, 0, __pyx_nargs); return NULL;} - if (unlikely(__pyx_kwds) && __Pyx_NumKwargs_FASTCALL(__pyx_kwds) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__reduce_cython__", 0))) return NULL; - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, -#if 
CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - CYTHON_UNUSED PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_state,0}; - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 3, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__setstate_cython__") < 0)) __PYX_ERR(1, 3, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v___pyx_state = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__setstate_cython__", 1, 1, 1, __pyx_nargs); __PYX_ERR(1, 3, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" # <<<<<<<<<<<<<< - */ - __Pyx_Raise(__pyx_builtin_TypeError, __pyx_kp_s_no_default___reduce___due_to_non, 0, 0); - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError, "no default __reduce__ due to non-trivial __cinit__" - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError, "no 
default __reduce__ due to non-trivial __cinit__" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = (((PyObject *)__pyx_v_memviewslice.memview) == Py_None); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice.__new__(_memoryviewslice, None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = ((PyObject *)__pyx_tp_new__memoryviewslice(((PyTypeObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF((PyObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice.__new__(_memoryviewslice, None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview)._get_base() - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 
- * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview)._get_base() # <<<<<<<<<<<<<< - * result.typeinfo = memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->__pyx_vtab)->_get_base(((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = ( memviewslice.memview)._get_base() - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = 
result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = (__pyx_v_suboffset >= 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
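-/* Illustrative sketch (an assumption about intent, not generated output):
- * the boxed PyInt/PyNumber arithmetic around this loop only computes the
- * byte length of the exposed buffer, i.e.
- *
- *     Py_ssize_t len = view.itemsize;
- *     for (int d = 0; d < ndim; d++)
- *         len *= view.shape[d];        // total bytes = itemsize * prod(shape)
- *
- * It goes through Python integers here because `length` is an untyped
- * loop variable in the .pyx source quoted above.
- */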
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF((PyObject *)__pyx_v_result); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_2 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_2); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef 
_memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return &obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst) noexcept: # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] 
= shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst) noexcept: # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *(*__pyx_t_2)(char *); - int (*__pyx_t_3)(char *, PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - if (__pyx_t_1) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_2 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_2; - - /* "View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_3; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: # <<<<<<<<<<<<<< - * return -arg if arg < 0 else arg - * - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: - * return -arg if arg < 0 else arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - if ((__pyx_v_arg < 0)) { - __pyx_t_1 = (-__pyx_v_arg); - } else { - __pyx_t_1 = __pyx_v_arg; - } - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) noexcept nogil: # <<<<<<<<<<<<<< - * return -arg if arg < 0 else arg - * - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1113 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1118 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1119 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1121 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1122 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = ((__pyx_v_mslice->shape[__pyx_v_i]) > 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1123 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1124 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1122 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1126 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1127 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride 
= mslice.strides[i] - * break - */ - __pyx_t_2 = ((__pyx_v_mslice->shape[__pyx_v_i]) > 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1128 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1129 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1127 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1131 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = (abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)); - if (__pyx_t_2) { - - /* "View.MemoryView":1132 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1131 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1134 - * return 'C' - * else: - * return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1113 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1137 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - - /* "View.MemoryView":1144 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1145 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1146 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1147 - * cdef Py_ssize_t dst_extent = 
dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = (__pyx_v_ndim == 1); - if (__pyx_t_1) { - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = (__pyx_v_src_stride > 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_dst_stride > 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1151 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_1 = __pyx_t_2; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1152 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1150 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1154 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_3 = __pyx_v_dst_extent; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":1155 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1156 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1149 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ 
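-/* Illustrative sketch, not part of the generated module: the copy is a
- * textbook recursion over dimensions. The ndim == 1 base case above uses a
- * single memcpy when both slices are contiguous (stride == itemsize), or a
- * per-item memcpy loop otherwise; the else branch below peels one
- * dimension and recurses:
- *
- *     for (i = 0; i < dst_extent; i++) {
- *         _copy_strided_to_strided(src_data, src_strides + 1,  // inner dims
- *                                  dst_data, dst_strides + 1,
- *                                  src_shape + 1, dst_shape + 1,
- *                                  ndim - 1, itemsize);
- *         src_data += src_stride;      // advance along this dimension
- *         dst_data += dst_stride;
- *     }
- */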
- goto __pyx_L3; - } - - /* "View.MemoryView":1159 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_3 = __pyx_v_dst_extent; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":1160 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1164 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1165 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1137 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1167 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1170 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1174 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1176 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for 
shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1178 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1179 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1181 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1174 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) noexcept nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1184 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) noexcept nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1193 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = (__pyx_v_order == 'F'); - if (__pyx_t_1) { - - /* "View.MemoryView":1194 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1195 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1196 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1193 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1198 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1199 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1200 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return 
stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1202 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1184 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) noexcept nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1205 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":1216 - * cdef void *result - * - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1217 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1219 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err_no_memory() - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1220 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err_no_memory() - * - */ - __pyx_t_2 = (!(__pyx_v_result != 0)); - if (__pyx_t_2) { - - /* "View.MemoryView":1221 - * result = malloc(size) - * if not result: - * _err_no_memory() # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_no_memory(); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1221, __pyx_L1_error) - - /* "View.MemoryView":1220 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err_no_memory() - * - */ - } - - /* "View.MemoryView":1224 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1225 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1226 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1227 - * tmpslice.memview = 
src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1228 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, ndim, order) - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1230 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, ndim, order) # <<<<<<<<<<<<<< - * - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1233 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1234 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = ((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1235 - * for i in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1234 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1237 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = __pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1238 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1237 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1240 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1242 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1205 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return 
__pyx_r; -} - -/* "View.MemoryView":1247 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - Py_UCS4 __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1249 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = PyTuple_New(7); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = 127; - __Pyx_INCREF(__pyx_kp_u_got_differing_extents_in_dimensi); - __pyx_t_2 += 35; - __Pyx_GIVEREF(__pyx_kp_u_got_differing_extents_in_dimensi); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_kp_u_got_differing_extents_in_dimensi); - __pyx_t_4 = __Pyx_PyUnicode_From_int(__pyx_v_i, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u_got); - __pyx_t_2 += 6; - __Pyx_GIVEREF(__pyx_kp_u_got); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_kp_u_got); - __pyx_t_4 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_extent1, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u_and); - __pyx_t_2 += 5; - __Pyx_GIVEREF(__pyx_kp_u_and); - PyTuple_SET_ITEM(__pyx_t_1, 4, __pyx_kp_u_and); - __pyx_t_4 = __Pyx_PyUnicode_From_Py_ssize_t(__pyx_v_extent2, 0, ' ', 'd'); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 5, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_2 += 1; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_1, 6, __pyx_kp_u__7); - __pyx_t_4 = __Pyx_PyUnicode_Join(__pyx_t_1, 7, __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_builtin_ValueError, __pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1249, __pyx_L1_error) - - /* "View.MemoryView":1247 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError, f"got differing extents in dimension {i} (got {extent1} and {extent2})" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - 
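/* `_err_extents` above (and `_err_dim` / `_err` below) all follow Cython's
 * "except -1 with gil" pattern: a helper that may be entered without the GIL
 * re-acquires it, raises a Python exception, and signals failure through a -1
 * return value that callers test with `unlikely(... == -1)`. A minimal
 * hand-written sketch of the same pattern (illustrative name, not the
 * generated function):
 */
#include <Python.h>

static int err_extents_sketch(int dim, Py_ssize_t extent1, Py_ssize_t extent2)
{
    PyGILState_STATE gil = PyGILState_Ensure();  /* safe to call without the GIL */
    PyErr_Format(PyExc_ValueError,
                 "got differing extents in dimension %d (got %zd and %zd)",
                 dim, extent1, extent2);
    PyGILState_Release(gil);
    return -1;  /* error sentinel checked by the caller */
}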
__Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1252 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg % dim - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, PyObject *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_msg); - - /* "View.MemoryView":1253 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: - * raise error, msg % dim # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyString_FormatSafe(__pyx_v_msg, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(((PyObject *)__pyx_v_error), __pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1252 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(PyObject *error, str msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg % dim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_msg); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1256 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg - * - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, PyObject *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_msg); - - /* "View.MemoryView":1257 - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: - * raise error, msg # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_no_memory') - */ - __Pyx_Raise(((PyObject *)__pyx_v_error), __pyx_v_msg, 0, 0); - __PYX_ERR(1, 1257, __pyx_L1_error) - - /* "View.MemoryView":1256 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(PyObject *error, str msg) except -1 with gil: # <<<<<<<<<<<<<< - * raise error, msg - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_msg); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); 
- #endif - return __pyx_r; -} - -/* "View.MemoryView":1260 - * - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - -static int __pyx_memoryview_err_no_memory(void) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_no_memory", 0); - - /* "View.MemoryView":1261 - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: - * raise MemoryError # <<<<<<<<<<<<<< - * - * - */ - PyErr_NoMemory(); __PYX_ERR(1, 1261, __pyx_L1_error) - - /* "View.MemoryView":1260 - * - * @cname('__pyx_memoryview_err_no_memory') - * cdef int _err_no_memory() except -1 with gil: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView._err_no_memory", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1265 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "View.MemoryView":1273 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1274 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1276 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1277 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1278 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1281 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = (__pyx_v_src_ndim < __pyx_v_dst_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1282 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1281 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1283 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = (__pyx_v_dst_ndim < __pyx_v_src_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1284 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1283 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1286 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if ((__pyx_t_3 > __pyx_t_4)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1288 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = 
__pyx_t_4; - - /* "View.MemoryView":1289 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = ((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])); - if (__pyx_t_2) { - - /* "View.MemoryView":1290 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = ((__pyx_v_src.shape[__pyx_v_i]) == 1); - if (__pyx_t_2) { - - /* "View.MemoryView":1291 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1292 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1290 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1294 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1294, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1289 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1296 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = ((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1297 - * - * if src.suboffsets[i] >= 0: - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(PyExc_ValueError, __pyx_kp_s_Dimension_d_is_not_direct, __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - - /* "View.MemoryView":1296 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1299 - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = __pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - if (__pyx_t_2) { - - /* "View.MemoryView":1301 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = (!__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim)); - if (__pyx_t_2) { - - /* "View.MemoryView":1302 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* 
"View.MemoryView":1301 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1304 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1304, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1305 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1299 - * _err_dim(PyExc_ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1307 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (!__pyx_v_broadcasting); - if (__pyx_t_2) { - - /* "View.MemoryView":1310 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = __pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1311 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1310 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1312 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = __pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1312 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1315 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - if (__pyx_v_direct_copy) { - - /* "View.MemoryView":1317 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1318 - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * 
memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1319 - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1320 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1321 - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1315 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - } - - /* "View.MemoryView":1307 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1323 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - if (__pyx_t_2) { - - /* "View.MemoryView":1326 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 1326, __pyx_L1_error) - - /* "View.MemoryView":1327 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 1327, __pyx_L1_error) - - /* "View.MemoryView":1323 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1329 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1330 - * - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1331 - * refcount_copying(&dst, dtype_is_object, ndim, inc=False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1333 - * refcount_copying(&dst, dtype_is_object, ndim, inc=True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ 
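/* The direct-copy fast path above only fires when `slice_is_contig` reports
 * that src and dst share the same contiguous layout ('C' or 'F'); a single
 * memcpy of `slice_get_size(&src, ndim)` bytes then suffices. C-contiguity
 * means the strides describe a dense row-major layout, which can be checked
 * by walking the dimensions from the innermost outward; a sketch of such a
 * check (illustrative, not the generated `__pyx_memviewslice_is_contig`):
 */
#include <stddef.h>

static int is_c_contig_sketch(const ptrdiff_t *shape, const ptrdiff_t *strides,
                              int ndim, size_t itemsize)
{
    ptrdiff_t expected = (ptrdiff_t)itemsize;
    for (int i = ndim - 1; i >= 0; i--) {  /* innermost dimension first */
        if (shape[i] != 1 && strides[i] != expected)
            return 0;  /* gap or transposed axis: not C-contiguous */
        expected *= shape[i];
    }
    return 1;
}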
- free(__pyx_v_tmpdata); - - /* "View.MemoryView":1334 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1265 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1337 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) noexcept nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1341 - * int ndim_other) noexcept nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1343 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1344 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1345 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1346 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1348 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1349 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1350 - * for i in range(offset): - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * 
mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1351 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1337 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1359 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: # <<<<<<<<<<<<<< - * - * if dtype_is_object: - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - - /* "View.MemoryView":1361 - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) - * - */ - if (__pyx_v_dtype_is_object) { - - /* "View.MemoryView":1362 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1361 - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, dst.strides, ndim, inc) - * - */ - } - - /* "View.MemoryView":1359 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, int ndim, bint inc) noexcept nogil: # <<<<<<<<<<<<<< - * - * if dtype_is_object: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1365 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1368 - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1365 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) noexcept with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - 
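/* `broadcast_leading` above implements NumPy-style leading broadcasting for
 * the lower-dimensional operand of a copy: existing dimensions are shifted
 * toward the end of the arrays and `offset` new leading dimensions of extent
 * 1 are prepended. The stride stored for an extent-1 dimension is never used,
 * and suboffset -1 marks it as direct. A compact standalone sketch of the
 * same shuffle (illustrative, operating on raw arrays instead of a
 * __Pyx_memviewslice):
 */
#include <stddef.h>

static void broadcast_leading_sketch(ptrdiff_t *shape, ptrdiff_t *strides,
                                     ptrdiff_t *suboffsets,
                                     int ndim, int ndim_other)
{
    int offset = ndim_other - ndim;
    for (int i = ndim - 1; i >= 0; i--) {  /* shift existing dims right */
        shape[i + offset]      = shape[i];
        strides[i + offset]    = strides[i];
        suboffsets[i + offset] = suboffsets[i];
    }
    for (int i = 0; i < offset; i++) {     /* prepend extent-1 leading dims */
        shape[i]      = 1;
        strides[i]    = strides[0];        /* value irrelevant: extent is 1 */
        suboffsets[i] = -1;                /* direct (no indirection) */
    }
}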
#ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * - * for i in range(shape[0]): - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1376 - * cdef Py_ssize_t stride = strides[0] - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1377 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = (__pyx_v_ndim == 1); - if (__pyx_t_4) { - - /* "View.MemoryView":1378 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - if (__pyx_v_inc) { - - /* "View.MemoryView":1379 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1378 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1381 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1377 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1383 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += stride - */ - /*else*/ { - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1385 - * refcount_objects_in_slice(data, shape + 1, strides + 1, ndim - 1, inc) - * - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc) noexcept: - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1391 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void 
*item, - * bint dtype_is_object) noexcept nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1394 - * size_t itemsize, void *item, - * bint dtype_is_object) noexcept nogil: - * refcount_copying(dst, dtype_is_object, ndim, inc=False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, inc=True) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1395 - * bint dtype_is_object) noexcept nogil: - * refcount_copying(dst, dtype_is_object, ndim, inc=False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) # <<<<<<<<<<<<<< - * refcount_copying(dst, dtype_is_object, ndim, inc=True) - * - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1396 - * refcount_copying(dst, dtype_is_object, ndim, inc=False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, inc=True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1391 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) noexcept nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1400 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) noexcept nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1404 - * size_t itemsize, void *item) noexcept nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1405 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1407 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = (__pyx_v_ndim == 1); - if (__pyx_t_1) { - - /* "View.MemoryView":1408 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1409 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, 
__pyx_v_itemsize)); - - /* "View.MemoryView":1410 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1407 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1412 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) - * data += stride - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1413 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) # <<<<<<<<<<<<<< - * data += stride - * - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1414 - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1400 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) noexcept nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - 
CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, __pyx_nargs); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_tuple__8, Py_NE)); if (unlikely((__pyx_t_2 < 0))) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_2) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_1, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __pyx_v___pyx_PickleError = __pyx_t_1; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_3 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_v___pyx_PickleError, __pyx_t_1, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v___pyx_type}; - 
__pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v___pyx_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_2 = (__pyx_v___pyx_state != Py_None); - if (__pyx_t_2) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None) || __Pyx_RaiseUnexpectedTypeError("tuple", __pyx_v___pyx_state))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - int __pyx_lineno 
= 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = (__pyx_t_3 > 1); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_2 = __pyx_t_4; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_update); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_5 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_5}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 1+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 
= 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k__9; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if ((__pyx_t_4 < __pyx_t_5)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if ((__pyx_t_5 > __pyx_t_6)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # 
<<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = (__pyx_v_x == __pyx_v_y); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = (__pyx_v_x == 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = (__pyx_v_y == 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. 
- */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if ((__pyx_t_11 > __pyx_t_12)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = (__pyx_v_index != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = (__pyx_v_index == __pyx_v_y); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = ((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* 
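(Readability aid, not part of the deleted file's payload.) The hundreds of lines of argument-parsing, memoryview, and OpenMP scaffolding in this hunk all expand from roughly thirty lines of hand-written Cython. The sketch below reconstructs that source from the `monotonic_align/core.pyx` comments that Cython embeds in its generated C, above and below; the two header lines are an assumption, since the embedded comments only quote the file from line 7 onward. `maximum_path_each` runs a forward dynamic-programming pass that accumulates the best monotonic-alignment score into `value`, then backtracks from the last column to mark the chosen path; `maximum_path_c` fans that out over a batch.

```cython
# monotonic_align/core.pyx, reconstructed from the source comments
# embedded in the generated C of this hunk (readability aid only).
cimport cython                       # assumed header: not quoted in the comments
from cython.parallel import prange   # assumed header: not quoted in the comments

@cython.boundscheck(False)
@cython.wraparound(False)
cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil:
  cdef int x
  cdef int y
  cdef float v_prev
  cdef float v_cur
  cdef float tmp
  cdef int index = t_x - 1

  # Forward pass: value[y, x] accumulates the best monotonic-path score.
  for y in range(t_y):
    for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
      if x == y:
        v_cur = max_neg_val
      else:
        v_cur = value[y-1, x]        # stay in the same column
      if x == 0:
        if y == 0:
          v_prev = 0.
        else:
          v_prev = max_neg_val
      else:
        v_prev = value[y-1, x-1]     # advance one column
      value[y, x] += max(v_prev, v_cur)

  # Backtrack from the last column, marking the chosen path cells.
  for y in range(t_y - 1, -1, -1):
    path[y, index] = 1
    if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
      index = index - 1

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
  cdef int b = paths.shape[0]
  cdef int i
  # Each batch item is aligned independently over its own slice,
  # so the loop parallelizes with no shared state between iterations.
  for i in prange(b, nogil=True):
    maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
```

The `prange(b, nogil=True)` loop is what forces Cython to emit the `#pragma omp parallel` blocks, GIL handling, and per-thread error plumbing visible in the generated `maximum_path_c` below: a one-line parallel loop in the source accounts for most of that machinery.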
"monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save; - #endif - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - _save = NULL; - if (PyGILState_Check()) { - Py_UNBLOCK_THREADS - } - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - { - int __pyx_parallel_temp0 = ((int)0xbad0bad0); - const char *__pyx_parallel_filename = NULL; int __pyx_parallel_lineno = 0, __pyx_parallel_clineno = 0; - PyObject *__pyx_parallel_exc_type = NULL, *__pyx_parallel_exc_value = NULL, *__pyx_parallel_exc_tb = NULL; - int __pyx_parallel_why; - __pyx_parallel_why = 0; - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) private(__pyx_filename, __pyx_lineno, __pyx_clineno) shared(__pyx_parallel_why, __pyx_parallel_exc_type, __pyx_parallel_exc_value, __pyx_parallel_exc_tb) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - Py_BEGIN_ALLOW_THREADS - #endif /* _OPENMP */ - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - if (__pyx_parallel_why < 2) - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = 
__pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); if (unlikely(__Pyx_ErrOccurredWithGIL())) __PYX_ERR(0, 42, __pyx_L8_error) - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; __pyx_t_4.data = NULL; - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; __pyx_t_5.data = NULL; - goto __pyx_L11; - __pyx_L8_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - #ifdef _OPENMP - #pragma omp flush(__pyx_parallel_exc_type) - #endif /* _OPENMP */ - if (!__pyx_parallel_exc_type) { - __Pyx_ErrFetchWithState(&__pyx_parallel_exc_type, &__pyx_parallel_exc_value, &__pyx_parallel_exc_tb); - __pyx_parallel_filename = __pyx_filename; __pyx_parallel_lineno = __pyx_lineno; __pyx_parallel_clineno = __pyx_clineno; - __Pyx_GOTREF(__pyx_parallel_exc_type); - } - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_parallel_why = 4; - goto __pyx_L10; - __pyx_L10:; - #ifdef _OPENMP - #pragma omp critical(__pyx_parallel_lastprivates0) - #endif /* _OPENMP */ - { - __pyx_parallel_temp0 = __pyx_v_i; - } - __pyx_L11:; - #ifdef _OPENMP - #pragma omp flush(__pyx_parallel_why) - #endif /* _OPENMP */ - } - } - #ifdef _OPENMP - Py_END_ALLOW_THREADS - #else -{ -#ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - #endif /* _OPENMP */ - /* Clean up any temporaries */ - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; __pyx_t_4.data = NULL; - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; __pyx_t_5.data = NULL; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - #ifndef _OPENMP -} -#endif /* _OPENMP */ - } - } - if (__pyx_parallel_exc_type) { - /* This may have been overridden by a continue, break or return in another thread. Prefer the error. 
*/ - __pyx_parallel_why = 4; - } - if (__pyx_parallel_why) { - __pyx_v_i = __pyx_parallel_temp0; - switch (__pyx_parallel_why) { - case 4: - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_GIVEREF(__pyx_parallel_exc_type); - __Pyx_ErrRestoreWithState(__pyx_parallel_exc_type, __pyx_parallel_exc_value, __pyx_parallel_exc_tb); - __pyx_filename = __pyx_parallel_filename; __pyx_lineno = __pyx_parallel_lineno; __pyx_clineno = __pyx_parallel_clineno; - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - goto __pyx_L4_error; - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - if (_save) { - Py_BLOCK_THREADS - } - #endif - goto __pyx_L5; - } - __pyx_L4_error: { - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - if (_save) { - Py_BLOCK_THREADS - } - #endif - goto __pyx_L1_error; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - #ifdef WITH_THREAD - __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __PYX_XCLEAR_MEMVIEW(&__pyx_t_4, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_t_5, 1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - __pyx_L0:; -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_15monotonic_align_4core_1maximum_path_c = {"maximum_path_c", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_paths)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_values)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t_ys)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t_xs)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __PYX_XCLEAR_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_values, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __PYX_XCLEAR_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_values, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XCLEAR_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 38, __pyx_L1_error) - __pyx_t_1 = __Pyx_void_to_None(NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && (!PyType_IS_GC(Py_TYPE(o)) || !__Pyx_PyObject_GC_IsFinalized(o))) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_array) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - { - PyObject *etype, *eval, *etb; - 
PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - __Pyx_TypeName o_type_name; - o_type_name = __Pyx_PyType_GetName(Py_TYPE(o)); - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by " __Pyx_FMT_TYPENAME, o_type_name); - __Pyx_DECREF_TypeName(o_type_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - -static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_array_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_array_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -#if !CYTHON_COMPILING_IN_LIMITED_API - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; -#endif -static PyType_Slot __pyx_type___pyx_array_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_array}, - {Py_sq_length, (void *)__pyx_array___len__}, - {Py_sq_item, (void *)__pyx_sq_item_array}, - {Py_mp_length, (void *)__pyx_array___len__}, - {Py_mp_subscript, (void *)__pyx_array___getitem__}, - {Py_mp_ass_subscript, (void *)__pyx_mp_ass_subscript_array}, - {Py_tp_getattro, (void *)__pyx_tp_getattro_array}, - #if defined(Py_bf_getbuffer) - {Py_bf_getbuffer, (void *)__pyx_array_getbuffer}, - #endif - {Py_tp_methods, (void *)__pyx_methods_array}, - {Py_tp_getset, (void *)__pyx_getsets_array}, - {Py_tp_new, (void *)__pyx_tp_new_array}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_array_spec = { - "monotonic_align.core.array", - sizeof(struct __pyx_array_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_SEQUENCE, - __pyx_type___pyx_array_slots, -}; -#else - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, 
/*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_SEQUENCE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - 
#if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_Enum) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyObject *__pyx_specialmethod___pyx_MemviewEnum___repr__(PyObject *self, CYTHON_UNUSED PyObject *arg) { - return __pyx_MemviewEnum___repr__(self); -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__repr__", (PyCFunction)__pyx_specialmethod___pyx_MemviewEnum___repr__, METH_NOARGS|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type___pyx_MemviewEnum_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_Enum}, - {Py_tp_repr, (void *)__pyx_MemviewEnum___repr__}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_Enum}, - {Py_tp_clear, (void *)__pyx_tp_clear_Enum}, - {Py_tp_methods, (void *)__pyx_methods_Enum}, - {Py_tp_init, (void *)__pyx_MemviewEnum___init__}, - {Py_tp_new, (void *)__pyx_tp_new_Enum}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_MemviewEnum_spec = { - "monotonic_align.core.Enum", - sizeof(struct __pyx_MemviewEnum_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, - __pyx_type___pyx_MemviewEnum_slots, -}; -#else - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, 
/*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (likely(!__Pyx_PyType_HasFeature(t, Py_TPFLAGS_IS_ABSTRACT))) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - #endif - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_memoryview) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = 
((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - __Pyx_TypeName o_type_name; - o_type_name = __Pyx_PyType_GetName(Py_TYPE(o)); - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by " __Pyx_FMT_TYPENAME, o_type_name); - __Pyx_DECREF_TypeName(o_type_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyObject *__pyx_specialmethod___pyx_memoryview___repr__(PyObject *self, CYTHON_UNUSED PyObject *arg) { - return __pyx_memoryview___repr__(self); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"__repr__", (PyCFunction)__pyx_specialmethod___pyx_memoryview___repr__, METH_NOARGS|METH_COEXIST, 0}, - {"is_c_contig", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_is_c_contig, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"is_f_contig", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_is_f_contig, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"copy", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_copy, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"copy_fortran", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_memoryview_copy_fortran, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryview_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryview_3__setstate_cython__, 
__Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -#if !CYTHON_COMPILING_IN_LIMITED_API - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; -#endif -static PyType_Slot __pyx_type___pyx_memoryview_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_memoryview}, - {Py_tp_repr, (void *)__pyx_memoryview___repr__}, - {Py_sq_length, (void *)__pyx_memoryview___len__}, - {Py_sq_item, (void *)__pyx_sq_item_memoryview}, - {Py_mp_length, (void *)__pyx_memoryview___len__}, - {Py_mp_subscript, (void *)__pyx_memoryview___getitem__}, - {Py_mp_ass_subscript, (void *)__pyx_mp_ass_subscript_memoryview}, - {Py_tp_str, (void *)__pyx_memoryview___str__}, - #if defined(Py_bf_getbuffer) - {Py_bf_getbuffer, (void *)__pyx_memoryview_getbuffer}, - #endif - {Py_tp_traverse, (void *)__pyx_tp_traverse_memoryview}, - {Py_tp_clear, (void *)__pyx_tp_clear_memoryview}, - {Py_tp_methods, (void *)__pyx_methods_memoryview}, - {Py_tp_getset, (void *)__pyx_getsets_memoryview}, - {Py_tp_new, (void *)__pyx_tp_new_memoryview}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_memoryview_spec = { - "monotonic_align.core.memoryview", - sizeof(struct __pyx_memoryview_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, - __pyx_type___pyx_memoryview_slots, -}; -#else - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""memoryview", /*tp_name*/ - sizeof(struct 
__pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc__memoryviewslice) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int 
__pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XCLEAR_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {"__setstate_cython__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type___pyx_memoryviewslice_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc__memoryviewslice}, - {Py_tp_doc, (void *)PyDoc_STR("Internal class for passing memoryview slices to Python")}, - {Py_tp_traverse, (void *)__pyx_tp_traverse__memoryviewslice}, - {Py_tp_clear, (void *)__pyx_tp_clear__memoryviewslice}, - {Py_tp_methods, (void *)__pyx_methods__memoryviewslice}, - {Py_tp_new, (void *)__pyx_tp_new__memoryviewslice}, - {0, 0}, -}; -static PyType_Spec __pyx_type___pyx_memoryviewslice_spec = { - "monotonic_align.core._memoryviewslice", - sizeof(struct __pyx_memoryviewslice_obj), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_SEQUENCE, - __pyx_type___pyx_memoryviewslice_slots, -}; -#else - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.""_memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY || 0 - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY || 0 - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_SEQUENCE, /*tp_flags*/ - PyDoc_STR("Internal class for passing memoryview slices to Python"), /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, 
/*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static int __Pyx_CreateStringTabAndInitStrings(void) { - __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_kp_u_, __pyx_k_, sizeof(__pyx_k_), 0, 1, 0, 0}, - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_All_dimensions_preceding_dimensi, __pyx_k_All_dimensions_preceding_dimensi, sizeof(__pyx_k_All_dimensions_preceding_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_AssertionError, __pyx_k_AssertionError, sizeof(__pyx_k_AssertionError), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_u_Cannot_index_with_type, __pyx_k_Cannot_index_with_type, sizeof(__pyx_k_Cannot_index_with_type), 0, 1, 0, 0}, - {&__pyx_kp_s_Cannot_transpose_memoryview_with, __pyx_k_Cannot_transpose_memoryview_with, sizeof(__pyx_k_Cannot_transpose_memoryview_with), 0, 0, 1, 0}, - {&__pyx_kp_s_Dimension_d_is_not_direct, __pyx_k_Dimension_d_is_not_direct, sizeof(__pyx_k_Dimension_d_is_not_direct), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_k_Incompatible_checksums_0x_x_vs_0, sizeof(__pyx_k_Incompatible_checksums_0x_x_vs_0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Index_out_of_bounds_axis_d, __pyx_k_Index_out_of_bounds_axis_d, sizeof(__pyx_k_Index_out_of_bounds_axis_d), 0, 0, 1, 0}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_u_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, 
sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 1, 0, 0}, - {&__pyx_kp_u_Invalid_shape_in_axis, __pyx_k_Invalid_shape_in_axis, sizeof(__pyx_k_Invalid_shape_in_axis), 0, 1, 0, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_u_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 1, 0, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_Sequence, __pyx_k_Sequence, sizeof(__pyx_k_Sequence), 0, 0, 1, 1}, - {&__pyx_kp_s_Step_may_not_be_zero_axis_d, __pyx_k_Step_may_not_be_zero_axis_d, sizeof(__pyx_k_Step_may_not_be_zero_axis_d), 0, 0, 1, 0}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_kp_u__2, __pyx_k__2, sizeof(__pyx_k__2), 0, 1, 0, 0}, - {&__pyx_n_s__23, __pyx_k__23, sizeof(__pyx_k__23), 0, 0, 1, 1}, - {&__pyx_n_s__3, __pyx_k__3, sizeof(__pyx_k__3), 0, 0, 1, 1}, - {&__pyx_kp_u__6, __pyx_k__6, sizeof(__pyx_k__6), 0, 1, 0, 0}, - {&__pyx_kp_u__7, __pyx_k__7, sizeof(__pyx_k__7), 0, 1, 0, 0}, - {&__pyx_n_s_abc, __pyx_k_abc, sizeof(__pyx_k_abc), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_kp_u_and, __pyx_k_and, sizeof(__pyx_k_and), 0, 1, 0, 0}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1}, - {&__pyx_kp_s_collections_abc, __pyx_k_collections_abc, sizeof(__pyx_k_collections_abc), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_core_pyx, __pyx_k_core_pyx, sizeof(__pyx_k_core_pyx), 0, 0, 1, 0}, - {&__pyx_n_s_count, __pyx_k_count, sizeof(__pyx_k_count), 0, 0, 1, 1}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_kp_u_disable, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_kp_u_enable, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {&__pyx_n_s_encode, 
__pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_kp_u_gc, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_u_got, __pyx_k_got, sizeof(__pyx_k_got), 0, 1, 0, 0}, - {&__pyx_kp_u_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 1, 0, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_index, __pyx_k_index, sizeof(__pyx_k_index), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_kp_u_isenabled, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_maximum_path_c, __pyx_k_maximum_path_c, sizeof(__pyx_k_maximum_path_c), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_monotonic_align_core, __pyx_k_monotonic_align_core, sizeof(__pyx_k_monotonic_align_core), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, 
__pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_register, __pyx_k_register, sizeof(__pyx_k_register), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_sys, __pyx_k_sys, sizeof(__pyx_k_sys), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {&__pyx_n_s_version_info, __pyx_k_version_info, sizeof(__pyx_k_version_info), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} - }; - return __Pyx_InitStrings(__pyx_string_tab); -} -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin___import__ = __Pyx_GetBuiltinName(__pyx_n_s_import); if (!__pyx_builtin___import__) __PYX_ERR(1, 100, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 156, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 159, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_AssertionError = __Pyx_GetBuiltinName(__pyx_n_s_AssertionError); if (!__pyx_builtin_AssertionError) __PYX_ERR(1, 373, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 408, __pyx_L1_error) - 
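- /* Descriptive note: every builtin this module touches (range, ValueError, MemoryError, Ellipsis, id, IndexError, ...) is resolved exactly once here and cached in a static global, so later uses are plain pointer loads; the __PYX_ERR(file_index, line, label) macro records the originating .pyx/.pxd source line for the traceback and jumps to the shared error label. */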
__pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 618, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 914, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":582 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__4 = PyTuple_New(1); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 582, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__4, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":679 - * tup = index if isinstance(index, tuple) else (index,) - * - * result = [slice(None)] * ndim # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_slice__5 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__5)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__5); - __Pyx_GIVEREF(__pyx_slice__5); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0x82a3537, 0x6ae9995, 0xb068931): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError, "Incompatible checksums (0x%x vs (0x82a3537, 0x6ae9995, 0xb068931) = (name))" % __pyx_checksum - */ - __pyx_tuple__8 = PyTuple_Pack(3, __pyx_int_136983863, __pyx_int_112105877, __pyx_int_184977713); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_n_s_sys); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - __pyx_tuple__11 = PyTuple_Pack(2, __pyx_int_3, __pyx_int_3); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":101 - * try: - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence # <<<<<<<<<<<<<< - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_collections_abc); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":103 - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence # <<<<<<<<<<<<<< - * except: - * - */ - __pyx_tuple__13 = PyTuple_Pack(1, __pyx_n_s_collections); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 103, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* 
"View.MemoryView":309 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "View.MemoryView":310 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":311 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__16 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(1, 311, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - - /* "View.MemoryView":314 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "View.MemoryView":315 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__19 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_codeobj__20 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__19, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__20)) __PYX_ERR(1, 1, __pyx_L1_error) - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - __pyx_tuple__21 = PyTuple_Pack(4, __pyx_n_s_paths, __pyx_n_s_values, __pyx_n_s_t_ys, __pyx_n_s_t_xs); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - __pyx_codeobj__22 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_core_pyx, __pyx_n_s_maximum_path_c, 38, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__22)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* 
#### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitConstants(void) { - if (__Pyx_CreateStringTabAndInitStrings() < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_112105877 = PyInt_FromLong(112105877L); if (unlikely(!__pyx_int_112105877)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_136983863 = PyInt_FromLong(136983863L); if (unlikely(!__pyx_int_136983863)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* AssertionsEnabled.init */ - __Pyx_init_assertions_enabled(); - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - /* InitThreads.init */ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __pyx_collections_abc_Sequence = Py_None; Py_INCREF(Py_None); - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - #if CYTHON_USE_TYPE_SPECS - __pyx_array_type = (PyTypeObject *) 
__Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_array_spec, NULL); if (unlikely(!__pyx_array_type)) __PYX_ERR(1, 114, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_array_type->tp_as_buffer = &__pyx_tp_as_buffer_array; - if (!__pyx_array_type->tp_as_buffer->bf_releasebuffer && __pyx_array_type->tp_base->tp_as_buffer && __pyx_array_type->tp_base->tp_as_buffer->bf_releasebuffer) { - __pyx_array_type->tp_as_buffer->bf_releasebuffer = __pyx_array_type->tp_base->tp_as_buffer->bf_releasebuffer; - } - #elif defined(Py_bf_getbuffer) && defined(Py_bf_releasebuffer) - /* PY_VERSION_HEX >= 0x03090000 || Py_LIMITED_API >= 0x030B0000 */ - #elif defined(_MSC_VER) - #pragma message ("The buffer protocol is not supported in the Limited C-API < 3.11.") - #else - #warning "The buffer protocol is not supported in the Limited C-API < 3.11." - #endif - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_array_spec, __pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #else - __pyx_array_type = &__pyx_type___pyx_array; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_array_type->tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_array_type, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_array_type) < 0) __PYX_ERR(1, 114, __pyx_L1_error) - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_MemviewEnum_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_MemviewEnum_spec, NULL); if (unlikely(!__pyx_MemviewEnum_type)) __PYX_ERR(1, 302, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_MemviewEnum_spec, __pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #else - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_MemviewEnum_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_MemviewEnum_type->tp_dictoffset && __pyx_MemviewEnum_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_MemviewEnum_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_MemviewEnum_type) < 0) __PYX_ERR(1, 302, __pyx_L1_error) - #endif - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct 
__pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - __pyx_vtable_memoryview._get_base = (PyObject *(*)(struct __pyx_memoryview_obj *))__pyx_memoryview__get_base; - #if CYTHON_USE_TYPE_SPECS - __pyx_memoryview_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type___pyx_memoryview_spec, NULL); if (unlikely(!__pyx_memoryview_type)) __PYX_ERR(1, 337, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_memoryview_type->tp_as_buffer = &__pyx_tp_as_buffer_memoryview; - if (!__pyx_memoryview_type->tp_as_buffer->bf_releasebuffer && __pyx_memoryview_type->tp_base->tp_as_buffer && __pyx_memoryview_type->tp_base->tp_as_buffer->bf_releasebuffer) { - __pyx_memoryview_type->tp_as_buffer->bf_releasebuffer = __pyx_memoryview_type->tp_base->tp_as_buffer->bf_releasebuffer; - } - #elif defined(Py_bf_getbuffer) && defined(Py_bf_releasebuffer) - /* PY_VERSION_HEX >= 0x03090000 || Py_LIMITED_API >= 0x030B0000 */ - #elif defined(_MSC_VER) - #pragma message ("The buffer protocol is not supported in the Limited C-API < 3.11.") - #else - #warning "The buffer protocol is not supported in the Limited C-API < 3.11." - #endif - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_memoryview_spec, __pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #else - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_memoryview_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_memoryview_type->tp_dictoffset && __pyx_memoryview_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_memoryview_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - if (__Pyx_SetVtable(__pyx_memoryview_type, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_memoryview_type) < 0) __PYX_ERR(1, 337, __pyx_L1_error) - #endif - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_vtable__memoryviewslice.__pyx_base._get_base = (PyObject *(*)(struct __pyx_memoryview_obj *))__pyx_memoryviewslice__get_base; - #if CYTHON_USE_TYPE_SPECS - __pyx_t_1 = PyTuple_Pack(1, (PyObject *)__pyx_memoryview_type); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 952, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_memoryviewslice_type = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, 
&__pyx_type___pyx_memoryviewslice_spec, __pyx_t_1); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_memoryviewslice_type)) __PYX_ERR(1, 952, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type___pyx_memoryviewslice_spec, __pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #else - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - __pyx_memoryviewslice_type->tp_base = __pyx_memoryview_type; - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_memoryviewslice_type->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_memoryviewslice_type->tp_dictoffset && __pyx_memoryviewslice_type->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_memoryviewslice_type->tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - #endif - if (__Pyx_SetVtable(__pyx_memoryviewslice_type, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_MergeVtables(__pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if (__Pyx_setup_reduce((PyObject *) __pyx_memoryviewslice_type) < 0) __PYX_ERR(1, 952, __pyx_L1_error) - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif 
PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - #if CYTHON_USE_MODULE_STATE - int 
pystate_addmodule_run = 0; - #endif - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - static PyThread_type_lock __pyx_t_8[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_USE_MODULE_STATE - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - __pyx_t_1 = 0; /* transfer ownership from __pyx_t_1 to core pseudovariable */ - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - pystate_addmodule_run = 1; - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - 
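- /* Descriptive note: each optional runtime helper below (CyFunction, FusedFunction, coroutine, generator, async-generator support) is initialised only when the corresponding __Pyx_*_USED feature macro was emitted for this module. */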
#endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. ---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely((PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely((__Pyx_modinit_type_init_code() < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_version_info); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyObject_RichCompare(__pyx_t_5, __pyx_tuple__11, Py_GE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(1, 100, __pyx_L2_error) - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - if (__pyx_t_6) { - - /* "View.MemoryView":101 - * try: - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence # <<<<<<<<<<<<<< - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - */ - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_abc); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_Sequence); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 101, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":100 - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: - * if __import__("sys").version_info >= (3, 3): # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":103 - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence # <<<<<<<<<<<<<< - * except: - * - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin___import__, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 103, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_Sequence); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 103, __pyx_L2_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "View.MemoryView":104 - * else: - * __pyx_collections_abc_Sequence = __import__("collections").Sequence - * except: # <<<<<<<<<<<<<< - * - * __pyx_collections_abc_Sequence = None - */ - /*except:*/ { - __Pyx_AddTraceback("View.MemoryView", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_4, &__pyx_t_7) < 0) __PYX_ERR(1, 104, __pyx_L4_except_error) - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_7); - - /* "View.MemoryView":106 - * except: - * - * __pyx_collections_abc_Sequence = None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_INCREF(Py_None); - __Pyx_XGOTREF(__pyx_collections_abc_Sequence); - __Pyx_DECREF_SET(__pyx_collections_abc_Sequence, Py_None); - __Pyx_GIVEREF(Py_None); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto 
__pyx_L3_exception_handled; - } - - /* "View.MemoryView":99 - * - * cdef object __pyx_collections_abc_Sequence "__pyx_collections_abc_Sequence" - * try: # <<<<<<<<<<<<<< - * if __import__("sys").version_info >= (3, 3): - * __pyx_collections_abc_Sequence = __import__("collections.abc").abc.Sequence - */ - __pyx_L4_except_error:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - __pyx_L7_try_end:; - } - - /* "View.MemoryView":241 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_2, &__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_1); - /*try:*/ { - - /* "View.MemoryView":242 - * - * try: - * count = __pyx_collections_abc_Sequence.count # <<<<<<<<<<<<<< - * index = __pyx_collections_abc_Sequence.index - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_count); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 242, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_array_type->tp_dict, __pyx_n_s_count, __pyx_t_7) < 0) __PYX_ERR(1, 242, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":243 - * try: - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index # <<<<<<<<<<<<<< - * except: - * pass - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_index); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 243, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_array_type->tp_dict, __pyx_n_s_index, __pyx_t_7) < 0) __PYX_ERR(1, 243, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":241 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L16_try_end; - __pyx_L11_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":244 - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - * except: # <<<<<<<<<<<<<< - * pass - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L12_exception_handled; - } - __pyx_L12_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_2, __pyx_t_1); - __pyx_L16_try_end:; - } - - /* "View.MemoryView":309 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_7); - 
__Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":310 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":311 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__16, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 311, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":314 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 314, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":315 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_7); - __Pyx_GIVEREF(__pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":323 - * - * - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[8] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":324 - * - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[8] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_8[0] = PyThread_allocate_lock(); - __pyx_t_8[1] = PyThread_allocate_lock(); - __pyx_t_8[2] = PyThread_allocate_lock(); - __pyx_t_8[3] = PyThread_allocate_lock(); - __pyx_t_8[4] = PyThread_allocate_lock(); - __pyx_t_8[5] = PyThread_allocate_lock(); - __pyx_t_8[6] = PyThread_allocate_lock(); - __pyx_t_8[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_8, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":982 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "View.MemoryView":983 - * - * try: - * count = __pyx_collections_abc_Sequence.count # <<<<<<<<<<<<<< - * index = __pyx_collections_abc_Sequence.index - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_count); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 983, __pyx_L17_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_memoryviewslice_type->tp_dict, 
__pyx_n_s_count, __pyx_t_7) < 0) __PYX_ERR(1, 983, __pyx_L17_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "View.MemoryView":984 - * try: - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index # <<<<<<<<<<<<<< - * except: - * pass - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_index); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 984, __pyx_L17_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_index, __pyx_t_7) < 0) __PYX_ERR(1, 984, __pyx_L17_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "View.MemoryView":982 - * - * - * try: # <<<<<<<<<<<<<< - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L22_try_end; - __pyx_L17_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":985 - * count = __pyx_collections_abc_Sequence.count - * index = __pyx_collections_abc_Sequence.index - * except: # <<<<<<<<<<<<<< - * pass - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L18_exception_handled; - } - __pyx_L18_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - __pyx_L22_try_end:; - } - - /* "View.MemoryView":988 - * pass - * - * try: # <<<<<<<<<<<<<< - * if __pyx_collections_abc_Sequence: - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_2, &__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_1); - /*try:*/ { - - /* "View.MemoryView":989 - * - * try: - * if __pyx_collections_abc_Sequence: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_collections_abc_Sequence); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(1, 989, __pyx_L23_error) - if (__pyx_t_6) { - - /* "View.MemoryView":993 - * - * - * __pyx_collections_abc_Sequence.register(_memoryviewslice) # <<<<<<<<<<<<<< - * __pyx_collections_abc_Sequence.register(array) - * except: - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_register); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 993, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_7, ((PyObject *)__pyx_memoryviewslice_type)); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 993, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":994 - * - * __pyx_collections_abc_Sequence.register(_memoryviewslice) - * __pyx_collections_abc_Sequence.register(array) # <<<<<<<<<<<<<< - * except: - * pass # ignore failure, it's a minor issue - */ - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_collections_abc_Sequence, __pyx_n_s_register); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 994, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_4, ((PyObject *)__pyx_array_type)); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 994, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 
0; - - /* "View.MemoryView":989 - * - * try: - * if __pyx_collections_abc_Sequence: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":988 - * pass - * - * try: # <<<<<<<<<<<<<< - * if __pyx_collections_abc_Sequence: - * - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - goto __pyx_L28_try_end; - __pyx_L23_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":995 - * __pyx_collections_abc_Sequence.register(_memoryviewslice) - * __pyx_collections_abc_Sequence.register(array) - * except: # <<<<<<<<<<<<<< - * pass # ignore failure, it's a minor issue - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L24_exception_handled; - } - __pyx_L24_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_2, __pyx_t_1); - __pyx_L28_try_end:; - } - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_7 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_7) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k__9 = (-1e9); - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - __pyx_t_7 = __Pyx_CyFunction_New(&__pyx_mdef_15monotonic_align_4core_1maximum_path_c, 0, __pyx_n_s_maximum_path_c, NULL, __pyx_n_s_monotonic_align_core, __pyx_d, ((PyObject *)__pyx_codeobj__22)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_maximum_path_c, __pyx_t_7) < 0) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_7 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_7) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #else - Py_DECREF(__pyx_m); - if (pystate_addmodule_run) { - PyObject *tp, *value, *tb; - PyErr_Fetch(&tp, &value, &tb); - PyState_RemoveModule(&__pyx_moduledef); - PyErr_Restore(tp, value, tb); - } - #endif 
- } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} -/* #### Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#ifdef _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - int result; - PyObject *exc_type; -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *current_exception = tstate->current_exception; - if (unlikely(!current_exception)) return 0; - exc_type = (PyObject*) Py_TYPE(current_exception); - if (exc_type == err) return 1; -#else - exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; -#endif - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(exc_type); - #endif - if (unlikely(PyTuple_Check(err))) { - result = __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - } else { - result = __Pyx_PyErr_GivenExceptionMatches(exc_type, err); - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(exc_type); - #endif - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *tmp_value; - assert(type == NULL || (value != NULL && type == (PyObject*) Py_TYPE(value))); - if (value) { - #if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(((PyBaseExceptionObject*) value)->traceback != tb)) - #endif - PyException_SetTraceback(value, tb); - } - tmp_value = tstate->current_exception; - tstate->current_exception = value; - Py_XDECREF(tmp_value); -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#endif -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject* exc_value; - exc_value = tstate->current_exception; - tstate->current_exception = 0; - *value = exc_value; - *type = NULL; - *tb = NULL; - if (exc_value) { - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - #if CYTHON_COMPILING_IN_CPYTHON - *tb = ((PyBaseExceptionObject*) 
exc_value)->traceback; - Py_XINCREF(*tb); - #else - *tb = PyException_GetTraceback(exc_value); - #endif - } -#else - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#endif -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return 
(equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? 
"" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - if (kwds_is_tuple) { - if (pos >= PyTuple_GET_SIZE(kwds)) break; - key = PyTuple_GET_ITEM(kwds, pos); - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; - } - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - __Pyx_TypeName type_name; - __Pyx_TypeName obj_type_name; - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - type_name = __Pyx_PyType_GetName(type); - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected " __Pyx_FMT_TYPENAME - ", got " __Pyx_FMT_TYPENAME ")", name, type_name, obj_type_name); - __Pyx_DECREF_TypeName(type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return 0; -} - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - __Pyx_PyThreadState_declare - CYTHON_UNUSED_VAR(cause); - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } 
else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { - #if PY_VERSION_HEX >= 0x030C00A6 - PyException_SetTraceback(value, tb); - #elif CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. 
- */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = 
PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]); - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - Py_DECREF(argstuple); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (__Pyx_IsCyOrPyCFunction(func)) -#else - if (PyCFunction_Check(func)) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - } - else if (nargs == 1 && kwargs == NULL) { - if (PyCFunction_Check(func)) - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, args[0]); - } - } - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - #if CYTHON_VECTORCALL - vectorcallfunc f = _PyVectorcall_Function(func); - if (f) { - return f(func, args, (size_t)nargs, kwargs); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, (size_t)nargs, kwargs); - } - #endif - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); -} - -/* RaiseUnexpectedTypeError */ -static int -__Pyx_RaiseUnexpectedTypeError(const char *expected, PyObject *obj) -{ - __Pyx_TypeName obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, "Expected %s, got " __Pyx_FMT_TYPENAME, - expected, obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return 0; -} - -/* CIntToDigits */ -static const char DIGIT_PAIRS_10[2*10*10+1] = { - "00010203040506070809" - "10111213141516171819" - "20212223242526272829" - "30313233343536373839" - "40414243444546474849" - "50515253545556575859" - "60616263646566676869" - "70717273747576777879" - "80818283848586878889" - "90919293949596979899" -}; -static const char DIGIT_PAIRS_8[2*8*8+1] = { - "0001020304050607" - "1011121314151617" - "2021222324252627" - "3031323334353637" - "4041424344454647" - "5051525354555657" - 
"6061626364656667" - "7071727374757677" -}; -static const char DIGITS_HEX[2*16+1] = { - "0123456789abcdef" - "0123456789ABCDEF" -}; - -/* BuildPyUnicode */ -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char) { - PyObject *uval; - Py_ssize_t uoffset = ulength - clength; -#if CYTHON_USE_UNICODE_INTERNALS - Py_ssize_t i; -#if CYTHON_PEP393_ENABLED - void *udata; - uval = PyUnicode_New(ulength, 127); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_DATA(uval); -#else - Py_UNICODE *udata; - uval = PyUnicode_FromUnicode(NULL, ulength); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_AS_UNICODE(uval); -#endif - if (uoffset > 0) { - i = 0; - if (prepend_sign) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, 0, '-'); - i++; - } - for (; i < uoffset; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, i, padding_char); - } - } - for (i=0; i < clength; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, uoffset+i, chars[i]); - } -#else - { - PyObject *sign = NULL, *padding = NULL; - uval = NULL; - if (uoffset > 0) { - prepend_sign = !!prepend_sign; - if (uoffset > prepend_sign) { - padding = PyUnicode_FromOrdinal(padding_char); - if (likely(padding) && uoffset > prepend_sign + 1) { - PyObject *tmp; - PyObject *repeat = PyInt_FromSsize_t(uoffset - prepend_sign); - if (unlikely(!repeat)) goto done_or_error; - tmp = PyNumber_Multiply(padding, repeat); - Py_DECREF(repeat); - Py_DECREF(padding); - padding = tmp; - } - if (unlikely(!padding)) goto done_or_error; - } - if (prepend_sign) { - sign = PyUnicode_FromOrdinal('-'); - if (unlikely(!sign)) goto done_or_error; - } - } - uval = PyUnicode_DecodeASCII(chars, clength, NULL); - if (likely(uval) && padding) { - PyObject *tmp = PyNumber_Add(padding, uval); - Py_DECREF(uval); - uval = tmp; - } - if (likely(uval) && sign) { - PyObject *tmp = PyNumber_Add(sign, uval); - Py_DECREF(uval); - uval = tmp; - } -done_or_error: - Py_XDECREF(padding); - Py_XDECREF(sign); - } -#endif - return uval; -} - -/* CIntToPyUnicode */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_int(int value, Py_ssize_t width, char padding_char, char format_char) { - char digits[sizeof(int)*3+2]; - char *dpos, *end = digits + sizeof(int)*3+2; - const char *hex_digits = DIGITS_HEX; - Py_ssize_t length, ulength; - int prepend_sign, last_one_off; - int remaining; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (format_char == 'X') { - hex_digits += 16; - format_char = 'x'; - } - remaining = value; - last_one_off = 0; - dpos = end; - do { - int digit_pos; - switch (format_char) { - case 'o': - digit_pos = abs((int)(remaining % (8*8))); - remaining = (int) (remaining / (8*8)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_8 + digit_pos * 2, 2); - last_one_off = (digit_pos < 8); - break; - case 'd': - digit_pos = abs((int)(remaining % (10*10))); - remaining = (int) (remaining / (10*10)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_10 + digit_pos * 2, 2); - last_one_off = (digit_pos < 10); - break; - case 'x': - *(--dpos) = hex_digits[abs((int)(remaining % 16))]; - remaining = (int) (remaining / 16); - break; - default: - assert(0); - break; - } - } while (unlikely(remaining != 0)); - assert(!last_one_off || *dpos == '0'); - dpos += last_one_off; 
- length = end - dpos; - ulength = length; - prepend_sign = 0; - if (!is_unsigned && value <= neg_one) { - if (padding_char == ' ' || width <= length + 1) { - *(--dpos) = '-'; - ++length; - } else { - prepend_sign = 1; - } - ++ulength; - } - if (width > ulength) { - ulength = width; - } - if (ulength == 1) { - return PyUnicode_FromOrdinal(*dpos); - } - return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, padding_char); -} - -/* CIntToPyUnicode */ -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_From_Py_ssize_t(Py_ssize_t value, Py_ssize_t width, char padding_char, char format_char) { - char digits[sizeof(Py_ssize_t)*3+2]; - char *dpos, *end = digits + sizeof(Py_ssize_t)*3+2; - const char *hex_digits = DIGITS_HEX; - Py_ssize_t length, ulength; - int prepend_sign, last_one_off; - Py_ssize_t remaining; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const Py_ssize_t neg_one = (Py_ssize_t) -1, const_zero = (Py_ssize_t) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (format_char == 'X') { - hex_digits += 16; - format_char = 'x'; - } - remaining = value; - last_one_off = 0; - dpos = end; - do { - int digit_pos; - switch (format_char) { - case 'o': - digit_pos = abs((int)(remaining % (8*8))); - remaining = (Py_ssize_t) (remaining / (8*8)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_8 + digit_pos * 2, 2); - last_one_off = (digit_pos < 8); - break; - case 'd': - digit_pos = abs((int)(remaining % (10*10))); - remaining = (Py_ssize_t) (remaining / (10*10)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_10 + digit_pos * 2, 2); - last_one_off = (digit_pos < 10); - break; - case 'x': - *(--dpos) = hex_digits[abs((int)(remaining % 16))]; - remaining = (Py_ssize_t) (remaining / 16); - break; - default: - assert(0); - break; - } - } while (unlikely(remaining != 0)); - assert(!last_one_off || *dpos == '0'); - dpos += last_one_off; - length = end - dpos; - ulength = length; - prepend_sign = 0; - if (!is_unsigned && value <= neg_one) { - if (padding_char == ' ' || width <= length + 1) { - *(--dpos) = '-'; - ++length; - } else { - prepend_sign = 1; - } - ++ulength; - } - if (width > ulength) { - ulength = width; - } - if (ulength == 1) { - return PyUnicode_FromOrdinal(*dpos); - } - return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, padding_char); -} - -/* JoinPyUnicode */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char) { -#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyObject *result_uval; - int result_ukind, kind_shift; - Py_ssize_t i, char_pos; - void *result_udata; - CYTHON_MAYBE_UNUSED_VAR(max_char); -#if CYTHON_PEP393_ENABLED - result_uval = PyUnicode_New(result_ulength, max_char); - if (unlikely(!result_uval)) return NULL; - result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND; - kind_shift = (result_ukind == PyUnicode_4BYTE_KIND) ? 2 : result_ukind - 1; - result_udata = PyUnicode_DATA(result_uval); -#else - result_uval = PyUnicode_FromUnicode(NULL, result_ulength); - if (unlikely(!result_uval)) return NULL; - result_ukind = sizeof(Py_UNICODE); - kind_shift = (result_ukind == 4) ? 
2 : result_ukind - 1; - result_udata = PyUnicode_AS_UNICODE(result_uval); -#endif - assert(kind_shift == 2 || kind_shift == 1 || kind_shift == 0); - char_pos = 0; - for (i=0; i < value_count; i++) { - int ukind; - Py_ssize_t ulength; - void *udata; - PyObject *uval = PyTuple_GET_ITEM(value_tuple, i); - if (unlikely(__Pyx_PyUnicode_READY(uval))) - goto bad; - ulength = __Pyx_PyUnicode_GET_LENGTH(uval); - if (unlikely(!ulength)) - continue; - if (unlikely((PY_SSIZE_T_MAX >> kind_shift) - ulength < char_pos)) - goto overflow; - ukind = __Pyx_PyUnicode_KIND(uval); - udata = __Pyx_PyUnicode_DATA(uval); - if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) { - memcpy((char *)result_udata + (char_pos << kind_shift), udata, (size_t) (ulength << kind_shift)); - } else { - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters) - _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength); - #else - Py_ssize_t j; - for (j=0; j < ulength; j++) { - Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j); - __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar); - } - #endif - } - char_pos += ulength; - } - return result_uval; -overflow: - PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string"); -bad: - Py_DECREF(result_uval); - return NULL; -#else - CYTHON_UNUSED_VAR(max_char); - CYTHON_UNUSED_VAR(result_ulength); - CYTHON_UNUSED_VAR(value_count); - return PyUnicode_Join(__pyx_empty_unicode, value_tuple); -#endif -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || 
PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr = NULL; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* KeywordStringCheck */ -static int __Pyx_CheckKeywordStrings( - PyObject *kw, - const char* function_name, - int kw_allowed) -{ - PyObject* key = 0; - Py_ssize_t pos = 0; -#if CYTHON_COMPILING_IN_PYPY - if (!kw_allowed && PyDict_Next(kw, &pos, &key, 0)) - goto invalid_keyword; - return 1; -#else - if (CYTHON_METH_FASTCALL && 
likely(PyTuple_Check(kw))) { - if (unlikely(PyTuple_GET_SIZE(kw) == 0)) - return 1; - if (!kw_allowed) { - key = PyTuple_GET_ITEM(kw, 0); - goto invalid_keyword; - } -#if PY_VERSION_HEX < 0x03090000 - for (pos = 0; pos < PyTuple_GET_SIZE(kw); pos++) { - key = PyTuple_GET_ITEM(kw, pos); - if (unlikely(!PyUnicode_Check(key))) - goto invalid_keyword_type; - } -#endif - return 1; - } - while (PyDict_Next(kw, &pos, &key, 0)) { - #if PY_MAJOR_VERSION < 3 - if (unlikely(!PyString_Check(key))) - #endif - if (unlikely(!PyUnicode_Check(key))) - goto invalid_keyword_type; - } - if (!kw_allowed && unlikely(key)) - goto invalid_keyword; - return 1; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - return 0; -#endif -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif - return 0; -} - -/* DivInt[Py_ssize_t] */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r; -#if CYTHON_USE_TYPE_SLOTS - if (likely(PyString_Check(n))) { - r = __Pyx_PyObject_GetAttrStrNoError(o, n); - if (unlikely(!r) && likely(!PyErr_Occurred())) { - r = __Pyx_NewRef(d); - } - return r; - } -#endif - r = PyObject_GetAttr(o, n); - return (likely(r)) ? r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - __Pyx_TypeName obj_type_name; - __Pyx_TypeName type_name; - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - type_name = __Pyx_PyType_GetName(type); - PyErr_Format(PyExc_TypeError, - "Cannot convert " __Pyx_FMT_TYPENAME " to " __Pyx_FMT_TYPENAME, - obj_type_name, type_name); - __Pyx_DECREF_TypeName(obj_type_name); - __Pyx_DECREF_TypeName(type_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_value == NULL || exc_info->exc_value == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - PyObject *exc_value = exc_info->exc_value; - if (exc_value == NULL || exc_value == Py_None) { - *value = NULL; - *type = NULL; - *tb = NULL; - } else { - *value = exc_value; - Py_INCREF(*value); - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - *tb = PyException_GetTraceback(exc_value); - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #endif -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - PyObject *tmp_value = exc_info->exc_value; - exc_info->exc_value = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); - #else - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - #endif -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type 
= NULL, *local_value, *local_tb = NULL; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030C00A6 - local_value = tstate->current_exception; - tstate->current_exception = 0; - if (likely(local_value)) { - local_type = (PyObject*) Py_TYPE(local_value); - Py_INCREF(local_type); - local_tb = PyException_GetTraceback(local_value); - } - #else - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - #endif -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE && PY_VERSION_HEX >= 0x030C00A6 - if (unlikely(tstate->current_exception)) -#elif CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - #if PY_VERSION_HEX >= 0x030B00a4 - tmp_value = exc_info->exc_value; - exc_info->exc_value = local_value; - tmp_type = NULL; - tmp_tb = NULL; - Py_XDECREF(local_type); - Py_XDECREF(local_tb); - #else - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - #endif - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_value = exc_info->exc_value; - exc_info->exc_value = *value; - if (tmp_value == NULL || tmp_value == Py_None) { - Py_XDECREF(tmp_value); - tmp_value = NULL; - tmp_type = NULL; - tmp_tb = NULL; - } else { - tmp_type = (PyObject*) Py_TYPE(tmp_value); - Py_INCREF(tmp_type); - #if CYTHON_COMPILING_IN_CPYTHON - tmp_tb = ((PyBaseExceptionObject*) tmp_value)->traceback; - Py_XINCREF(tmp_tb); - #else - tmp_tb = PyException_GetTraceback(tmp_value); - #endif - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - 
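- /* old exception state is held in tmp_*; finish installing the incoming triple */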
tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, 1); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - #endif - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportDottedModule */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return 
imported_module; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple) { - Py_ssize_t i, nparts; - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (unlikely(!module)) { - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); - } - return module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__3; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - return __Pyx_ImportDottedModule_WalkParts(module, name, parts_tuple); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* ssize_strlen */ -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s) { - size_t len = strlen(s); - if (unlikely(len > PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, "byte string is too long"); - return -1; - } - return (Py_ssize_t) len; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if 
(base == (PyObject *)a || base == (PyObject *)b)
-                return 1;
-        }
-        return 0;
-    }
-    return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b);
-}
-#if PY_MAJOR_VERSION == 2
-static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {
-    PyObject *exception, *value, *tb;
-    int res;
-    __Pyx_PyThreadState_declare
-    __Pyx_PyThreadState_assign
-    __Pyx_ErrFetch(&exception, &value, &tb);
-    res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;
-    if (unlikely(res == -1)) {
-        PyErr_WriteUnraisable(err);
-        res = 0;
-    }
-    if (!res) {
-        res = PyObject_IsSubclass(err, exc_type2);
-        if (unlikely(res == -1)) {
-            PyErr_WriteUnraisable(err);
-            res = 0;
-        }
-    }
-    __Pyx_ErrRestore(exception, value, tb);
-    return res;
-}
-#else
-static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {
-    if (exc_type1) {
-        return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2);
-    } else {
-        return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);
-    }
-}
-#endif
-static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
-    Py_ssize_t i, n;
-    assert(PyExceptionClass_Check(exc_type));
-    n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
-    /* fast path: scan the tuple for an identical exception class first */
-    for (i=0; i<n; i++) {
-        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
-    }
-#endif
-    for (i=0; i<n; i++) {
-        PyObject *t = PyTuple_GET_ITEM(tuple, i);
-        #if PY_MAJOR_VERSION < 3
-        if (likely(exc_type == t)) return 1;
-        #endif
-        if (likely(PyExceptionClass_Check(t))) {
-            if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
-        }
-    }
-    return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
-    if (likely(err == exc_type)) return 1;
-    if (likely(PyExceptionClass_Check(err))) {
-        if (likely(PyExceptionClass_Check(exc_type))) {
-            return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
-        } else if (likely(PyTuple_Check(exc_type))) {
-            return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
-        }
-    }
-    return PyErr_GivenExceptionMatches(err, exc_type);
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
-    if (likely(err == exc_type1 || err == exc_type2)) return 1;
-    if (likely(PyExceptionClass_Check(err))) {
-        return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
-    }
-    return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));
-}
-#endif
-
-/* PySequenceMultiply */
-static PyObject* __Pyx_PySequence_Multiply_Generic(PyObject *seq, Py_ssize_t mul) {
-    PyObject *result, *pymul = PyInt_FromSsize_t(mul);
-    if (unlikely(!pymul))
-        return NULL;
-    result = PyNumber_Multiply(seq, pymul);
-    Py_DECREF(pymul);
-    return result;
-}
-static CYTHON_INLINE PyObject* __Pyx_PySequence_Multiply(PyObject *seq, Py_ssize_t mul) {
-#if CYTHON_USE_TYPE_SLOTS
-    PyTypeObject *type = Py_TYPE(seq);
-    if (likely(type->tp_as_sequence && type->tp_as_sequence->sq_repeat)) {
-        return type->tp_as_sequence->sq_repeat(seq, mul);
-    } else
-#endif
-    {
-        return __Pyx_PySequence_Multiply_Generic(seq, mul);
-    }
-}
-
-/* SetItemInt */
-static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) {
-    int r;
-    if (unlikely(!j)) return -1;
-    r = PyObject_SetItem(o, j, v);
-    Py_DECREF(j);
-    return r;
-}
-static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, int is_list,
-                                               CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) {
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS
-    if (is_list || PyList_CheckExact(o)) {
-        Py_ssize_t n = (!wraparound) ? i : ((likely(i >= 0)) ?
i : i + PyList_GET_SIZE(o)); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o)))) { - PyObject* old = PyList_GET_ITEM(o, n); - Py_INCREF(v); - PyList_SET_ITEM(o, n, v); - Py_DECREF(old); - return 1; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_ass_subscript) { - int r; - PyObject *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return -1; - r = mm->mp_ass_subscript(o, key, v); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_ass_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return -1; - PyErr_Clear(); - } - } - return sm->sq_ass_item(o, i, v); - } - } -#else -#if CYTHON_COMPILING_IN_PYPY - if (is_list || (PySequence_Check(o) && !PyDict_Check(o))) -#else - if (is_list || PySequence_Check(o)) -#endif - { - return PySequence_SetItem(o, i, v); - } -#endif - return __Pyx_SetItemInt_Generic(o, PyInt_FromSsize_t(i), v); -} - -/* RaiseUnboundLocalError */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* DivInt[long] */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__2); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (!r) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* ErrOccurredWithGIL */ -static CYTHON_INLINE int __Pyx_ErrOccurredWithGIL(void) { - int err; - #ifdef WITH_THREAD - PyGILState_STATE _save = PyGILState_Ensure(); - #endif - err = !!PyErr_Occurred(); - #ifdef WITH_THREAD - PyGILState_Release(_save); - #endif - return err; -} - -/* 
PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - __Pyx_TypeName type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, attr_name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(attr_name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* FixUpExtensionType */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED_VAR(spec); - CYTHON_UNUSED_VAR(type); -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return 
-1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg = NULL; - return __Pyx_PyObject_FastCall(func, (&arg)+1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = 
__Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* ValidateBasesTuple */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n = PyTuple_GET_SIZE(bases); - for (i = 1; i < n; i++) - { - PyObject *b0 = PyTuple_GET_ITEM(bases, i); - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); - return -1; - } -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - if (dictoffset == 0 && b->tp_dictoffset) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - } - return 0; -} -#endif - -/* PyType_Ready */ -static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !(CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API) || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if PY_VERSION_HEX >= 0x030700a1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) - gc = PyImport_GetModule(__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#if PY_VERSION_HEX >= 0x030A0000 - t->tp_flags |= Py_TPFLAGS_IMMUTABLETYPE; -#endif -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, *tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - 
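- /* whether or not gc could be re-enabled, fall through and release the gc module */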
} - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* SetVTable */ -static int __Pyx_SetVtable(PyTypeObject *type, void *vtable) { - PyObject *ob = PyCapsule_New(vtable, 0, 0); - if (unlikely(!ob)) - goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(PyObject_SetAttr((PyObject *) type, __pyx_n_s_pyx_vtable, ob) < 0)) -#else - if (unlikely(PyDict_SetItem(type->tp_dict, __pyx_n_s_pyx_vtable, ob) < 0)) -#endif - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* GetVTable */ -static void* __Pyx_GetVtable(PyTypeObject *type) { - void* ptr; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *ob = PyObject_GetAttr((PyObject *)type, __pyx_n_s_pyx_vtable); -#else - PyObject *ob = PyObject_GetItem(type->tp_dict, __pyx_n_s_pyx_vtable); -#endif - if (!ob) - goto bad; - ptr = PyCapsule_GetPointer(ob, 0); - if (!ptr && !PyErr_Occurred()) - PyErr_SetString(PyExc_RuntimeError, "invalid vtable found for imported type"); - Py_DECREF(ob); - return ptr; -bad: - Py_XDECREF(ob); - return NULL; -} - -/* MergeVTables */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_MergeVtables(PyTypeObject *type) { - int i; - void** base_vtables; - __Pyx_TypeName tp_base_name; - __Pyx_TypeName base_name; - void* unknown = (void*)-1; - PyObject* bases = type->tp_bases; - int base_depth = 0; - { - PyTypeObject* base = type->tp_base; - while (base) { - base_depth += 1; - base = base->tp_base; - } - } - base_vtables = (void**) malloc(sizeof(void*) * (size_t)(base_depth + 1)); - base_vtables[0] = unknown; - for (i = 1; i < PyTuple_GET_SIZE(bases); i++) { - void* base_vtable = __Pyx_GetVtable(((PyTypeObject*)PyTuple_GET_ITEM(bases, i))); - if (base_vtable != NULL) { - int j; - PyTypeObject* base = type->tp_base; - for (j = 0; j < base_depth; j++) { - if (base_vtables[j] == unknown) { - base_vtables[j] = __Pyx_GetVtable(base); - base_vtables[j + 1] = unknown; - } - if (base_vtables[j] == base_vtable) { - break; - } else if (base_vtables[j] == NULL) { - goto bad; - } - base = base->tp_base; - } - } - } - PyErr_Clear(); - free(base_vtables); - return 0; -bad: - tp_base_name = __Pyx_PyType_GetName(type->tp_base); - base_name = __Pyx_PyType_GetName((PyTypeObject*)PyTuple_GET_ITEM(bases, i)); - PyErr_Format(PyExc_TypeError, - "multiple bases have vtable conflict: '" __Pyx_FMT_TYPENAME "' and '" __Pyx_FMT_TYPENAME "'", tp_base_name, base_name); - __Pyx_DECREF_TypeName(tp_base_name); - __Pyx_DECREF_TypeName(base_name); - free(base_vtables); - return -1; -} -#endif - -/* SetupReduce */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStrNoError(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_getstate = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; - PyObject *getstate = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - getstate = _PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate); -#else - getstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_getstate); - if (!getstate && 
PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (getstate) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_getstate = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_getstate); -#else - object_getstate = __Pyx_PyObject_GetAttrStrNoError((PyObject*)&PyBaseObject_Type, __pyx_n_s_getstate); - if (!object_getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (object_getstate != getstate) { - goto __PYX_GOOD; - } - } -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) { - __Pyx_TypeName type_obj_name = - __Pyx_PyType_GetName((PyTypeObject*)type_obj); - PyErr_Format(PyExc_RuntimeError, - "Unable to initialize pickling for " __Pyx_FMT_TYPENAME, type_obj_name); - __Pyx_DECREF_TypeName(type_obj_name); - } - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); - Py_XDECREF(object_getstate); - Py_XDECREF(getstate); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} -#endif - -/* FetchSharedCythonModule */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (unlikely(!abi_module)) return NULL; - Py_INCREF(abi_module); - return abi_module; -} - -/* FetchCommonType */ -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - 
Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? 
((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - CYTHON_UNUSED_VAR(module); - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? 
__Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * 
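- /* CyFunctions never capture free variables, so __closure__ is always reported as None: */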
-__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) 
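- /* __annotations__ getter: lazily creates and caches an empty dict on first access */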
{ - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject *module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); - PyList_SET_ITEM(fromlist, 0, marker); - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else - {(char *) "__vectorcalloffset__", T_PYSSIZET, 
offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyCFunctionObject *cf = (PyCFunctionObject*) op; - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - cf->m_ml = ml; - cf->m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - cf->m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(((PyCFunctionObject*)m)->m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif - Py_CLEAR(m->defaults_tuple); - 
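- /* continue releasing the cached defaults/annotations before the defaults buffer is freed */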
Py_CLEAR(m->defaults_kwdict);
-    Py_CLEAR(m->func_annotations);
-    Py_CLEAR(m->func_is_coroutine);
-    if (m->defaults) {
-        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
-        int i;
-        for (i = 0; i < m->defaults_pyobjects; i++)
-            Py_XDECREF(pydefaults[i]);
-        PyObject_Free(m->defaults);
-        m->defaults = NULL;
-    }
-    return 0;
-}
-static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
-    if (__Pyx_CyFunction_weakreflist(m) != NULL)
-        PyObject_ClearWeakRefs((PyObject *) m);
-    __Pyx_CyFunction_clear(m);
-    __Pyx_PyHeapTypeObject_GC_Del(m);
-}
-static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
-    PyObject_GC_UnTrack(m);
-    __Pyx__CyFunction_dealloc(m);
-}
-static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)
-{
-    Py_VISIT(m->func_closure);
-    Py_VISIT(((PyCFunctionObject*)m)->m_module);
-    Py_VISIT(m->func_dict);
-    Py_VISIT(m->func_name);
-    Py_VISIT(m->func_qualname);
-    Py_VISIT(m->func_doc);
-    Py_VISIT(m->func_globals);
-    Py_VISIT(m->func_code);
-    Py_VISIT(__Pyx_CyFunction_GetClassObj(m));
-    Py_VISIT(m->defaults_tuple);
-    Py_VISIT(m->defaults_kwdict);
-    Py_VISIT(m->func_is_coroutine);
-    if (m->defaults) {
-        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
-        int i;
-        for (i = 0; i < m->defaults_pyobjects; i++)
-            Py_VISIT(pydefaults[i]);
-    }
-    return 0;
-}
-static PyObject*
-__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)
-{
-#if PY_MAJOR_VERSION >= 3
-    return PyUnicode_FromFormat("<cyfunction %U at %p>",
-                                op->func_qualname, (void *)op);
-#else
-    return PyString_FromFormat("<cyfunction %s at %p>",
-                               PyString_AsString(op->func_qualname), (void *)op);
-#endif
-}
-static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {
-    PyCFunctionObject* f = (PyCFunctionObject*)func;
-    PyCFunction meth = f->m_ml->ml_meth;
-    Py_ssize_t size;
-    switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {
-    case METH_VARARGS:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0))
-            return (*meth)(self, arg);
-        break;
-    case METH_VARARGS | METH_KEYWORDS:
-        return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);
-    case METH_NOARGS:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
-            size = PyTuple_GET_SIZE(arg);
-            if (likely(size == 0))
-                return (*meth)(self, NULL);
-            PyErr_Format(PyExc_TypeError,
-                "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)",
-                f->m_ml->ml_name, size);
-            return NULL;
-        }
-        break;
-    case METH_O:
-        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
-            size = PyTuple_GET_SIZE(arg);
-            if (likely(size == 1)) {
-                PyObject *result, *arg0;
-                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
-                arg0 = PyTuple_GET_ITEM(arg, 0);
-                #else
-                arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;
-                #endif
-                result = (*meth)(self, arg0);
-                #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)
-                Py_DECREF(arg0);
-                #endif
-                return result;
-            }
-            PyErr_Format(PyExc_TypeError,
-                "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)",
-                f->m_ml->ml_name, size);
-            return NULL;
-        }
-        break;
-    default:
-        PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction");
-        return NULL;
-    }
-    PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments",
-                 f->m_ml->ml_name);
-    return NULL;
-}
-static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {
-    return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);
-}
-static PyObject
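- /* tp_call entry point: peels off self for bound c-class methods, then dispatches */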
*__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() 
takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, (size_t)nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#ifdef _Py_TPFLAGS_HAVE_VECTORCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - 
(traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if __PYX_NEED_TP_PRINT_SLOT - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - CYTHON_UNUSED_VAR(module); - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - CYTHON_MAYBE_UNUSED_VAR(tstate); - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = 
PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - if (c_line) { - (void) __pyx_cfilenm; - (void) 
__Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - _PyTraceback_Add(funcname, filename, py_line); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - __Pyx_TypeName obj_type_name; - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' does not have the buffer interface", - obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - 
type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparsable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, int is_complex) { - CYTHON_UNUSED_VAR(is_complex); - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, int is_complex) { - CYTHON_UNUSED_VAR(is_complex); - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if 
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, int ndim, int spec) -{ - CYTHON_UNUSED_VAR(ndim); - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < 
sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* MemviewSliceInit */ - static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - 
memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#if PY_VERSION_HEX >= 0x030A0000 || defined(HAVE_STDARG_PROTOTYPES) - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int_type *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int_type *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - __pyx_nonatomic_int_type old_acquisition_count; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - return; - } - old_acquisition_count = __pyx_add_acquisition_count(memview); - if (unlikely(old_acquisition_count <= 0)) { - if (likely(old_acquisition_count == 0)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } else { - __pyx_fatalerror("Acquisition count is %d (line %d)", - old_acquisition_count+1, lineno); - } - } -} -static CYTHON_INLINE void __Pyx_XCLEAR_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - __pyx_nonatomic_int_type old_acquisition_count; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - old_acquisition_count = __pyx_sub_acquisition_count(memview); - memslice->data = NULL; - if (likely(old_acquisition_count > 1)) { - memslice->memview = NULL; - } else if (likely(old_acquisition_count == 1)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - __pyx_fatalerror("Acquisition count is %d (line %d)", - old_acquisition_count-1, lineno); - } -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); 
-#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = 
__Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || 
defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (int) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (int) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (int) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (int) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(int) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(int) * 8) - bits - (is_unsigned ? 
0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((int) 1) << (sizeof(int) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << 
PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << 
PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 
30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (long) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (long) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (long) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (long) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(long) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(long) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((long) 1) << (sizeof(long) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const char neg_one = (char) -1, const_zero = (char) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(char) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(char, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - 
assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(char) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 2 * PyLong_SHIFT)) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(char) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 3 * PyLong_SHIFT)) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(char) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) >= 4 * PyLong_SHIFT)) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(char) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(char) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(char, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(char) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(char) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(char) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * 
sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(char) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(char) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 4 * PyLong_SHIFT)) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(char) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(char) - 1 > 4 * PyLong_SHIFT)) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(char) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(char) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 
30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (char) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (char) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (char) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (char) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(char) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((char) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(char) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((char) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((char) 1) << (sizeof(char) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* FormatTypeName */ - #if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name_2); - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XSETREF(name, __Pyx_NewRef(__pyx_n_s__23)); - } - return name; -} -#endif - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || 
rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compile time version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - #if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} 
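The integer-conversion fallbacks in this generated module repeatedly probe the machine's byte order at runtime (`int one = 1; int is_little = (int)*(unsigned char *)&one;`) before handing the value buffer to `_PyLong_AsByteArray` / `_PyLong_FromByteArray`. A minimal standalone sketch of that endianness idiom, for illustration only (not part of the generated file):

```c
#include <stdio.h>

int main(void) {
    /* Store 1 in an int and read its lowest-addressed byte:
       it is 1 on a little-endian machine and 0 on a big-endian one. */
    int one = 1;
    int is_little = (int)*(unsigned char *)&one;
    printf("little-endian: %d\n", is_little);
    return 0;
}
```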
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). " - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(b))) { - return __Pyx_PyLong_CompactValue(b); - } else { - const digit* digits = __Pyx_PyLong_Digits(b); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(b); - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) 
(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#ifdef _MSC_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/app.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/app.py deleted file mode 100644 index 7c69b68ce7e591f2123091466ad7318703366b96..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/app.py +++ /dev/null @@ -1,182 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - 
x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', "out.ogg") - - renamed_input_path = os.path.join('in', "in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio), newogg - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/nanami/nanami_new.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - Nanami (七海) Bert-VITS2 online speech synthesis\n - 1. Model author: 数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2. Original project: https://github.com/Stardust-minus/Bert-VITS2\n - 3. If you create derivative works with this model, please mark them as AI-generated and link this project.\n - 4. To synthesize audio for very long txt files, please use Colab. https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='Intonation variation') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='Emotion variation') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='Syllable duration variation') - 
length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='Speech rate') - btn = gr.Button("Start your AI voice journey!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - Model collection:\n - 星瞳 https://huggingface.co/spaces/digitalxingtong/Xingtong-Bert-Vits2 \n - 星瞳 (reading) https://huggingface.co/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2 \n - 星瞳 (long text) https://huggingface.co/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2 \n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 (chat) https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 (reading) https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output, ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/dilums/sentence-similarity/components/layout/Header/Header.tsx b/spaces/dilums/sentence-similarity/components/layout/Header/Header.tsx deleted file mode 100644 index ba2a582b2a5eac0a3368c20f454793b9c3a55b08..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/layout/Header/Header.tsx +++ /dev/null @@ -1,27 +0,0 @@ -import ThemeToggle from "@/components/layout/ThemeToggle"; - -export default function Header() { - return ( -
- <header> - <ThemeToggle /> - </header>
          - ); -} diff --git a/spaces/djl234/UFO/Intra_MLP.py b/spaces/djl234/UFO/Intra_MLP.py deleted file mode 100644 index f1b79c7aa073df8fc31a1cf73098f6d04806f766..0000000000000000000000000000000000000000 --- a/spaces/djl234/UFO/Intra_MLP.py +++ /dev/null @@ -1,90 +0,0 @@ -import torch -import numpy -#from transformer import Local_Attention,Transformer_1 -# codes of this function are borrowed from https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/models/pointnet2_utils.py -def index_points(device, points, idx): - """ - - Input: - points: input points data, [B, N, C] - idx: sample index data, [B, S] - Return: - new_points:, indexed points data, [B, S, C] - """ - B = points.shape[0] - view_shape = list(idx.shape) - view_shape[1:] = [1] * (len(view_shape) - 1) - repeat_shape = list(idx.shape) - repeat_shape[0] = 1 - # batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape) - batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape) - new_points = points[batch_indices, idx, :] - return new_points - -def knn_l2(device, net, k, u): - ''' - Input: - k: int32, number of k in k-nn search - net: (batch_size, npoint, c) float32 array, points - u: int32, block size - Output: - idx: (batch_size, npoint, k) int32 array, indices to input points - ''' - INF = 1e8 - batch_size = net.size(0) - npoint = net.size(1) - n_channel = net.size(2) - - square = torch.pow(torch.norm(net, dim=2,keepdim=True),2) - - def u_block(batch_size, npoint, u): - block = numpy.zeros([batch_size, npoint, npoint]) - n = npoint // u - for i in range(n): - block[:, (i*u):(i*u+u), (i*u):(i*u+u)] = numpy.ones([batch_size, u, u]) * (-INF) - return block - - # minus_distance = 2 * torch.matmul(net, net.transpose(2,1)) - square - square.transpose(2,1) + torch.Tensor(u_block(batch_size, npoint, u)).to(device) - minus_distance = 2 * torch.matmul(net, net.transpose(2,1)) - square - square.transpose(2,1) + torch.Tensor(u_block(batch_size, npoint, u)).to(device) - _, indices = torch.topk(minus_distance, k, largest=True, sorted=False) - - return indices - -if __name__ == '__main__': - - bs,gs,k=5,5,4 - - A=torch.rand(bs*gs,512,14,14).cuda() - net=Transformer_1(512,4,4,782).cuda() - Y=net(A) - print(Y.shape) - exit(0) - feature_map_size=A.shape[-1] - point = A.permute(0,2,1,3,4).reshape(A.size(0), A.size(1)*A.shape[-1]*A.shape[-2], -1) - point = point.permute(0,2,1) - X=point - print(point.shape) - idx = knn_l2(0, point, 4, 1) - #print(idx) - - feat=idx - new_point = index_points(0, point,idx) - - group_point = new_point.permute(0, 3, 2, 1) - print(group_point.shape) - _1,_2,_3,_4=group_point.shape - X=X.permute(0,2,1) - print(X.shape) - #torch.cat([group_point.reshape(_1*_2,k,_4),X.reshape(_1*_2,1,_4)],dim=1).permute(0,2,1) - attn_map=X.reshape(_1*_2,1,_4)@torch.cat([group_point.reshape(_1*_2,k,_4),X.reshape(_1*_2,1,_4)],dim=1).permute(0,2,1) - V=torch.cat([group_point.reshape(_1*_2,k,_4),X.reshape(_1*_2,1,_4)],dim=1) - print(attn_map.shape) - Y=attn_map@V - Y=Y.reshape(_1,_2,_4) - - #group_point = torch.max(group_point, 2)[0] # [B, D', S] - group_point=Y - print(group_point.shape) - - intra_mask = group_point.view(bs,gs, group_point.size(2), feature_map_size, feature_map_size) - print(intra_mask.shape) diff --git a/spaces/dmeck/RVC-Speakers/speakers/processors/base_processor.py b/spaces/dmeck/RVC-Speakers/speakers/processors/base_processor.py deleted file mode 100644 index 
ddfe0a533634e9cce3442211b7071c8c4179cad8..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/speakers/processors/base_processor.py +++ /dev/null @@ -1,58 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from omegaconf import OmegaConf -from abc import abstractmethod -from speakers.load.serializable import Serializable - - -class ProcessorData(Serializable): - """ - The base abstract ProcessorData class. - """ - - @property - @abstractmethod - def type(self) -> str: - """Type of the Message, used for serialization.""" - - @property - def lc_serializable(self) -> bool: - """Whether this class is Processor serializable.""" - return True - - -class BaseProcessor: - """ - Audio processors derive from the abstract Processor class; each Processor has its own configuration - and is preloaded through the from_config factory method. - """ - def __init__(self): - self.transform = lambda x: x - return - - def __call__(self, data: ProcessorData): - return self.transform(data) - - @classmethod - def match(cls, data: ProcessorData): - """ - Match a processor for the given data. - :param data: - :return: - """ - raise NotImplementedError - - @classmethod - def from_config(cls, cfg=None): - return cls() - - def build(self, **kwargs): - cfg = OmegaConf.create(kwargs) - - return self.from_config(cfg) - diff --git a/spaces/dmeck/RVC-Speakers/speakers/server/bootstrap/bootstrap_register.py b/spaces/dmeck/RVC-Speakers/speakers/server/bootstrap/bootstrap_register.py deleted file mode 100644 index 304832a465078af67bfb443ad736b40345cf1dda..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/speakers/server/bootstrap/bootstrap_register.py +++ /dev/null @@ -1,76 +0,0 @@ -from speakers.server.bootstrap import Bootstrap - - -class BootstrapRegister: - """ - Bootstrap registry manager. - """ - mapping = { - "bootstrap": {}, - } - - @classmethod - def register_bootstrap(cls, name): - r"""Register system bootstrap to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from lavis.common.registry import registry - """ - - print(f"register_bootstrap {name}") - - def wrap(task_cls): - from speakers.server.bootstrap.base import Bootstrap - assert issubclass( - task_cls, Bootstrap - ), "All tasks must inherit bootstrap class" - if name in cls.mapping["bootstrap"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["bootstrap"][name] - ) - ) - cls.mapping["bootstrap"][name] = task_cls - return task_cls - - return wrap - - @classmethod - def get_bootstrap_class(cls, name): - return cls.mapping["bootstrap"].get(name, None) - - @classmethod - def list_bootstrap(cls): - return sorted(cls.mapping["bootstrap"].keys()) - - -bootstrap_register = BootstrapRegister() - - -bootstrap_cache = {} - - -def load_bootstrap(config: dict = None): - - def _build_task_from_cfg(cfg): - return ( - bootstrap_register.get_bootstrap_class(cfg.name).from_config(cfg) - if cfg is not None - else Bootstrap() - ) - for bootstraps in config: - for key, bootstrap_cfg in bootstraps.items(): # use .items() to iterate key-value pairs - bootstrap = _build_task_from_cfg(bootstrap_cfg) - bootstrap_cache[key] = bootstrap - - -def get_bootstrap(key: str) -> Bootstrap: - if not bootstrap_cache.get(key): - raise ValueError(f'Could not find bootstrap_cache for: "{key}". 
' - f'Choose from the following: %s' % ','.join(bootstrap_cache)) - - return bootstrap_cache[key] diff --git a/spaces/dmeck/RVC-Speakers/speakers/tasks/base_task.py b/spaces/dmeck/RVC-Speakers/speakers/tasks/base_task.py deleted file mode 100644 index 9df43bd1f44c86f8889adc51e2adfc8323f3a932..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/speakers/tasks/base_task.py +++ /dev/null @@ -1,143 +0,0 @@ -from abc import abstractmethod -from typing import List, Dict - -from speakers.load.serializable import Serializable -from speakers.processors import ProcessorData, BaseProcessor -from speakers.server.model.flow_data import PayLoad - -import logging - - -class FlowData(Serializable): - """ - Task parameters of the current runner. - """ - - @property - @abstractmethod - def type(self) -> str: - """Type of the Message, used for serialization.""" - - @property - def lc_serializable(self) -> bool: - """Whether this class is Processor serializable.""" - return True - - -class Runner(Serializable): - """Task id of the runner.""" - task_id: str - flow_data: FlowData - - @property - def type(self) -> str: - """Type of the Runner Message, used for serialization.""" - return 'runner' - - @property - def lc_serializable(self) -> bool: - """Whether this class is Processor serializable.""" - return True - - -# Define a base class for tasks -class BaseTask: - """ - Base task handler, created by the task manager to execute runner flow tasks; subclasses implement the concrete processing flow. - This class defines the lifecycle of a runner task. - """ - - def __init__(self, preprocess_dict: Dict[str, BaseProcessor]): - self._progress_hooks = [] - self._add_logger_hook() - self._preprocess_dict = preprocess_dict - self.logger = logging.getLogger('base_task_runner') - - @classmethod - def from_config(cls, cfg=None): - return cls(preprocess_dict={}) - - def _add_logger_hook(self): - """ - Default task logging listener. - :return: - """ - LOG_MESSAGES = { - 'dispatch_voice_task': 'dispatch_voice_task', - 'saved': 'Saving results', - } - LOG_MESSAGES_SKIP = { - 'skip-no-text': 'No text regions with text! - Skipping', - } - LOG_MESSAGES_ERROR = { - 'error': 'task error', - } - - async def ph(task_id: str, runner_stat: str, state: str, finished: bool = False): - if state in LOG_MESSAGES: - self.logger.info(LOG_MESSAGES[state]) - elif state in LOG_MESSAGES_SKIP: - self.logger.warn(LOG_MESSAGES_SKIP[state]) - elif state in LOG_MESSAGES_ERROR: - self.logger.error(LOG_MESSAGES_ERROR[state]) - - self.add_progress_hook(ph) - - def add_progress_hook(self, ph): - """ - Register a progress listener. - :param ph: listener - :return: - """ - self._progress_hooks.append(ph) - - async def report_progress(self, task_id: str, runner_stat: str, state: str, finished: bool = False): - """ - Notify all listeners of task progress. - :param task_id: task id - :param runner_stat: where the task is currently executing - :param state: state - :param finished: whether the task is finished - :return: - """ - for ph in self._progress_hooks: - await ph(task_id, runner_stat, state, finished) - - @classmethod - def prepare(cls, payload: PayLoad) -> Runner: - """ - Preprocessing. - - Args: - payload (PayLoad): runner flow data - - Raises: - NotImplementedError: This method should be overridden by subclasses. - """ - raise NotImplementedError - - @classmethod - async def dispatch(cls, runner: Runner): - """ - Dispatch the concrete flow data of the current runner task. - - Args: - runner (ProcessorData): runner flow data - - Raises: - NotImplementedError: This method should be overridden by subclasses. - """ - raise NotImplementedError - - @classmethod - def complete(cls, runner: Runner): - """ - Post-processing. - - Args: - runner (Runner): runner flow data - - Raises: - NotImplementedError: This method should be overridden by subclasses. 
- """ - raise NotImplementedError diff --git a/spaces/docs-demos/electra_large_discriminator_squad2_512/README.md b/spaces/docs-demos/electra_large_discriminator_squad2_512/README.md deleted file mode 100644 index 5912e0cd05c57f055e0d36d242530b4ead683938..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/electra_large_discriminator_squad2_512/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: ELECTRA -emoji: 😻 -colorFrom: red -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/donnyb/FalconVis/Dockerfile b/spaces/donnyb/FalconVis/Dockerfile deleted file mode 100644 index b524b7fa7a5148e44754a8a88b24868729a2f229..0000000000000000000000000000000000000000 --- a/spaces/donnyb/FalconVis/Dockerfile +++ /dev/null @@ -1,27 +0,0 @@ -# Use the official Python 3.9 image -FROM python:3.9 - -# Set the working directory to /code -WORKDIR /code - -# Copy the current directory contents into the container at /code -COPY ./requirements.txt /code/requirements.txt - -# Install requirements.txt -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app - -CMD ["python3", "main.py", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/ennet/ChatDev/chatdev/phase.py b/spaces/ennet/ChatDev/chatdev/phase.py deleted file mode 100644 index fbf181e3aca6999d49cd07a02864924d6a5c8d3f..0000000000000000000000000000000000000000 --- a/spaces/ennet/ChatDev/chatdev/phase.py +++ /dev/null @@ -1,597 +0,0 @@ -import os -import re -from abc import ABC, abstractmethod - -from camel.agents import RolePlaying -from camel.messages import ChatMessage -from camel.typing import TaskType, ModelType -from chatdev.chat_env import ChatEnv -from chatdev.statistics import get_info -from chatdev.utils import log_and_print_online, log_arguments - - -class Phase(ABC): - - def __init__(self, - assistant_role_name, - user_role_name, - phase_prompt, - role_prompts, - phase_name, - model_type, - log_filepath): - """ - - Args: - assistant_role_name: who receives chat in a phase - user_role_name: who starts the chat in a phase - phase_prompt: prompt of this phase - role_prompts: prompts of all roles - phase_name: name of this phase - """ - self.seminar_conclusion = None - self.assistant_role_name = assistant_role_name - self.user_role_name = user_role_name - self.phase_prompt = phase_prompt - self.phase_env = dict() - self.phase_name = phase_name - self.assistant_role_prompt = role_prompts[assistant_role_name] - self.user_role_prompt = role_prompts[user_role_name] - self.ceo_prompt = role_prompts["Chief Executive Officer"] - self.counselor_prompt = role_prompts["Counselor"] - self.timeout_seconds = 1.0 - self.max_retries = 3 - self.reflection_prompt = """Here is a conversation between two roles: {conversations} {question}""" - self.model_type = model_type - self.log_filepath = log_filepath - - @log_arguments - def chatting( - self, - chat_env, - task_prompt: str, - assistant_role_name: str, - user_role_name: str, - phase_prompt: str, - phase_name: str, - assistant_role_prompt: str, - user_role_prompt: str, - task_type=TaskType.CHATDEV, - need_reflect=False, - with_task_specify=False, - model_type=ModelType.GPT_3_5_TURBO, - placeholders=None, - chat_turn_limit=10 - ) -> str: - """ - - Args: - chat_env: global chatchain environment TODO: only for employee detection, can be deleted - task_prompt: user query prompt for building the software - assistant_role_name: who receives the chat - user_role_name: who starts the chat - phase_prompt: prompt of the phase - phase_name: name of the phase - assistant_role_prompt: prompt of assistant role - user_role_prompt: prompt of user role - task_type: task type - need_reflect: flag for checking reflection - with_task_specify: with task specify - model_type: model type - placeholders: placeholders for phase environment to generate phase prompt - chat_turn_limit: turn limits in each chat - - Returns: - - """ - - if placeholders is None: - placeholders = {} - assert 1 <= chat_turn_limit <= 100 - - if not chat_env.exist_employee(assistant_role_name): - raise ValueError(f"{assistant_role_name} not recruited in ChatEnv.") - if not chat_env.exist_employee(user_role_name): - raise ValueError(f"{user_role_name} not recruited in ChatEnv.") - - # init role play - role_play_session = RolePlaying( - assistant_role_name=assistant_role_name, - user_role_name=user_role_name, - assistant_role_prompt=assistant_role_prompt, - user_role_prompt=user_role_prompt, - task_prompt=task_prompt, - task_type=task_type, - with_task_specify=with_task_specify, - model_type=model_type, - ) - - # log_and_print_online("System", 
role_play_session.assistant_sys_msg) - # log_and_print_online("System", role_play_session.user_sys_msg) - - # start the chat - _, input_user_msg = role_play_session.init_chat(None, placeholders, phase_prompt) - seminar_conclusion = None - - # handle chats - # the purpose of the chatting in one phase is to get a seminar conclusion - # there are two types of conclusion - # 1. with "<INFO>" mark - # 1.1 get seminar conclusion flag (ChatAgent.info) from assistant or user role, which means there exist special "<INFO>" mark in the conversation - # 1.2 add "<INFO>" to the reflected content of the chat (which may be terminated chat without "<INFO>" mark) - # 2. without "<INFO>" mark, which means the chat is terminated or normally ended without generating a marked conclusion, and there is no need to reflect - for i in range(chat_turn_limit): - # start the chat, we represent the user and send msg to assistant - # 1. so the input_user_msg should be assistant_role_prompt + phase_prompt - # 2. then input_user_msg is sent to the LLM and we get assistant_response - # 3. now we represent the assistant and send msg to user, so the input_assistant_msg is user_role_prompt + assistant_response - # 4. then input_assistant_msg is sent to the LLM and we get user_response - # all of the above is done in role_play_session.step, which contains two interactions with the LLM - # the first interaction is logged in role_play_session.init_chat - assistant_response, user_response = role_play_session.step(input_user_msg, chat_turn_limit == 1) - - conversation_meta = "**" + assistant_role_name + "<->" + user_role_name + " on : " + str( - phase_name) + ", turn " + str(i) + "**\n\n" - - # TODO: max_tokens_exceeded errors here - if isinstance(assistant_response.msg, ChatMessage): - # we log the second interaction here - log_and_print_online(role_play_session.assistant_agent.role_name, - conversation_meta + "[" + role_play_session.user_agent.system_message.content + "]\n\n" + assistant_response.msg.content) - if role_play_session.assistant_agent.info: - seminar_conclusion = assistant_response.msg.content - break - if assistant_response.terminated: - break - - if isinstance(user_response.msg, ChatMessage): - # here is the result of the second interaction, which may be used to start the next chat turn - log_and_print_online(role_play_session.user_agent.role_name, - conversation_meta + "[" + role_play_session.assistant_agent.system_message.content + "]\n\n" + user_response.msg.content) - if role_play_session.user_agent.info: - seminar_conclusion = user_response.msg.content - break - if user_response.terminated: - break - - # continue the chat - if chat_turn_limit > 1 and isinstance(user_response.msg, ChatMessage): - input_user_msg = user_response.msg - else: - break - - # conduct self reflection - if need_reflect: - if seminar_conclusion in [None, ""]: - seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session, phase_name, - chat_env) - if "recruiting" in phase_name: - if "Yes".lower() not in seminar_conclusion.lower() and "No".lower() not in seminar_conclusion.lower(): - seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session, - phase_name, - chat_env) - elif seminar_conclusion in [None, ""]: - seminar_conclusion = "<INFO> " + self.self_reflection(task_prompt, role_play_session, phase_name, - chat_env) - else: - seminar_conclusion = assistant_response.msg.content - - log_and_print_online("**[Seminar Conclusion]**:\n\n {}".format(seminar_conclusion)) - seminar_conclusion = seminar_conclusion.split("<INFO>")[-1] - return seminar_conclusion - - def 
self_reflection(self, - task_prompt: str, - role_play_session: RolePlaying, - phase_name: str, - chat_env: ChatEnv) -> str: - """ - - Args: - task_prompt: user query prompt for building the software - role_play_session: role play session from the chat phase which needs reflection - phase_name: name of the chat phase which needs reflection - chat_env: global chatchain environment - - Returns: - reflected_content: str, reflected results - - """ - messages = role_play_session.assistant_agent.stored_messages if len( - role_play_session.assistant_agent.stored_messages) >= len( - role_play_session.user_agent.stored_messages) else role_play_session.user_agent.stored_messages - messages = ["{}: {}".format(message.role_name, message.content.replace("\n\n", "\n")) for message in messages] - messages = "\n\n".join(messages) - - if "recruiting" in phase_name: - question = """Answer their final discussed conclusion (Yes or No) in the discussion without any other words, e.g., "<INFO> Yes" """ - elif phase_name == "DemandAnalysis": - question = """Answer their final product modality in the discussion without any other words, e.g., "<INFO> PowerPoint" """ - # elif phase_name in [PhaseType.BRAINSTORMING]: - # question = """Conclude three most creative and imaginative brainstorm ideas from the whole discussion, in the format: "<INFO> 1) *; 2) *; 3) *; where '*' represents a suggestion." """ - elif phase_name == "LanguageChoose": - question = """Conclude the programming language being discussed for software development, in the format: "<INFO> *" where '*' represents a programming language." """ - elif phase_name == "EnvironmentDoc": - question = """According to the codes and file format listed above, write a requirements.txt file to specify the dependencies or packages required for the project to run properly." 
""" - else: - raise ValueError(f"Reflection of phase {phase_name}: Not Assigned.") - - # Reflections actually is a special phase between CEO and counselor - # They read the whole chatting history of this phase and give refined conclusion of this phase - reflected_content = \ - self.chatting(chat_env=chat_env, - task_prompt=task_prompt, - assistant_role_name="Chief Executive Officer", - user_role_name="Counselor", - phase_prompt=self.reflection_prompt, - phase_name="Reflection", - assistant_role_prompt=self.ceo_prompt, - user_role_prompt=self.counselor_prompt, - placeholders={"conversations": messages, "question": question}, - need_reflect=False, - chat_turn_limit=1, - model_type=self.model_type) - - if "recruiting" in phase_name: - if "Yes".lower() in reflected_content.lower(): - return "Yes" - return "No" - else: - return reflected_content - - @abstractmethod - def update_phase_env(self, chat_env): - """ - update self.phase_env (if needed) using chat_env, then the chatting will use self.phase_env to follow the context and fill placeholders in phase prompt - must be implemented in customized phase - the usual format is just like: - ``` - self.phase_env.update({key:chat_env[key]}) - ``` - Args: - chat_env: global chat chain environment - - Returns: None - - """ - pass - - @abstractmethod - def update_chat_env(self, chat_env) -> ChatEnv: - """ - update chan_env based on the results of self.execute, which is self.seminar_conclusion - must be implemented in customized phase - the usual format is just like: - ``` - chat_env.xxx = some_func_for_postprocess(self.seminar_conclusion) - ``` - Args: - chat_env:global chat chain environment - - Returns: - chat_env: updated global chat chain environment - - """ - pass - - def execute(self, chat_env, chat_turn_limit, need_reflect) -> ChatEnv: - """ - execute the chatting in this phase - 1. receive information from environment: update the phase environment from global environment - 2. execute the chatting - 3. 
change the environment: update the global environment using the conclusion - Args: - chat_env: global chat chain environment - chat_turn_limit: turn limit in each chat - need_reflect: flag for reflection - - Returns: - chat_env: updated global chat chain environment using the conclusion from this phase execution - - """ - self.update_phase_env(chat_env) - self.seminar_conclusion = \ - self.chatting(chat_env=chat_env, - task_prompt=chat_env.env_dict['task_prompt'], - need_reflect=need_reflect, - assistant_role_name=self.assistant_role_name, - user_role_name=self.user_role_name, - phase_prompt=self.phase_prompt, - phase_name=self.phase_name, - assistant_role_prompt=self.assistant_role_prompt, - user_role_prompt=self.user_role_prompt, - chat_turn_limit=chat_turn_limit, - placeholders=self.phase_env, - model_type=self.model_type) - chat_env = self.update_chat_env(chat_env) - return chat_env - - -class DemandAnalysis(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - pass - - def update_chat_env(self, chat_env) -> ChatEnv: - if len(self.seminar_conclusion) > 0: - chat_env.env_dict['modality'] = self.seminar_conclusion.split("")[-1].lower().replace(".", "").strip() - return chat_env - - -class LanguageChoose(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas']}) - - def update_chat_env(self, chat_env) -> ChatEnv: - if len(self.seminar_conclusion) > 0 and "" in self.seminar_conclusion: - chat_env.env_dict['language'] = self.seminar_conclusion.split("")[-1].lower().replace(".", "").strip() - elif len(self.seminar_conclusion) > 0: - chat_env.env_dict['language'] = self.seminar_conclusion - else: - chat_env.env_dict['language'] = "Python" - return chat_env - - -class Coding(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - gui = "" if not chat_env.config.gui_design \ - else "The software should be equipped with graphical user interface (GUI) so that user can visually and graphically use it; so you must choose a GUI framework (e.g., in Python, you can implement GUI via tkinter, Pygame, Flexx, PyGUI, etc,)." 
- self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "gui": gui}) - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env.update_codes(self.seminar_conclusion) - if len(chat_env.codes.codebooks.keys()) == 0: - raise ValueError("No Valid Codes.") - chat_env.rewrite_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class ArtDesign(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env = {"task": chat_env.env_dict['task_prompt'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes()} - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env.proposed_images = chat_env.get_proposed_images_from_message(self.seminar_conclusion) - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class ArtIntegration(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env = {"task": chat_env.env_dict['task_prompt'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "images": "\n".join( - ["{}: {}".format(filename, chat_env.proposed_images[filename]) for - filename in sorted(list(chat_env.proposed_images.keys()))])} - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env.update_codes(self.seminar_conclusion) - chat_env.rewrite_codes() - # chat_env.generate_images_from_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class CodeComplete(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "unimplemented_file": ""}) - unimplemented_file = "" - for filename in self.phase_env['pyfiles']: - code_content = open(os.path.join(chat_env.env_dict['directory'], filename)).read() - lines = [line.strip() for line in code_content.split("\n") if line.strip() == "pass"] - if len(lines) > 0 and self.phase_env['num_tried'][filename] < self.phase_env['max_num_implement']: - unimplemented_file = filename - break - self.phase_env['num_tried'][unimplemented_file] += 1 - self.phase_env['unimplemented_file'] = unimplemented_file - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env.update_codes(self.seminar_conclusion) - if len(chat_env.codes.codebooks.keys()) == 0: - raise ValueError("No Valid Codes.") - chat_env.rewrite_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class CodeReviewComment(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update( - {"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "images": ", ".join(chat_env.incorporated_images)}) - - def 
update_chat_env(self, chat_env) -> ChatEnv: - chat_env.env_dict['review_comments'] = self.seminar_conclusion - return chat_env - - -class CodeReviewModification(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "comments": chat_env.env_dict['review_comments']}) - - def update_chat_env(self, chat_env) -> ChatEnv: - if "```".lower() in self.seminar_conclusion.lower(): - chat_env.update_codes(self.seminar_conclusion) - chat_env.rewrite_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - self.phase_env['modification_conclusion'] = self.seminar_conclusion - return chat_env - - -class CodeReviewHuman(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - print( - f"You can participate in the development of the software {chat_env.env_dict['task_prompt']}. Please input your feedback. (\"End\" to quit the involvement.)") - provided_comments = input() - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "comments": provided_comments}) - - def update_chat_env(self, chat_env) -> ChatEnv: - if "```".lower() in self.seminar_conclusion.lower(): - chat_env.update_codes(self.seminar_conclusion) - chat_env.rewrite_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class TestErrorSummary(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - chat_env.generate_images_from_codes() - (exist_bugs_flag, test_reports) = chat_env.exist_bugs() - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "test_reports": test_reports, - "exist_bugs_flag": exist_bugs_flag}) - log_and_print_online("**[Test Reports]**:\n\n{}".format(test_reports)) - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env.env_dict['error_summary'] = self.seminar_conclusion - chat_env.env_dict['test_reports'] = self.phase_env['test_reports'] - - return chat_env - - def execute(self, chat_env, chat_turn_limit, need_reflect) -> ChatEnv: - self.update_phase_env(chat_env) - if "ModuleNotFoundError" in self.phase_env['test_reports']: - chat_env.fix_module_not_found_error(self.phase_env['test_reports']) - log_and_print_online( - f"Software Test Engineer found ModuleNotFoundError:\n{self.phase_env['test_reports']}\n") - pip_install_content = "" - for match in re.finditer(r"No module named '(\S+)'", self.phase_env['test_reports'], re.DOTALL): - module = match.group(1) - pip_install_content += "{}\n```{}\n{}\n```\n".format("cmd", "bash", f"pip install {module}") - log_and_print_online(f"Programmer resolve ModuleNotFoundError by:\n{pip_install_content}\n") - self.seminar_conclusion = "nothing need to do" - else: - self.seminar_conclusion = \ - self.chatting(chat_env=chat_env, - task_prompt=chat_env.env_dict['task_prompt'], - 
need_reflect=need_reflect, - assistant_role_name=self.assistant_role_name, - user_role_name=self.user_role_name, - phase_prompt=self.phase_prompt, - phase_name=self.phase_name, - assistant_role_prompt=self.assistant_role_prompt, - user_role_prompt=self.user_role_prompt, - chat_turn_limit=chat_turn_limit, - placeholders=self.phase_env) - chat_env = self.update_chat_env(chat_env) - return chat_env - - -class TestModification(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "test_reports": chat_env.env_dict['test_reports'], - "error_summary": chat_env.env_dict['error_summary'], - "codes": chat_env.get_codes() - }) - - def update_chat_env(self, chat_env) -> ChatEnv: - if "```".lower() in self.seminar_conclusion.lower(): - chat_env.update_codes(self.seminar_conclusion) - chat_env.rewrite_codes() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class EnvironmentDoc(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes()}) - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env._update_requirements(self.seminar_conclusion) - chat_env.rewrite_requirements() - log_and_print_online("**[Software Info]**:\n\n {}".format(get_info(chat_env.env_dict['directory'],self.log_filepath))) - return chat_env - - -class Manual(Phase): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def update_phase_env(self, chat_env): - self.phase_env.update({"task": chat_env.env_dict['task_prompt'], - "modality": chat_env.env_dict['modality'], - "ideas": chat_env.env_dict['ideas'], - "language": chat_env.env_dict['language'], - "codes": chat_env.get_codes(), - "requirements": chat_env.get_requirements()}) - - def update_chat_env(self, chat_env) -> ChatEnv: - chat_env._update_manuals(self.seminar_conclusion) - chat_env.rewrite_manuals() - return chat_env diff --git a/spaces/eson/kplug/demo_ner.py b/spaces/eson/kplug/demo_ner.py deleted file mode 100644 index 8b22b83ae6cc6d7ff97badef83308d6964d1c1a9..0000000000000000000000000000000000000000 --- a/spaces/eson/kplug/demo_ner.py +++ /dev/null @@ -1,47 +0,0 @@ -# coding=utf-8 -# author: xusong -# time: 2022/8/25 16:57 - - -""" - -## ner demo -- https://gradio.app/named_entity_recognition/ -- https://huggingface.co/dslim/bert-base-NER?text=My+name+is+Wolfgang+and+I+live+in+Berlin - -""" - -from transformers import pipeline - -import gradio as gr - -ner_pipeline = pipeline("ner") - -examples = [ - "Does Chicago have any stores and does Joe live here?", -] - -import json - -def ner(text): - output = ner_pipeline(text) - for ent in output: - ent["score"] = float(ent["score"]) - aa = {"text": text, "entities": output} - return aa, output - - -demo = gr.Interface( - ner, - inputs=gr.Textbox(placeholder="Enter sentence here..."), - outputs= - [ - gr.HighlightedText( - label="NER", - show_legend=True, - ), - gr.JSON(), - ], - examples=examples) - -demo.launch() diff --git a/spaces/eson/kplug/gradio_patch.py b/spaces/eson/kplug/gradio_patch.py 
deleted file mode 100644 index c19be7da8a85d2a3e5d3eafcdf9f06c7b372a067..0000000000000000000000000000000000000000 --- a/spaces/eson/kplug/gradio_patch.py +++ /dev/null @@ -1,10 +0,0 @@ -# coding=utf-8 - -""" -The default Gradio version on Hugging Face is 3.1.7; the chatbot raises errors when used across multiple tabs, so it needs to be upgraded to 3.12.0. -https://www.kingname.info/2019/10/29/pip-in-code/ -""" - -from pip._internal import main - -main(['install', 'gradio==3.12.0']) \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/vicuna/README.md b/spaces/eson/tokenizer-arena/vocab/vicuna/README.md deleted file mode 100644 index a26caaee7d85fe5db876e51c5bb8ae1ddb657dad..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/vicuna/README.md +++ /dev/null @@ -1,8 +0,0 @@ - - - - - -## References - -- https://github.com/lm-sys/FastChat/blob/v0.2.5/fastchat/model/apply_delta.py#L127 \ No newline at end of file diff --git a/spaces/evaluate-measurement/regard/README.md b/spaces/evaluate-measurement/regard/README.md deleted file mode 100644 index 9d3955a0570a67b803aed8053b5d16f80124db58..0000000000000000000000000000000000000000 --- a/spaces/evaluate-measurement/regard/README.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Regard -emoji: 🤗 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false -tags: -- evaluate -- measurement -description: >- - Regard aims to measure language polarity towards and social perceptions of a demographic (e.g. gender, race, sexual orientation). ---- - -# Measurement Card for Regard - - -## Measurement Description - -The `regard` measurement returns the estimated language polarity towards and social perceptions of a demographic (e.g. gender, race, sexual orientation). - -It uses a model trained on labelled data from the paper ["The Woman Worked as a Babysitter: On Biases in Language Generation" (EMNLP 2019)](https://arxiv.org/abs/1909.01326) - -## How to Use - -This measurement requires two lists of strings as input, enabling comparing the estimated polarity between the groups. - -```python ->>> regard = evaluate.load("regard", module_type="measurement") ->>> group1 = ['xyz are described as mean', 'xyz are thought of as being too ambitious'] ->>> group2 = ['xyz are known for making too much noise', 'xyz are described as often violent'] ->>> regard.compute(data = group1, references = group2) -``` - -### Inputs -- **data** (list of `str`): prediction/candidate sentences, e.g. sentences describing a given demographic group. -- **references** (list of `str`) (optional): reference/comparison sentences, e.g. sentences describing a different demographic group to compare against. -- **aggregation** (`str`) (optional): determines the type of aggregation performed. - If set to `None`, the difference between the regard scores for the two categories is returned.
- Otherwise: - - `average` : returns the average regard for each category (negative, positive, neutral, other) for each group - - `maximum`: returns the maximum regard for each group - -### Output Values - -**With a single input**: - -`regard` : the regard scores of each string in the input list (if no aggregation is specified) -```python -{'neutral': 0.95, 'positive': 0.02, 'negative': 0.02, 'other': 0.01} -{'negative': 0.97, 'other': 0.02, 'neutral': 0.01, 'positive': 0.0} -``` - -`average_regard`: the average regard for each category (negative, positive, neutral, other) (if `aggregation` = `average`) -```python -{'neutral': 0.48, 'positive': 0.01, 'negative': 0.5, 'other': 0.01} -``` - -`max_regard`: the maximum regard across all input strings (if `aggregation` = `maximum`) -```python -{'neutral': 0.95, 'positive': 0.024, 'negative': 0.972, 'other': 0.019} -``` - -**With two lists of inputs**: - -By default, this measurement outputs a dictionary containing a list of regard scores, one for each category (negative, positive, neutral, other), representing the difference in regard between the two groups. - -```python -{'neutral': 0.35, 'negative': -0.36, 'other': 0.01, 'positive': 0.01} -``` - -With the `aggregation='maximum'` option, this measurement will output the maximum regard for each group: - -```python -{'negative': 0.98, 'other': 0.04, 'neutral': 0.03, 'positive': 0.0} -``` - -With the `aggregation='average'` option, this measurement will output the average regard for each category (negative, positive, neutral, other): - -```python -{'neutral': 0.37, 'negative': 0.57, 'other': 0.05, 'positive': 0.01} -``` - -### Examples - -Example 1 (single input): - -```python ->>> regard = evaluate.load("regard") ->>> group1 = ['xyz are described as mean', 'xyz are thought of as being too ambitious'] ->>> results = regard.compute(data = group1) ->>> for d in results['regard']: -... 
print({l['label']: round(l['score'],2) for l in d}) -{'neutral': 0.95, 'positive': 0.02, 'negative': 0.02, 'other': 0.01} -{'negative': 0.97, 'other': 0.02, 'neutral': 0.01, 'positive': 0.0} -``` - -Example 2 (comparison mode): -```python ->>> regard = evaluate.load("regard", "compare") ->>> group1 = ['xyz are described as mean', 'xyz are thought of as being too ambitious'] ->>> group2 = ['xyz are known for making too much noise', 'xyz are described as often violent'] ->>> results = regard.compute(data = group1, references = group2) ->>> print({k: round(v, 2) for k, v in results['regard_difference'].items()}) -{'neutral': 0.46, 'positive': 0.01, 'negative': -0.46, 'other': -0.01} -``` - -Example 3 (returns the maximum regard score): -```python ->>> regard = evaluate.load("regard", "compare") ->>> group1 = ['xyz are described as mean', 'xyz are thought of as being too ambitious'] ->>> group2 = ['xyz are known for making too much noise', 'xyz are described as often violent'] ->>> results = regard.compute(data = group1, references = group2, aggregation = "maximum") ->>> print({k: round(v, 2) for k, v in results['max_data_regard'].items()}) -{'neutral': 0.95, 'positive': 0.02, 'negative': 0.97, 'other': 0.02} ->>> print({k: round(v, 2) for k, v in results['max_references_regard'].items()}) -{'negative': 0.98, 'other': 0.04, 'neutral': 0.03, 'positive': 0.0} -``` - -Example 4 (returns the average regard score): -```python ->>> regard = evaluate.load("regard", "compare") ->>> group1 = ['xyz are described as mean', 'xyz are thought of as being too ambitious'] ->>> group2 = ['xyz are known for making too much noise', 'xyz are described as often violent'] ->>> results = regard.compute(data = group1, references = group2, aggregation = "average") ->>> print({k: round(v, 2) for k, v in results['average_data_regard'].items()}) -{'neutral': 0.48, 'positive': 0.01, 'negative': 0.5, 'other': 0.01} ->>> print({k: round(v, 2) for k, v in results['average_references_regard'].items()}) -{'negative': 0.96, 'other': 0.02, 'neutral': 0.02, 'positive': 0.0} -``` - -## Citation(s) -@article{https://doi.org/10.48550/arxiv.1909.01326, - doi = {10.48550/ARXIV.1909.01326}, - url = {https://arxiv.org/abs/1909.01326}, - author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun}, - title = {The Woman Worked as a Babysitter: On Biases in Language Generation}, - publisher = {arXiv}, - year = {2019} -} - - -## Further References -- [`nlg-bias` library](https://github.com/ewsheng/nlg-bias/) diff --git a/spaces/evi0mo/vits-fastapi-server/README.md b/spaces/evi0mo/vits-fastapi-server/README.md deleted file mode 100644 index 7e0d4a36663f57ff304e97824852130fdf517caa..0000000000000000000000000000000000000000 --- a/spaces/evi0mo/vits-fastapi-server/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Vits Fastapi Server -emoji: 🏢 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/evi0mo/vits-fastapi-server/text/__init__.py b/spaces/evi0mo/vits-fastapi-server/text/__init__.py deleted file mode 100644 index 5c836e1644f09807bd4c5f123c0ed3be7a724ff7..0000000000000000000000000000000000000000 --- a/spaces/evi0mo/vits-fastapi-server/text/__init__.py +++ /dev/null @@ -1,61 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, 
s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - symbol_to_id = {s: i for i, s in enumerate(symbols)} - clean_text = _clean_text(text, cleaner_names) - print(f" length:{len(clean_text)}") - for symbol in clean_text: - if symbol not in symbol_to_id.keys(): - continue - symbol_id = symbol_to_id[symbol] - sequence += [symbol_id] - print(f" length:{len(sequence)}") - return sequence - - -def cleaned_text_to_sequence(cleaned_text, symbols): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - """ - symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [ - symbol_to_id[symbol] for symbol in cleaned_text if symbol in symbol_to_id.keys() - ] - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception("Unknown cleaner: %s" % name) - text = cleaner(text) - return text diff --git a/spaces/evoss/NLP_text_analyzer/README.md b/spaces/evoss/NLP_text_analyzer/README.md deleted file mode 100644 index 7b9b7e2c16c9e4fb916d7491bf19ff5fb3592fc5..0000000000000000000000000000000000000000 --- a/spaces/evoss/NLP_text_analyzer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NLP Text Analyzer -emoji: 🐠 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/facebook/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py b/spaces/facebook/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py deleted file mode 100644 index 12f6d402a3c4a113d4c37be062790fa435b72104..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluation with objective metrics for the pretrained AudioGen models. -This grid takes signature from the training grid and runs evaluation-only stage. - -When running the grid for the first time, please use: -REGEN=1 dora grid audiogen.audiogen_pretrained_16khz_eval -and re-use the REGEN=1 option when the grid is changed to force regenerating it. - -Note that you need the proper metrics external libraries setup to use all -the objective metrics activated in this grid. Refer to the README for more information. -""" - -import os - -from ..musicgen._explorers import GenerationEvalExplorer -from ...environment import AudioCraftEnvironment -from ... 
import train - - -def eval(launcher, batch_size: int = 32): - opts = { - 'dset': 'audio/audiocaps_16khz', - 'solver/audiogen/evaluation': 'objective_eval', - 'execute_only': 'evaluate', - '+dataset.evaluate.batch_size': batch_size, - '+metrics.fad.tf.batch_size': 32, - } - # binary for FAD computation: replace this path with your own path - metrics_opts = { - 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research' - } - opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.} - opt2 = {'transformer_lm.two_step_cfg': True} - - sub = launcher.bind(opts) - sub.bind_(metrics_opts) - - # base objective metrics - sub(opt1, opt2) - - -@GenerationEvalExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=4, partition=partitions) - - if 'REGEN' not in os.environ: - folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1] - with launcher.job_array(): - for sig in folder.iterdir(): - if not sig.is_symlink(): - continue - xp = train.main.get_xp_from_sig(sig.name) - launcher(xp.argv) - return - - audiogen_base = launcher.bind(solver="audiogen/audiogen_base_16khz") - audiogen_base.bind_({'autocast': False, 'fsdp.use': True}) - - audiogen_base_medium = audiogen_base.bind({'continue_from': '//pretrained/facebook/audiogen-medium'}) - audiogen_base_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(audiogen_base_medium, batch_size=128) diff --git a/spaces/facebook/MusicGen/tests/data/test_audio_utils.py b/spaces/facebook/MusicGen/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. 
- sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/falterWliame/Face_Mask_Detection/Aa Text Hindi Fonts Free Download.md b/spaces/falterWliame/Face_Mask_Detection/Aa Text Hindi Fonts Free Download.md deleted file mode 100644 index 2b1a8627523d4d71cb1ff9c82345482e4f057b4e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Aa Text Hindi Fonts Free Download.md +++ /dev/null @@ -1,95 +0,0 @@ - -

          AA Text Hindi Fonts: How to Download and Use Them for Free

          - -

If you are looking for stylish and trendy Hindi fonts for your device, you might want to check out AA Text Hindi Fonts. This is a collection of high-quality fonts suitable for any project or design that involves the Hindi language. In this article, we will show you how to download and use AA Text Hindi Fonts for free.

          -

          aa text hindi fonts free download


          Download Zip ✑ ✑ ✑ https://urlca.com/2uDd2x



          - -

          What are AA Text Hindi Fonts?

          - -

AA Text Hindi Fonts are a set of fonts designed to provide clear and readable text in the Hindi language. They are based on the Devanagari script, which is used to write the Hindi, Nepali, Sanskrit and Marathi languages. AA Text Hindi Fonts have a modern and elegant look, with smooth curves and sharp edges. They are compatible with any device, such as mobile, tablet, desktop, Apple, Windows, Linux, iPad and many other gadgets.

          - -

          AA Text Hindi Fonts can be used for various purposes, such as books, magazines, websites, logos, posters, banners, flyers, invitations, cards and more. They can also be used for typography, language and translation purposes. They can enhance the appearance and readability of your text and make it stand out from the crowd.

          - -

          Where to Find and Download AA Text Hindi Fonts for Free

          - -

          The best way to find and download AA Text Hindi Fonts for free is to use a website that offers thousands of Hindi fonts for free. One such website is Hindi-Fonts.com, which is a single solution to your Hindi fonts requirement. This website has a huge collection of Hindi fonts, including Devanagari fonts, Nepali fonts, Sanskrit fonts and Marathi fonts. You can find AA Text Hindi Fonts among the top Hindi fonts on this website.

          -

          - -

          To download AA Text Hindi Fonts for free from this website, you just need to follow these simple steps:

          - -
  -
1. Go to Hindi-Fonts.com and search for AA Text Hindi Fonts in the search bar.
2. Select the font that you like from the list of results and click on the download button.
3. Save the font file on your device in a folder that you can easily access.
4. Extract the font file from the zip folder using a program like WinZip or WinRAR.
5. Install the font on your device by following the instructions for your operating system (see the scripted sketch after this list).
-
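For readers who prefer to script these steps, here is a minimal sketch of the extract-and-install part on a Linux desktop. It is a hedged example, not an official Hindi-Fonts.com tool: the archive name aa-text-hindi.zip is a hypothetical placeholder for whatever file you actually downloaded, and it assumes the standard per-user font directory plus the common fc-cache utility. On Windows, you would instead right-click the extracted .ttf file and choose Install.

```python
import subprocess
import zipfile
from pathlib import Path

# Hypothetical archive name; replace with the file you actually downloaded.
archive = Path.home() / "Downloads" / "aa-text-hindi.zip"

# Per-user font directory on most Linux desktops.
fonts_dir = Path.home() / ".local" / "share" / "fonts"
fonts_dir.mkdir(parents=True, exist_ok=True)

# Extract only the font files (.ttf/.otf) from the downloaded archive.
with zipfile.ZipFile(archive) as zf:
    for name in zf.namelist():
        if name.lower().endswith((".ttf", ".otf")):
            zf.extract(name, fonts_dir)

# Refresh the font cache so applications can see the newly installed font.
subprocess.run(["fc-cache", "-f", str(fonts_dir)], check=True)
```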

          Congratulations! You have successfully downloaded and installed AA Text Hindi Fonts for free on your device.

          - -

          How to Use AA Text Hindi Fonts for Your Projects and Designs

          - -

Now that you have AA Text Hindi Fonts on your device, you can use them for your projects and designs that involve the Hindi language. To use them, simply select them from the font menu of the program or application you are using to create your text or design. For example, if you are using Microsoft Word, you can select AA Text Hindi Fonts from the font drop-down list in the Home tab.

          - -

Once you have selected AA Text Hindi Fonts, you can type or paste your text in Hindi and adjust the font size, color and style to your preference. You can also use tools like Uni Code Converter, which help you convert your existing text into the form a particular font expects; the conversion itself is handled through HTML. You can find such tools on Hindi-Fonts.com as well.

          - -

          You can also use AA Text Hindi Fonts for online purposes, such as creating a website or a blog in Hindi language. To do so, you need to embed the font code in your HTML code using the @font-face rule. You can find the font code on Hindi-Fonts.com by clicking on the embed button next to the download button of the font. You can copy and paste the font code in your HTML code and use it as per your requirement.
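To make the embedding step concrete, here is a small sketch that generates such a stylesheet. The @font-face syntax itself is standard CSS; the font family name 'AA Text Hindi' and the file path fonts/aa-text-hindi.ttf are hypothetical placeholders, and the actual embed code supplied by Hindi-Fonts.com may differ.

```python
from pathlib import Path

# Standard CSS @font-face rule; family name and file path are placeholders.
css = """\
@font-face {
  font-family: 'AA Text Hindi';
  src: url('fonts/aa-text-hindi.ttf') format('truetype');
}
.hindi-text {
  font-family: 'AA Text Hindi', sans-serif;
}
"""

# Write the stylesheet next to your HTML page; link it with
# <link rel="stylesheet" href="style.css"> in the page's <head>.
Path("style.css").write_text(css, encoding="utf-8")
```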

          - -

          Conclusion

          - -

AA Text Hindi Fonts are a great choice for anyone who wants to create clear and readable text in the Hindi language. They are stylish and trendy fonts that are compatible with any device and suitable for any project or design. You can find and download AA Text Hindi Fonts for free from Hindi-Fonts.com, a website that offers thousands of Hindi fonts at no cost. You can also use tools like Uni Code Converter or embed code to convert or display your text in AA Text Hindi Fonts. We hope this article has helped you learn more about AA Text Hindi Fonts and how to download and use them for free.

          -

          How to Choose the Best AA Text Hindi Fonts for Your Needs

          - -

          AA Text Hindi Fonts are not all the same. There are different styles, weights, sizes and formats of AA Text Hindi Fonts that you can choose from depending on your needs and preferences. Here are some tips on how to choose the best AA Text Hindi Fonts for your needs:

          - -
            -
          • Consider the purpose of your text or design. Do you want to create a formal or informal text? Do you want to convey a serious or playful tone? Do you want to attract attention or blend in? Depending on your purpose, you can choose a font that matches your message and mood.
          • -
          • Consider the audience of your text or design. Who are you writing or designing for? What are their expectations and preferences? What are their cultural and linguistic backgrounds? Depending on your audience, you can choose a font that suits their tastes and needs.
          • -
          • Consider the medium of your text or design. Where are you going to publish or display your text or design? Is it online or offline? Is it on a screen or on paper? Is it large or small? Depending on your medium, you can choose a font that is legible and visible in different contexts and environments.
          • -
          • Consider the compatibility of your text or design. How are you going to use your text or design? Is it standalone or part of a larger project? Is it compatible with other fonts and elements? Does it follow the standards and guidelines of your project or platform? Depending on your compatibility, you can choose a font that is consistent and harmonious with your text or design.
          • -
          - -

          By considering these factors, you can choose the best AA Text Hindi Fonts for your needs and create a clear and readable text or design in Hindi language.

          - -

          How to Customize and Enhance Your Text or Design with AA Text Hindi Fonts

          - -

          AA Text Hindi Fonts are not only easy to download and use, but also easy to customize and enhance. You can modify and improve your text or design with AA Text Hindi Fonts by using some simple tools and techniques. Here are some ways on how to customize and enhance your text or design with AA Text Hindi Fonts:

          - -
            -
          • Adjust the font size, color and style. You can change the appearance of your text or design by changing the font size, color and style of AA Text Hindi Fonts. You can make your text bigger or smaller, brighter or darker, bold or italic, etc. You can also use different colors and styles for different parts of your text or design to create contrast and emphasis.
          • -
          • Add some effects and decorations. You can add some flair and creativity to your text or design by adding some effects and decorations to AA Text Hindi Fonts. You can use shadows, outlines, gradients, strokes, etc. to make your text more attractive and eye-catching. You can also use symbols, icons, emojis, etc. to add some fun and personality to your text.
          • -
          • Use some layout and alignment tools. You can arrange and organize your text or design by using some layout and alignment tools for AA Text Hindi Fonts. You can use margins, paddings, spaces, indents, etc. to create some white space and balance in your text or design. You can also use alignment, justification, centering, etc. to position your text or design in different ways.
          • -
          • Use some typography and language tools. You can refine and polish your text or design by using some typography and language tools for AA Text Hindi Fonts. You can use kerning, tracking, leading, etc. to adjust the spacing between letters, words and lines in your text. You can also use punctuation, capitalization, diacritics, etc. to follow the rules and conventions of Hindi language.
          • -
          - -

          By using these tools and techniques, you can customize and enhance your text or design with AA Text Hindi Fonts and make it more appealing and professional.
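As one illustration of the first tip above, the sketch below renders styled Hindi text to an image with the Pillow library. This is a hedged example: Pillow must be installed separately, the font path fonts/aa-text-hindi.ttf is a hypothetical placeholder, and fully correct shaping of Devanagari conjuncts requires a Pillow build with libraqm support.

```python
from PIL import Image, ImageDraw, ImageFont  # pip install Pillow

# Hypothetical path to the downloaded font file; adjust the size to taste.
font = ImageFont.truetype("fonts/aa-text-hindi.ttf", size=64)

# Create a white canvas and draw Hindi text on it in a chosen color.
image = Image.new("RGB", (800, 200), color="white")
draw = ImageDraw.Draw(image)
draw.text((40, 60), "नमस्ते दुनिया", font=font, fill="#B22222")  # "Hello, world"

image.save("banner.png")
```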

          -

          How to Compare and Contrast Different AA Text Hindi Fonts

          - -

          AA Text Hindi Fonts are not all the same. There are different types and categories of AA Text Hindi Fonts that you can choose from depending on your needs and preferences. Here are some ways on how to compare and contrast different AA Text Hindi Fonts:

          - -
            -
          • Compare the style and appearance of AA Text Hindi Fonts. You can compare how different AA Text Hindi Fonts look like by examining their shape, size, weight, width, slope, etc. You can also compare how they fit with different themes and moods, such as formal or informal, serious or playful, modern or traditional, etc.
          • -
          • Compare the quality and performance of AA Text Hindi Fonts. You can compare how different AA Text Hindi Fonts perform by testing their legibility, readability, clarity, consistency, compatibility, etc. You can also compare how they work with different devices, platforms, programs, applications, etc.
          • -
          • Compare the popularity and availability of AA Text Hindi Fonts. You can compare how popular and widely used different AA Text Hindi Fonts are by checking their ratings, reviews, downloads, etc. You can also compare how easy and convenient it is to find and download different AA Text Hindi Fonts from different sources and websites.
          • -
          - -

          By comparing and contrasting different AA Text Hindi Fonts, you can find the best AA Text Hindi Fonts for your needs and preferences.

          - -

          How to Learn More and Stay Updated About AA Text Hindi Fonts

          - -

          If you want to learn more and stay updated about AA Text Hindi Fonts, you can use some resources and tools that are available online. Here are some ways on how to learn more and stay updated about AA Text Hindi Fonts:

          - -
            -
          • Learn more about AA Text Hindi Fonts by reading articles, blogs, tutorials, guides, etc. that provide information, tips, tricks, examples, etc. about AA Text Hindi Fonts. You can find such resources on websites like Hindi-Fonts.com, which is a single solution to your Hindi fonts requirement.
          • -
          • Stay updated about AA Text Hindi Fonts by following social media accounts, pages, groups, etc. that share news, updates, trends, innovations, etc. about AA Text Hindi Fonts. You can find such accounts on platforms like Facebook, Twitter, Instagram, Pinterest, YouTube, etc.
          • -
          • Stay updated about AA Text Hindi Fonts by subscribing to newsletters, alerts, notifications, etc. that send you emails or messages about new releases, updates, offers, discounts, etc. about AA Text Hindi Fonts. You can find such services on websites like Hindi-Fonts.com, which is a single solution to your Hindi fonts requirement.
          • -
          - -

          By using these resources and tools, you can learn more and stay updated about AA Text Hindi Fonts.

          -

          AA Text Hindi Fonts: A Great Choice for Hindi Language Lovers

          - -

AA Text Hindi Fonts are a great choice for anyone who wants to create clear and readable text or designs in the Hindi language. They are stylish and trendy fonts that are compatible with any device and suitable for any project or design. You can find and download them for free from Hindi-Fonts.com, a website that offers thousands of Hindi fonts at no cost, and use tools like Uni Code Converter or embed code to convert or display your text in AA Text Hindi Fonts. You can also customize and enhance your text or design with simple tools and techniques, compare and contrast different AA Text Hindi Fonts using the criteria described above, and stay up to date through the online resources mentioned earlier.

          - -

          We hope this article has helped you to learn more about AA Text Hindi Fonts and how to download and use them for free. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading and happy writing!

          -
          -
          \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Captain America Super Soldier Psp Iso.md b/spaces/falterWliame/Face_Mask_Detection/Captain America Super Soldier Psp Iso.md deleted file mode 100644 index f3cad66d481a5c72da2d8a2d2005124c6393bb87..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Captain America Super Soldier Psp Iso.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Captain America Super Soldier Psp Iso


          Download File ☆☆☆ https://urlca.com/2uDdI5



- -Repair and unlocking of all types of PSP, SONY PSP, FAT SLIM / GO ver 6. ... Download game PC iso, Direct links game PC, Torrent game PC, Crack DLC game ... 3DS0048 - Captain America Super Soldier (Europe) (En,F3ds. Oil pressure ...
          -
          -
          -

          diff --git a/spaces/falterWliame/Face_Mask_Detection/FULL Wonderware Intouch Siemens Atsddedm Driver.md b/spaces/falterWliame/Face_Mask_Detection/FULL Wonderware Intouch Siemens Atsddedm Driver.md deleted file mode 100644 index 70b8c606b1b461f5202adc55b5c262a16553ecf9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/FULL Wonderware Intouch Siemens Atsddedm Driver.md +++ /dev/null @@ -1,6 +0,0 @@ -

          FULL Wonderware Intouch Siemens Atsddedm Driver


          Download Zip >> https://urlca.com/2uDc6p



          -
-[FULL] Wonderware Intouch Siemens Atsddedm Driver > Supernatural season 8 (French dub) torrent on cpasbien; Killer Elite 2011 full ...
          -
          -
          -

          diff --git a/spaces/falterWliame/Face_Mask_Detection/FlixGrab 1.6.0.458 Premium Portable [Latest].md b/spaces/falterWliame/Face_Mask_Detection/FlixGrab 1.6.0.458 Premium Portable [Latest].md deleted file mode 100644 index 15a0fa34b08a33bf2c28dda36f111f0195846bcb..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/FlixGrab 1.6.0.458 Premium Portable [Latest].md +++ /dev/null @@ -1,16 +0,0 @@ -

          FlixGrab 1.6.0.458 Premium Portable [Latest]


          Download Zip ⇒⇒⇒ https://urlca.com/2uDc1W



          - -Everyone here works hard to provide our users with the most complete, accurate and easy-to-use source of information on the game of pickup soccer. We update information about the game every day to keep it fresh and timely. We welcome your comments, feedback, and ideas for improving the site. Please do not submit articles that are not directly related to the game of pick-up soccer! We do not accept copyrighted images, text, or other materials. The links to the images in the articles should remain the same throughout. Please keep them that way so we can keep up with updates to the article. -Todd Christensen, admin. "PK4B" is a registered trademark of Orchid Sport Inc. You may contact your local representative for more information. Orchid Sport Inc. not responsible for any injury resulting from any content on this website. Copyright 1998-2016, Orchid Sport, Inc.Heiner Müller: Deutsche Kultur im 20. Jahrhundert (2006) - -This documentary by Berlin director Peter Sehrt provides a dramatic, theoretical, and emotional exploration of a place and a time. It focuses on the question of the German history of the twentieth century from the point of view of the German people, using their own language. In particular, it focuses on the "German culture" of the German Mittelstand and the German urban culture of the “late Wilhelmine” as they were imagined, described, and experienced. In this all-encompassing tour, Heiner Müller explores the questions of what “Germanness” means, what the nation was, and how it shaped the German culture of the twentieth century. - -Video - -A German Film Essay by Patricia C. Wrede - -As the twentieth century unfolded, Germany remained trapped in the abysmal turmoil of the Great War and the collapse of the Weimar Republic. The interwar years were characterized by economic depression, political instability, and an ongoing crisis of identity. - -German cinema produced more than 1,100 films in the first two decades of the twentieth century, with German features accounting for more than half the production. Nonetheless, German cinema reflected its nation’s contentious history, economy, and politics, as well as the implicit and explicit tensions of the time. A central feature of this documentary are the ways in which German cinema addressed the momentous and evolving question of “What is German?”Aluminium toxicity to a benthic macroalga, 4fefd39f24
          -
          -
          -

          diff --git a/spaces/falterWliame/Face_Mask_Detection/Pci Express M.2 Specification Revision 1.0.pdf.md b/spaces/falterWliame/Face_Mask_Detection/Pci Express M.2 Specification Revision 1.0.pdf.md deleted file mode 100644 index db92dcbc23d2af1d9502ccc7ac4ed8794e3a075a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Pci Express M.2 Specification Revision 1.0.pdf.md +++ /dev/null @@ -1,9 +0,0 @@ - -

The aim of this specification is to establish a PCI Express medium-speed multi-lane card (M.2) specification and a basic conformance test methodology for the entire family of cards, as a complement to the PCI Express Mini Card and Mini Card (SFF-8087) specifications, and it covers a broad range of functionality. The specification also serves as a common foundation for further enhancing the existing PCI Express specifications.

          -

The PCIe 1.0 specification is designed to ensure Intel sets the minimum required PCIe (Peripheral Component Interconnect Express) PHY (physical) and electrical and mechanical design for the family of PCIe cards and products. This will be used in addition to PCIe/PCIe 2.0 reference boards and designs, in order to provide additional coverage and test and evaluation capabilities.

          -

          Pci Express M.2 Specification Revision 1.0.pdf


          Download File ✺✺✺ https://urlca.com/2uDd6w



          -

The PCI Express general purpose (PCIe Gen) specification defines the electrical and physical interface for a serial I/O system composed of a root complex and a switch, as well as the associated signaling and transaction layer protocols. The technology is commonly used in personal computers (PCs), workstations, and storage products for personal and data storage. PCIe Gen 1: PCI Express Specification Revision 1.0.pdf

          -

This specification defines the electrical and physical interface for a serial I/O system composed of a root complex and a switch, as well as the associated signaling and transaction layer protocols. PCIe Gen 1 - Gen 2: PCI Express Specification Revision 1.0.pdf

          -

For a Serial Attached SCSI (SAS) architecture, this specification defines the electrical and physical interface for a serial I/O system composed of a root complex and a switch, as well as the associated signaling and transaction layer protocols. SAS Bridge Specification Revision 1.0

          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download PES 2018 APK with OBB Data Everything You Need to Know.md b/spaces/fatiXbelha/sd/Download PES 2018 APK with OBB Data Everything You Need to Know.md deleted file mode 100644 index 0f551cb8a5b955fc5dc467ad96d41a36522c589f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download PES 2018 APK with OBB Data Everything You Need to Know.md +++ /dev/null @@ -1,255 +0,0 @@ -
          -

          PES 2018 APK Download: How to Play the Best Football Game on Your Android Device

          -

          If you are a fan of football games, you have probably heard of PES 2018, or Pro Evolution Soccer 2018, one of the most popular and acclaimed titles in the genre. But did you know that you can play this amazing game on your Android device as well? In this article, we will tell you everything you need to know about PES 2018 APK download, including what it is, why it is the best football game ever made, and how to download and install it on your Android device.

          -

          pes 2018 apk download


          DOWNLOADhttps://urllie.com/2uNF4z



          -

          What is PES 2018?

          -

          PES 2018 is a sports video game developed and published by Konami for various platforms, including Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Android and iOS. It is the 17th installment in the Pro Evolution Soccer series and was released worldwide in September 2017.

          -

          PES 2018 features more new additions than any other PES title in the last 10 years, and no part of the game has been left untouched. At its heart remains the famed gameplay where users enjoy complete control over the on-field action, using players that behave, move and react just like their real-life counterparts.

          -

          Some of the features that PES 2018 offers are:

          -
            -
          • Gameplay Masterclass: Strategic Dribbling, Real Touch+, and new set pieces take the unrivalled gameplay to the next level.
          • -
          • Presentation Overhaul: New menus and real player images.
          • -
          • Pro Evolution Soccer League Integration: Compete with PES League in new modes including myClub.
          • -
          • Online Co-op: A mode dedicated to co-op play is newly added.
          • -
          • New Modes & Features: Random Selection Match, Master League Upgrade, Enhanced Visual Reality.
          • -
          • Authentic Leagues: Huge addition of licensed leagues.
          • -
• Enhanced Visual Reality: New lighting, reworked player models and animations covering everything from facial expressions to body movement to bring the game to life.
          • -
          -

          PES 2018 also boasts more than 40,000 licensed players, over 650 teams, and more than 90 stadiums, making it one of the most authentic and realistic football games ever made.

          -

          Why is PES 2018 the Best Football Game?

          -

          Realistic Gameplay

          -

          One of the main reasons why PES 2018 is the best football game is its realistic and satisfying gameplay. PES 2018 delivers a gameplay experience that is closer to real football than any other game, thanks to its advanced game mechanics, animations, physics, and AI.

          -

          Some of the gameplay features that make PES 2018 stand out are:

          -
            -
• Strategic Dribbling: Gives the user significantly more control in possession, with the addition of contextual shielding to protect the ball, as well as simple stick controls triggering realistic, subtle movements to wrong-foot defenders.
          • -
• Real Touch+: Players react to receiving a ball using permissible parts of the body, such as the chest, head, and legs, to bring a pass under control depending on the height and pace of the incoming ball.
          • -
          • New Set Pieces: All set pieces, including free kicks and corners, have been reworked with a new system allowing for greater control over both the speed and accuracy of the shot.
          • -
          • Adaptive AI: For the first time in a football game, the AI will learn how you play and adapt accordingly. Depending on the actions of both teams, one team may imitate an opponent’s playing style or formation, while another may adopt a more defensive or attacking approach.
          • -
          • Realistic Physics: The ball's movement is based on realistic physics, taking into account factors such as air resistance, friction, spin, and bounce. The ball will also react differently depending on the weather conditions and the pitch quality.
          • -
          • Smooth Animations: The game features more than 8,000 new animations that cover everything from facial expressions to body movement. The players will move and react in a natural and fluid way, showing emotions and gestures that match the situation.
          • -
          -

          Diverse Content

Another reason why PES 2018 is the best football game is its diverse and rich content. PES 2018 offers a variety of game modes, tournaments, players, teams, stadiums, and customization options for players to enjoy.

Some of the content features that make PES 2018 stand out are:

• Game Modes: PES 2018 offers several game modes for different tastes and preferences. You can play single matches, leagues, cups, tournaments, or online matches. You can also play Master League mode, where you can create your own club and manage it from scratch. Or you can play myClub mode, where you can build your own team by signing players and managers using in-game currency or real money.
• Tournaments: PES 2018 features many official and licensed tournaments from around the world. You can play in the UEFA Champions League, UEFA Europa League, UEFA Super Cup, Copa Libertadores, Copa Sudamericana, AFC Champions League, FIFA Club World Cup, and more. You can also create your own custom tournaments with your own rules and settings.
• Players: PES 2018 boasts more than 40,000 licensed players from over 650 teams. You can play with some of the best players in the world, such as Lionel Messi, Cristiano Ronaldo, Neymar Jr., Kylian Mbappé, Mohamed Salah, Harry Kane, Robert Lewandowski, and more. You can also create your own custom players with your own appearance and attributes.
• Teams: PES 2018 features over 650 licensed teams from various leagues and countries. You can play with some of the best teams in the world, such as Barcelona, Real Madrid, Manchester City, Liverpool, Bayern Munich, Juventus, Paris Saint-Germain, and more. You can also play with some of the national teams, such as Brazil, Argentina, France, Germany, Spain, England, and more. You can also create your own custom teams with your own name, logo, kit, and players.
• Stadiums: PES 2018 features more than 90 stadiums from various countries and continents. You can play in some of the most iconic and beautiful stadiums in the world, such as Camp Nou, Santiago Bernabéu, Wembley Stadium, Allianz Arena, Parc des Princes, and more. You can also create your own custom stadiums with your own design and features.
• Customization Options: PES 2018 offers a lot of customization options for players to personalize their game experience. You can edit the appearance and attributes of players, teams, stadiums, balls, kits, logos, and more. You can also download and import official data packs and fan-made patches that add more content and updates to the game.

          Online Features

The last reason why PES 2018 is the best football game is its online features. PES 2018 offers an online multiplayer mode that allows players to compete with other players around the world.

Some of the online features that make PES 2018 stand out are:

• Online Co-op: A mode dedicated to co-op play is newly added. You can team up with up to three friends or strangers and play against another team of up to four players. You can also join forces with a friend and take on the AI in co-op challenges.
• PES League: The official PES e-sports tournament where players from around the world compete in various seasons and events. You can participate in online matches and tournaments and earn points and rewards. You can also qualify for regional and global finals and win prizes and glory.
• myClub: The ultimate team-building mode where you can create your own dream team by signing players and managers using in-game currency or real money. You can also scout for new talent, train your players, improve their skills, and challenge other myClub users.

          How to Download and Install PES 2018 APK on Your Android Device?

Requirements

Before you download and install PES 2018 APK on your Android device, you need to make sure that your device meets the minimum and recommended system requirements for running the game smoothly.

The minimum system requirements for PES 2018 APK are:

• Android version: 5.0 or higher
• RAM: 2 GB or higher
• Storage: 4 GB or higher
• Internet connection: Required for online features

The recommended system requirements for PES 2018 APK are:

• Android version: 7.0 or higher
• RAM: 4 GB or higher
• Storage: 8 GB or higher
• Internet connection: Required for online features
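If your device is connected to a computer with USB debugging enabled, a quick way to check these specs is from a terminal. A minimal sketch using standard adb commands; nothing here is specific to PES 2018:

```sh
# Android OS version (needs 5.0+, ideally 7.0+)
adb shell getprop ro.build.version.release

# Total RAM: look for the MemTotal line (2 GB minimum, 4 GB recommended)
adb shell cat /proc/meminfo

# Free internal storage (several GB needed for the APK plus OBB data)
adb shell df /sdcard
```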

          Steps

Once you have checked that your device meets the system requirements, you can follow these steps to download and install PES 2018 APK on your Android device:

1. Go to [this link] to download the PES 2018 APK file. The file size is about 1.5 GB.
2. Go to [this link] to download the PES 2018 OBB file. The file size is about 2.5 GB.
3. After downloading both files, go to your device's settings and enable the option to install apps from unknown sources.
4. Locate the downloaded APK file on your device's file manager and tap on it to install it.
5. After installing the APK file, do not open it yet. Instead, locate the downloaded OBB file on your device's file manager and extract it using a ZIP extractor app.
6. Copy the extracted OBB folder named jp.konami.pesam and paste it into the /Android/obb/ directory on your device's internal storage.
7. After copying the OBB folder, you can now launch the PES 2018 app from your device's app drawer or home screen.
8. Enjoy playing PES 2018 on your Android device!
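If you would rather sideload from a computer over USB (again with USB debugging enabled), the same installation can be done with the standard adb tool. A minimal sketch; the local filename pes2018.apk is hypothetical, while the jp.konami.pesam folder name comes from step 6 above:

```sh
# Install the downloaded APK on the connected device
adb install pes2018.apk

# Copy the extracted OBB folder to the directory the game reads from
adb push jp.konami.pesam /sdcard/Android/obb/jp.konami.pesam
```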

            Tips and Tricks for Playing PES 2018 on Your Android Device

Now that you have downloaded and installed PES 2018 on your Android device, you might be wondering how to play it better and have more fun. Here are some useful tips and tricks for playing PES 2018 on your Android device:

• Change Tactics: You can change your team's tactics before or during a match by tapping on the Tactics button on the bottom left corner of the screen. You can choose from different formations, strategies, and advanced instructions to suit your playstyle and counter your opponent's.
• Learn Tricks: You can perform various tricks and skills with your players by using the right stick or the Trick button on the bottom right corner of the screen. You can also swipe on the screen to execute different feints and moves. You can learn more about the tricks and how to perform them by going to Extras > Tutorial > Tricks.
• Use Random Selection Mode: If you want to spice up your matches and try something new, you can use the random selection mode. This mode allows you to create a team by randomly selecting players from different clubs and countries. You can also trade players with your opponent before the match starts. You can access this mode by going to Kick Off > Random Selection Match.
• Optimize Performance: If you experience any lag or slowdown while playing PES 2018 on your Android device, you can optimize the game's performance by adjusting some settings. You can go to Extras > Settings > Graphics Settings and lower the resolution, frame rate, and rendering quality. You can also turn off some effects such as shadows, reflections, and crowd animations.

            Conclusion

PES 2018 is one of the best football games ever made, and you can play it on your Android device with ease. All you need to do is download and install the PES 2018 APK and OBB files from the links provided in this article, and follow the steps we have outlined. You will then be able to enjoy a realistic, diverse, and online football experience on your Android device.

We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

And if you liked this article, please share it with your friends and fellow football fans. Let them know how they can play PES 2018 on their Android devices too.

Thank you for reading, and happy gaming!

            Frequently Asked Questions

Here are some of the most frequently asked questions about PES 2018 APK download:

1. Is PES 2018 APK safe to download?

Yes, PES 2018 APK is safe to download as long as you download it from a trusted source like the one we have provided in this article. However, you should always be careful when downloading any file from the internet, and scan it with antivirus software before installing it.

2. Is PES 2018 APK free to download?

Yes, PES 2018 APK is free to download and play. However, some features and content may require in-game currency or real money to unlock or purchase.

3. How much space does PES 2018 APK take on my device?

PES 2018 APK takes about 4 GB of storage space on your device, while the OBB file takes about 2.5 GB of storage space. Therefore, you need at least 6.5 GB of free space on your device to download and install PES 2018 APK.

4. Can I play PES 2018 APK offline?

You can play PES 2018 APK offline in some modes such as single matches, leagues, cups, tournaments, and Master League. However, some modes such as online co-op, PES League, and myClub require an internet connection to play.

5. Can I play PES 2018 APK with a controller?

You can play PES 2018 APK with a controller if your device supports it. You can connect your controller via Bluetooth or USB cable, and configure it in the game's settings.

\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download YouTube 6.2 APK - The Best App for Streaming Videos on Android.md b/spaces/fatiXbelha/sd/Download YouTube 6.2 APK - The Best App for Streaming Videos on Android.md
deleted file mode 100644
index ed08815b4721d4c209e3135d1f99e67ad585cb03..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download YouTube 6.2 APK - The Best App for Streaming Videos on Android.md
+++ /dev/null
@@ -1,99 +0,0 @@

                  YouTube 6.2 APK: What You Need to Know

If you are looking for a way to watch and enjoy your favorite videos and channels on your Android device, you might want to check out YouTube 6.2 APK. This is the latest version of the official YouTube app that comes with new features and improvements. In this article, we will tell you what YouTube 6.2 APK is, why you should download it, and how to download and install it on your device.

youtube 6.2 apk

DOWNLOAD: https://urllie.com/2uNAHD

                  What is YouTube 6.2 APK?

YouTube 6.2 APK is the Android application package (APK) file of the official YouTube app for Android devices. An APK file is a compressed file that contains all the necessary files and code to run an app on an Android device. You can download an APK file from various sources online and install it manually on your device, without using the Google Play Store.
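Since an APK is essentially a ZIP archive with a defined layout, you can peek inside one with ordinary tools before installing anything. A quick sketch; the filename youtube-6.2.apk is hypothetical:

```sh
# List the archive contents without installing anything;
# expect entries like AndroidManifest.xml, classes.dex, and res/
unzip -l youtube-6.2.apk
```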

                  The official YouTube app for Android devices

The official YouTube app is the best way to access and enjoy YouTube on your Android device. You can see what the world is watching, from the hottest music videos to what’s popular in gaming, fashion, beauty, news, learning, and more. You can also subscribe to channels you love, create content of your own, share with friends, and watch on any device.

                  The latest version of the app with new features and improvements

YouTube 6.2 APK is the latest version of the app that was released on June 22, 2023. It comes with new features and improvements that make your YouTube experience better and more enjoyable. Some of the new features and improvements include:

• A new design that makes it easier to browse and discover videos
• A new Explore tab that lets you see what’s trending on YouTube and around the world
• A new Creators, Gamers, and Artists on the Rise section that showcases the coolest new talent on YouTube
• A new Live tab that lets you watch live streams from your favorite creators and events
• A new Stories feature that lets you see short videos from your favorite creators
• A new Posts feature that lets you see updates from your favorite creators
• A new Premieres feature that lets you watch new videos from your favorite creators as they premiere
• A new Memberships feature that lets you join channels that offer paid monthly memberships and support their work
• A new Dark theme that lets you switch to a dark background for a more comfortable viewing experience
• Bug fixes and performance improvements

                  Why Download YouTube 6.2 APK?

There are many reasons why you might want to download YouTube 6.2 APK instead of using the Google Play Store version of the app. Some of these reasons are:

Enjoy your favorite videos and channels with the official YouTube app

By downloading YouTube 6.2 APK, you can enjoy all the features and functions of the official YouTube app on your Android device. You can watch and subscribe to your favorite videos and channels, browse personal recommendations on Home, see the latest from your favorite channels in Subscriptions, look up videos you’ve watched, liked, and saved for later in Library, and more.


                  Explore different topics, what’s popular, and on the rise

                  By downloading YouTube 6.2 APK, you can also explore different topics, what’s popular, and on the rise on YouTube. You can use the new Explore tab to see what’s trending on YouTube and around the world, from music to gaming to news and more. You can also discover new creators, gamers, and artists on the rise, who are making waves on YouTube with their unique and original content. You can also watch live streams from your favorite creators and events, and see short stories and posts from them.

Connect with the YouTube community and create your own content

Another reason to download YouTube 6.2 APK is to connect with the YouTube community and create your own content. You can use the app to comment, like, share, and chat with other users and creators. You can also use the app to upload your own videos, edit them with filters and music, and share them with the world. You can also go live with your mobile device and interact with your viewers in real time. You can also create your own channel, customize your profile, and grow your audience.

Find the experience that fits you and your family

YouTube 6.2 APK also lets you find the experience that fits you and your family. You can use the app to access different modes and settings that suit your preferences and needs. For example, you can use the Restricted Mode to filter out inappropriate content for yourself or your kids. You can also use the Kids Mode to access a curated selection of videos that are suitable for children of different ages. You can also use the Incognito Mode to browse YouTube without leaving any traces of your activity.

Support creators you love with channel memberships

If you want to support the creators you love on YouTube, you can download YouTube 6.2 APK and join their channel memberships. Channel memberships are a way for you to pay a monthly fee to a creator and get access to exclusive perks, such as badges, emojis, live chats, videos, posts, polls, and more. By joining a channel membership, you can show your appreciation for the creator's work and help them make more quality content for you.

Upgrade to YouTube Premium for more benefits

Finally, if you want to enjoy more benefits from YouTube, you can download YouTube 6.2 APK and upgrade to YouTube Premium. YouTube Premium is a subscription service that gives you access to ad-free videos, offline downloads, background play, YouTube Music Premium, YouTube Originals, and more. By upgrading to YouTube Premium, you can enhance your YouTube experience and support the platform.

                  How to Download and Install YouTube 6.2 APK?

If you are interested in downloading and installing YouTube 6.2 APK on your Android device, you can follow these simple steps:

1. Download the APK file from a trusted source. You can use this link to download the file directly from our website.
2. Enable unknown sources on your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Install the APK file and launch the app. To do this, go to your file manager or downloads folder and tap on the APK file. Follow the instructions on the screen to install the app. Once installed, open the app and enjoy!

                  Conclusion

In conclusion, YouTube 6.2 APK is the latest version of the official YouTube app for Android devices that comes with new features and improvements. By downloading this app, you can enjoy your favorite videos and channels, explore different topics, what’s popular, and on the rise, connect with the YouTube community and create your own content, find the experience that fits you and your family, support creators you love with channel memberships, and upgrade to YouTube Premium for more benefits. To download and install this app on your device, just follow the steps we have provided above.

                  FAQs

• What is an APK file?

An APK file is a compressed file that contains all the necessary files and code to run an app on an Android device.

• Is YouTube 6.2 APK safe to download?

Yes, YouTube 6.2 APK is safe to download as long as you use a trusted source like our website. However, you should always be careful when downloading any file from unknown sources online; see the checksum sketch after this FAQ for a quick way to verify a download.

• What are the requirements for YouTube 6.2 APK?

YouTube 6.2 APK requires Android 4.4 or higher to run smoothly on your device.

• How do I update YouTube 6.2 APK?

You can update YouTube 6.2 APK by downloading the latest version of the APK file from our website and installing it over the existing app. Alternatively, you can wait for the Google Play Store to update the app automatically.

• What are the alternatives to YouTube 6.2 APK?

If you are looking for alternatives to YouTube 6.2 APK, you can try other apps that let you watch and download videos on your Android device, such as VidMate, TubeMate, Snaptube, and more.
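As the second answer above notes, it pays to sanity-check a file downloaded from outside the Play Store. Besides an antivirus scan, one lightweight check is to compare the file's hash with a checksum published by the download site, assuming one is provided; the filename is hypothetical:

```sh
# Compute the SHA-256 hash of the downloaded file
sha256sum youtube-6.2.apk

# Compare the printed hash with the published checksum;
# any mismatch means the file is corrupted or was tampered with
```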

\ No newline at end of file
diff --git a/spaces/fatimahhussain/workoutwizard/Makefile b/spaces/fatimahhussain/workoutwizard/Makefile
deleted file mode 100644
index fe3de527e2bdc9ec254b8cddf9d7dec4f245df93..0000000000000000000000000000000000000000
--- a/spaces/fatimahhussain/workoutwizard/Makefile
+++ /dev/null
@@ -1,2 +0,0 @@
deps/update:
	pipreqs --print . | sort > requirements.txt
diff --git a/spaces/fclong/summary/fengshen/metric/metric.py b/spaces/fclong/summary/fengshen/metric/metric.py
deleted file mode 100644
index 5588db3726e30fbc955e619ecd24de3c2c5a1952..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/metric/metric.py
+++ /dev/null
@@ -1,129 +0,0 @@
# coding=utf-8
from collections import Counter
import torch
from torch import nn
# import seqeval

from .utils_ner import get_entities


class metrics_mlm_acc(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, logits, labels, masked_lm_metric):

        # if len(list(logits.shape))==3:
        mask_label_size = 0
        for i in masked_lm_metric:
            for j in i:
                if j > 0:
                    mask_label_size += 1

        y_pred = torch.argmax(logits, dim=-1)

        y_pred = y_pred.view(size=(-1,))
        y_true = labels.view(size=(-1,))
        masked_lm_metric = masked_lm_metric.view(size=(-1,))

        corr = torch.eq(y_pred, y_true)
        corr = torch.multiply(masked_lm_metric, corr)

        acc = torch.sum(corr.float())/mask_label_size
        return acc


class EntityScore(object):
    def __init__(self):
        self.reset()

    def reset(self):
        self.origins = []
        self.founds = []
        self.rights = []

    def compute(self, origin, found, right):
        recall = 0 if origin == 0 else (right / origin)
        precision = 0 if found == 0 else (right / found)
        f1 = 0. if recall + precision == 0 else (2 * precision * recall) / (precision + recall)
        return recall, precision, f1

    def result(self):
        class_info = {}

        origin_counter = Counter([x[0] for x in self.origins])
        found_counter = Counter([x[0] for x in self.founds])
        right_counter = Counter([x[0] for x in self.rights])
        for type_, count in origin_counter.items():
            origin = count
            found = found_counter.get(type_, 0)
            right = right_counter.get(type_, 0)
            recall, precision, f1 = self.compute(origin, found, right)
            class_info[type_] = {"acc": round(precision, 4), 'recall': round(recall, 4), 'f1': round(f1, 4)}
        origin = len(self.origins)
        found = len(self.founds)
        right = len(self.rights)
        recall, precision, f1 = self.compute(origin, found, right)
        return {'acc': precision, 'recall': recall, 'f1': f1}, class_info

    def update(self, true_subject, pred_subject):
        self.origins.extend(true_subject)
        self.founds.extend(pred_subject)
        self.rights.extend([pre_entity for pre_entity in pred_subject if pre_entity in true_subject])


class SeqEntityScore(object):
    def __init__(self, id2label, markup='bios', middle_prefix='I-'):
        self.id2label = id2label
        self.markup = markup
        self.middle_prefix = middle_prefix
        self.reset()

    def reset(self):
        self.origins = []
        self.founds = []
        self.rights = []

    def compute(self, origin, found, right):
        recall = 0 if origin == 0 else (right / origin)
        precision = 0 if found == 0 else (right / found)
        f1 = 0. if recall + precision == 0 else (2 * precision * recall) / (precision + recall)
        return recall, precision, f1

    def result(self):
        class_info = {}
        origin_counter = Counter([x[0] for x in self.origins])
        found_counter = Counter([x[0] for x in self.founds])
        right_counter = Counter([x[0] for x in self.rights])
        for type_, count in origin_counter.items():
            origin = count
            found = found_counter.get(type_, 0)
            right = right_counter.get(type_, 0)
            # print('origin:', origin, ' found:', found, ' right:', right)
            recall, precision, f1 = self.compute(origin, found, right)
            class_info[type_] = {"acc": round(precision, 4), 'recall': round(recall, 4), 'f1': round(f1, 4)}
        origin = len(self.origins)
        found = len(self.founds)
        right = len(self.rights)
        recall, precision, f1 = self.compute(origin, found, right)
        return {'acc': precision, 'recall': recall, 'f1': f1}, class_info

    def update(self, label_paths, pred_paths):
        '''
        labels_paths: [[],[],[],....]
        pred_paths: [[],[],[],.....]

        :param label_paths:
        :param pred_paths:
        :return:
        Example:
            >>> labels_paths = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
            >>> pred_paths = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
        '''
        for label_path, pre_path in zip(label_paths, pred_paths):
            label_entities = get_entities(label_path, self.id2label, self.markup, self.middle_prefix)
            pre_entities = get_entities(pre_path, self.id2label, self.markup, self.middle_prefix)
            # print('label:', label_path, ',label_entities: ', label_entities)
            # print('pred:', pre_path, ',pre_entities: ', pre_entities)
            self.origins.extend(label_entities)
            self.founds.extend(pre_entities)
            self.rights.extend([pre_entity for pre_entity in pre_entities if pre_entity in label_entities])
diff --git a/spaces/felixfrosch/deep_learning_assignment/app.py b/spaces/felixfrosch/deep_learning_assignment/app.py
deleted file mode 100644
index cf597455996bbabb45bbec452d1f285941ae425b..0000000000000000000000000000000000000000
--- a/spaces/felixfrosch/deep_learning_assignment/app.py
+++ /dev/null
@@ -1,183 +0,0 @@
try:
    import detectron2
except:
    import os
    os.system('pip install git+https://github.com/facebookresearch/detectron2.git')

# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

# import some common torch utilities
import torch
import torchvision
import torchvision.transforms.functional as F
from torch.utils.data import Dataset, DataLoader
import torchvision.models as models
import torch.optim as optim
import torch.nn as nn

import gradio as gr
import requests
import numpy as np
import cv2
from PIL import Image
import pickle

# segmentation and augmentation
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"


# load in all three models
in_features = 512

# front back classification
resnet_front = models.resnet18(pretrained=True).cpu()
out_features = 2
final_fc = nn.Linear(in_features, out_features)
resnet_front.fc = final_fc
model_path = "./front_back_parameters.pt"
state_dict = torch.load(model_path, map_location = 'cpu')
resnet_front.load_state_dict(state_dict)


# Design modernity
resnet_modernity = models.resnet18(pretrained=True).cpu()
out_features = 5
final_fc = nn.Linear(in_features, out_features)
resnet_modernity.fc = final_fc
model_path = "./design_modernity.pt"
state_dict = torch.load(model_path, map_location = 'cpu')
resnet_modernity.load_state_dict(state_dict)


# Design typicality
resnet_typicality = models.resnet18(pretrained=True).cpu()
out_features = 5
final_fc = nn.Linear(in_features, out_features)
resnet_typicality.fc = final_fc
model_path = "./design_typicality.pt"
state_dict = torch.load(model_path, map_location = 'cpu')
resnet_typicality.load_state_dict(state_dict)

# Define transformations
pretrained_size = 224
pretrained_means = [0.485, 0.456, 0.406]
pretrained_stds = [0.229, 0.224, 0.225]

transforms_all = torchvision.transforms.Compose([
    torchvision.transforms.Resize(pretrained_size),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=pretrained_means,
                                     std=pretrained_stds)
])

# Import dictionary with morphs
model_path = "./morph_dictionary_2.pkl"
with open(model_path, 'rb') as f:
    morph_dict = pickle.load(f)

# Forward hook for activations
activation = {}
def getActivation(name):
    # the hook signature
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

h1 = resnet_typicality.avgpool.register_forward_hook(getActivation('avgpool'))


def do_smth(image):

    img = image
    predictor = DefaultPredictor(cfg)
    outputs = predictor(img)

    instances = outputs["instances"]

    class_ids = instances.pred_classes.cpu().numpy()
    masks = instances.pred_masks.cpu().numpy()


    class_id_car = 2
    selected_indices = np.where(class_ids == class_id_car)[0]
    selected_masks = masks[selected_indices]

    if len(selected_indices) == 0:
        text_output = "No car was found in the picture. Please insert a different picture."
        cropped_img = img
    else:
        # the car with the highest number of pixels
        selected_mask = selected_masks[np.argmax([mask.sum() for mask in selected_masks])]

        white_background = np.ones_like(img) * 255
        white_background[selected_mask] = img[selected_mask]

        cropped_img = white_background


        # predict viewpoint
        resnet_front.eval()
        cropped = Image.fromarray(cropped_img)
        cropped = transforms_all(cropped)

        output = resnet_front(cropped.unsqueeze(0))
        m = nn.Softmax(dim=1)
        pred = m(output)
        val, index = torch.max(pred, axis=1)

        # index==0: from behind
        if index == 0:
            text_output = "The picture was not classified as a frontal view. Please insert a different picture."
        else:

            # Modernity score
            resnet_modernity.eval()
            output_modernity = resnet_modernity(cropped.unsqueeze(0))
            pred_modernity = m(output_modernity)

            weights = torch.tensor([0, 1, 2, 3, 4])
            weighted_sum = torch.sum(torch.mul(pred_modernity, weights))
            text_output = "Modernity score {}".format(weighted_sum)

            # Typicality score
            resnet_typicality.eval()
            output_typicality = resnet_typicality(cropped.unsqueeze(0))
            activation_value = activation["avgpool"]
            cosine = torch.nn.CosineSimilarity(dim=1)

            # remember the highest similarity
            similarity_list = []

            for i in morph_dict.keys():
                morph = morph_dict[i]
                similarity = cosine(activation_value, morph)
                similarity_list.append(similarity)


            highest_similarity = max(similarity_list)
            text_output = "Modernity score: {} \nTypicality score: {}".format(weighted_sum, float(highest_similarity))


    return cropped_img, text_output


title = "Modernity score & typicality score model"
description = ("Start by uploading a picture.")


interface = gr.Interface(do_smth,
                         inputs="image",
                         outputs=["image", "text"],
                         title=title,
                         description=description)

interface.launch()
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/projector.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/projector.py
deleted file mode 100644
index 814bed22c8ed08c7e9084069728438f43364378a..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/projector.py
+++ /dev/null
@@ -1,179 +0,0 @@
from argparse import Namespace
import os
from os.path import join as pjoin
import random
import sys
from typing import (
    Iterable,
    Optional,
)

import cv2
import numpy as np
from PIL import Image
import torch
from torch.utils.tensorboard import SummaryWriter
from torchvision.transforms import (
    Compose,
    Grayscale,
    Resize,
    ToTensor,
    Normalize,
)

from losses.joint_loss import JointLoss
from model import Generator
from tools.initialize import Initializer
from tools.match_skin_histogram import match_skin_histogram
from utils.projector_arguments import ProjectorArguments
from utils import torch_helpers as th
from utils.torch_helpers import make_image
from utils.misc import stem
from utils.optimize import Optimizer
from models.degrade import (
    Degrade,
    Downsample,
)

from huggingface_hub import hf_hub_download
TOKEN = "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"


def set_random_seed(seed: int):
    # FIXME (xuanluo): this setup still allows randomness somehow
    torch.manual_seed(seed)
    random.seed(seed)
    np.random.seed(seed)


def read_images(paths: str, max_size: Optional[int] = None):
    transform = Compose(
        [
            Grayscale(),
            ToTensor(),
        ]
    )

    imgs = []
    for path in paths:
        img = Image.open(path)
        if max_size is not None and img.width > max_size:
            img = img.resize((max_size, max_size))
        img = transform(img)
        imgs.append(img)
    imgs = torch.stack(imgs, 0)
    return imgs


def normalize(img: torch.Tensor, mean=0.5, std=0.5):
    """[0, 1] -> [-1, 1]"""
    return (img - mean) / std


def create_generator(file_name: str, path: str, args: Namespace, device: torch.device):
    path = hf_hub_download(f'{path}',
                           f'{file_name}',
                           use_auth_token=TOKEN)
    with open(path, 'rb') as f:
        generator = Generator(args.generator_size, 512, 8)
        generator.load_state_dict(torch.load(f)['g_ema'], strict=False)
    generator.eval()
    generator.to(device)
    return generator

def save(
    path_prefixes: Iterable[str],
    imgs: torch.Tensor,  # BCHW
    latents: torch.Tensor,
    noises: torch.Tensor,
    imgs_rand: Optional[torch.Tensor] = None,
):
    assert len(path_prefixes) == len(imgs) and len(latents) == len(path_prefixes)
    if imgs_rand is not None:
        assert len(imgs) == len(imgs_rand)
    imgs_arr = make_image(imgs)
    for path_prefix, img, latent, noise in zip(path_prefixes, imgs_arr, latents, noises):
        os.makedirs(os.path.dirname(path_prefix), exist_ok=True)
        cv2.imwrite(path_prefix + ".png", img[...,::-1])
        torch.save({"latent": latent.detach().cpu(), "noise": noise.detach().cpu()},
                   path_prefix + ".pt")

    if imgs_rand is not None:
        imgs_arr = make_image(imgs_rand)
        for path_prefix, img in zip(path_prefixes, imgs_arr):
            cv2.imwrite(path_prefix + "-rand.png", img[...,::-1])


def main(args):
    opt_str = ProjectorArguments.to_string(args)
    print(opt_str)

    if args.rand_seed is not None:
        set_random_seed(args.rand_seed)
    device = th.device()

    # read inputs. TODO imgs_orig has channel 1
    imgs_orig = read_images([args.input], max_size=args.generator_size).to(device)
    imgs = normalize(imgs_orig)  # actually this will be overwritten by the histogram matching result

    # initialize
    with torch.no_grad():
        init = Initializer(args).to(device)
        latent_init = init(imgs_orig)

    # create generator
    generator = create_generator(args, device)

    # init noises
    with torch.no_grad():
        noises_init = generator.make_noise()

    # create a new input by matching the input's histogram to the sibling image
    with torch.no_grad():
        sibling, _, sibling_rgbs = generator([latent_init], input_is_latent=True, noise=noises_init)
    mh_dir = pjoin(args.results_dir, stem(args.input))
    imgs = match_skin_histogram(
        imgs, sibling,
        args.spectral_sensitivity,
        pjoin(mh_dir, "input_sibling"),
        pjoin(mh_dir, "skin_mask"),
        matched_hist_fn=mh_dir.rstrip(os.sep) + f"_{args.spectral_sensitivity}.png",
        normalize=normalize,
    ).to(device)
    torch.cuda.empty_cache()
    # TODO imgs has channel 3

    degrade = Degrade(args).to(device)

    rgb_levels = generator.get_latent_size(args.coarse_min) // 2 + len(args.wplus_step) - 1
    criterion = JointLoss(
        args, imgs,
        sibling=sibling.detach(), sibling_rgbs=sibling_rgbs[:rgb_levels]).to(device)

    # save initialization
    save(
        [pjoin(args.results_dir, f"{stem(args.input)}-{opt_str}-init")],
        sibling, latent_init, noises_init,
    )

    writer = SummaryWriter(pjoin(args.log_dir, f"{stem(args.input)}/{opt_str}"))
    # start optimize
    latent, noises = Optimizer.optimize(generator, criterion, degrade, imgs, latent_init, noises_init, args, writer=writer)

    # generate output
    img_out, _, _ = generator([latent], input_is_latent=True, noise=noises)
    img_out_rand_noise, _, _ = generator([latent], input_is_latent=True)
    # save output
    save(
        [pjoin(args.results_dir, f"{stem(args.input)}-{opt_str}")],
        img_out, latent, noises,
        imgs_rand=img_out_rand_noise
    )


def parse_args():
    return ProjectorArguments().parse()

if __name__ == "__main__":
    sys.exit(main(parse_args()))
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/locales/de.ts b/spaces/fengmuxi/ChatGpt-Web/app/locales/de.ts
deleted file mode 100644
index 11fe1ede62a55510226eb0695883fdd0cf0c2ae4..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/app/locales/de.ts
+++ /dev/null
@@ -1,276 +0,0 @@
import { SubmitKey } from "../store/config";
import type { LocaleType } from "./index";

const de: LocaleType = {
  WIP: "In Bearbeitung...",
Error: { - Unauthorized: - "Unbefugter Zugriff, bitte geben Sie den Zugangscode auf der Einstellungsseite ein.", - }, - ChatItem: { - ChatItemCount: (count: number) => `${count} Nachrichten`, - }, - Chat: { - SubTitle: (count: number) => `${count} Nachrichten mit ChatGPT`, - Actions: { - ChatList: "Zur Chat-Liste gehen", - CompressedHistory: "Komprimierter Gedächtnis-Prompt", - Export: "Alle Nachrichten als Markdown exportieren", - Copy: "Kopieren", - Stop: "Stop", - Retry: "Wiederholen", - Delete: "Delete", - }, - Rename: "Chat umbenennen", - Typing: "Tippen...", - Input: (submitKey: string) => { - var inputHints = `${submitKey} um zu Senden`; - if (submitKey === String(SubmitKey.Enter)) { - inputHints += ", Umschalt + Eingabe für Zeilenumbruch"; - } - return inputHints + ", / zum Durchsuchen von Prompts"; - }, - Send: "Senden", - Config: { - Reset: "Reset to Default", - SaveAs: "Save as Mask", - }, - }, - Export: { - Title: "Alle Nachrichten", - Copy: "Alles kopieren", - Download: "Herunterladen", - MessageFromYou: "Deine Nachricht", - MessageFromChatGPT: "Nachricht von ChatGPT", - }, - Memory: { - Title: "Gedächtnis-Prompt", - EmptyContent: "Noch nichts.", - Send: "Gedächtnis senden", - Copy: "Gedächtnis kopieren", - Reset: "Sitzung zurücksetzen", - ResetConfirm: - "Das Zurücksetzen löscht den aktuellen Gesprächsverlauf und das Langzeit-Gedächtnis. Möchten Sie wirklich zurücksetzen?", - }, - Home: { - NewChat: "Neuer Chat", - DeleteChat: "Bestätigen Sie, um das ausgewählte Gespräch zu löschen?", - DeleteToast: "Chat gelöscht", - Revert: "Zurücksetzen", - }, - User:{ - Title: "User", - SubTitle: "Die kommandozentrale", - Login:"Anmeldung", - LoginTitle:"Einloggen", - Register:"Registration", - RegisterTitle:"Registrierte neue benutzer", - Findpwd:"Hol den code zurück", - FindpwdTitle:"Geben sie das kontonummer ein und das passwort wird an ihren briefkasten gesendet", - Name:"Nutzername", - Wallet:"UserExp", - Mail:"Email", - SigState:"Laden sie sich ein", - Ststus:"Logout", - Vip:"VIP", - kami:"CDkey", - NickName:"Spitzname", - User:"Kontonummer (nur Nummern)", - Password:"Passwort (mindestens 6-stellig)", - Email:"Briefkasten", - Code:"Captcha", - Pass:{ - Title:"修改密码", - OldPwd:"旧密码", - NewPwd:"新密码", - NewPwd1:"确认密码" - }, - Save:"保存" - }, - Settings: { - Title: "Einstellungen", - SubTitle: "Alle Einstellungen", - Actions: { - ClearAll: "Alle Daten löschen", - ResetAll: "Alle Einstellungen zurücksetzen", - Close: "Schließen", - ConfirmResetAll: - "Möchten Sie wirklich alle Konfigurationen zurücksetzen?", - ConfirmClearAll: "Möchten Sie wirklich alle Chats zurücksetzen?", - }, - Lang: { - Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language` - All: "All Languages", - Options: { - cn: "简体中文", - en: "English", - tw: "繁體中文", - es: "Español", - it: "Italiano", - tr: "Türkçe", - jp: "日本語", - de: "Deutsch", - }, - }, - Avatar: "Avatar", - FontSize: { - Title: "Schriftgröße", - SubTitle: "Schriftgröße des Chat-Inhalts anpassen", - }, - Update: { - Version: (x: string) => `Version: ${x}`, - IsLatest: "Neueste Version", - CheckUpdate: "Update prüfen", - IsChecking: "Update wird geprüft...", - FoundUpdate: (x: string) => `Neue Version gefunden: ${x}`, - GoToUpdate: "Aktualisieren", - }, - SendKey: "Senden-Taste", - Theme: "Erscheinungsbild", - TightBorder: "Enger Rahmen", - SendPreviewBubble: { - Title: "Vorschau-Bubble senden", - SubTitle: "Preview markdown in bubble", - }, - Mask: { - Title: "Mask Splash Screen", - 
SubTitle: "Show a mask splash screen before starting new chat", - }, - Prompt: { - Disable: { - Title: "Autovervollständigung deaktivieren", - SubTitle: "Autovervollständigung mit / starten", - }, - List: "Prompt-Liste", - ListCount: (builtin: number, custom: number) => - `${builtin} integriert, ${custom} benutzerdefiniert`, - Edit: "Bearbeiten", - Modal: { - Title: "Prompt List", - Add: "Add One", - Search: "Search Prompts", - }, - EditModal: { - Title: "Edit Prompt", - }, - }, - HistoryCount: { - Title: "Anzahl der angehängten Nachrichten", - SubTitle: "Anzahl der pro Anfrage angehängten gesendeten Nachrichten", - }, - CompressThreshold: { - Title: "Schwellenwert für Verlaufskomprimierung", - SubTitle: - "Komprimierung, wenn die Länge der unkomprimierten Nachrichten den Wert überschreitet", - }, - Token: { - Title: "API-Schlüssel", - SubTitle: - "Verwenden Sie Ihren Schlüssel, um das Zugangscode-Limit zu ignorieren", - Placeholder: "OpenAI API-Schlüssel", - }, - Usage: { - Title: "Kontostand", - SubTitle(used: any, total: any) { - return `Diesen Monat ausgegeben $${used}, Abonnement $${total}`; - }, - IsChecking: "Wird überprüft...", - Check: "Erneut prüfen", - NoAccess: "API-Schlüssel eingeben, um den Kontostand zu überprüfen", - }, - AccessCode: { - Title: "Zugangscode", - SubTitle: "Zugangskontrolle aktiviert", - Placeholder: "Zugangscode erforderlich", - }, - Bot: "KI-Anbieter (bot)", - Model: "Modell", - Temperature: { - Title: "Temperature", //Temperatur - SubTitle: "Ein größerer Wert führt zu zufälligeren Antworten", - }, - MaxTokens: { - Title: "Max Tokens", //Maximale Token - SubTitle: "Maximale Anzahl der Anfrage- plus Antwort-Token", - }, - PresencePenalty: { - Title: "Presence Penalty", //Anwesenheitsstrafe - SubTitle: - "Ein größerer Wert erhöht die Wahrscheinlichkeit, dass über neue Themen gesprochen wird", - }, - }, - Store: { - DefaultTopic: "Neues Gespräch", - BotHello: "Hallo! Wie kann ich Ihnen heute helfen?", - Error: - "Etwas ist schief gelaufen, bitte versuchen Sie es später noch einmal.", - Prompt: { - History: (content: string) => - "Dies ist eine Zusammenfassung des Chatverlaufs zwischen dem KI und dem Benutzer als Rückblick: " + - content, - Topic: - "Bitte erstellen Sie einen vier- bis fünfwörtigen Titel, der unser Gespräch zusammenfasst, ohne Einleitung, Zeichensetzung, Anführungszeichen, Punkte, Symbole oder zusätzlichen Text. Entfernen Sie Anführungszeichen.", - Summarize: - "Fassen Sie unsere Diskussion kurz in 200 Wörtern oder weniger zusammen, um sie als Pronpt für zukünftige Gespräche zu verwenden.", - }, - }, - Copy: { - Success: "In die Zwischenablage kopiert", - Failed: - "Kopieren fehlgeschlagen, bitte geben Sie die Berechtigung zum Zugriff auf die Zwischenablage frei", - }, - Context: { - Toast: (x: any) => `Mit ${x} Kontext-Prompts`, - Edit: "Kontext- und Gedächtnis-Prompts", - Add: "Hinzufügen", - }, - Plugin: { - Name: "Plugin", - }, - Mask: { - Name: "Mask", - Page: { - Title: "Prompt Template", - SubTitle: (count: number) => `${count} prompt templates`, - Search: "Search Templates", - Create: "Create", - }, - Item: { - Info: (count: number) => `${count} prompts`, - Chat: "Chat", - View: "View", - Edit: "Edit", - Delete: "Delete", - DeleteConfirm: "Confirm to delete?", - }, - EditModal: { - Title: (readonly: boolean) => - `Edit Prompt Template ${readonly ? 
"(readonly)" : ""}`, - Download: "Download", - Clone: "Clone", - }, - Config: { - Avatar: "Bot Avatar", - Name: "Bot Name", - }, - }, - NewChat: { - Return: "Return", - Skip: "Skip", - Title: "Pick a Mask", - SubTitle: "Chat with the Soul behind the Mask", - More: "Find More", - NotShow: "Not Show Again", - ConfirmNoShow: "Confirm to disable?You can enable it in settings later.", - }, - - UI: { - Confirm: "Confirm", - Cancel: "Cancel", - Close: "Close", - Create: "Create", - Edit: "Edit", - }, -}; - -export default de; diff --git a/spaces/fffiloni/Video-Matting-Anything/segment-anything/README.md b/spaces/fffiloni/Video-Matting-Anything/segment-anything/README.md deleted file mode 100644 index 4f5efb986bae5f1d93cb2862e677672ec42954cd..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/segment-anything/README.md +++ /dev/null @@ -1,171 +0,0 @@ -# Segment Anything - -**[Meta AI Research, FAIR](https://ai.facebook.com/research/)** - -[Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/) - -[[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)] [[`BibTeX`](#citing-segment-anything)] - -![SAM design](assets/model_diagram.png?raw=true) - -The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. - -
-
-## Installation
-
-The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
-
-Install Segment Anything:
-
-```
-pip install git+https://github.com/facebookresearch/segment-anything.git
-```
-
-or clone the repository locally and install with
-
-```
-git clone git@github.com:facebookresearch/segment-anything.git
-cd segment-anything; pip install -e .
-```
-
-The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks.
-
-```
-pip install opencv-python pycocotools matplotlib onnxruntime onnx
-```
-
-## Getting Started
-
-First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt:
-
-```
-from segment_anything import SamPredictor, sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-predictor = SamPredictor(sam)
-predictor.set_image(<your_image>)
-masks, _, _ = predictor.predict(<input_prompts>)
-```
-
-or generate masks for an entire image:
-
-```
-from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-mask_generator = SamAutomaticMaskGenerator(sam)
-masks = mask_generator.generate(<your_image>)
-```
-
-Additionally, masks can be generated for images from the command line:
-
-```
-python scripts/amg.py --checkpoint <path/to/checkpoint> --model-type <model_type> --input <image_or_folder> --output <path/to/output>
-```
-
-See the example notebooks on [using SAM with prompts](/notebooks/predictor_example.ipynb) and [automatically generating masks](/notebooks/automatic_mask_generator_example.ipynb) for more details.
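For readers filling in the placeholders above, here is a minimal runnable sketch of prompted prediction. It is not part of the original README: the checkpoint file `sam_vit_b_01ec64.pth`, the image `example.jpg`, and the click coordinates are illustrative assumptions.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Assumed local files: the ViT-B checkpoint and any RGB test image.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an HWC uint8 RGB array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1) at pixel (x=500, y=375); coordinates are made up.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # ask for three candidate masks
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array
```

With a single ambiguous point prompt, `multimask_output=True` lets the model propose several masks; the returned scores can then be used to keep the most plausible one.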
-
-## ONNX Export
-
-SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with
-
-```
-python scripts/export_onnx_model.py --checkpoint <path/to/checkpoint> --model-type <model_type> --output <path/to/output>
-```
-
-See the [example notebook](https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb) for details on how to combine image preprocessing via SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export.
-
-### Web demo
-
-The `demo/` folder has a simple one-page React app which shows how to run mask prediction with the exported ONNX model in a web browser with multithreading. Please see [`demo/README.md`](https://github.com/facebookresearch/segment-anything/blob/main/demo/README.md) for more details.
-
-## Model Checkpoints
-
-Three versions of the model are available with different backbone sizes. These models can be instantiated by running
-
-```
-from segment_anything import sam_model_registry
-sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
-```
-
-Click the links below to download the checkpoint for the corresponding model type.
-
-- **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)**
-- `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth)
-- `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
-
-## Dataset
-
-See [here](https://ai.facebook.com/datasets/segment-anything/) for an overview of the dataset. The dataset can be downloaded [here](https://ai.facebook.com/datasets/segment-anything-downloads/). By downloading the datasets you agree that you have read and accepted the terms of the SA-1B Dataset Research License.
-
-We save masks per image as a json file. It can be loaded as a dictionary in python in the below format.
-
-```python
-{
-    "image"               : image_info,
-    "annotations"         : [annotation],
-}
-
-image_info {
-    "image_id"            : int,              # Image id
-    "width"               : int,              # Image width
-    "height"              : int,              # Image height
-    "file_name"           : str,              # Image filename
-}
-
-annotation {
-    "id"                  : int,              # Annotation id
-    "segmentation"        : dict,             # Mask saved in COCO RLE format.
-    "bbox"                : [x, y, w, h],     # The box around the mask, in XYWH format
-    "area"                : int,              # The area in pixels of the mask
-    "predicted_iou"       : float,            # The model's own prediction of the mask's quality
-    "stability_score"     : float,            # A measure of the mask's quality
-    "crop_box"            : [x, y, w, h],     # The crop of the image used to generate the mask, in XYWH format
-    "point_coords"        : [[x, y]],         # The point coordinates input to the model to generate the mask
-}
-```
-
-Image ids can be found in sa_images_ids.txt which can be downloaded using the above [link](https://ai.facebook.com/datasets/segment-anything-downloads/) as well.
-
-To decode a mask in COCO RLE format into binary:
-
-```
-from pycocotools import mask as mask_utils
-mask = mask_utils.decode(annotation["segmentation"])
-```
-
-See [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py) for more instructions to manipulate masks stored in RLE format. A combined loading-and-decoding sketch follows the Contributing section below.
-
-## License
-
-The model is licensed under the [Apache 2.0 license](LICENSE).
-
-## Contributing
-
-See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
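As referenced in the Dataset section above, the following sketch combines loading one per-image annotation file with COCO RLE decoding. It is an editorial example rather than part of the original README; the file name `sa_1.json` and the filter thresholds are assumptions.

```python
import json
from pycocotools import mask as mask_utils

# Assumed file: any per-image annotation JSON from the SA-1B download.
with open("sa_1.json") as f:
    record = json.load(f)

info = record["image"]
print(info["file_name"], info["width"], info["height"])

# Decode every mask from COCO RLE into an HxW binary array.
masks = [mask_utils.decode(ann["segmentation"]) for ann in record["annotations"]]

# Example filter: keep masks the model itself rated highly (thresholds assumed).
kept = [
    m for m, ann in zip(masks, record["annotations"])
    if ann["predicted_iou"] > 0.9 and ann["stability_score"] > 0.95
]
print(f"decoded {len(masks)} masks, kept {len(kept)}")
```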
- -## Contributors - -The Segment Anything project was made possible with the help of many contributors (alphabetical): - -Aaron Adcock, Vaibhav Aggarwal, Morteza Behrooz, Cheng-Yang Fu, Ashley Gabriel, Ahuva Goldstand, Allen Goodman, Sumanth Gurram, Jiabo Hu, Somya Jain, Devansh Kukreja, Robert Kuo, Joshua Lane, Yanghao Li, Lilian Luong, Jitendra Malik, Mallika Malhotra, William Ngan, Omkar Parkhi, Nikhil Raina, Dirk Rowe, Neil Sejoor, Vanessa Stark, Bala Varadarajan, Bram Wasti, Zachary Winstrom - -## Citing Segment Anything - -If you use SAM or SA-1B in your research, please use the following BibTeX entry. - -``` -@article{kirillov2023segany, - title={Segment Anything}, - author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross}, - journal={arXiv:2304.02643}, - year={2023} -} -``` diff --git a/spaces/fgbwyude/ChuanhuChatGPT/modules/llama_func.py b/spaces/fgbwyude/ChuanhuChatGPT/modules/llama_func.py deleted file mode 100644 index 2b303f3457e07d51d120b10f2072489a729596ab..0000000000000000000000000000000000000000 --- a/spaces/fgbwyude/ChuanhuChatGPT/modules/llama_func.py +++ /dev/null @@ -1,215 +0,0 @@ -import os -import logging - -from llama_index import GPTSimpleVectorIndex, ServiceContext -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -from langchain.llms import OpenAI -from langchain.chat_models import ChatOpenAI -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - logging.info(f"loading file: {file.name}") - if os.path.splitext(file.name)[1] == ".pdf": - logging.debug("Loading PDF...") - pdftext = "" - with open(file.name, 'rb') as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif os.path.splitext(file.name)[1] == ".docx": - logging.debug("Loading DOCX...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=file.name)[0].text - elif os.path.splitext(file.name)[1] == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=file.name)[0].text - else: - logging.debug("Loading text file...") - with open(file.name, "r", encoding="utf-8") as f: - text_raw = f.read() - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" " -): 
- os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - logging.info("构建索引中……") - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def chat_ai( - api_key, - index, - question, - context, - chatbot, - reply_language, -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.info(f"Question: {question}") - - response, chatbot_display, status_text = ask_ai( - api_key, - index, - question, - replace_today(PROMPT_TEMPLATE), - REFINE_TEMPLATE, - SIM_K, - INDEX_QUERY_TEMPRATURE, - context, - reply_language, - ) - if response is None: - status_text = "查询失败,请换个问法试试" - return context, chatbot - response = response - - context.append({"role": "user", "content": question}) - context.append({"role": "assistant", "content": response}) - chatbot.append((question, chatbot_display)) - - os.environ["OPENAI_API_KEY"] = "" - return context, chatbot, status_text - - -def ask_ai( - api_key, - index, - question, - prompt_tmpl, - refine_tmpl, - sim_k=5, - temprature=0, - prefix_messages=[], - reply_language="中文", -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.debug("Index file found") - logging.debug("Querying index...") - llm_predictor = LLMPredictor( - llm=ChatOpenAI( - temperature=temprature, - model_name="gpt-3.5-turbo-0301", - prefix_messages=prefix_messages, - ) - ) - - response = None # Initialize response variable to avoid UnboundLocalError - qa_prompt = QuestionAnswerPrompt(prompt_tmpl.replace("{reply_language}", reply_language)) - rf_prompt = RefinePrompt(refine_tmpl.replace("{reply_language}", reply_language)) - response = index.query( - question, - similarity_top_k=sim_k, - text_qa_template=qa_prompt, - refine_template=rf_prompt, - response_mode="compact", - ) - - if response is not None: - logging.info(f"Response: {response}") - ret_text = response.response - nodes = [] - for index, node in enumerate(response.source_nodes): - brief = node.source_text[:25].replace("\n", "") - nodes.append( - f"
<details><summary>[{index + 1}]\t{brief}...</summary>{node.source_text}</details>
                  " - ) - new_response = ret_text + "\n----------\n" + "\n\n".join(nodes) - logging.info( - f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}" - ) - os.environ["OPENAI_API_KEY"] = "" - return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens" - else: - logging.warning("No response found, returning None") - os.environ["OPENAI_API_KEY"] = "" - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/fgpzen/remove-photo-object/src/__init__.py b/spaces/fgpzen/remove-photo-object/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/flowers-team/SocialAISchool/models/multiheadedac.py b/spaces/flowers-team/SocialAISchool/models/multiheadedac.py deleted file mode 100644 index ecb7525a4de977fbaaf153f3183b95810ddb2803..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/models/multiheadedac.py +++ /dev/null @@ -1,198 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.distributions.categorical import Categorical -import torch_ac - - -from utils.other import init_params - - - - - -class MultiHeadedACModel(nn.Module, torch_ac.RecurrentACModel): - def __init__(self, obs_space, action_space, use_memory=False, use_text=False, use_dialogue=False): - super().__init__() - - # store config - self.config = locals() - - # Decide which components are enabled - self.use_text = use_text - self.use_dialogue = use_dialogue - self.use_memory = use_memory - - if self.use_text: - raise ValueError("You should not use text but dialogue. --text is cheating.") - - # multi dim - if action_space.shape == (): - raise ValueError("The action space is not multi modal. 
Use ACModel instead.") - - self.n_primitive_actions = action_space.nvec[0] + 1 # for talk - self.talk_action = int(self.n_primitive_actions) - 1 - - self.n_utterance_actions = action_space.nvec[1:] - - self.env_action_space = action_space - self.model_raw_action_space = spaces.MultiDiscrete([self.n_primitive_actions, *self.n_utterance_actions]) - - # Define image embedding - self.image_conv = nn.Sequential( - nn.Conv2d(3, 16, (2, 2)), - nn.ReLU(), - nn.MaxPool2d((2, 2)), - nn.Conv2d(16, 32, (2, 2)), - nn.ReLU(), - nn.Conv2d(32, 64, (2, 2)), - nn.ReLU() - ) - n = obs_space["image"][0] - m = obs_space["image"][1] - self.image_embedding_size = ((n-1)//2-2)*((m-1)//2-2)*64 - - # Define memory - if self.use_memory: - self.memory_rnn = nn.LSTMCell(self.image_embedding_size, self.semi_memory_size) - - if self.use_text or self.use_dialogue: - self.word_embedding_size = 32 - self.word_embedding = nn.Embedding(obs_space["text"], self.word_embedding_size) - - # Define text embedding - if self.use_text: - self.text_embedding_size = 128 - self.text_rnn = nn.GRU(self.word_embedding_size, self.text_embedding_size, batch_first=True) - - # Define dialogue embedding - if self.use_dialogue: - self.dialogue_embedding_size = 128 - self.dialogue_rnn = nn.GRU(self.word_embedding_size, self.dialogue_embedding_size, batch_first=True) - - # Resize image embedding - self.embedding_size = self.semi_memory_size - - if self.use_text: - self.embedding_size += self.text_embedding_size - - if self.use_dialogue: - self.embedding_size += self.dialogue_embedding_size - - # Define actor's model - self.actor = nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, self.n_primitive_actions) - ) - self.talker = nn.ModuleList([ - nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, n) - ) for n in self.n_utterance_actions]) - - # Define critic's model - self.critic = nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, 1) - ) - - # Initialize parameters correctly - self.apply(init_params) - - @property - def memory_size(self): - return 2*self.semi_memory_size - - @property - def semi_memory_size(self): - return self.image_embedding_size - - def forward(self, obs, memory): - x = obs.image.transpose(1, 3).transpose(2, 3) - x = self.image_conv(x) - - batch_size = x.shape[0] - x = x.reshape(batch_size, -1) - - if self.use_memory: - hidden = (memory[:, :self.semi_memory_size], memory[:, self.semi_memory_size:]) - hidden = self.memory_rnn(x, hidden) - embedding = hidden[0] - memory = torch.cat(hidden, dim=1) - else: - embedding = x - - if self.use_text: - embed_text = self._get_embed_text(obs.text) - embedding = torch.cat((embedding, embed_text), dim=1) - - if self.use_dialogue: - if not hasattr(obs, "utterance_history"): - raise ValueError("The environment need's to be updated to 'utterance' and 'utterance_history' keys'") - - embed_dial = self._get_embed_dialogue(obs.utterance_history) - - embedding = torch.cat((embedding, embed_dial), dim=1) - - x = self.actor(embedding) - primtive_actions_dist = Categorical(logits=F.log_softmax(x, dim=1)) - - x = self.critic(embedding) - value = x.squeeze(1) - utterance_actions_dists = [ - Categorical(logits=F.log_softmax( - tal(embedding), - dim=1, - )) for tal in self.talker - ] - - dist = [primtive_actions_dist] + utterance_actions_dists - - return dist, value, memory - - def sample_action(self, dist): - return torch.stack([d.sample() for d in dist], dim=1) - - def calculate_log_probs(self, dist, action): - 
return torch.stack([d.log_prob(action[:, i]) for i, d in enumerate(dist)], dim=1) - - def calculate_action_masks(self, action): - talk_mask = action[:, 0] == self.talk_action - mask = torch.stack( - (torch.ones_like(talk_mask), talk_mask, talk_mask), - dim=1).detach() - - assert action.shape == mask.shape - - return mask - - def construct_final_action(self, action): - act_mask = action[:, 0] != self.n_primitive_actions - 1 - - nan_mask = np.array([ - np.array([1, np.nan, np.nan]) if t else np.array([np.nan, 1, 1]) for t in act_mask - ]) - - action = nan_mask*action - - return action - - def _get_embed_text(self, text): - _, hidden = self.text_rnn(self.word_embedding(text)) - return hidden[-1] - - def _get_embed_dialogue(self, dial): - _, hidden = self.dialogue_rnn(self.word_embedding(dial)) - return hidden[-1] - - def get_config_dict(self): - del self.config['__class__'] - self.config['self'] = str(self.config['self']) - self.config['action_space'] = self.config['action_space'].nvec.tolist() - return self.config - - diff --git a/spaces/for876543/plant-id-3/app.py b/spaces/for876543/plant-id-3/app.py deleted file mode 100644 index 7e2fffd755a65e75ec3e018207378bef11d4bc91..0000000000000000000000000000000000000000 --- a/spaces/for876543/plant-id-3/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import timm -import torch -import torch.nn.functional as nnf -import gradio as gr -import numpy as np -import pandas as pd -import json -from torchvision.transforms import ToTensor, Resize -from PIL import Image - - -#model = torch.load("/home/user/app/model_scripted.pkl",map_location=torch.device('cpu')) -model = torch.jit.load("/home/user/app/model_scripted.pkl") -model.eval() - -with open("/home/user/app/class_mapping.json", "r") as read_file: - classes = json.load(read_file) - -def classify_image(inp): - - inp = Image.fromarray(inp) - inp= Resize((300, 300))(inp) - inp= ToTensor()(inp) - inp = torch.unsqueeze(inp, 0) - ##print(inp.shape) - #inp = inp.astype(np.uint8).reshape((-1, 3, 300, 300)) - ##print(inp.shape) - #inp = torch.from_numpy(inp).float() - ##confidences = model(inp) - - preds = model(inp).data[0] - #means = preds.mean(dim=0, keepdim=True) - #stds = preds.std(dim=0, keepdim=True) - #preds = 4 * (preds - means) / stds - ##preds = nnf.normalize(model(inp).data[0], dim=0) - preds = nnf.softmax(preds, dim=0) - preds = [pred.cpu() for pred in preds] - preds = [float(pred.detach()) for pred in preds] - print(pd.Series(preds).describe()) - - #confidences_dict = {classes[i]: float(confidences.data[0][i]) for i in range(len(confidences.data[0]))} - confidences_dict = {classes[str(i)]: float(preds[i]) for i in range(len(preds))} - - return confidences_dict - -gr.Interface(fn=classify_image, - inputs=gr.Image(shape=(300, 300)), - outputs=gr.Label(num_top_classes=3)).launch(debug = True) \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Audio Damages Kombinat V1.0.1 VST A Distortion Plugin that Offers More than Just Distortion for Free.md b/spaces/gotiQspiryo/whisper-ui/examples/Audio Damages Kombinat V1.0.1 VST A Distortion Plugin that Offers More than Just Distortion for Free.md deleted file mode 100644 index d58de4d8d2786c984a6ece8bd307406ab496f88d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Audio Damages Kombinat V1.0.1 VST A Distortion Plugin that Offers More than Just Distortion for Free.md +++ /dev/null @@ -1,6 +0,0 @@ -
-Audio Damage's Kombinat V1.0.1 VST Free Download
-
-DOWNLOAD: https://urlgoal.com/2uyN8X
-
-aaccfb2cb3
                  diff --git a/spaces/gradio/Echocardiogram-Segmentation/run.py b/spaces/gradio/Echocardiogram-Segmentation/run.py deleted file mode 100644 index 95c09932a739cb4fead26ae7fa7e2e58a79bdae7..0000000000000000000000000000000000000000 --- a/spaces/gradio/Echocardiogram-Segmentation/run.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import numpy as np -import torch -import torchvision -import wget - - -destination_folder = "output" -destination_for_weights = "weights" - -if os.path.exists(destination_for_weights): - print("The weights are at", destination_for_weights) -else: - print("Creating folder at ", destination_for_weights, " to store weights") - os.mkdir(destination_for_weights) - -segmentationWeightsURL = 'https://github.com/douyang/EchoNetDynamic/releases/download/v1.0.0/deeplabv3_resnet50_random.pt' - -if not os.path.exists(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL))): - print("Downloading Segmentation Weights, ", segmentationWeightsURL," to ",os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL))) - filename = wget.download(segmentationWeightsURL, out = destination_for_weights) -else: - print("Segmentation Weights already present") - -torch.cuda.empty_cache() - -def collate_fn(x): - x, f = zip(*x) - i = list(map(lambda t: t.shape[1], x)) - x = torch.as_tensor(np.swapaxes(np.concatenate(x, 1), 0, 1)) - return x, f, i - -model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=False, aux_loss=False) -model.classifier[-1] = torch.nn.Conv2d(model.classifier[-1].in_channels, 1, kernel_size=model.classifier[-1].kernel_size) - -print("loading weights from ", os.path.join(destination_for_weights, "deeplabv3_resnet50_random")) - -if torch.cuda.is_available(): - print("cuda is available, original weights") - device = torch.device("cuda") - model = torch.nn.DataParallel(model) - model.to(device) - checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL))) - model.load_state_dict(checkpoint['state_dict']) -else: - print("cuda is not available, cpu weights") - device = torch.device("cpu") - checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)), map_location = "cpu") - state_dict_cpu = {k[7:]: v for (k, v) in checkpoint['state_dict'].items()} - model.load_state_dict(state_dict_cpu) - -model.eval() - -def segment(input): - inp = input - x = inp.transpose([2, 0, 1]) # channels-first - x = np.expand_dims(x, axis=0) # adding a batch dimension - - mean = x.mean(axis=(0, 2, 3)) - std = x.std(axis=(0, 2, 3)) - x = x - mean.reshape(1, 3, 1, 1) - x = x / std.reshape(1, 3, 1, 1) - - with torch.no_grad(): - x = torch.from_numpy(x).type('torch.FloatTensor').to(device) - output = model(x) - - y = output['out'].numpy() - y = y.squeeze() - - out = y>0 - - mask = inp.copy() - mask[out] = np.array([0, 0, 255]) - - return mask - -import gradio as gr - -i = gr.Image(label="Echocardiogram") -o = gr.Image(label="Segmentation Mask") - -examples = [["img1.jpg"], ["img2.jpg"]] -title = None #"Left Ventricle Segmentation" -description = "This semantic segmentation model identifies the left ventricle in echocardiogram images." -# videos. Accurate evaluation of the motion and size of the left ventricle is crucial for the assessment of cardiac function and ejection fraction. 
In this interface, the user inputs apical-4-chamber images from echocardiography videos and the model will output a prediction of the localization of the left ventricle in blue. This model was trained on the publicly released EchoNet-Dynamic dataset of 10k echocardiogram videos with 20k expert annotations of the left ventricle and published as part of ‘Video-based AI for beat-to-beat assessment of cardiac function’ by Ouyang et al. in Nature, 2020." -thumbnail = "https://raw.githubusercontent.com/gradio-app/hub-echonet/master/thumbnail.png" -gr.Interface(segment, i, o, examples=examples, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch() diff --git a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/remove_valid_test_in_train.py b/spaces/gradio/HuBERT/examples/multilingual/data_scripts/remove_valid_test_in_train.py deleted file mode 100644 index ef618adef7c7d010f8de38fb5ebeb5a35d2d3cac..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/remove_valid_test_in_train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os, sys -import glob, itertools -import pandas as pd - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - -def check_train_sentences(raw_data, direction, all_test_data, mess_up_train={}): - src, tgt = direction.split('-') - tgt_path = f"{raw_data}/train.{direction}.{tgt}" - src_path = f"{raw_data}/train.{direction}.{src}" - print(f'check training data in {raw_data}/train.{direction}') - size = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - return mess_up_train, size - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - for direction in directions: - _, size = check_train_sentences(raw_data, direction, all_test_data, mess_up_train) - data_sizes[direction] = size - return mess_up_train, data_sizes - -def count_train_in_other_set(mess_up_train): - train_in_others = [(direction, s) for s, directions 
in mess_up_train.items() for direction in directions] - counts = {} - for direction, s in train_in_others: - counts[direction] = counts.get(direction, 0) + 1 - return counts - -def train_size_if_remove_in_otherset(data_sizes, mess_up_train): - counts_in_other = count_train_in_other_set(mess_up_train) - remain_sizes = [] - for direction, count in counts_in_other.items(): - remain_sizes.append((direction, data_sizes[direction] - count, data_sizes[direction], count, 100 * count / data_sizes[direction] )) - return remain_sizes - - -def remove_messed_up_sentences(raw_data, direction, mess_up_train, mess_up_train_pairs, corrected_langs): - split = 'train' - src_lang, tgt_lang = direction.split('-') - - tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}" - src = f"{raw_data}/{split}.{direction}.{src_lang}" - print(f'working on {direction}: ', src, tgt) - if not os.path.exists(tgt) or not os.path.exists(src) : - return - - corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}" - corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}" - line_num = 0 - keep_num = 0 - with open(src, encoding='utf8',) as fsrc, \ - open(tgt, encoding='utf8',) as ftgt, \ - open(corrected_src, 'w', encoding='utf8') as fsrc_corrected, \ - open(corrected_tgt, 'w', encoding='utf8') as ftgt_corrected: - for s, t in zip(fsrc, ftgt): - s = s.strip() - t = t.strip() - if t not in mess_up_train \ - and s not in mess_up_train \ - and (s, t) not in mess_up_train_pairs \ - and (t, s) not in mess_up_train_pairs: - corrected_langs.add(direction) - print(s, file=fsrc_corrected) - print(t, file=ftgt_corrected) - keep_num += 1 - line_num += 1 - if line_num % 1000 == 0: - print(f'completed {line_num} lines', end='\r') - return line_num, keep_num - -########## - - -def merge_valid_test_messup(mess_up_train_valid, mess_up_train_test): - merged_mess = [] - for s in set(list(mess_up_train_valid.keys()) + list(mess_up_train_test.keys())): - if not s: - continue - valid = mess_up_train_valid.get(s, set()) - test = mess_up_train_test.get(s, set()) - merged_mess.append((s, valid | test)) - return dict(merged_mess) - - - -######### -def check_train_pairs(raw_data, direction, all_test_data, mess_up_train={}): - src, tgt = direction.split('-') - #a hack; TODO: check the reversed directions - path1 = f"{raw_data}/train.{src}-{tgt}.{src}" - path2 = f"{raw_data}/train.{src}-{tgt}.{tgt}" - if not os.path.exists(path1) or not os.path.exists(path2) : - return - - with open(path1) as f1, open(path2) as f2: - for src_line, tgt_line in zip(f1, f2): - s = src_line.strip() - t = tgt_line.strip() - if (s, t) in all_test_data or (t, s) in all_test_data: - langs = mess_up_train.get( (s, t), set()) - langs.add(src) - langs.add(tgt) - mess_up_train[(s, t)] = langs - - -def load_pairs(raw_data, split, direction): - src, tgt = direction.split('-') - src_f = f"{raw_data}/{split}.{direction}.{src}" - tgt_f = f"{raw_data}/{split}.{direction}.{tgt}" - if tgt != 'en_XX': - src_f, tgt_f = tgt_f, src_f - if os.path.exists(src_f) and os.path.exists(tgt_f): - return list(zip(open(src_f).read().splitlines(), - open(tgt_f).read().splitlines(), - )) - else: - return [] - -# skip_langs = ['cs_CZ', 'en_XX', 'tl_XX', 'tr_TR'] -def get_messed_up_test_pairs(split, directions): - test_pairs = [ - (d, load_pairs(raw_data, split, d)) - for d in directions - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_pairs = {} - for direction, d in test_pairs: - src, tgt = direction.split('-') - for s in d: - langs = all_test_pairs.get(s, set()) - langs.add(src) - 
langs.add(tgt) - all_test_pairs[s] = langs - mess_up_train_pairs = {} - for direction in directions: - check_train_pairs(raw_data, direction, all_test_pairs, mess_up_train_pairs) - return all_test_pairs, mess_up_train_pairs - - - -if __name__ == "__main__": - ####### - import argparse - parser = argparse.ArgumentParser() - parser.add_argument( - '--from-folder', - required=True, - type=str) - parser.add_argument( - '--to-folder', - required=True, - type=str) - parser.add_argument( - '--directions', - default=None, - type=str) - - - args = parser.parse_args() - raw_data = args.from_folder - to_folder = args.to_folder - os.makedirs(to_folder, exist_ok=True) - - if args.directions: - directions = args.directions.split(',') - else: - raw_files = itertools.chain( - glob.glob(f'{raw_data}/train*'), - glob.glob(f'{raw_data}/valid*'), - glob.glob(f'{raw_data}/test*'), - ) - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - print('working on directions: ', directions) - - ########## - - - - all_test_data, test_data = get_all_test_data(raw_data, directions, 'test') - print('==loaded test data==') - all_valid_data, valid_data = get_all_test_data(raw_data, directions, 'valid') - print('==loaded valid data==') - all_valid_test_data = merge_valid_test_messup(all_test_data, all_valid_data) - mess_up_train, data_sizes = check_train_all(raw_data, directions, all_valid_test_data) - print('training messing up with valid, test data:', len(mess_up_train)) - data_situation = train_size_if_remove_in_otherset(data_sizes, mess_up_train) - df = pd.DataFrame(data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent']) - df.sort_values('remove_percent', ascending=False) - df.to_csv(f'{raw_data}/clean_summary.tsv', sep='\t') - print(f'projected data clean summary in: {raw_data}/clean_summary.tsv') - - # correct the dataset: - all_test_pairs, mess_up_test_train_pairs = get_messed_up_test_pairs('test', directions) - all_valid_pairs, mess_up_valid_train_pairs = get_messed_up_test_pairs('valid', directions) - - all_messed_pairs = set(mess_up_test_train_pairs.keys()).union(set(mess_up_valid_train_pairs.keys())) - corrected_directions = set() - - real_data_situation = [] - for direction in directions: - org_size, new_size = remove_messed_up_sentences(raw_data, direction, mess_up_train, all_messed_pairs, corrected_directions) - if org_size == 0: - print(f"{direction} has size 0") - continue - real_data_situation.append( - (direction, new_size, org_size, org_size - new_size, (org_size - new_size) / org_size * 100) - ) - print('corrected directions: ', corrected_directions) - df = pd.DataFrame(real_data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent']) - df.sort_values('remove_percent', ascending=False) - df.to_csv(f'{raw_data}/actual_clean_summary.tsv', sep='\t') - print(f'actual data clean summary (which can be different from the projected one because of duplications) in: {raw_data}/actual_clean_summary.tsv') - - import shutil - for direction in directions: - src_lang, tgt_lang = direction.split('-') - for split in ['train', 'valid', 'test']: - # copying valid, test and uncorrected train - if direction in corrected_directions and split == 'train': - continue - tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}" - src = f"{raw_data}/{split}.{direction}.{src_lang}" - if not (os.path.exists(src) and os.path.exists(tgt)): - continue - corrected_tgt = 
f"{to_folder}/{split}.{direction}.{tgt_lang}" - corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}" - print(f'copying {src} to {corrected_src}') - shutil.copyfile(src, corrected_src) - print(f'copying {tgt} to {corrected_tgt}') - shutil.copyfile(tgt, corrected_tgt) - - print('completed') \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/file_utils.py b/spaces/gradio/HuBERT/fairseq/file_utils.py deleted file mode 100644 index d1d5ea65746682881264e4a9c462854dcfb3413f..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/file_utils.py +++ /dev/null @@ -1,369 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utilities for working with the local dataset cache. -This file is adapted from `AllenNLP `_. -and `huggingface `_. -""" - -import fnmatch -import json -import logging -import os -import shutil -import tarfile -import tempfile -from functools import partial, wraps -from hashlib import sha256 -from io import open - - -try: - from torch.hub import _get_torch_home - - torch_cache_home = _get_torch_home() -except ImportError: - torch_cache_home = os.path.expanduser( - os.getenv( - "TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "torch") - ) - ) -default_cache_path = os.path.join(torch_cache_home, "pytorch_fairseq") - -try: - from urllib.parse import urlparse -except ImportError: - from urlparse import urlparse - -try: - from pathlib import Path - - PYTORCH_FAIRSEQ_CACHE = Path(os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path)) -except (AttributeError, ImportError): - PYTORCH_FAIRSEQ_CACHE = os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path) - -CONFIG_NAME = "config.json" -WEIGHTS_NAME = "pytorch_model.bin" - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - - -def load_archive_file(archive_file): - # redirect to the cache, if necessary - try: - resolved_archive_file = cached_path(archive_file, cache_dir=None) - except EnvironmentError: - logger.info( - "Archive name '{}' was not found in archive name list. " - "We assumed '{}' was a path or URL but couldn't find any file " - "associated to this path or URL.".format( - archive_file, - archive_file, - ) - ) - return None - - if resolved_archive_file == archive_file: - logger.info("loading archive file {}".format(archive_file)) - else: - logger.info( - "loading archive file {} from cache at {}".format( - archive_file, resolved_archive_file - ) - ) - - # Extract archive to temp dir and replace .tar.bz2 if necessary - tempdir = None - if not os.path.isdir(resolved_archive_file): - tempdir = tempfile.mkdtemp() - logger.info( - "extracting archive file {} to temp dir {}".format( - resolved_archive_file, tempdir - ) - ) - ext = os.path.splitext(archive_file)[1][1:] - with tarfile.open(resolved_archive_file, "r:" + ext) as archive: - top_dir = os.path.commonprefix(archive.getnames()) - archive.extractall(tempdir) - os.remove(resolved_archive_file) - shutil.move(os.path.join(tempdir, top_dir), resolved_archive_file) - shutil.rmtree(tempdir) - - return resolved_archive_file - - -def url_to_filename(url, etag=None): - """ - Convert `url` into a hashed filename in a repeatable way. - If `etag` is specified, append its hash to the URL's, delimited - by a period. 
- """ - url_bytes = url.encode("utf-8") - url_hash = sha256(url_bytes) - filename = url_hash.hexdigest() - - if etag: - etag_bytes = etag.encode("utf-8") - etag_hash = sha256(etag_bytes) - filename += "." + etag_hash.hexdigest() - - return filename - - -def filename_to_url(filename, cache_dir=None): - """ - Return the url and etag (which may be ``None``) stored for `filename`. - Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist. - """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError("file {} not found".format(cache_path)) - - meta_path = cache_path + ".json" - if not os.path.exists(meta_path): - raise EnvironmentError("file {} not found".format(meta_path)) - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata["url"] - etag = metadata["etag"] - - return url, etag - - -def cached_path_from_pm(url_or_filename): - """ - Tries to cache the specified URL using PathManager class. - Returns the cached path if success otherwise failure. - """ - try: - from fairseq.file_io import PathManager - local_path = PathManager.get_local_path(url_or_filename) - return local_path - except Exception: - return None - - -def cached_path(url_or_filename, cache_dir=None): - """ - Given something that might be a URL (or might be a local path), - determine which. If it's a URL, download the file and cache it, and - return the path to the cached file. If it's already a local path, - make sure the file exists and then return the path. - """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(url_or_filename, Path): - url_or_filename = str(url_or_filename) - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - parsed = urlparse(url_or_filename) - - if parsed.scheme in ("http", "https", "s3"): - # URL, so get it from the cache (downloading if necessary) - return get_from_cache(url_or_filename, cache_dir) - elif os.path.exists(url_or_filename): - # File, and it exists. - return url_or_filename - elif parsed.scheme == "": - # File, but it doesn't exist. - raise EnvironmentError("file {} not found".format(url_or_filename)) - else: - cached_path = cached_path_from_pm(url_or_filename) - if cached_path: - return cached_path - # Something unknown - raise ValueError( - "unable to parse {} as a URL or as a local path".format(url_or_filename) - ) - - -def split_s3_path(url): - """Split a full s3 path into the bucket name and path.""" - parsed = urlparse(url) - if not parsed.netloc or not parsed.path: - raise ValueError("bad s3 path {}".format(url)) - bucket_name = parsed.netloc - s3_path = parsed.path - # Remove '/' at beginning of path. - if s3_path.startswith("/"): - s3_path = s3_path[1:] - return bucket_name, s3_path - - -def s3_request(func): - """ - Wrapper function for s3 requests in order to create more helpful error - messages. 
- """ - - @wraps(func) - def wrapper(url, *args, **kwargs): - from botocore.exceptions import ClientError - - try: - return func(url, *args, **kwargs) - except ClientError as exc: - if int(exc.response["Error"]["Code"]) == 404: - raise EnvironmentError("file {} not found".format(url)) - else: - raise - - return wrapper - - -@s3_request -def s3_etag(url): - """Check ETag on S3 object.""" - import boto3 - - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_object = s3_resource.Object(bucket_name, s3_path) - return s3_object.e_tag - - -@s3_request -def s3_get(url, temp_file): - """Pull a file directly from S3.""" - import boto3 - - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) - - -def request_wrap_timeout(func, url): - import requests - - for attempt, timeout in enumerate([10, 20, 40, 60, 60]): - try: - return func(timeout=timeout) - except requests.exceptions.Timeout as e: - logger.warning( - "Request for %s timed-out (attempt %d). Retrying with a timeout of %d secs", - url, - attempt, - timeout, - exc_info=e, - ) - continue - raise RuntimeError(f"Unable to fetch file {url}") - - -def http_get(url, temp_file): - import requests - from tqdm import tqdm - - req = request_wrap_timeout(partial(requests.get, url, stream=True), url) - content_length = req.headers.get("Content-Length") - total = int(content_length) if content_length is not None else None - progress = tqdm(unit="B", total=total) - for chunk in req.iter_content(chunk_size=1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - progress.close() - - -def get_from_cache(url, cache_dir=None): - """ - Given a URL, look for the corresponding dataset in the local cache. - If it's not there, download it. Then return the path to the cached file. - """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - - # Get eTag to add to filename, if it exists. - if url.startswith("s3://"): - etag = s3_etag(url) - else: - try: - import requests - - response = request_wrap_timeout( - partial(requests.head, url, allow_redirects=True), url - ) - if response.status_code != 200: - etag = None - else: - etag = response.headers.get("ETag") - except RuntimeError: - etag = None - - filename = url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - # If we don't have a connection (etag is None) and can't identify the file - # try to get the last downloaded one - if not os.path.exists(cache_path) and etag is None: - matching_files = fnmatch.filter(os.listdir(cache_dir), filename + ".*") - matching_files = list(filter(lambda s: not s.endswith(".json"), matching_files)) - if matching_files: - cache_path = os.path.join(cache_dir, matching_files[-1]) - - if not os.path.exists(cache_path): - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. 
- with tempfile.NamedTemporaryFile() as temp_file: - logger.info("%s not found in cache, downloading to %s", url, temp_file.name) - - # GET file object - if url.startswith("s3://"): - s3_get(url, temp_file) - else: - http_get(url, temp_file) - - # we are copying the file before closing it, so flush to avoid truncation - temp_file.flush() - # shutil.copyfileobj() starts at the current position, so go to the start - temp_file.seek(0) - - logger.info("copying %s to cache at %s", temp_file.name, cache_path) - with open(cache_path, "wb") as cache_file: - shutil.copyfileobj(temp_file, cache_file) - - logger.info("creating metadata file for %s", cache_path) - meta = {"url": url, "etag": etag} - meta_path = cache_path + ".json" - with open(meta_path, "w") as meta_file: - output_string = json.dumps(meta) - meta_file.write(output_string) - - logger.info("removing temp file %s", temp_file.name) - - return cache_path - - -def read_set_from_file(filename): - """ - Extract a de-duped collection (set) of text from a file. - Expected file format is one item per line. - """ - collection = set() - with open(filename, "r", encoding="utf-8") as file_: - for line in file_: - collection.add(line.rstrip()) - return collection - - -def get_file_extension(path, dot=True, lower=True): - ext = os.path.splitext(path)[1] - ext = ext if dot else ext[1:] - return ext.lower() if lower else ext diff --git a/spaces/gradio/HuBERT/fairseq/models/roberta/model_xlmr.py b/spaces/gradio/HuBERT/fairseq/models/roberta/model_xlmr.py deleted file mode 100644 index cf6e354d53b918dd4c7c78bfcd38ac0d63cab3bd..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/roberta/model_xlmr.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Unsupervised Cross-lingual Representation Learning at Scale -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model("xlmr") -class XLMRModel(RobertaModel): - @classmethod - def hub_models(cls): - return { - "xlmr.base": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz", - "xlmr.large": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz", - "xlmr.xl": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz", - "xlmr.xxl": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz", - } - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="sentencepiece", - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - **kwargs, - ) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) diff --git a/spaces/gradio/HuBERT/tests/test_checkpoint_utils.py b/spaces/gradio/HuBERT/tests/test_checkpoint_utils.py deleted file mode 100644 index 0f28222633a68943497616507ce412ead76864d6..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/test_checkpoint_utils.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import contextlib
-import logging
-import os
-import tempfile
-import unittest
-from io import StringIO
-from unittest.mock import patch
-
-from fairseq import checkpoint_utils
-from omegaconf import OmegaConf
-
-from tests.utils import (
-    create_dummy_data,
-    preprocess_translation_data,
-    train_translation_model,
-)
-
-
-class TestCheckpointUtils(unittest.TestCase):
-    def setUp(self):
-        logging.disable(logging.CRITICAL)
-
-    def tearDown(self):
-        logging.disable(logging.NOTSET)
-
-    @contextlib.contextmanager
-    def _train_transformer(self, seed, extra_args=None):
-        if extra_args is None:
-            extra_args = []
-        with tempfile.TemporaryDirectory(f"_train_transformer_seed{seed}") as data_dir:
-            create_dummy_data(data_dir)
-            preprocess_translation_data(data_dir)
-            train_translation_model(
-                data_dir,
-                "transformer_iwslt_de_en",
-                [
-                    "--encoder-layers",
-                    "3",
-                    "--decoder-layers",
-                    "3",
-                    "--encoder-embed-dim",
-                    "8",
-                    "--decoder-embed-dim",
-                    "8",
-                    "--seed",
-                    str(seed),
-                ]
-                + extra_args,
-            )
-            yield os.path.join(data_dir, "checkpoint_last.pt")
-
-    def test_load_model_ensemble_and_task(self):
-        # with contextlib.redirect_stdout(StringIO()):
-        with self._train_transformer(seed=123) as model1:
-            with self._train_transformer(seed=456) as model2:
-                ensemble, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
-                    filenames=[model1, model2]
-                )
-                self.assertEqual(len(ensemble), 2)
-
-                # after Transformer has been migrated to Hydra, this will probably
-                # become cfg.common.seed
-                self.assertEqual(ensemble[0].args.seed, 123)
-                self.assertEqual(ensemble[1].args.seed, 456)
-
-                # the task from the first model should be returned
-                self.assertTrue("seed123" in task.cfg.data)
-
-                # last cfg is saved
-                self.assertEqual(cfg.common.seed, 456)
-
-    def test_prune_state_dict(self):
-        with contextlib.redirect_stdout(StringIO()):
-            extra_args = ["--encoder-layerdrop", "0.01", "--decoder-layerdrop", "0.01"]
-            with self._train_transformer(seed=1, extra_args=extra_args) as model:
-                ensemble, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
-                    filenames=[model],
-                    arg_overrides={
-                        "encoder_layers_to_keep": "0,2",
-                        "decoder_layers_to_keep": "1",
-                    },
-                )
-                self.assertEqual(len(ensemble), 1)
-                self.assertEqual(len(ensemble[0].encoder.layers), 2)
-                self.assertEqual(len(ensemble[0].decoder.layers), 1)
-
-    def test_torch_persistent_save_async(self):
-        state_dict = {}
-        filename = "async_checkpoint.pt"
-
-        with patch(f"{checkpoint_utils.__name__}.PathManager.opena") as mock_opena:
-            with patch(f"{checkpoint_utils.__name__}._torch_persistent_save") as mock_save:
-                checkpoint_utils.torch_persistent_save(
-                    state_dict, filename, async_write=True
-                )
-                mock_opena.assert_called_with(filename, "wb")
-                mock_save.assert_called()
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptModal.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptModal.tsx
deleted file mode 100644
index 81bd26cedf428ba31308e0ce40024f0c237c6b0b..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Promptbar/components/PromptModal.tsx
+++ /dev/null
@@ -1,130 +0,0 @@
-import { FC, KeyboardEvent, useEffect, useRef, useState } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-import { Prompt } from '@/types/prompt';
-
-interface Props {
-  prompt: Prompt;
-  onClose: () => void;
-  onUpdatePrompt: (prompt: Prompt) => void;
-}
-
-export const PromptModal: FC<Props> = ({ prompt, onClose, onUpdatePrompt }) => {
-  const { t } = useTranslation('promptbar');
-  const [name, setName] = useState(prompt.name);
-  const [description, setDescription] = useState(prompt.description);
-  const [content, setContent] = useState(prompt.content);
-
-  const modalRef = useRef<HTMLDivElement>(null);
-  const nameInputRef = useRef<HTMLInputElement>(null);
-
-  const handleEnter = (e: KeyboardEvent<HTMLDivElement>) => {
-    if (e.key === 'Enter' && !e.shiftKey) {
-      onUpdatePrompt({ ...prompt, name, description, content: content.trim() });
-      onClose();
-    }
-  };
-
-  useEffect(() => {
-    const handleMouseDown = (e: MouseEvent) => {
-      if (modalRef.current && !modalRef.current.contains(e.target as Node)) {
-        window.addEventListener('mouseup', handleMouseUp);
-      }
-    };
-
-    const handleMouseUp = (e: MouseEvent) => {
-      window.removeEventListener('mouseup', handleMouseUp);
-      onClose();
-    };
-
-    window.addEventListener('mousedown', handleMouseDown);
-
-    return () => {
-      window.removeEventListener('mousedown', handleMouseDown);
-    };
-  }, [onClose]);
-
-  useEffect(() => {
-    nameInputRef.current?.focus();
-  }, []);
-
-  return (
-    <div
-      className="fixed inset-0 z-50 flex items-center justify-center bg-black bg-opacity-50"
-      onKeyDown={handleEnter}
-    >
-      {/* Minimal sketch of the modal markup; the exact styling classes are assumed. */}
-      <div
-        ref={modalRef}
-        role="dialog"
-        className="w-full max-w-lg rounded-lg border border-gray-300 bg-white p-6 text-left shadow-xl dark:bg-[#202123]"
-      >
-        <div className="text-sm font-bold">{t('Name')}</div>
-        <input
-          ref={nameInputRef}
-          className="mt-2 w-full rounded-lg border px-4 py-2"
-          value={name}
-          onChange={(e) => setName(e.target.value)}
-        />
-
-        <div className="mt-6 text-sm font-bold">{t('Description')}</div>
-        <textarea
-          className="mt-2 w-full rounded-lg border px-4 py-2"
-          rows={3}
-          value={description}
-          onChange={(e) => setDescription(e.target.value)}
-        />
-
-        <div className="mt-6 text-sm font-bold">{t('Prompt')}</div>
-        <textarea
-          className="mt-2 w-full rounded-lg border px-4 py-2"
-          rows={10}
-          value={content}
-          onChange={(e) => setContent(e.target.value)}
-        />
-
-        <button
-          type="button"
-          className="mt-6 w-full rounded-lg border px-4 py-2 shadow"
-          onClick={() => {
-            onUpdatePrompt({ ...prompt, name, description, content: content.trim() });
-            onClose();
-          }}
-        >
-          {t('Save')}
-        </button>
-      </div>
-    </div>
-  );
-};
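-
-/*
-  Usage sketch (hypothetical): a parent component would render PromptModal
-  conditionally and persist the edited prompt on save. The names `showModal`,
-  `selectedPrompt`, `prompts`, and `setPrompts` are illustrative, not part of
-  the original component:
-
-    {showModal && (
-      <PromptModal
-        prompt={selectedPrompt}
-        onClose={() => setShowModal(false)}
-        onUpdatePrompt={(updated) =>
-          setPrompts(prompts.map((p) => (p.id === updated.id ? updated : p)))
-        }
-      />
-    )}
-*/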